
SCSI misc on 20190709

This is mostly an update of the usual drivers: qla2xxx, hpsa, lpfc, ufs,
 mpt3sas, ibmvscsi, megaraid_sas, bnx2fc and hisi_sas as well as the
 removal of the osst driver (I heard from Willem privately that he
 would like the driver removed because all his test hardware has
 failed).  Plus a number of minor changes, spelling fixes and other
 trivia.
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCXSTl4yYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishdcxAQDCJVbd
 fPUX76/V1ldupunF97+3DTharxxbst+VnkOnCwD8D4c0KFFFOI9+F36cnMGCPegE
 fjy17dQLvsJ4GsidHy8=
 =aS5B
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This is mostly update of the usual drivers: qla2xxx, hpsa, lpfc, ufs,
  mpt3sas, ibmvscsi, megaraid_sas, bnx2fc and hisi_sas as well as the
  removal of the osst driver (I heard from Willem privately that he
  would like the driver removed because all his test hardware has
  failed). Plus number of minor changes, spelling fixes and other
  trivia.

  The big merge conflict this time around is the SPDX licence tags.
  Following discussion on linux-next, we believe our version to be more
  accurate than the one in the tree, so the resolution is to take our
  version for all the SPDX conflicts"

Note on the SPDX license tag conversion conflicts: the SCSI tree had
done its own SPDX conversion, which in some cases conflicted with the
treewide ones done by Thomas & co.

In almost all cases, the conflicts were purely syntactic: the SCSI tree
used the old-style SPDX tags ("GPL-2.0" and "GPL-2.0+") while the
treewide conversion had used the new-style ones ("GPL-2.0-only" and
"GPL-2.0-or-later").

In these cases I picked the new-style one.
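
For illustration, such a syntactic conflict came down to nothing more
than two spellings of the same tag; a sketch (hypothetical file, real
tag syntax), with the new-style side kept in the resolution:

/* SCSI tree side of the conflict (old-style tag): */
// SPDX-License-Identifier: GPL-2.0

/* Treewide side, which the resolution keeps (new-style tag): */
// SPDX-License-Identifier: GPL-2.0-only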

In a few cases, the SPDX conversion was actually different, though.  As
explained by James above, and in more detail in a pre-pull-request
thread:

 "The other problem is actually substantive: In the libsas code Luben
  Tuikov originally specified GPL 2.0 only by dint of stating:

  * This file is licensed under GPLv2.

  In all the libsas files, but then muddied the water by quoting GPLv2
  verbatim (which includes the 'or later' language). So for these
  files Christoph did the conversion to v2 only SPDX tags and Thomas
  converted to v2 or later tags"

So in those cases, where the SPDX tag substantially mattered, I took the
SCSI tree conversion of it, but then also took the opportunity to turn
the old-style "GPL-2.0" into a new-style "GPL-2.0-only" tag.
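
Concretely, a libsas file affected by this ends up with the v2-only tag
in its new-style spelling; a sketch of the resulting file head (not a
verbatim hunk from the tree):

// SPDX-License-Identifier: GPL-2.0-only
/*
 * ...
 * This file is licensed under GPLv2.
 */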

Similarly, when there were whitespace differences or other differences
in the comments around the copyright notices, I took the version from
the SCSI tree as being the more specific conversion.

Finally, in the SPDX conversions that had no conflicts (because the
treewide ones hadn't been done for those files), I just took the SCSI
tree version as-is, even if it was old-style.  The old-style conversions
are perfectly valid, even if the "-only" and "-or-later" versions are
perhaps more descriptive.

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (185 commits)
  scsi: qla2xxx: move IO flush to the front of NVME rport unregistration
  scsi: qla2xxx: Fix NVME cmd and LS cmd timeout race condition
  scsi: qla2xxx: on session delete, return nvme cmd
  scsi: qla2xxx: Fix kernel crash after disconnecting NVMe devices
  scsi: megaraid_sas: Update driver version to 07.710.06.00-rc1
  scsi: megaraid_sas: Introduce various Aero performance modes
  scsi: megaraid_sas: Use high IOPS queues based on IO workload
  scsi: megaraid_sas: Set affinity for high IOPS reply queues
  scsi: megaraid_sas: Enable coalescing for high IOPS queues
  scsi: megaraid_sas: Add support for High IOPS queues
  scsi: megaraid_sas: Add support for MPI toolbox commands
  scsi: megaraid_sas: Offload Aero RAID5/6 division calculations to driver
  scsi: megaraid_sas: RAID1 PCI bandwidth limit algorithm is applicable for only Ventura
  scsi: megaraid_sas: megaraid_sas: Add check for count returned by HOST_DEVICE_LIST DCMD
  scsi: megaraid_sas: Handle sequence JBOD map failure at driver level
  scsi: megaraid_sas: Don't send FPIO to RL Bypass queue
  scsi: megaraid_sas: In probe context, retry IOC INIT once if firmware is in fault
  scsi: megaraid_sas: Release Mutex lock before OCR in case of DCMD timeout
  scsi: megaraid_sas: Call disable_irq from process IRQ poll
  scsi: megaraid_sas: Remove few debug counters from IO path
  ...
Linus Torvalds 2019-07-11 15:14:01 -07:00
commit ba6d10ab80
133 changed files with 5183 additions and 8878 deletions


@ -1,218 +0,0 @@
README file for the osst driver
===============================
(w) Kurt Garloff <garloff@suse.de> 12/2000
This file describes the osst driver as of version 0.8.x/0.9.x, the
released version of the driver.
It is intended to help advanced users to understand the role of osst and to
get them started using (and maybe debugging) it.
It won't address issues like "How do I compile a kernel?" or "How do I load
a module?", as these are too basic.
Once the OnStream driver is merged into the official kernel, the distro
makers will provide OnStream support for those who are not familiar
with hacking their kernels.
Purpose
-------
The osst driver was developed because the standard SCSI tape driver in
Linux, st, does not support the OnStream SC-x0 SCSI tape. st is not to
blame for that, as the OnStream tape drives do not support the standard
SCSI command set for Serial Access Storage Devices (SASDs), which
basically corresponds to the QIC-157 spec.
Nevertheless, the OnStream tapes are nice pieces of hardware and therefore
the osst driver has been written to make these tape devices supported by Linux.
The driver is free software. It's released under the GNU GPL and planned to
be integrated into the mainstream kernel.
Implementation
--------------
osst is a new high-level SCSI driver, just like st, sr, sd and sg. It
can be compiled into the kernel or loaded as a module.
As it represents a new device, it got assigned a new device node: the
/dev/osstX entries are character devices with major number 206 and minor
numbers like those of the /dev/stX devices. If they are not present, you
may create them by calling Makedevs.sh as root (see below).
The driver started out as a copy of st, and as such the osst devices'
behavior looks very much like that of st to userspace applications.
History
-------
Initially, osst shared its identity very much with st. That meant that
it used the same kernel structures and the same device node as st, so
you could only have one of them present in the kernel. This has since
been fixed by registering a device of its own.
st and osst can now coexist, each accessing only the devices it can
support.
Installation
------------
osst got integrated into the Linux kernel. Select it during kernel
configuration as a module or compile it statically into the kernel.
Compile your kernel and install the modules.
Now, your osst driver is inside the kernel or available as a module,
depending on your choice during kernel config. You may still need to create
the device nodes by calling the Makedevs.sh script (see below) manually.
To load your module, you may use the command
modprobe osst
as root. dmesg should show you whether your OnStream tape drives have
been recognized.
If you want to have the module autoloaded on access to /dev/osst, you may
add something like
alias char-major-206 osst
to a file under /etc/modprobe.d/ directory.
You may find it convenient to create a symbolic link
ln -s nosst0 /dev/tape
so that programs assuming the default name /dev/tape are more
convenient to use.
The device nodes for osst have to be created. Use the Makedevs.sh script
attached to this file.
Using it
--------
You may use the OnStream tape driver with your standard backup software,
which may be tar, cpio, amanda, arkeia, BRU, Lone Tar, ...
by specifying /dev/(n)osst0 as the tape device to use or using the above
symlink trick. The IOCTLs to control tape operation are also mostly
supported and you may try the mt (or mt_st) program to jump between
filemarks, eject the tape, ...
There's one limitation: You need to use a block size of 32kB.
(This limitation is being worked on and will be fixed in version 0.8.8
of this driver.)
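If you would rather issue those IOCTLs from your own program than via
mt, the standard magnetic tape interface from <sys/mtio.h> applies; a
minimal sketch (the device node and the operation are just examples):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mtio.h>
#include <unistd.h>

int main(void)
{
	/* Open the no-rewind device so the tape position is kept. */
	int fd = open("/dev/nosst0", O_RDONLY);
	/* Space forward over one filemark. */
	struct mtop op = { .mt_op = MTFSF, .mt_count = 1 };

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, MTIOCTOP, &op) < 0)
		perror("MTIOCTOP");
	close(fd);
	return 0;
}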
If you just want to get started with standard software, here is an example
for creating and restoring a full backup:
# Backup
tar cvf - / --exclude /proc | buffer -s 32k -m 24M -B -t -o /dev/nosst0
# Restore
buffer -s 32k -m 8M -B -t -i /dev/osst0 | tar xvf - -C /
The buffer command has been used to buffer the data before it goes to the
tape (or the file system) in order to smooth out the data stream and prevent
the tape from needing to stop and rewind. The OnStream does have an internal
buffer and a variable speed which help this, but especially on writing, the
buffering still proves useful in most cases. It also pads the data to
guarantee the block size of 32k. (Otherwise you may pass the -b64 option to
tar.)
Expect something like 1.8MB/s for the SC-x0 drives and 0.9MB/s for the DI-30.
The USB drive will give you about 0.7MB/s.
On a fast machine, you may benefit from software data compression (the
z flag for tar).
USB and IDE
-----------
Via the SCSI emulation layers usb-storage and ide-scsi, you can also use the
osst driver to drive the USB-30 and the DI-30 drives. (Unfortunately, there
is no such layer for the parallel port, otherwise the DP-30 would work as
well.) For the USB support, you need the latest 2.4.0-test kernels and the
latest usb-storage driver from
http://www.linux-usb.org/
http://sourceforge.net/cvs/?group_id=3581
Note that the ide-tape driver as of 1.16f uses a slightly outdated on-tape
format and therefore is not completely interoperable with osst tapes.
The ADR-x0 line is fully SCSI-2 compliant and is supported by st, not osst.
The on-tape format is supposed to be compatible with the one used by osst.
Feedback and updates
--------------------
The driver development is coordinated through a mailing list
<osst@linux1.onstream.nl>,
a CVS repository and some web pages.
The tester's pages which contain recent news and updated drivers to download
can be found on
http://sourceforge.net/projects/osst/
If you find any problems, please have a look at the tester's page in order
to see whether the problem is already known and solved. Otherwise, please
report it to the mailing list. Your feedback is welcome. (This also
holds for reports of successful usage, of course.)
In case of trouble, please always provide the following info:
* driver and kernel version used (see syslog)
* driver messages (syslog)
* SCSI config and OnStream Firmware (/proc/scsi/scsi)
* description of error. Is it reproducible?
* software and commands used
You may subscribe to the mailing list; BTW, it's a majordomo list.
Status
------
0.8.0 was the first widespread BETA release. Since then, a lot of
reports have been sent; most reported success or only minor trouble.
All the issues have been addressed.
Check the web pages for more info about the current developments.
0.9.x is the tree for the 2.3/2.4 kernel.
Acknowledgments
---------------
The driver was started by making a copy of Kai Makisara's st driver.
Most of the development has been done by Willem Riede. The presence of
the userspace program osg (onstreamsg) from Terry Hardie has been rather
helpful. The same holds for Gadi Oxman's ide-tape support for the DI-30.
I added some patches to those drivers as well and coordinated things a
little bit.
Note that most of these people worked on this driver mostly in their
spare time.
The people from OnStream, especially Jack Bombeeck, supported this
project and always tried to answer HW- or FW-related questions.
Furthermore, he pushed the FW developers to do the right things.
SuSE supported this project by allowing me to work on it during my
working time for them and by integrating the driver into their distro.
More people helped by sending useful comments. Sorry to those who have
been forgotten. Thanks to all the GNU/FSF and Linux developers who made
Linux such an interesting, nice and stable platform.
Thanks go to those who tested the drivers and sent useful reports. Your
help is needed!
Makedevs.sh
-----------
#!/bin/sh
# Script to create OnStream SC-x0 device nodes (major 206)
# Usage: Makedevs.sh [nos [path to dev]]
# $Id: README.osst.kernel,v 1.4 2000/12/20 14:13:15 garloff Exp $
major=206
nrs=4
dir=/dev
test -z "$1" || nrs=$1
test -z "$2" || dir=$2
declare -i nr
nr=0
test -d $dir || mkdir -p $dir
while test $nr -lt $nrs; do
	mknod $dir/osst$nr c $major $nr
	chown 0.disk $dir/osst$nr; chmod 660 $dir/osst$nr;
	mknod $dir/nosst$nr c $major $[nr+128]
	chown 0.disk $dir/nosst$nr; chmod 660 $dir/nosst$nr;
	mknod $dir/osst${nr}l c $major $[nr+32]
	chown 0.disk $dir/osst${nr}l; chmod 660 $dir/osst${nr}l;
	mknod $dir/nosst${nr}l c $major $[nr+160]
	chown 0.disk $dir/nosst${nr}l; chmod 660 $dir/nosst${nr}l;
	mknod $dir/osst${nr}m c $major $[nr+64]
	chown 0.disk $dir/osst${nr}m; chmod 660 $dir/osst${nr}m;
	mknod $dir/nosst${nr}m c $major $[nr+192]
	chown 0.disk $dir/nosst${nr}m; chmod 660 $dir/nosst${nr}m;
	mknod $dir/osst${nr}a c $major $[nr+96]
	chown 0.disk $dir/osst${nr}a; chmod 660 $dir/osst${nr}a;
	mknod $dir/nosst${nr}a c $major $[nr+224]
	chown 0.disk $dir/nosst${nr}a; chmod 660 $dir/nosst${nr}a;
	let nr+=1
done
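
As a reading aid for the minor numbers the script assigns (an
illustration only, not kernel code): bit 7 selects the no-rewind
variant, bits 5-6 the mode suffix and the low 5 bits the unit number:

#include <stdio.h>

static void decode_osst_minor(unsigned int minor)
{
	static const char * const suffix[] = { "", "l", "m", "a" };

	printf("minor %3u -> /dev/%sosst%u%s\n", minor,
	       (minor & 0x80) ? "n" : "",   /* no-rewind prefix */
	       minor & 0x1f,                /* unit number */
	       suffix[(minor >> 5) & 0x3]); /* mode suffix */
}

int main(void)
{
	decode_osst_minor(0);   /* /dev/osst0   */
	decode_osst_minor(160); /* /dev/nosst0l */
	return 0;
}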


@ -158,6 +158,13 @@ send SG_IO with the applicable sg_io_v4:
If you wish to read or write a descriptor, use the appropriate xferp of
sg_io_v4.
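A minimal sketch of talking to the ufs-bsg endpoint directly with
sg_io_v4, as described above (structures and constants come from
<linux/bsg.h> and <scsi/sg.h>; the UPIU buffers and the file descriptor
for the bsg node are left to the caller, and the exact node name
depends on the system):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/bsg.h>
#include <scsi/sg.h>

/* Send a raw request UPIU and receive the reply UPIU. */
static int send_upiu(int bsg_fd, const void *req, uint32_t req_len,
		     void *rsp, uint32_t rsp_len)
{
	struct sg_io_v4 io_hdr_v4;

	memset(&io_hdr_v4, 0, sizeof(io_hdr_v4));
	io_hdr_v4.guard = 'Q';
	io_hdr_v4.protocol = BSG_PROTOCOL_SCSI;
	io_hdr_v4.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io_hdr_v4.request = (uintptr_t)req;
	io_hdr_v4.request_len = req_len;
	io_hdr_v4.response = (uintptr_t)rsp;
	io_hdr_v4.max_response_len = rsp_len;
	/* For descriptor reads or writes, additionally set
	 * din_xferp/din_xfer_len or dout_xferp/dout_xfer_len. */
	return ioctl(bsg_fd, SG_IO, &io_hdr_v4);
}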
The userspace tool that interacts with the ufs-bsg endpoint and uses its
UPIU-based protocol is available at:
https://github.com/westerndigitalcorporation/ufs-tool
For more detailed information about the tool and its supported
features, please see the tool's README.
UFS specifications can be found at:
UFS - http://www.jedec.org/sites/default/files/docs/JESD220.pdf


@ -11779,16 +11779,6 @@ S: Maintained
F: drivers/mtd/nand/onenand/
F: include/linux/mtd/onenand*.h
ONSTREAM SCSI TAPE DRIVER
M: Willem Riede <osst@riede.org>
L: osst-users@lists.sourceforge.net
L: linux-scsi@vger.kernel.org
S: Maintained
F: Documentation/scsi/osst.txt
F: drivers/scsi/osst.*
F: drivers/scsi/osst_*.h
F: drivers/scsi/st.h
OP-TEE DRIVER
M: Jens Wiklander <jens.wiklander@linaro.org>
S: Maintained
@ -12680,8 +12670,7 @@ S: Orphan
F: drivers/scsi/pmcraid.*
PMC SIERRA PM8001 DRIVER
M: Jack Wang <jinpu.wang@profitbricks.com>
M: lindar_liu@usish.com
M: Jack Wang <jinpu.wang@cloud.ionos.com>
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/pm8001/


@ -911,6 +911,10 @@ static const struct resource mac_scsi_iifx_rsrc[] __initconst = {
.flags = IORESOURCE_MEM,
.start = 0x50008000,
.end = 0x50009FFF,
}, {
.flags = IORESOURCE_MEM,
.start = 0x50008000,
.end = 0x50009FFF,
},
};
@ -1012,10 +1016,12 @@ int __init mac_platform_init(void)
case MAC_SCSI_IIFX:
/* Addresses from The Guide to Mac Family Hardware.
* $5000 8000 - $5000 9FFF: SCSI DMA
* $5000 A000 - $5000 BFFF: Alternate SCSI
* $5000 C000 - $5000 DFFF: Alternate SCSI (DMA)
* $5000 E000 - $5000 FFFF: Alternate SCSI (Hsk)
* The SCSI DMA custom IC embeds the 53C80 core. mac_scsi does
* not make use of its DMA or hardware handshaking logic.
* The A/UX header file sys/uconfig.h says $50F0 8000.
* The "SCSI DMA" custom IC embeds the 53C80 core and
* supports Programmed IO, DMA and PDMA (hardware handshake).
*/
platform_device_register_simple("mac_scsi", 0,
mac_scsi_iifx_rsrc, ARRAY_SIZE(mac_scsi_iifx_rsrc));


@ -2340,7 +2340,6 @@ static void srp_handle_qp_err(struct ib_cq *cq, struct ib_wc *wc,
static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
{
struct srp_target_port *target = host_to_target(shost);
struct srp_rport *rport = target->rport;
struct srp_rdma_ch *ch;
struct srp_request *req;
struct srp_iu *iu;
@ -2350,16 +2349,6 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
u32 tag;
u16 idx;
int len, ret;
const bool in_scsi_eh = !in_interrupt() && current == shost->ehandler;
/*
* The SCSI EH thread is the only context from which srp_queuecommand()
* can get invoked for blocked devices (SDEV_BLOCK /
* SDEV_CREATED_BLOCK). Avoid racing with srp_reconnect_rport() by
* locking the rport mutex if invoked from inside the SCSI EH.
*/
if (in_scsi_eh)
mutex_lock(&rport->mutex);
scmnd->result = srp_chkready(target->rport);
if (unlikely(scmnd->result))
@ -2428,13 +2417,7 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
goto err_unmap;
}
ret = 0;
unlock_rport:
if (in_scsi_eh)
mutex_unlock(&rport->mutex);
return ret;
return 0;
err_unmap:
srp_unmap_data(scmnd, ch, req);
@ -2456,7 +2439,7 @@ err:
ret = SCSI_MLQUEUE_HOST_BUSY;
}
goto unlock_rport;
return ret;
}
/*


@ -6001,13 +6001,12 @@ mpt_findImVolumes(MPT_ADAPTER *ioc)
if (mpt_config(ioc, &cfg) != 0)
goto out;
mem = kmalloc(iocpage2sz, GFP_KERNEL);
mem = kmemdup(pIoc2, iocpage2sz, GFP_KERNEL);
if (!mem) {
rc = -ENOMEM;
goto out;
}
memcpy(mem, (u8 *)pIoc2, iocpage2sz);
ioc->raid_data.pIocPg2 = (IOCPage2_t *) mem;
mpt_read_ioc_pg_3(ioc);


@ -99,28 +99,6 @@ config CHR_DEV_ST
To compile this driver as a module, choose M here and read
<file:Documentation/scsi/scsi.txt>. The module will be called st.
config CHR_DEV_OSST
tristate "SCSI OnStream SC-x0 tape support"
depends on SCSI
---help---
The OnStream SC-x0 SCSI tape drives cannot be driven by the
standard st driver, but instead need this special osst driver and
use the /dev/osstX char device nodes (major 206). Via usb-storage,
you may be able to drive the USB-x0 and DI-x0 drives as well.
Note that there is also a second generation of OnStream
tape drives (ADR-x0) that supports the standard SCSI-2 commands for
tapes (QIC-157) and can be driven by the standard driver st.
For more information, you may have a look at the SCSI-HOWTO
<http://www.tldp.org/docs.html#howto> and
<file:Documentation/scsi/osst.txt> in the kernel source.
More info on the OnStream driver may be found on
<http://sourceforge.net/projects/osst/>
Please also have a look at the standard st docu, as most of it
applies to osst as well.
To compile this driver as a module, choose M here and read
<file:Documentation/scsi/scsi.txt>. The module will be called osst.
config BLK_DEV_SR
tristate "SCSI CDROM support"
depends on SCSI && BLK_DEV
@ -664,6 +642,41 @@ config SCSI_DMX3191D
To compile this driver as a module, choose M here: the
module will be called dmx3191d.
config SCSI_FDOMAIN
tristate
depends on SCSI
config SCSI_FDOMAIN_PCI
tristate "Future Domain TMC-3260/AHA-2920A PCI SCSI support"
depends on PCI && SCSI
select SCSI_FDOMAIN
help
This is support for Future Domain's PCI SCSI host adapters (TMC-3260)
and other adapters with PCI bus based on the Future Domain chipsets
(Adaptec AHA-2920A).
NOTE: Newer Adaptec AHA-2920C boards use the Adaptec AIC-7850 chip
and should use the aic7xxx driver ("Adaptec AIC7xxx chipset SCSI
controller support"). This Future Domain driver works with the older
Adaptec AHA-2920A boards with a Future Domain chip on them.
To compile this driver as a module, choose M here: the
module will be called fdomain_pci.
config SCSI_FDOMAIN_ISA
tristate "Future Domain 16xx ISA SCSI support"
depends on ISA && SCSI
select CHECK_SIGNATURE
select SCSI_FDOMAIN
help
This is support for Future Domain's 16-bit SCSI host adapters
(TMC-1660/1680, TMC-1650/1670, TMC-1610M/MER/MEX) and other adapters
with ISA bus based on the Future Domain chipsets (Quantum ISA-200S,
ISA-250MG; and at least one IBM board).
To compile this driver as a module, choose M here: the
module will be called fdomain_isa.
config SCSI_GDTH
tristate "Intel/ICP (former GDT SCSI Disk Array) RAID Controller support"
depends on PCI && SCSI


@ -76,6 +76,9 @@ obj-$(CONFIG_SCSI_AIC94XX) += aic94xx/
obj-$(CONFIG_SCSI_PM8001) += pm8001/
obj-$(CONFIG_SCSI_ISCI) += isci/
obj-$(CONFIG_SCSI_IPS) += ips.o
obj-$(CONFIG_SCSI_FDOMAIN) += fdomain.o
obj-$(CONFIG_SCSI_FDOMAIN_PCI) += fdomain_pci.o
obj-$(CONFIG_SCSI_FDOMAIN_ISA) += fdomain_isa.o
obj-$(CONFIG_SCSI_GENERIC_NCR5380) += g_NCR5380.o
obj-$(CONFIG_SCSI_QLOGIC_FAS) += qlogicfas408.o qlogicfas.o
obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o
@ -143,7 +146,6 @@ obj-$(CONFIG_SCSI_WD719X) += wd719x.o
obj-$(CONFIG_ARM) += arm/
obj-$(CONFIG_CHR_DEV_ST) += st.o
obj-$(CONFIG_CHR_DEV_OSST) += osst.o
obj-$(CONFIG_BLK_DEV_SD) += sd_mod.o
obj-$(CONFIG_BLK_DEV_SR) += sr_mod.o
obj-$(CONFIG_CHR_DEV_SG) += sg.o


@ -709,6 +709,8 @@ static void NCR5380_main(struct work_struct *work)
NCR5380_information_transfer(instance);
done = 0;
}
if (!hostdata->connected)
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
spin_unlock_irq(&hostdata->lock);
if (!done)
cond_resched();
@ -1110,8 +1112,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
spin_lock_irq(&hostdata->lock);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
NCR5380_reselect(instance);
if (!hostdata->connected)
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
shost_printk(KERN_ERR, instance, "reselection after won arbitration?\n");
goto out;
}
@ -1119,7 +1119,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
if (err < 0) {
spin_lock_irq(&hostdata->lock);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
/* Can't touch cmd if it has been reclaimed by the scsi ML */
if (!hostdata->selecting)
@ -1157,7 +1156,6 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
if (err < 0) {
shost_printk(KERN_ERR, instance, "select: REQ timeout\n");
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
goto out;
}
if (!hostdata->selecting) {
@ -1763,10 +1761,8 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
scmd_printk(KERN_INFO, cmd,
"switching to slow handshake\n");
cmd->device->borken = 1;
sink = 1;
do_abort(instance);
cmd->result = DID_ERROR << 16;
/* XXX - need to source or sink data here, as appropriate */
do_reset(instance);
bus_reset_cleanup(instance);
}
} else {
/* Transfer a small chunk so that the
@ -1826,9 +1822,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
*/
NCR5380_write(TARGET_COMMAND_REG, 0);
/* Enable reselect interrupts */
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
maybe_release_dma_irq(instance);
return;
case MESSAGE_REJECT:
@ -1860,8 +1853,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
*/
NCR5380_write(TARGET_COMMAND_REG, 0);
/* Enable reselect interrupts */
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
#ifdef SUN3_SCSI_VME
dregs->csr |= CSR_DMA_ENABLE;
#endif
@ -1964,7 +1955,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
cmd->result = DID_ERROR << 16;
complete_cmd(instance, cmd);
maybe_release_dma_irq(instance);
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
return;
}
msgout = NOP;


@ -235,7 +235,7 @@ struct NCR5380_cmd {
#define NCR5380_PIO_CHUNK_SIZE 256
/* Time limit (ms) to poll registers when IRQs are disabled, e.g. during PDMA */
#define NCR5380_REG_POLL_TIME 15
#define NCR5380_REG_POLL_TIME 10
static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr)
{


@ -1666,7 +1666,7 @@ scratch_ram {
size 6
/*
* These are reserved registers in the card's scratch ram on the 2742.
* The EISA configuraiton chip is mapped here. On Rev E. of the
* The EISA configuration chip is mapped here. On Rev E. of the
* aic7770, the sequencer can use this area for scratch, but the
* host cannot directly access these registers. On later chips, this
* area can be read and written by both the host and the sequencer.


@ -170,9 +170,7 @@ static int asd_init_target_ddb(struct domain_device *dev)
}
} else {
flags |= CONCURRENT_CONN_SUPP;
if (!dev->parent &&
(dev->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE))
if (!dev->parent && dev_is_expander(dev->dev_type))
asd_ddbsite_write_byte(asd_ha, ddb, MAX_CCONN,
4);
else


@ -66,7 +66,7 @@
#include "bnx2fc_constants.h"
#define BNX2FC_NAME "bnx2fc"
#define BNX2FC_VERSION "2.11.8"
#define BNX2FC_VERSION "2.12.10"
#define PFX "bnx2fc: "
@ -75,8 +75,9 @@
#define BNX2X_DOORBELL_PCI_BAR 2
#define BNX2FC_MAX_BD_LEN 0xffff
#define BNX2FC_BD_SPLIT_SZ 0x8000
#define BNX2FC_MAX_BDS_PER_CMD 256
#define BNX2FC_BD_SPLIT_SZ 0xffff
#define BNX2FC_MAX_BDS_PER_CMD 255
#define BNX2FC_FW_MAX_BDS_PER_CMD 255
#define BNX2FC_SQ_WQES_MAX 256
@ -433,8 +434,10 @@ struct bnx2fc_cmd {
void (*cb_func)(struct bnx2fc_els_cb_arg *cb_arg);
struct bnx2fc_els_cb_arg *cb_arg;
struct delayed_work timeout_work; /* timer for ULP timeouts */
struct completion tm_done;
int wait_for_comp;
struct completion abts_done;
struct completion cleanup_done;
int wait_for_abts_comp;
int wait_for_cleanup_comp;
u16 xid;
struct fcoe_err_report_entry err_entry;
struct fcoe_task_ctx_entry *task;
@ -455,6 +458,7 @@ struct bnx2fc_cmd {
#define BNX2FC_FLAG_ELS_TIMEOUT 0xb
#define BNX2FC_FLAG_CMD_LOST 0xc
#define BNX2FC_FLAG_SRR_SENT 0xd
#define BNX2FC_FLAG_ISSUE_CLEANUP_REQ 0xe
u8 rec_retry;
u8 srr_retry;
u32 srr_offset;


@ -610,7 +610,6 @@ int bnx2fc_send_rec(struct bnx2fc_cmd *orig_io_req)
rc = bnx2fc_initiate_els(tgt, ELS_REC, &rec, sizeof(rec),
bnx2fc_rec_compl, cb_arg,
r_a_tov);
rec_err:
if (rc) {
BNX2FC_IO_DBG(orig_io_req, "REC failed - release\n");
spin_lock_bh(&tgt->tgt_lock);
@ -618,6 +617,7 @@ rec_err:
spin_unlock_bh(&tgt->tgt_lock);
kfree(cb_arg);
}
rec_err:
return rc;
}
@ -654,7 +654,6 @@ int bnx2fc_send_srr(struct bnx2fc_cmd *orig_io_req, u32 offset, u8 r_ctl)
rc = bnx2fc_initiate_els(tgt, ELS_SRR, &srr, sizeof(srr),
bnx2fc_srr_compl, cb_arg,
r_a_tov);
srr_err:
if (rc) {
BNX2FC_IO_DBG(orig_io_req, "SRR failed - release\n");
spin_lock_bh(&tgt->tgt_lock);
@ -664,6 +663,7 @@ srr_err:
} else
set_bit(BNX2FC_FLAG_SRR_SENT, &orig_io_req->req_flags);
srr_err:
return rc;
}
@ -854,33 +854,57 @@ void bnx2fc_process_els_compl(struct bnx2fc_cmd *els_req,
kref_put(&els_req->refcount, bnx2fc_cmd_release);
}
#define BNX2FC_FCOE_MAC_METHOD_GRANGED_MAC 1
#define BNX2FC_FCOE_MAC_METHOD_FCF_MAP 2
#define BNX2FC_FCOE_MAC_METHOD_FCOE_SET_MAC 3
static void bnx2fc_flogi_resp(struct fc_seq *seq, struct fc_frame *fp,
void *arg)
{
struct fcoe_ctlr *fip = arg;
struct fc_exch *exch = fc_seq_exch(seq);
struct fc_lport *lport = exch->lp;
u8 *mac;
u8 op;
struct fc_frame_header *fh;
u8 *granted_mac;
u8 fcoe_mac[6];
u8 fc_map[3];
int method;
if (IS_ERR(fp))
goto done;
mac = fr_cb(fp)->granted_mac;
if (is_zero_ether_addr(mac)) {
op = fc_frame_payload_op(fp);
if (lport->vport) {
if (op == ELS_LS_RJT) {
printk(KERN_ERR PFX "bnx2fc_flogi_resp is LS_RJT\n");
fc_vport_terminate(lport->vport);
fc_frame_free(fp);
return;
}
}
fcoe_ctlr_recv_flogi(fip, lport, fp);
fh = fc_frame_header_get(fp);
granted_mac = fr_cb(fp)->granted_mac;
/*
* We set the source MAC for FCoE traffic based on the Granted MAC
* address from the switch.
*
* If granted_mac is non-zero, we use that.
* If the granted_mac is zeroed out, create the FCoE MAC based on
* the sel_fcf->fc_map and the d_id of the FLOGI frame.
* If sel_fcf->fc_map is 0, then we use the default FCF-MAC plus the
* d_id of the FLOGI frame.
*/
if (!is_zero_ether_addr(granted_mac)) {
ether_addr_copy(fcoe_mac, granted_mac);
method = BNX2FC_FCOE_MAC_METHOD_GRANGED_MAC;
} else if (fip->sel_fcf && fip->sel_fcf->fc_map != 0) {
hton24(fc_map, fip->sel_fcf->fc_map);
fcoe_mac[0] = fc_map[0];
fcoe_mac[1] = fc_map[1];
fcoe_mac[2] = fc_map[2];
fcoe_mac[3] = fh->fh_d_id[0];
fcoe_mac[4] = fh->fh_d_id[1];
fcoe_mac[5] = fh->fh_d_id[2];
method = BNX2FC_FCOE_MAC_METHOD_FCF_MAP;
} else {
fc_fcoe_set_mac(fcoe_mac, fh->fh_d_id);
method = BNX2FC_FCOE_MAC_METHOD_FCOE_SET_MAC;
}
if (!is_zero_ether_addr(mac))
fip->update_mac(lport, mac);
BNX2FC_HBA_DBG(lport, "fcoe_mac=%pM method=%d\n", fcoe_mac, method);
fip->update_mac(lport, fcoe_mac);
done:
fc_lport_flogi_resp(seq, fp, lport);
}


@ -2971,7 +2971,8 @@ static struct scsi_host_template bnx2fc_shost_template = {
.this_id = -1,
.cmd_per_lun = 3,
.sg_tablesize = BNX2FC_MAX_BDS_PER_CMD,
.max_sectors = 1024,
.dma_boundary = 0x7fff,
.max_sectors = 0x3fbf,
.track_queue_depth = 1,
.slave_configure = bnx2fc_slave_configure,
.shost_attrs = bnx2fc_host_attrs,


@ -70,7 +70,7 @@ static void bnx2fc_cmd_timeout(struct work_struct *work)
&io_req->req_flags)) {
/* Handle eh_abort timeout */
BNX2FC_IO_DBG(io_req, "eh_abort timed out\n");
complete(&io_req->tm_done);
complete(&io_req->abts_done);
} else if (test_bit(BNX2FC_FLAG_ISSUE_ABTS,
&io_req->req_flags)) {
/* Handle internally generated ABTS timeout */
@ -775,31 +775,32 @@ retry_tmf:
io_req->on_tmf_queue = 1;
list_add_tail(&io_req->link, &tgt->active_tm_queue);
init_completion(&io_req->tm_done);
io_req->wait_for_comp = 1;
init_completion(&io_req->abts_done);
io_req->wait_for_abts_comp = 1;
/* Ring doorbell */
bnx2fc_ring_doorbell(tgt);
spin_unlock_bh(&tgt->tgt_lock);
rc = wait_for_completion_timeout(&io_req->tm_done,
rc = wait_for_completion_timeout(&io_req->abts_done,
interface->tm_timeout * HZ);
spin_lock_bh(&tgt->tgt_lock);
io_req->wait_for_comp = 0;
io_req->wait_for_abts_comp = 0;
if (!(test_bit(BNX2FC_FLAG_TM_COMPL, &io_req->req_flags))) {
set_bit(BNX2FC_FLAG_TM_TIMEOUT, &io_req->req_flags);
if (io_req->on_tmf_queue) {
list_del_init(&io_req->link);
io_req->on_tmf_queue = 0;
}
io_req->wait_for_comp = 1;
io_req->wait_for_cleanup_comp = 1;
init_completion(&io_req->cleanup_done);
bnx2fc_initiate_cleanup(io_req);
spin_unlock_bh(&tgt->tgt_lock);
rc = wait_for_completion_timeout(&io_req->tm_done,
rc = wait_for_completion_timeout(&io_req->cleanup_done,
BNX2FC_FW_TIMEOUT);
spin_lock_bh(&tgt->tgt_lock);
io_req->wait_for_comp = 0;
io_req->wait_for_cleanup_comp = 0;
if (!rc)
kref_put(&io_req->refcount, bnx2fc_cmd_release);
}
@ -1047,6 +1048,9 @@ int bnx2fc_initiate_cleanup(struct bnx2fc_cmd *io_req)
/* Obtain free SQ entry */
bnx2fc_add_2_sq(tgt, xid);
/* Set flag that cleanup request is pending with the firmware */
set_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags);
/* Ring doorbell */
bnx2fc_ring_doorbell(tgt);
@ -1085,7 +1089,8 @@ static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
struct bnx2fc_rport *tgt = io_req->tgt;
unsigned int time_left;
io_req->wait_for_comp = 1;
init_completion(&io_req->cleanup_done);
io_req->wait_for_cleanup_comp = 1;
bnx2fc_initiate_cleanup(io_req);
spin_unlock_bh(&tgt->tgt_lock);
@ -1094,21 +1099,21 @@ static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
* Can't wait forever on cleanup response lest we let the SCSI error
* handler wait forever
*/
time_left = wait_for_completion_timeout(&io_req->tm_done,
time_left = wait_for_completion_timeout(&io_req->cleanup_done,
BNX2FC_FW_TIMEOUT);
io_req->wait_for_comp = 0;
if (!time_left)
if (!time_left) {
BNX2FC_IO_DBG(io_req, "%s(): Wait for cleanup timed out.\n",
__func__);
/*
* Release reference held by SCSI command the cleanup completion
* hits the BNX2FC_CLEANUP case in bnx2fc_process_cq_compl() and
* thus the SCSI command is not returned by bnx2fc_scsi_done().
*/
kref_put(&io_req->refcount, bnx2fc_cmd_release);
/*
* Put the extra reference to the SCSI command since it would
* not have been returned in this case.
*/
kref_put(&io_req->refcount, bnx2fc_cmd_release);
}
spin_lock_bh(&tgt->tgt_lock);
io_req->wait_for_cleanup_comp = 0;
return SUCCESS;
}
@ -1197,7 +1202,8 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
/* Move IO req to retire queue */
list_add_tail(&io_req->link, &tgt->io_retire_queue);
init_completion(&io_req->tm_done);
init_completion(&io_req->abts_done);
init_completion(&io_req->cleanup_done);
if (test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags)) {
printk(KERN_ERR PFX "eh_abort: io_req (xid = 0x%x) "
@ -1225,26 +1231,28 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
kref_put(&io_req->refcount,
bnx2fc_cmd_release); /* drop timer hold */
set_bit(BNX2FC_FLAG_EH_ABORT, &io_req->req_flags);
io_req->wait_for_comp = 1;
io_req->wait_for_abts_comp = 1;
rc = bnx2fc_initiate_abts(io_req);
if (rc == FAILED) {
io_req->wait_for_cleanup_comp = 1;
bnx2fc_initiate_cleanup(io_req);
spin_unlock_bh(&tgt->tgt_lock);
wait_for_completion(&io_req->tm_done);
wait_for_completion(&io_req->cleanup_done);
spin_lock_bh(&tgt->tgt_lock);
io_req->wait_for_comp = 0;
io_req->wait_for_cleanup_comp = 0;
goto done;
}
spin_unlock_bh(&tgt->tgt_lock);
/* Wait 2 * RA_TOV + 1 to be sure timeout function hasn't fired */
time_left = wait_for_completion_timeout(&io_req->tm_done,
(2 * rp->r_a_tov + 1) * HZ);
time_left = wait_for_completion_timeout(&io_req->abts_done,
(2 * rp->r_a_tov + 1) * HZ);
if (time_left)
BNX2FC_IO_DBG(io_req, "Timed out in eh_abort waiting for tm_done");
BNX2FC_IO_DBG(io_req,
"Timed out in eh_abort waiting for abts_done");
spin_lock_bh(&tgt->tgt_lock);
io_req->wait_for_comp = 0;
io_req->wait_for_abts_comp = 0;
if (test_bit(BNX2FC_FLAG_IO_COMPL, &io_req->req_flags)) {
BNX2FC_IO_DBG(io_req, "IO completed in a different context\n");
rc = SUCCESS;
@ -1319,10 +1327,29 @@ void bnx2fc_process_cleanup_compl(struct bnx2fc_cmd *io_req,
BNX2FC_IO_DBG(io_req, "Entered process_cleanup_compl "
"refcnt = %d, cmd_type = %d\n",
kref_read(&io_req->refcount), io_req->cmd_type);
/*
* Test whether there is a cleanup request pending. If not just
* exit.
*/
if (!test_and_clear_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ,
&io_req->req_flags))
return;
/*
* If we receive a cleanup completion for this request then the
* firmware will not give us an abort completion for this request
* so clear any ABTS pending flags.
*/
if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags) &&
!test_bit(BNX2FC_FLAG_ABTS_DONE, &io_req->req_flags)) {
set_bit(BNX2FC_FLAG_ABTS_DONE, &io_req->req_flags);
if (io_req->wait_for_abts_comp)
complete(&io_req->abts_done);
}
bnx2fc_scsi_done(io_req, DID_ERROR);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
if (io_req->wait_for_comp)
complete(&io_req->tm_done);
if (io_req->wait_for_cleanup_comp)
complete(&io_req->cleanup_done);
}
void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req,
@ -1346,6 +1373,16 @@ void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req,
return;
}
/*
* If we receive an ABTS completion here then we will not receive
* a cleanup completion so clear any cleanup pending flags.
*/
if (test_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags)) {
clear_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags);
if (io_req->wait_for_cleanup_comp)
complete(&io_req->cleanup_done);
}
/* Do not issue RRQ as this IO is already cleanedup */
if (test_and_set_bit(BNX2FC_FLAG_IO_CLEANUP,
&io_req->req_flags))
@ -1390,10 +1427,10 @@ void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req,
bnx2fc_cmd_timer_set(io_req, r_a_tov);
io_compl:
if (io_req->wait_for_comp) {
if (io_req->wait_for_abts_comp) {
if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT,
&io_req->req_flags))
complete(&io_req->tm_done);
complete(&io_req->abts_done);
} else {
/*
* We end up here when ABTS is issued as
@ -1577,9 +1614,9 @@ void bnx2fc_process_tm_compl(struct bnx2fc_cmd *io_req,
sc_cmd->scsi_done(sc_cmd);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
if (io_req->wait_for_comp) {
if (io_req->wait_for_abts_comp) {
BNX2FC_IO_DBG(io_req, "tm_compl - wake up the waiter\n");
complete(&io_req->tm_done);
complete(&io_req->abts_done);
}
}
@ -1623,6 +1660,7 @@ static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req)
u64 addr;
int i;
WARN_ON(scsi_sg_count(sc) > BNX2FC_MAX_BDS_PER_CMD);
/*
* Use dma_map_sg directly to ensure we're using the correct
* dev struct off of pcidev.
@ -1670,6 +1708,16 @@ static int bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req)
}
io_req->bd_tbl->bd_valid = bd_count;
/*
* Return the command to ML if BD count exceeds the max number
* that can be handled by FW.
*/
if (bd_count > BNX2FC_FW_MAX_BDS_PER_CMD) {
pr_err("bd_count = %d exceeded FW supported max BD(255), task_id = 0x%x\n",
bd_count, io_req->xid);
return -ENOMEM;
}
return 0;
}
@ -1926,10 +1974,10 @@ void bnx2fc_process_scsi_cmd_compl(struct bnx2fc_cmd *io_req,
* between command abort and (late) completion.
*/
BNX2FC_IO_DBG(io_req, "xid not on active_cmd_queue\n");
if (io_req->wait_for_comp)
if (io_req->wait_for_abts_comp)
if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT,
&io_req->req_flags))
complete(&io_req->tm_done);
complete(&io_req->abts_done);
}
bnx2fc_unmap_sg_list(io_req);


@ -187,7 +187,7 @@ void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
/* Handle eh_abort timeout */
BNX2FC_IO_DBG(io_req, "eh_abort for IO "
"cleaned up\n");
complete(&io_req->tm_done);
complete(&io_req->abts_done);
}
kref_put(&io_req->refcount,
bnx2fc_cmd_release); /* drop timer hold */
@ -210,8 +210,8 @@ void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
list_del_init(&io_req->link);
io_req->on_tmf_queue = 0;
BNX2FC_IO_DBG(io_req, "tm_queue cleanup\n");
if (io_req->wait_for_comp)
complete(&io_req->tm_done);
if (io_req->wait_for_abts_comp)
complete(&io_req->abts_done);
}
list_for_each_entry_safe(io_req, tmp, &tgt->els_queue, link) {
@ -251,8 +251,8 @@ void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
/* Handle eh_abort timeout */
BNX2FC_IO_DBG(io_req, "eh_abort for IO "
"in retire_q\n");
if (io_req->wait_for_comp)
complete(&io_req->tm_done);
if (io_req->wait_for_abts_comp)
complete(&io_req->abts_done);
}
kref_put(&io_req->refcount, bnx2fc_cmd_release);
}


@ -1665,8 +1665,12 @@ static u8 get_iscsi_dcb_priority(struct net_device *ndev)
return 0;
if (caps & DCB_CAP_DCBX_VER_IEEE) {
iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_ANY;
iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_STREAM;
rv = dcb_ieee_getapp_mask(ndev, &iscsi_dcb_app);
if (!rv) {
iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_ANY;
rv = dcb_ieee_getapp_mask(ndev, &iscsi_dcb_app);
}
} else if (caps & DCB_CAP_DCBX_VER_CEE) {
iscsi_dcb_app.selector = DCB_APP_IDTYPE_PORTNUM;
rv = dcb_getapp(ndev, &iscsi_dcb_app);
@ -2260,7 +2264,8 @@ cxgb4_dcb_change_notify(struct notifier_block *self, unsigned long val,
u8 priority;
if (iscsi_app->dcbx & DCB_CAP_DCBX_VER_IEEE) {
if (iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_ANY)
if ((iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_STREAM) &&
(iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_ANY))
return NOTIFY_DONE;
priority = iscsi_app->app.priority;


@ -0,0 +1,597 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for Future Domain TMC-16x0 and TMC-3260 SCSI host adapters
* Copyright 2019 Ondrej Zary
*
* Original driver by
* Rickard E. Faith, faith@cs.unc.edu
*
* Future Domain BIOS versions supported for autodetect:
* 2.0, 3.0, 3.2, 3.4 (1.0), 3.5 (2.0), 3.6, 3.61
* Chips supported:
* TMC-1800, TMC-18C50, TMC-18C30, TMC-36C70
* Boards supported:
* Future Domain TMC-1650, TMC-1660, TMC-1670, TMC-1680, TMC-1610M/MER/MEX
* Future Domain TMC-3260 (PCI)
* Quantum ISA-200S, ISA-250MG
* Adaptec AHA-2920A (PCI) [BUT *NOT* AHA-2920C -- use aic7xxx instead]
* IBM ?
*
* NOTE:
*
* The Adaptec AHA-2920C has an Adaptec AIC-7850 chip on it.
* Use the aic7xxx driver for this board.
*
* The Adaptec AHA-2920A has a Future Domain chip on it, so this is the right
* driver for that card. Unfortunately, the boxes will probably just say
* "2920", so you'll have to look on the card for a Future Domain logo, or a
* letter after the 2920.
*
* If you have a TMC-8xx or TMC-9xx board, then this is not the driver for
* your board.
*
* DESCRIPTION:
*
* This is the Linux low-level SCSI driver for Future Domain TMC-1660/1680
* TMC-1650/1670, and TMC-3260 SCSI host adapters. The 1650 and 1670 have a
* 25-pin external connector, whereas the 1660 and 1680 have a SCSI-2 50-pin
* high-density external connector. The 1670 and 1680 have floppy disk
* controllers built in. The TMC-3260 is a PCI bus card.
*
* Future Domain's older boards are based on the TMC-1800 chip, and this
* driver was originally written for a TMC-1680 board with the TMC-1800 chip.
* More recently, boards are being produced with the TMC-18C50 and TMC-18C30
* chips.
*
* Please note that the drive ordering that Future Domain implemented in BIOS
* versions 3.4 and 3.5 is the opposite of the order (currently) used by the
* rest of the SCSI industry.
*
*
* REFERENCES USED:
*
* "TMC-1800 SCSI Chip Specification (FDC-1800T)", Future Domain Corporation,
* 1990.
*
* "Technical Reference Manual: 18C50 SCSI Host Adapter Chip", Future Domain
* Corporation, January 1992.
*
* "LXT SCSI Products: Specifications and OEM Technical Manual (Revision
* B/September 1991)", Maxtor Corporation, 1991.
*
* "7213S product Manual (Revision P3)", Maxtor Corporation, 1992.
*
* "Draft Proposed American National Standard: Small Computer System
* Interface - 2 (SCSI-2)", Global Engineering Documents. (X3T9.2/86-109,
* revision 10h, October 17, 1991)
*
* Private communications, Drew Eckhardt (drew@cs.colorado.edu) and Eric
* Youngdale (ericy@cais.com), 1992.
*
* Private communication, Tuong Le (Future Domain Engineering department),
* 1994. (Disk geometry computations for Future Domain BIOS version 3.4, and
* TMC-18C30 detection.)
*
* Hogan, Thom. The Programmer's PC Sourcebook. Microsoft Press, 1988. Page
* 60 (2.39: Disk Partition Table Layout).
*
* "18C30 Technical Reference Manual", Future Domain Corporation, 1993, page
* 6-1.
*/
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/workqueue.h>
#include <scsi/scsicam.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include "fdomain.h"
/*
* FIFO_COUNT: The host adapter has an 8K cache (host adapters based on the
* 18C30 chip have a 2k cache). When this many 512 byte blocks are filled by
* the SCSI device, an interrupt will be raised. Therefore, this could be as
* low as 0, or as high as 16. Note, however, that values which are too high
* or too low seem to prevent any interrupts from occurring, and thereby lock
* up the machine.
*/
#define FIFO_COUNT 2 /* Number of 512 byte blocks before INTR */
#define PARITY_MASK ACTL_PAREN /* Parity enabled, 0 = disabled */
enum chip_type {
unknown = 0x00,
tmc1800 = 0x01,
tmc18c50 = 0x02,
tmc18c30 = 0x03,
};
struct fdomain {
int base;
struct scsi_cmnd *cur_cmd;
enum chip_type chip;
struct work_struct work;
};
static inline void fdomain_make_bus_idle(struct fdomain *fd)
{
outb(0, fd->base + REG_BCTL);
outb(0, fd->base + REG_MCTL);
if (fd->chip == tmc18c50 || fd->chip == tmc18c30)
/* Clear forced intr. */
outb(ACTL_RESET | ACTL_CLRFIRQ | PARITY_MASK,
fd->base + REG_ACTL);
else
outb(ACTL_RESET | PARITY_MASK, fd->base + REG_ACTL);
}
static enum chip_type fdomain_identify(int port)
{
u16 id = inb(port + REG_ID_LSB) | inb(port + REG_ID_MSB) << 8;
switch (id) {
case 0x6127:
return tmc1800;
case 0x60e9: /* 18c50 or 18c30 */
break;
default:
return unknown;
}
/* Try to toggle 32-bit mode. This only works on an 18c30 chip. */
outb(CFG2_32BIT, port + REG_CFG2);
if ((inb(port + REG_CFG2) & CFG2_32BIT)) {
outb(0, port + REG_CFG2);
if ((inb(port + REG_CFG2) & CFG2_32BIT) == 0)
return tmc18c30;
}
/* If that failed, we are an 18c50. */
return tmc18c50;
}
static int fdomain_test_loopback(int base)
{
int i;
for (i = 0; i < 255; i++) {
outb(i, base + REG_LOOPBACK);
if (inb(base + REG_LOOPBACK) != i)
return 1;
}
return 0;
}
static void fdomain_reset(int base)
{
outb(1, base + REG_BCTL);
mdelay(20);
outb(0, base + REG_BCTL);
mdelay(1150);
outb(0, base + REG_MCTL);
outb(PARITY_MASK, base + REG_ACTL);
}
static int fdomain_select(struct Scsi_Host *sh, int target)
{
int status;
unsigned long timeout;
struct fdomain *fd = shost_priv(sh);
outb(BCTL_BUSEN | BCTL_SEL, fd->base + REG_BCTL);
outb(BIT(sh->this_id) | BIT(target), fd->base + REG_SCSI_DATA_NOACK);
/* Stop arbitration and enable parity */
outb(PARITY_MASK, fd->base + REG_ACTL);
timeout = 350; /* 350 msec */
do {
status = inb(fd->base + REG_BSTAT);
if (status & BSTAT_BSY) {
/* Enable SCSI Bus */
/* (on error, should make bus idle with 0) */
outb(BCTL_BUSEN, fd->base + REG_BCTL);
return 0;
}
mdelay(1);
} while (--timeout);
fdomain_make_bus_idle(fd);
return 1;
}
static void fdomain_finish_cmd(struct fdomain *fd, int result)
{
outb(0, fd->base + REG_ICTL);
fdomain_make_bus_idle(fd);
fd->cur_cmd->result = result;
fd->cur_cmd->scsi_done(fd->cur_cmd);
fd->cur_cmd = NULL;
}
static void fdomain_read_data(struct scsi_cmnd *cmd)
{
struct fdomain *fd = shost_priv(cmd->device->host);
unsigned char *virt, *ptr;
size_t offset, len;
while ((len = inw(fd->base + REG_FIFO_COUNT)) > 0) {
offset = scsi_bufflen(cmd) - scsi_get_resid(cmd);
virt = scsi_kmap_atomic_sg(scsi_sglist(cmd), scsi_sg_count(cmd),
&offset, &len);
ptr = virt + offset;
if (len & 1)
*ptr++ = inb(fd->base + REG_FIFO);
if (len > 1)
insw(fd->base + REG_FIFO, ptr, len >> 1);
scsi_set_resid(cmd, scsi_get_resid(cmd) - len);
scsi_kunmap_atomic_sg(virt);
}
}
static void fdomain_write_data(struct scsi_cmnd *cmd)
{
struct fdomain *fd = shost_priv(cmd->device->host);
/* 8k FIFO for pre-tmc18c30 chips, 2k FIFO for tmc18c30 */
int FIFO_Size = fd->chip == tmc18c30 ? 0x800 : 0x2000;
unsigned char *virt, *ptr;
size_t offset, len;
while ((len = FIFO_Size - inw(fd->base + REG_FIFO_COUNT)) > 512) {
offset = scsi_bufflen(cmd) - scsi_get_resid(cmd);
if (len + offset > scsi_bufflen(cmd)) {
len = scsi_bufflen(cmd) - offset;
if (len == 0)
break;
}
virt = scsi_kmap_atomic_sg(scsi_sglist(cmd), scsi_sg_count(cmd),
&offset, &len);
ptr = virt + offset;
if (len & 1)
outb(*ptr++, fd->base + REG_FIFO);
if (len > 1)
outsw(fd->base + REG_FIFO, ptr, len >> 1);
scsi_set_resid(cmd, scsi_get_resid(cmd) - len);
scsi_kunmap_atomic_sg(virt);
}
}
static void fdomain_work(struct work_struct *work)
{
struct fdomain *fd = container_of(work, struct fdomain, work);
struct Scsi_Host *sh = container_of((void *)fd, struct Scsi_Host,
hostdata);
struct scsi_cmnd *cmd = fd->cur_cmd;
unsigned long flags;
int status;
int done = 0;
spin_lock_irqsave(sh->host_lock, flags);
if (cmd->SCp.phase & in_arbitration) {
status = inb(fd->base + REG_ASTAT);
if (!(status & ASTAT_ARB)) {
fdomain_finish_cmd(fd, DID_BUS_BUSY << 16);
goto out;
}
cmd->SCp.phase = in_selection;
outb(ICTL_SEL | FIFO_COUNT, fd->base + REG_ICTL);
outb(BCTL_BUSEN | BCTL_SEL, fd->base + REG_BCTL);
outb(BIT(cmd->device->host->this_id) | BIT(scmd_id(cmd)),
fd->base + REG_SCSI_DATA_NOACK);
/* Stop arbitration and enable parity */
outb(ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL);
goto out;
} else if (cmd->SCp.phase & in_selection) {
status = inb(fd->base + REG_BSTAT);
if (!(status & BSTAT_BSY)) {
/* Try again, for slow devices */
if (fdomain_select(cmd->device->host, scmd_id(cmd))) {
fdomain_finish_cmd(fd, DID_NO_CONNECT << 16);
goto out;
}
/* Stop arbitration and enable parity */
outb(ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL);
}
cmd->SCp.phase = in_other;
outb(ICTL_FIFO | ICTL_REQ | FIFO_COUNT, fd->base + REG_ICTL);
outb(BCTL_BUSEN, fd->base + REG_BCTL);
goto out;
}
/* cur_cmd->SCp.phase == in_other: this is the body of the routine */
status = inb(fd->base + REG_BSTAT);
if (status & BSTAT_REQ) {
switch (status & 0x0e) {
case BSTAT_CMD: /* COMMAND OUT */
outb(cmd->cmnd[cmd->SCp.sent_command++],
fd->base + REG_SCSI_DATA);
break;
case 0: /* DATA OUT -- tmc18c50/tmc18c30 only */
if (fd->chip != tmc1800 && !cmd->SCp.have_data_in) {
cmd->SCp.have_data_in = -1;
outb(ACTL_IRQEN | ACTL_FIFOWR | ACTL_FIFOEN |
PARITY_MASK, fd->base + REG_ACTL);
}
break;
case BSTAT_IO: /* DATA IN -- tmc18c50/tmc18c30 only */
if (fd->chip != tmc1800 && !cmd->SCp.have_data_in) {
cmd->SCp.have_data_in = 1;
outb(ACTL_IRQEN | ACTL_FIFOEN | PARITY_MASK,
fd->base + REG_ACTL);
}
break;
case BSTAT_CMD | BSTAT_IO: /* STATUS IN */
cmd->SCp.Status = inb(fd->base + REG_SCSI_DATA);
break;
case BSTAT_MSG | BSTAT_CMD: /* MESSAGE OUT */
outb(MESSAGE_REJECT, fd->base + REG_SCSI_DATA);
break;
case BSTAT_MSG | BSTAT_IO | BSTAT_CMD: /* MESSAGE IN */
cmd->SCp.Message = inb(fd->base + REG_SCSI_DATA);
if (!cmd->SCp.Message)
++done;
break;
}
}
if (fd->chip == tmc1800 && !cmd->SCp.have_data_in &&
cmd->SCp.sent_command >= cmd->cmd_len) {
if (cmd->sc_data_direction == DMA_TO_DEVICE) {
cmd->SCp.have_data_in = -1;
outb(ACTL_IRQEN | ACTL_FIFOWR | ACTL_FIFOEN |
PARITY_MASK, fd->base + REG_ACTL);
} else {
cmd->SCp.have_data_in = 1;
outb(ACTL_IRQEN | ACTL_FIFOEN | PARITY_MASK,
fd->base + REG_ACTL);
}
}
if (cmd->SCp.have_data_in == -1) /* DATA OUT */
fdomain_write_data(cmd);
if (cmd->SCp.have_data_in == 1) /* DATA IN */
fdomain_read_data(cmd);
if (done) {
fdomain_finish_cmd(fd, (cmd->SCp.Status & 0xff) |
((cmd->SCp.Message & 0xff) << 8) |
(DID_OK << 16));
} else {
if (cmd->SCp.phase & disconnect) {
outb(ICTL_FIFO | ICTL_SEL | ICTL_REQ | FIFO_COUNT,
fd->base + REG_ICTL);
outb(0, fd->base + REG_BCTL);
} else
outb(ICTL_FIFO | ICTL_REQ | FIFO_COUNT,
fd->base + REG_ICTL);
}
out:
spin_unlock_irqrestore(sh->host_lock, flags);
}
static irqreturn_t fdomain_irq(int irq, void *dev_id)
{
struct fdomain *fd = dev_id;
/* Is it our IRQ? */
if ((inb(fd->base + REG_ASTAT) & ASTAT_IRQ) == 0)
return IRQ_NONE;
outb(0, fd->base + REG_ICTL);
/* We usually have one spurious interrupt after each command. */
if (!fd->cur_cmd) /* Spurious interrupt */
return IRQ_NONE;
schedule_work(&fd->work);
return IRQ_HANDLED;
}
static int fdomain_queue(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
{
struct fdomain *fd = shost_priv(cmd->device->host);
unsigned long flags;
cmd->SCp.Status = 0;
cmd->SCp.Message = 0;
cmd->SCp.have_data_in = 0;
cmd->SCp.sent_command = 0;
cmd->SCp.phase = in_arbitration;
scsi_set_resid(cmd, scsi_bufflen(cmd));
spin_lock_irqsave(sh->host_lock, flags);
fd->cur_cmd = cmd;
fdomain_make_bus_idle(fd);
/* Start arbitration */
outb(0, fd->base + REG_ICTL);
outb(0, fd->base + REG_BCTL); /* Disable data drivers */
/* Set our id bit */
outb(BIT(cmd->device->host->this_id), fd->base + REG_SCSI_DATA_NOACK);
outb(ICTL_ARB, fd->base + REG_ICTL);
/* Start arbitration */
outb(ACTL_ARB | ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL);
spin_unlock_irqrestore(sh->host_lock, flags);
return 0;
}
static int fdomain_abort(struct scsi_cmnd *cmd)
{
struct Scsi_Host *sh = cmd->device->host;
struct fdomain *fd = shost_priv(sh);
unsigned long flags;
if (!fd->cur_cmd)
return FAILED;
spin_lock_irqsave(sh->host_lock, flags);
fdomain_make_bus_idle(fd);
fd->cur_cmd->SCp.phase |= aborted;
fd->cur_cmd->result = DID_ABORT << 16;
/* Aborts are not done well. . . */
fdomain_finish_cmd(fd, DID_ABORT << 16);
spin_unlock_irqrestore(sh->host_lock, flags);
return SUCCESS;
}
static int fdomain_host_reset(struct scsi_cmnd *cmd)
{
struct Scsi_Host *sh = cmd->device->host;
struct fdomain *fd = shost_priv(sh);
unsigned long flags;
spin_lock_irqsave(sh->host_lock, flags);
fdomain_reset(fd->base);
spin_unlock_irqrestore(sh->host_lock, flags);
return SUCCESS;
}
static int fdomain_biosparam(struct scsi_device *sdev,
struct block_device *bdev, sector_t capacity,
int geom[])
{
unsigned char *p = scsi_bios_ptable(bdev);
if (p && p[65] == 0xaa && p[64] == 0x55 /* Partition table valid */
&& p[4]) { /* Partition type */
geom[0] = p[5] + 1; /* heads */
geom[1] = p[6] & 0x3f; /* sectors */
} else {
if (capacity >= 0x7e0000) {
geom[0] = 255; /* heads */
geom[1] = 63; /* sectors */
} else if (capacity >= 0x200000) {
geom[0] = 128; /* heads */
geom[1] = 63; /* sectors */
} else {
geom[0] = 64; /* heads */
geom[1] = 32; /* sectors */
}
}
geom[2] = sector_div(capacity, geom[0] * geom[1]);
kfree(p);
return 0;
}
static struct scsi_host_template fdomain_template = {
.module = THIS_MODULE,
.name = "Future Domain TMC-16x0",
.proc_name = "fdomain",
.queuecommand = fdomain_queue,
.eh_abort_handler = fdomain_abort,
.eh_host_reset_handler = fdomain_host_reset,
.bios_param = fdomain_biosparam,
.can_queue = 1,
.this_id = 7,
.sg_tablesize = 64,
.dma_boundary = PAGE_SIZE - 1,
};
struct Scsi_Host *fdomain_create(int base, int irq, int this_id,
struct device *dev)
{
struct Scsi_Host *sh;
struct fdomain *fd;
enum chip_type chip;
static const char * const chip_names[] = {
"Unknown", "TMC-1800", "TMC-18C50", "TMC-18C30"
};
unsigned long irq_flags = 0;
chip = fdomain_identify(base);
if (!chip)
return NULL;
fdomain_reset(base);
if (fdomain_test_loopback(base))
return NULL;
if (!irq) {
dev_err(dev, "card has no IRQ assigned");
return NULL;
}
sh = scsi_host_alloc(&fdomain_template, sizeof(struct fdomain));
if (!sh)
return NULL;
if (this_id)
sh->this_id = this_id & 0x07;
sh->irq = irq;
sh->io_port = base;
sh->n_io_port = FDOMAIN_REGION_SIZE;
fd = shost_priv(sh);
fd->base = base;
fd->chip = chip;
INIT_WORK(&fd->work, fdomain_work);
if (dev_is_pci(dev) || !strcmp(dev->bus->name, "pcmcia"))
irq_flags = IRQF_SHARED;
if (request_irq(irq, fdomain_irq, irq_flags, "fdomain", fd))
goto fail_put;
shost_printk(KERN_INFO, sh, "%s chip at 0x%x irq %d SCSI ID %d\n",
dev_is_pci(dev) ? "TMC-36C70 (PCI bus)" : chip_names[chip],
base, irq, sh->this_id);
if (scsi_add_host(sh, dev))
goto fail_free_irq;
scsi_scan_host(sh);
return sh;
fail_free_irq:
free_irq(irq, fd);
fail_put:
scsi_host_put(sh);
return NULL;
}
EXPORT_SYMBOL_GPL(fdomain_create);
int fdomain_destroy(struct Scsi_Host *sh)
{
struct fdomain *fd = shost_priv(sh);
cancel_work_sync(&fd->work);
scsi_remove_host(sh);
if (sh->irq)
free_irq(sh->irq, fd);
scsi_host_put(sh);
return 0;
}
EXPORT_SYMBOL_GPL(fdomain_destroy);
#ifdef CONFIG_PM_SLEEP
static int fdomain_resume(struct device *dev)
{
struct fdomain *fd = shost_priv(dev_get_drvdata(dev));
fdomain_reset(fd->base);
return 0;
}
static SIMPLE_DEV_PM_OPS(fdomain_pm_ops, NULL, fdomain_resume);
#endif /* CONFIG_PM_SLEEP */
MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
MODULE_DESCRIPTION("Future Domain TMC-16x0/TMC-3260 SCSI driver");
MODULE_LICENSE("GPL");


@ -0,0 +1,114 @@
/* SPDX-License-Identifier: GPL-2.0 */
#define FDOMAIN_REGION_SIZE 0x10
#define FDOMAIN_BIOS_SIZE 0x2000
enum {
in_arbitration = 0x02,
in_selection = 0x04,
in_other = 0x08,
disconnect = 0x10,
aborted = 0x20,
sent_ident = 0x40,
};
/* (@) = not present on TMC1800, (#) = not present on TMC1800 and TMC18C50 */
#define REG_SCSI_DATA 0 /* R/W: SCSI Data (with ACK) */
#define REG_BSTAT 1 /* R: SCSI Bus Status */
#define BSTAT_BSY BIT(0) /* Busy */
#define BSTAT_MSG BIT(1) /* Message */
#define BSTAT_IO BIT(2) /* Input/Output */
#define BSTAT_CMD BIT(3) /* Command/Data */
#define BSTAT_REQ BIT(4) /* Request and Not Ack */
#define BSTAT_SEL BIT(5) /* Select */
#define BSTAT_ACK BIT(6) /* Acknowledge and Request */
#define BSTAT_ATN BIT(7) /* Attention */
#define REG_BCTL 1 /* W: SCSI Bus Control */
#define BCTL_RST BIT(0) /* Bus Reset */
#define BCTL_SEL BIT(1) /* Select */
#define BCTL_BSY BIT(2) /* Busy */
#define BCTL_ATN BIT(3) /* Attention */
#define BCTL_IO BIT(4) /* Input/Output */
#define BCTL_CMD BIT(5) /* Command/Data */
#define BCTL_MSG BIT(6) /* Message */
#define BCTL_BUSEN BIT(7) /* Enable bus drivers */
#define REG_ASTAT 2 /* R: Adapter Status 1 */
#define ASTAT_IRQ BIT(0) /* Interrupt active */
#define ASTAT_ARB BIT(1) /* Arbitration complete */
#define ASTAT_PARERR BIT(2) /* Parity error */
#define ASTAT_RST BIT(3) /* SCSI reset occurred */
#define ASTAT_FIFODIR BIT(4) /* FIFO direction */
#define ASTAT_FIFOEN BIT(5) /* FIFO enabled */
#define ASTAT_PAREN BIT(6) /* Parity enabled */
#define ASTAT_BUSEN BIT(7) /* Bus drivers enabled */
#define REG_ICTL 2 /* W: Interrupt Control */
#define ICTL_FIFO_MASK 0x0f /* FIFO threshold, 1/16 FIFO size */
#define ICTL_FIFO BIT(4) /* Int. on FIFO count */
#define ICTL_ARB BIT(5) /* Int. on Arbitration complete */
#define ICTL_SEL BIT(6) /* Int. on SCSI Select */
#define ICTL_REQ BIT(7) /* Int. on SCSI Request */
#define REG_FSTAT 3 /* R: Adapter Status 2 (FIFO) - (@) */
#define FSTAT_ONOTEMPTY BIT(0) /* Output FIFO not empty */
#define FSTAT_INOTEMPTY BIT(1) /* Input FIFO not empty */
#define FSTAT_NOTEMPTY BIT(2) /* Main FIFO not empty */
#define FSTAT_NOTFULL BIT(3) /* Main FIFO not full */
#define REG_MCTL 3 /* W: SCSI Data Mode Control */
#define MCTL_ACK_MASK 0x0f /* Acknowledge period */
#define MCTL_ACTDEASS BIT(4) /* Active deassert of REQ and ACK */
#define MCTL_TARGET BIT(5) /* Enable target mode */
#define MCTL_FASTSYNC BIT(6) /* Enable Fast Synchronous */
#define MCTL_SYNC BIT(7) /* Enable Synchronous */
#define REG_INTCOND 4 /* R: Interrupt Condition - (@) */
#define IRQ_FIFO BIT(1) /* FIFO interrupt */
#define IRQ_REQ BIT(2) /* SCSI Request interrupt */
#define IRQ_SEL BIT(3) /* SCSI Select interrupt */
#define IRQ_ARB BIT(4) /* SCSI Arbitration interrupt */
#define IRQ_RST BIT(5) /* SCSI Reset interrupt */
#define IRQ_FORCED BIT(6) /* Forced interrupt */
#define IRQ_TIMEOUT BIT(7) /* Bus timeout */
#define REG_ACTL 4 /* W: Adapter Control 1 */
#define ACTL_RESET BIT(0) /* Reset FIFO, parity, reset int. */
#define ACTL_FIRQ BIT(1) /* Set Forced interrupt */
#define ACTL_ARB BIT(2) /* Initiate Bus Arbitration */
#define ACTL_PAREN BIT(3) /* Enable SCSI Parity */
#define ACTL_IRQEN BIT(4) /* Enable interrupts */
#define ACTL_CLRFIRQ BIT(5) /* Clear Forced interrupt */
#define ACTL_FIFOWR BIT(6) /* FIFO Direction (1=write) */
#define ACTL_FIFOEN BIT(7) /* Enable FIFO */
#define REG_ID_LSB 5 /* R: ID Code (LSB) */
#define REG_ACTL2 5 /* Adapter Control 2 - (@) */
#define ACTL2_RAMOVRLY BIT(0) /* Enable RAM overlay */
#define ACTL2_SLEEP BIT(7) /* Sleep mode */
#define REG_ID_MSB 6 /* R: ID Code (MSB) */
#define REG_LOOPBACK 7 /* R/W: Loopback */
#define REG_SCSI_DATA_NOACK 8 /* R/W: SCSI Data (no ACK) */
#define REG_ASTAT3 9 /* R: Adapter Status 3 */
#define ASTAT3_ACTDEASS BIT(0) /* Active deassert enabled */
#define ASTAT3_RAMOVRLY BIT(1) /* RAM overlay enabled */
#define ASTAT3_TARGERR BIT(2) /* Target error */
#define ASTAT3_IRQEN BIT(3) /* Interrupts enabled */
#define ASTAT3_IRQMASK 0xf0 /* Enabled interrupts mask */
#define REG_CFG1 10 /* R: Configuration Register 1 */
#define CFG1_BUS BIT(0) /* 0 = ISA */
#define CFG1_IRQ_MASK 0x0e /* IRQ jumpers */
#define CFG1_IO_MASK 0x30 /* I/O base jumpers */
#define CFG1_BIOS_MASK 0xc0 /* BIOS base jumpers */
#define REG_CFG2 11 /* R/W: Configuration Register 2 (@) */
#define CFG2_ROMDIS BIT(0) /* ROM disabled */
#define CFG2_RAMDIS BIT(1) /* RAM disabled */
#define CFG2_IRQEDGE BIT(2) /* Edge-triggered interrupts */
#define CFG2_NOWS BIT(3) /* No wait states */
#define CFG2_32BIT BIT(7) /* 32-bit mode */
#define REG_FIFO 12 /* R/W: FIFO */
#define REG_FIFO_COUNT 14 /* R: FIFO Data Count */
#ifdef CONFIG_PM_SLEEP
static const struct dev_pm_ops fdomain_pm_ops;
#define FDOMAIN_PM_OPS (&fdomain_pm_ops)
#else
#define FDOMAIN_PM_OPS NULL
#endif /* CONFIG_PM_SLEEP */
struct Scsi_Host *fdomain_create(int base, int irq, int this_id,
struct device *dev);
int fdomain_destroy(struct Scsi_Host *sh);

View File

@ -0,0 +1,222 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/io.h>
#include <linux/isa.h>
#include <scsi/scsi_host.h>
#include "fdomain.h"
#define MAXBOARDS_PARAM 4
static int io[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
module_param_hw_array(io, int, ioport, NULL, 0);
MODULE_PARM_DESC(io, "base I/O address of controller (0x140, 0x150, 0x160, 0x170)");
static int irq[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
module_param_hw_array(irq, int, irq, NULL, 0);
MODULE_PARM_DESC(irq, "IRQ of controller (0=auto [default])");
static int scsi_id[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
module_param_hw_array(scsi_id, int, other, NULL, 0);
MODULE_PARM_DESC(scsi_id, "SCSI ID of controller (default = 7)");
static unsigned long addresses[] = {
0xc8000,
0xca000,
0xce000,
0xde000,
};
#define ADDRESS_COUNT ARRAY_SIZE(addresses)
static unsigned short ports[] = { 0x140, 0x150, 0x160, 0x170 };
#define PORT_COUNT ARRAY_SIZE(ports)
static unsigned short irqs[] = { 3, 5, 10, 11, 12, 14, 15, 0 };
/* This driver works *ONLY* for Future Domain cards using the TMC-1800,
* TMC-18C50, or TMC-18C30 chip. This includes models TMC-1650, 1660, 1670,
* and 1680. These are all 16-bit cards.
* BIOS versions prior to 3.2 assigned SCSI ID 6 to the SCSI adapter.
*
* The following BIOS signatures are for boards which do *NOT*
* work with this driver (these TMC-8xx and TMC-9xx boards may work with the
* Seagate driver):
*
* FUTURE DOMAIN CORP. (C) 1986-1988 V4.0I 03/16/88
* FUTURE DOMAIN CORP. (C) 1986-1989 V5.0C2/14/89
* FUTURE DOMAIN CORP. (C) 1986-1989 V6.0A7/28/89
* FUTURE DOMAIN CORP. (C) 1986-1990 V6.0105/31/90
* FUTURE DOMAIN CORP. (C) 1986-1990 V6.0209/18/90
* FUTURE DOMAIN CORP. (C) 1986-1990 V7.009/18/90
* FUTURE DOMAIN CORP. (C) 1992 V8.00.004/02/92
*
* (The cards which do *NOT* work are all 8-bit cards -- although some of
* them have a 16-bit form factor, the upper 8 bits are used only for IRQs
* and are *NOT* used for data. You can tell the difference by following
* the tracings on the circuit board -- if only the IRQ lines are involved,
* you have an "8-bit" card, and should *NOT* use this driver.)
*/
static struct signature {
const char *signature;
int offset;
int length;
int this_id;
int base_offset;
} signatures[] = {
/*          1         2         3         4         5         6 */
/* 123456789012345678901234567890123456789012345678901234567890 */
{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.07/28/89", 5, 50, 6, 0x1fcc },
{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V1.07/28/89", 5, 50, 6, 0x1fcc },
{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.07/28/89", 72, 50, 6, 0x1fa2 },
{ "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.0", 73, 43, 6, 0x1fa2 },
{ "FUTURE DOMAIN CORP. (C) 1991 1800-V2.0.", 72, 39, 6, 0x1fa3 },
{ "FUTURE DOMAIN CORP. (C) 1992 V3.00.004/02/92", 5, 44, 6, 0 },
{ "FUTURE DOMAIN TMC-18XX (C) 1993 V3.203/12/93", 5, 44, 7, 0 },
{ "IBM F1 P2 BIOS v1.0011/09/92", 5, 28, 7, 0x1ff3 },
{ "IBM F1 P2 BIOS v1.0104/29/93", 5, 28, 7, 0 },
{ "Future Domain Corp. V1.0008/18/93", 5, 33, 7, 0 },
{ "Future Domain Corp. V2.0108/18/93", 5, 33, 7, 0 },
{ "FUTURE DOMAIN CORP. V3.5008/18/93", 5, 34, 7, 0 },
{ "FUTURE DOMAIN 18c30/18c50/1800 (C) 1994 V3.5", 5, 44, 7, 0 },
{ "FUTURE DOMAIN CORP. V3.6008/18/93", 5, 34, 7, 0 },
{ "FUTURE DOMAIN CORP. V3.6108/18/93", 5, 34, 7, 0 },
};
#define SIGNATURE_COUNT ARRAY_SIZE(signatures)
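The probe below walks each candidate BIOS window and byte-compares a fixed-length string at a known offset, which is exactly the contract of the kernel's check_signature() helper. A minimal userspace model of that matching, using a stand-in BIOS buffer rather than ioremap()ed memory:

#include <stdio.h>
#include <string.h>

struct sig { const char *s; int offset; int length; int this_id; };

static const struct sig sigs[] = {
	{ "FUTURE DOMAIN TMC-18XX (C) 1993 V3.203/12/93", 5, 44, 7 },
	{ "IBM F1 P2 BIOS v1.0011/09/92", 5, 28, 7 },
};

static const struct sig *match_bios(const unsigned char *bios, size_t len)
{
	for (size_t i = 0; i < sizeof(sigs) / sizeof(sigs[0]); i++) {
		const struct sig *sig = &sigs[i];

		if (sig->offset + sig->length <= (int)len &&
		    !memcmp(bios + sig->offset, sig->s, sig->length))
			return sig;
	}
	return NULL;
}

int main(void)
{
	unsigned char bios[0x2000] = { 0 };	/* FDOMAIN_BIOS_SIZE */
	const struct sig *m;

	memcpy(bios + 5, sigs[1].s, strlen(sigs[1].s));	/* fake an IBM BIOS */
	m = match_bios(bios, sizeof(bios));
	printf("matched: %s (SCSI ID %d)\n", m ? m->s : "none", m ? m->this_id : -1);
	return 0;
}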
static int fdomain_isa_match(struct device *dev, unsigned int ndev)
{
struct Scsi_Host *sh;
int i, base = 0, irq = 0;
unsigned long bios_base = 0;
struct signature *sig = NULL;
void __iomem *p;
static struct signature *saved_sig;
int this_id = 7;
if (ndev < ADDRESS_COUNT) { /* scan supported ISA BIOS addresses */
p = ioremap(addresses[ndev], FDOMAIN_BIOS_SIZE);
if (!p)
return 0;
for (i = 0; i < SIGNATURE_COUNT; i++)
if (check_signature(p + signatures[i].offset,
signatures[i].signature,
signatures[i].length))
break;
if (i == SIGNATURE_COUNT) /* no signature found */
goto fail_unmap;
sig = &signatures[i];
bios_base = addresses[ndev];
/* read I/O base from BIOS area */
if (sig->base_offset)
base = readb(p + sig->base_offset) +
(readb(p + sig->base_offset + 1) << 8);
iounmap(p);
if (base)
dev_info(dev, "BIOS at 0x%lx specifies I/O base 0x%x\n",
bios_base, base);
else
dev_info(dev, "BIOS at 0x%lx\n", bios_base);
if (!base) { /* no I/O base in BIOS area */
/* save BIOS signature for later use in port probing */
saved_sig = sig;
return 0;
}
} else /* scan supported I/O ports */
base = ports[ndev - ADDRESS_COUNT];
/* use saved BIOS signature if present */
if (!sig && saved_sig)
sig = saved_sig;
if (!request_region(base, FDOMAIN_REGION_SIZE, "fdomain_isa"))
return 0;
irq = irqs[(inb(base + REG_CFG1) & CFG1_IRQ_MASK) >> 1];
if (sig)
this_id = sig->this_id;
sh = fdomain_create(base, irq, this_id, dev);
if (!sh) {
release_region(base, FDOMAIN_REGION_SIZE);
return 0;
}
dev_set_drvdata(dev, sh);
return 1;
fail_unmap:
iounmap(p);
return 0;
}
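The IRQ is not handed over by the BIOS; fdomain_isa_match() recovers it from the board's jumper settings by reading configuration register 1, where bits 1..3 (CFG1_IRQ_MASK) index the fixed irqs[] table. A small standalone model of that decode, with a fabricated register value standing in for the inb():

#include <stdio.h>

#define CFG1_IRQ_MASK 0x0e	/* IRQ jumpers, bits 1..3 */

static const unsigned short irqs[] = { 3, 5, 10, 11, 12, 14, 15, 0 };

int main(void)
{
	unsigned char cfg1 = 0x55;	/* stand-in for inb(base + REG_CFG1) */
	unsigned int idx = (cfg1 & CFG1_IRQ_MASK) >> 1;

	printf("CFG1=0x%02x -> irqs[%u] = IRQ %u\n", cfg1, idx, irqs[idx]);
	return 0;
}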
static int fdomain_isa_param_match(struct device *dev, unsigned int ndev)
{
struct Scsi_Host *sh;
int irq_ = irq[ndev];
if (!io[ndev])
return 0;
if (!request_region(io[ndev], FDOMAIN_REGION_SIZE, "fdomain_isa")) {
dev_err(dev, "base 0x%x already in use", io[ndev]);
return 0;
}
if (irq_ <= 0)
irq_ = irqs[(inb(io[ndev] + REG_CFG1) & CFG1_IRQ_MASK) >> 1];
sh = fdomain_create(io[ndev], irq_, scsi_id[ndev], dev);
if (!sh) {
dev_err(dev, "controller not found at base 0x%x", io[ndev]);
release_region(io[ndev], FDOMAIN_REGION_SIZE);
return 0;
}
dev_set_drvdata(dev, sh);
return 1;
}
static int fdomain_isa_remove(struct device *dev, unsigned int ndev)
{
struct Scsi_Host *sh = dev_get_drvdata(dev);
int base = sh->io_port;
fdomain_destroy(sh);
release_region(base, FDOMAIN_REGION_SIZE);
dev_set_drvdata(dev, NULL);
return 0;
}
static struct isa_driver fdomain_isa_driver = {
.match = fdomain_isa_match,
.remove = fdomain_isa_remove,
.driver = {
.name = "fdomain_isa",
.pm = FDOMAIN_PM_OPS,
},
};
static int __init fdomain_isa_init(void)
{
int isa_probe_count = ADDRESS_COUNT + PORT_COUNT;
if (io[0]) { /* use module parameters if present */
fdomain_isa_driver.match = fdomain_isa_param_match;
isa_probe_count = MAXBOARDS_PARAM;
}
return isa_register_driver(&fdomain_isa_driver, isa_probe_count);
}
static void __exit fdomain_isa_exit(void)
{
isa_unregister_driver(&fdomain_isa_driver);
}
module_init(fdomain_isa_init);
module_exit(fdomain_isa_exit);
MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
MODULE_DESCRIPTION("Future Domain TMC-16x0 ISA SCSI driver");
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,68 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/pci.h>
#include "fdomain.h"
static int fdomain_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *d)
{
int err;
struct Scsi_Host *sh;
err = pci_enable_device(pdev);
if (err)
goto fail;
err = pci_request_regions(pdev, "fdomain_pci");
if (err)
goto disable_device;
err = -ENODEV;
if (pci_resource_len(pdev, 0) == 0)
goto release_region;
sh = fdomain_create(pci_resource_start(pdev, 0), pdev->irq, 7,
&pdev->dev);
if (!sh)
goto release_region;
pci_set_drvdata(pdev, sh);
return 0;
release_region:
pci_release_regions(pdev);
disable_device:
pci_disable_device(pdev);
fail:
return err;
}
static void fdomain_pci_remove(struct pci_dev *pdev)
{
struct Scsi_Host *sh = pci_get_drvdata(pdev);
fdomain_destroy(sh);
pci_release_regions(pdev);
pci_disable_device(pdev);
}
static struct pci_device_id fdomain_pci_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_FD, PCI_DEVICE_ID_FD_36C70) },
{}
};
MODULE_DEVICE_TABLE(pci, fdomain_pci_table);
static struct pci_driver fdomain_pci_driver = {
.name = "fdomain_pci",
.id_table = fdomain_pci_table,
.probe = fdomain_pci_probe,
.remove = fdomain_pci_remove,
.driver.pm = FDOMAIN_PM_OPS,
};
module_pci_driver(fdomain_pci_driver);
MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
MODULE_DESCRIPTION("Future Domain TMC-3260 PCI SCSI driver");
MODULE_LICENSE("GPL");

View File

@ -61,10 +61,6 @@
#define HISI_SAS_MAX_SMP_RESP_SZ 1028
#define HISI_SAS_MAX_STP_RESP_SZ 28
#define DEV_IS_EXPANDER(type) \
((type == SAS_EDGE_EXPANDER_DEVICE) || \
(type == SAS_FANOUT_EXPANDER_DEVICE))
#define HISI_SAS_SATA_PROTOCOL_NONDATA 0x1
#define HISI_SAS_SATA_PROTOCOL_PIO 0x2
#define HISI_SAS_SATA_PROTOCOL_DMA 0x4
@ -479,12 +475,12 @@ struct hisi_sas_command_table_stp {
u8 atapi_cdb[ATAPI_CDB_LEN];
};
#define HISI_SAS_SGE_PAGE_CNT SG_CHUNK_SIZE
#define HISI_SAS_SGE_PAGE_CNT (124)
struct hisi_sas_sge_page {
struct hisi_sas_sge sge[HISI_SAS_SGE_PAGE_CNT];
} __aligned(16);
#define HISI_SAS_SGE_DIF_PAGE_CNT SG_CHUNK_SIZE
#define HISI_SAS_SGE_DIF_PAGE_CNT HISI_SAS_SGE_PAGE_CNT
struct hisi_sas_sge_dif_page {
struct hisi_sas_sge sge[HISI_SAS_SGE_DIF_PAGE_CNT];
} __aligned(16);
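The DEV_IS_EXPANDER() macro deleted above is replaced throughout the following hunks by a common libsas helper. Judging from the converted call sites, which now pass dev_type directly, the replacement is an inline along these lines (a sketch inferred from the callers; the authoritative definition is in include/scsi/libsas.h):

static inline bool dev_is_expander(enum sas_device_type type)
{
	return type == SAS_EDGE_EXPANDER_DEVICE ||
	       type == SAS_FANOUT_EXPANDER_DEVICE;
}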

View File

@ -803,7 +803,7 @@ static int hisi_sas_dev_found(struct domain_device *device)
device->lldd_dev = sas_dev;
hisi_hba->hw->setup_itct(hisi_hba, sas_dev);
if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
int phy_no;
u8 phy_num = parent_dev->ex_dev.num_phys;
struct ex_phy *phy;
@ -1446,7 +1446,7 @@ static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state,
_sas_port = sas_port;
if (DEV_IS_EXPANDER(dev->dev_type))
if (dev_is_expander(dev->dev_type))
sas_ha->notify_port_event(sas_phy,
PORTE_BROADCAST_RCVD);
}
@ -1533,7 +1533,7 @@ static void hisi_sas_terminate_stp_reject(struct hisi_hba *hisi_hba)
struct domain_device *port_dev = sas_port->port_dev;
struct domain_device *device;
if (!port_dev || !DEV_IS_EXPANDER(port_dev->dev_type))
if (!port_dev || !dev_is_expander(port_dev->dev_type))
continue;
/* Try to find a SATA device */
@ -1903,7 +1903,7 @@ static int hisi_sas_clear_nexus_ha(struct sas_ha_struct *sas_ha)
struct domain_device *device = sas_dev->sas_device;
if ((sas_dev->dev_type == SAS_PHY_UNUSED) || !device ||
DEV_IS_EXPANDER(device->dev_type))
dev_is_expander(device->dev_type))
continue;
rc = hisi_sas_debug_I_T_nexus_reset(device);
@ -2475,6 +2475,14 @@ EXPORT_SYMBOL_GPL(hisi_sas_alloc);
void hisi_sas_free(struct hisi_hba *hisi_hba)
{
int i;
for (i = 0; i < hisi_hba->n_phy; i++) {
struct hisi_sas_phy *phy = &hisi_hba->phy[i];
del_timer_sync(&phy->timer);
}
if (hisi_hba->wq)
destroy_workqueue(hisi_hba->wq);
}

View File

@ -422,70 +422,70 @@ static const struct hisi_sas_hw_error one_bit_ecc_errors[] = {
.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_1B_OFF),
.msk = HGC_DQE_ECC_1B_ADDR_MSK,
.shift = HGC_DQE_ECC_1B_ADDR_OFF,
.msg = "hgc_dqe_acc1b_intr found: Ram address is 0x%08X\n",
.msg = "hgc_dqe_ecc1b_intr",
.reg = HGC_DQE_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_1B_OFF),
.msk = HGC_IOST_ECC_1B_ADDR_MSK,
.shift = HGC_IOST_ECC_1B_ADDR_OFF,
.msg = "hgc_iost_acc1b_intr found: Ram address is 0x%08X\n",
.msg = "hgc_iost_ecc1b_intr",
.reg = HGC_IOST_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_1B_OFF),
.msk = HGC_ITCT_ECC_1B_ADDR_MSK,
.shift = HGC_ITCT_ECC_1B_ADDR_OFF,
.msg = "hgc_itct_acc1b_intr found: am address is 0x%08X\n",
.msg = "hgc_itct_ecc1b_intr",
.reg = HGC_ITCT_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF),
.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
.msg = "hgc_iostl_acc1b_intr found: memory address is 0x%08X\n",
.msg = "hgc_iostl_ecc1b_intr",
.reg = HGC_LM_DFX_STATUS2,
},
{
.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF),
.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
.msg = "hgc_itctl_acc1b_intr found: memory address is 0x%08X\n",
.msg = "hgc_itctl_ecc1b_intr",
.reg = HGC_LM_DFX_STATUS2,
},
{
.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_1B_OFF),
.msk = HGC_CQE_ECC_1B_ADDR_MSK,
.shift = HGC_CQE_ECC_1B_ADDR_OFF,
.msg = "hgc_cqe_acc1b_intr found: Ram address is 0x%08X\n",
.msg = "hgc_cqe_ecc1b_intr",
.reg = HGC_CQE_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
.msg = "rxm_mem0_acc1b_intr found: memory address is 0x%08X\n",
.msg = "rxm_mem0_ecc1b_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
.msg = "rxm_mem1_acc1b_intr found: memory address is 0x%08X\n",
.msg = "rxm_mem1_ecc1b_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
.msg = "rxm_mem2_acc1b_intr found: memory address is 0x%08X\n",
.msg = "rxm_mem2_ecc1b_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF),
.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
.msg = "rxm_mem3_acc1b_intr found: memory address is 0x%08X\n",
.msg = "rxm_mem3_ecc1b_intr",
.reg = HGC_RXM_DFX_STATUS15,
},
};
@ -495,70 +495,70 @@ static const struct hisi_sas_hw_error multi_bit_ecc_errors[] = {
.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
.msk = HGC_DQE_ECC_MB_ADDR_MSK,
.shift = HGC_DQE_ECC_MB_ADDR_OFF,
.msg = "hgc_dqe_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
.msg = "hgc_dqe_eccbad_intr",
.reg = HGC_DQE_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
.msk = HGC_IOST_ECC_MB_ADDR_MSK,
.shift = HGC_IOST_ECC_MB_ADDR_OFF,
.msg = "hgc_iost_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
.msg = "hgc_iost_eccbad_intr",
.reg = HGC_IOST_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
.msg = "hgc_itct_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
.msg = "hgc_itct_eccbad_intr",
.reg = HGC_ITCT_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
.msg = "hgc_iostl_accbad_intr (0x%x) found: memory address is 0x%08X\n",
.msg = "hgc_iostl_eccbad_intr",
.reg = HGC_LM_DFX_STATUS2,
},
{
.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
.msg = "hgc_itctl_accbad_intr (0x%x) found: memory address is 0x%08X\n",
.msg = "hgc_itctl_eccbad_intr",
.reg = HGC_LM_DFX_STATUS2,
},
{
.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
.msk = HGC_CQE_ECC_MB_ADDR_MSK,
.shift = HGC_CQE_ECC_MB_ADDR_OFF,
.msg = "hgc_cqe_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
.msg = "hgc_cqe_eccbad_intr",
.reg = HGC_CQE_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
.msg = "rxm_mem0_accbad_intr (0x%x) found: memory address is 0x%08X\n",
.msg = "rxm_mem0_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
.msg = "rxm_mem1_accbad_intr (0x%x) found: memory address is 0x%08X\n",
.msg = "rxm_mem1_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
.msg = "rxm_mem2_accbad_intr (0x%x) found: memory address is 0x%08X\n",
.msg = "rxm_mem2_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
.msg = "rxm_mem3_accbad_intr (0x%x) found: memory address is 0x%08X\n",
.msg = "rxm_mem3_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS15,
},
};
@ -944,7 +944,7 @@ static void setup_itct_v2_hw(struct hisi_hba *hisi_hba,
break;
case SAS_SATA_DEV:
case SAS_SATA_PENDING:
if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
if (parent_dev && dev_is_expander(parent_dev->dev_type))
qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF;
else
qw0 = HISI_SAS_DEV_TYPE_SATA << ITCT_HDR_DEV_TYPE_OFF;
@ -2526,7 +2526,7 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
/* create header */
/* dw0 */
dw0 = port->id << CMD_HDR_PORT_OFF;
if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
if (parent_dev && dev_is_expander(parent_dev->dev_type))
dw0 |= 3 << CMD_HDR_CMD_OFF;
else
dw0 |= 4 << CMD_HDR_CMD_OFF;
@ -2973,7 +2973,8 @@ one_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba, u32 irq_value)
val = hisi_sas_read32(hisi_hba, ecc_error->reg);
val &= ecc_error->msk;
val >>= ecc_error->shift;
dev_warn(dev, ecc_error->msg, val);
dev_warn(dev, "%s found: mem addr is 0x%08X\n",
ecc_error->msg, val);
}
}
}
@ -2992,7 +2993,8 @@ static void multi_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba,
val = hisi_sas_read32(hisi_hba, ecc_error->reg);
val &= ecc_error->msk;
val >>= ecc_error->shift;
dev_err(dev, ecc_error->msg, irq_value, val);
dev_err(dev, "%s (0x%x) found: mem addr is 0x%08X\n",
ecc_error->msg, irq_value, val);
queue_work(hisi_hba->wq, &hisi_hba->rst_work);
}
}
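The point of this hunk: each table entry used to carry its own full format string, so the strings differed only in the interrupt name (plus stray typos such as "am address"). Now the entry holds just the name and the two call sites supply one shared format. A runnable userspace model of the resulting table-driven decode, with fabricated masks and register values:

#include <stdio.h>

struct hw_error {
	unsigned int irq_msk;
	unsigned int msk;
	unsigned int shift;
	const char *msg;
};

static const struct hw_error errors[] = {
	{ 1u << 0, 0x0000ffff,  0, "hgc_dqe_ecc1b_intr" },
	{ 1u << 1, 0x03ff0000, 16, "hgc_iost_ecc1b_intr" },
};

int main(void)
{
	unsigned int irq_value = 1u << 1;	/* pretend interrupt status */
	unsigned int reg = 0x00a50000;		/* pretend ECC address register */

	for (size_t i = 0; i < sizeof(errors) / sizeof(errors[0]); i++) {
		const struct hw_error *e = &errors[i];
		unsigned int val;

		if (!(irq_value & e->irq_msk))
			continue;
		val = (reg & e->msk) >> e->shift;
		printf("%s found: mem addr is 0x%08X\n", e->msg, val);
	}
	return 0;
}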

View File

@ -23,6 +23,7 @@
#define ITCT_CLR_EN_MSK (0x1 << ITCT_CLR_EN_OFF)
#define ITCT_DEV_OFF 0
#define ITCT_DEV_MSK (0x7ff << ITCT_DEV_OFF)
#define SAS_AXI_USER3 0x50
#define IO_SATA_BROKEN_MSG_ADDR_LO 0x58
#define IO_SATA_BROKEN_MSG_ADDR_HI 0x5c
#define SATA_INITI_D2H_STORE_ADDR_LO 0x60
@ -549,6 +550,7 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
/* Global registers init */
hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE,
(u32)((1ULL << hisi_hba->queue_count) - 1));
hisi_sas_write32(hisi_hba, SAS_AXI_USER3, 0);
hisi_sas_write32(hisi_hba, CFG_MAX_TAG, 0xfff0400);
hisi_sas_write32(hisi_hba, HGC_SAS_TXFAIL_RETRY_CTRL, 0x108);
hisi_sas_write32(hisi_hba, CFG_AGING_TIME, 0x1);
@ -752,7 +754,7 @@ static void setup_itct_v3_hw(struct hisi_hba *hisi_hba,
break;
case SAS_SATA_DEV:
case SAS_SATA_PENDING:
if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
if (parent_dev && dev_is_expander(parent_dev->dev_type))
qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF;
else
qw0 = HISI_SAS_DEV_TYPE_SATA << ITCT_HDR_DEV_TYPE_OFF;
@ -906,8 +908,14 @@ static void enable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
static void disable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
{
u32 cfg = hisi_sas_phy_read32(hisi_hba, phy_no, PHY_CFG);
u32 irq_msk = hisi_sas_phy_read32(hisi_hba, phy_no, CHL_INT2_MSK);
static const u32 msk = BIT(CHL_INT2_RX_DISP_ERR_OFF) |
BIT(CHL_INT2_RX_CODE_ERR_OFF) |
BIT(CHL_INT2_RX_INVLD_DW_OFF);
u32 state;
hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2_MSK, msk | irq_msk);
cfg &= ~PHY_CFG_ENA_MSK;
hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg);
@ -918,6 +926,15 @@ static void disable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
cfg |= PHY_CFG_PHY_RST_MSK;
hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg);
}
udelay(1);
hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_INVLD_DW);
hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_DISP_ERR);
hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_CODE_ERR);
hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2, msk);
hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2_MSK, irq_msk);
}
static void start_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
@ -1336,10 +1353,10 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba,
u32 dw1 = 0, dw2 = 0;
hdr->dw0 = cpu_to_le32(port->id << CMD_HDR_PORT_OFF);
if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
if (parent_dev && dev_is_expander(parent_dev->dev_type))
hdr->dw0 |= cpu_to_le32(3 << CMD_HDR_CMD_OFF);
else
hdr->dw0 |= cpu_to_le32(4 << CMD_HDR_CMD_OFF);
hdr->dw0 |= cpu_to_le32(4U << CMD_HDR_CMD_OFF);
switch (task->data_dir) {
case DMA_TO_DEVICE:
@ -1407,7 +1424,7 @@ static void prep_abort_v3_hw(struct hisi_hba *hisi_hba,
struct hisi_sas_port *port = slot->port;
/* dw0 */
hdr->dw0 = cpu_to_le32((5 << CMD_HDR_CMD_OFF) | /*abort*/
hdr->dw0 = cpu_to_le32((5U << CMD_HDR_CMD_OFF) | /*abort*/
(port->id << CMD_HDR_PORT_OFF) |
(dev_is_sata(dev)
<< CMD_HDR_ABORT_DEVICE_TYPE_OFF) |
@ -1826,77 +1843,77 @@ static const struct hisi_sas_hw_error multi_bit_ecc_errors[] = {
.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
.msk = HGC_DQE_ECC_MB_ADDR_MSK,
.shift = HGC_DQE_ECC_MB_ADDR_OFF,
.msg = "hgc_dqe_eccbad_intr found: ram addr is 0x%08X\n",
.msg = "hgc_dqe_eccbad_intr",
.reg = HGC_DQE_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
.msk = HGC_IOST_ECC_MB_ADDR_MSK,
.shift = HGC_IOST_ECC_MB_ADDR_OFF,
.msg = "hgc_iost_eccbad_intr found: ram addr is 0x%08X\n",
.msg = "hgc_iost_eccbad_intr",
.reg = HGC_IOST_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
.msg = "hgc_itct_eccbad_intr found: ram addr is 0x%08X\n",
.msg = "hgc_itct_eccbad_intr",
.reg = HGC_ITCT_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
.msg = "hgc_iostl_eccbad_intr found: mem addr is 0x%08X\n",
.msg = "hgc_iostl_eccbad_intr",
.reg = HGC_LM_DFX_STATUS2,
},
{
.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
.msg = "hgc_itctl_eccbad_intr found: mem addr is 0x%08X\n",
.msg = "hgc_itctl_eccbad_intr",
.reg = HGC_LM_DFX_STATUS2,
},
{
.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
.msk = HGC_CQE_ECC_MB_ADDR_MSK,
.shift = HGC_CQE_ECC_MB_ADDR_OFF,
.msg = "hgc_cqe_eccbad_intr found: ram address is 0x%08X\n",
.msg = "hgc_cqe_eccbad_intr",
.reg = HGC_CQE_ECC_ADDR,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
.msg = "rxm_mem0_eccbad_intr found: mem addr is 0x%08X\n",
.msg = "rxm_mem0_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
.msg = "rxm_mem1_eccbad_intr found: mem addr is 0x%08X\n",
.msg = "rxm_mem1_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
.msg = "rxm_mem2_eccbad_intr found: mem addr is 0x%08X\n",
.msg = "rxm_mem2_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS14,
},
{
.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
.msg = "rxm_mem3_eccbad_intr found: mem addr is 0x%08X\n",
.msg = "rxm_mem3_eccbad_intr",
.reg = HGC_RXM_DFX_STATUS15,
},
{
.irq_msk = BIT(SAS_ECC_INTR_OOO_RAM_ECC_MB_OFF),
.msk = AM_ROB_ECC_ERR_ADDR_MSK,
.shift = AM_ROB_ECC_ERR_ADDR_OFF,
.msg = "ooo_ram_eccbad_intr found: ROB_ECC_ERR_ADDR=0x%08X\n",
.msg = "ooo_ram_eccbad_intr",
.reg = AM_ROB_ECC_ERR_ADDR,
},
};
@ -1915,7 +1932,8 @@ static void multi_bit_ecc_error_process_v3_hw(struct hisi_hba *hisi_hba,
val = hisi_sas_read32(hisi_hba, ecc_error->reg);
val &= ecc_error->msk;
val >>= ecc_error->shift;
dev_err(dev, ecc_error->msg, irq_value, val);
dev_err(dev, "%s (0x%x) found: mem addr is 0x%08X\n",
ecc_error->msg, irq_value, val);
queue_work(hisi_hba->wq, &hisi_hba->rst_work);
}
}

View File

@ -60,7 +60,7 @@
* HPSA_DRIVER_VERSION must be 3 byte values (0-255) separated by '.'
* with an optional trailing '-' followed by a byte value (0-255).
*/
#define HPSA_DRIVER_VERSION "3.4.20-160"
#define HPSA_DRIVER_VERSION "3.4.20-170"
#define DRIVER_NAME "HP HPSA Driver (v " HPSA_DRIVER_VERSION ")"
#define HPSA "hpsa"
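The version-format comment above is precise enough to check mechanically: three dot-separated byte values, optionally followed by '-' and one more byte value. A standalone validator for that grammar (illustration only, not part of the driver):

#include <stdio.h>
#include <stdlib.h>

static int parse_byte(const char **p)
{
	char *end;
	long v = strtol(*p, &end, 10);

	if (end == *p || v < 0 || v > 255)
		return -1;
	*p = end;
	return (int)v;
}

static int version_ok(const char *s)
{
	int i;

	for (i = 0; i < 3; i++) {
		if (parse_byte(&s) < 0)
			return 0;
		if (i < 2 && *s++ != '.')
			return 0;
	}
	if (*s == '-') {
		s++;
		if (parse_byte(&s) < 0)
			return 0;
	}
	return *s == '\0';
}

int main(void)
{
	printf("%d %d %d\n", version_ok("3.4.20-170"),
	       version_ok("3.4.20"), version_ok("3.4.999"));
	return 0;	/* prints: 1 1 0 */
}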
@ -73,6 +73,8 @@
/* define how many times we will try a command because of bus resets */
#define MAX_CMD_RETRIES 3
/* How long to wait before giving up on a command */
#define HPSA_EH_PTRAID_TIMEOUT (240 * HZ)
/* Embedded module documentation macros - see modules.h */
MODULE_AUTHOR("Hewlett-Packard Company");
@ -344,11 +346,6 @@ static inline bool hpsa_is_cmd_idle(struct CommandList *c)
return c->scsi_cmd == SCSI_CMD_IDLE;
}
static inline bool hpsa_is_pending_event(struct CommandList *c)
{
return c->reset_pending;
}
/* extract sense key, asc, and ascq from sense data. -1 means invalid. */
static void decode_sense_data(const u8 *sense_data, int sense_data_len,
u8 *sense_key, u8 *asc, u8 *ascq)
@ -1144,6 +1141,8 @@ static void __enqueue_cmd_and_start_io(struct ctlr_info *h,
{
dial_down_lockup_detection_during_fw_flash(h, c);
atomic_inc(&h->commands_outstanding);
if (c->device)
atomic_inc(&c->device->commands_outstanding);
reply_queue = h->reply_map[raw_smp_processor_id()];
switch (c->cmd_type) {
@ -1167,9 +1166,6 @@ static void __enqueue_cmd_and_start_io(struct ctlr_info *h,
static void enqueue_cmd_and_start_io(struct ctlr_info *h, struct CommandList *c)
{
if (unlikely(hpsa_is_pending_event(c)))
return finish_cmd(c);
__enqueue_cmd_and_start_io(h, c, DEFAULT_REPLY_QUEUE);
}
@ -1842,25 +1838,33 @@ static int hpsa_find_outstanding_commands_for_dev(struct ctlr_info *h,
return count;
}
#define NUM_WAIT 20
static void hpsa_wait_for_outstanding_commands_for_dev(struct ctlr_info *h,
struct hpsa_scsi_dev_t *device)
{
int cmds = 0;
int waits = 0;
int num_wait = NUM_WAIT;
if (device->external)
num_wait = HPSA_EH_PTRAID_TIMEOUT;
while (1) {
cmds = hpsa_find_outstanding_commands_for_dev(h, device);
if (cmds == 0)
break;
if (++waits > 20)
if (++waits > num_wait)
break;
msleep(1000);
}
if (waits > 20)
if (waits > num_wait) {
dev_warn(&h->pdev->dev,
"%s: removing device with %d outstanding commands!\n",
__func__, cmds);
"%s: removing device [%d:%d:%d:%d] with %d outstanding commands!\n",
__func__,
h->scsi_host->host_no,
device->bus, device->target, device->lun, cmds);
}
}
static void hpsa_remove_device(struct ctlr_info *h,
@ -2131,11 +2135,16 @@ static int hpsa_slave_configure(struct scsi_device *sdev)
sdev->no_uld_attach = !sd || !sd->expose_device;
if (sd) {
if (sd->external)
sd->was_removed = 0;
if (sd->external) {
queue_depth = EXTERNAL_QD;
else
sdev->eh_timeout = HPSA_EH_PTRAID_TIMEOUT;
blk_queue_rq_timeout(sdev->request_queue,
HPSA_EH_PTRAID_TIMEOUT);
} else {
queue_depth = sd->queue_depth != 0 ?
sd->queue_depth : sdev->host->can_queue;
}
} else
queue_depth = sdev->host->can_queue;
@ -2146,7 +2155,12 @@ static int hpsa_slave_configure(struct scsi_device *sdev)
static void hpsa_slave_destroy(struct scsi_device *sdev)
{
/* nothing to do. */
struct hpsa_scsi_dev_t *hdev = NULL;
hdev = sdev->hostdata;
if (hdev)
hdev->was_removed = 1;
}
static void hpsa_free_ioaccel2_sg_chain_blocks(struct ctlr_info *h)
@ -2414,13 +2428,16 @@ static int handle_ioaccel_mode2_error(struct ctlr_info *h,
break;
}
if (dev->in_reset)
retry = 0;
return retry; /* retry on raid path? */
}
static void hpsa_cmd_resolve_events(struct ctlr_info *h,
struct CommandList *c)
{
bool do_wake = false;
struct hpsa_scsi_dev_t *dev = c->device;
/*
* Reset c->scsi_cmd here so that the reset handler will know
@ -2429,25 +2446,12 @@ static void hpsa_cmd_resolve_events(struct ctlr_info *h,
*/
c->scsi_cmd = SCSI_CMD_IDLE;
mb(); /* Declare command idle before checking for pending events. */
if (c->reset_pending) {
unsigned long flags;
struct hpsa_scsi_dev_t *dev;
/*
* There appears to be a reset pending; lock the lock and
* reconfirm. If so, then decrement the count of outstanding
* commands and wake the reset command if this is the last one.
*/
spin_lock_irqsave(&h->lock, flags);
dev = c->reset_pending; /* Re-fetch under the lock. */
if (dev && atomic_dec_and_test(&dev->reset_cmds_out))
do_wake = true;
c->reset_pending = NULL;
spin_unlock_irqrestore(&h->lock, flags);
if (dev) {
atomic_dec(&dev->commands_outstanding);
if (dev->in_reset &&
atomic_read(&dev->commands_outstanding) <= 0)
wake_up_all(&h->event_sync_wait_queue);
}
if (do_wake)
wake_up_all(&h->event_sync_wait_queue);
}
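The old per-command reset_pending bookkeeping is gone; completion now simply drops the device's commands_outstanding count and wakes the reset waiter once the device drains. The reset side (hpsa_do_reset(), further down) blocks on event_sync_wait_queue until the count reaches zero. A compact pthreads model of that drain handshake, with plain counters standing in for the driver's atomics and wait queue (build with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t drained = PTHREAD_COND_INITIALIZER;
static int commands_outstanding = 3;
static int in_reset = 1;

static void *complete_commands(void *arg)
{
	int i;

	(void)arg;
	for (i = 0; i < 3; i++) {
		usleep(100 * 1000);		/* pretend an I/O completes */
		pthread_mutex_lock(&lock);
		if (--commands_outstanding <= 0 && in_reset)
			pthread_cond_broadcast(&drained);	/* wake_up_all() */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, complete_commands, NULL);
	pthread_mutex_lock(&lock);
	while (commands_outstanding > 0)	/* wait_event() analogue */
		pthread_cond_wait(&drained, &lock);
	pthread_mutex_unlock(&lock);
	puts("device drained; safe to proceed with reset");
	pthread_join(t, NULL);
	return 0;
}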
static void hpsa_cmd_resolve_and_free(struct ctlr_info *h,
@ -2496,6 +2500,11 @@ static void process_ioaccel2_completion(struct ctlr_info *h,
dev->offload_to_be_enabled = 0;
}
if (dev->in_reset) {
cmd->result = DID_RESET << 16;
return hpsa_cmd_free_and_done(h, c, cmd);
}
return hpsa_retry_cmd(h, c);
}
@ -2574,6 +2583,12 @@ static void complete_scsi_command(struct CommandList *cp)
cmd->result = (DID_OK << 16); /* host byte */
cmd->result |= (COMMAND_COMPLETE << 8); /* msg byte */
/* SCSI command has already been cleaned up in SML */
if (dev->was_removed) {
hpsa_cmd_resolve_and_free(h, cp);
return;
}
if (cp->cmd_type == CMD_IOACCEL2 || cp->cmd_type == CMD_IOACCEL1) {
if (dev->physical_device && dev->expose_device &&
dev->removed) {
@ -2595,10 +2610,6 @@ static void complete_scsi_command(struct CommandList *cp)
return hpsa_cmd_free_and_done(h, cp, cmd);
}
if ((unlikely(hpsa_is_pending_event(cp))))
if (cp->reset_pending)
return hpsa_cmd_free_and_done(h, cp, cmd);
if (cp->cmd_type == CMD_IOACCEL2)
return process_ioaccel2_completion(h, cp, cmd, dev);
@ -3048,7 +3059,7 @@ out:
return rc;
}
static int hpsa_send_reset(struct ctlr_info *h, unsigned char *scsi3addr,
static int hpsa_send_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
u8 reset_type, int reply_queue)
{
int rc = IO_OK;
@ -3056,11 +3067,10 @@ static int hpsa_send_reset(struct ctlr_info *h, unsigned char *scsi3addr,
struct ErrorInfo *ei;
c = cmd_alloc(h);
c->device = dev;
/* fill_cmd can't fail here, no data buffer to map. */
(void) fill_cmd(c, reset_type, h, NULL, 0, 0,
scsi3addr, TYPE_MSG);
(void) fill_cmd(c, reset_type, h, NULL, 0, 0, dev->scsi3addr, TYPE_MSG);
rc = hpsa_scsi_do_simple_cmd(h, c, reply_queue, NO_TIMEOUT);
if (rc) {
dev_warn(&h->pdev->dev, "Failed to send reset command\n");
@ -3138,9 +3148,8 @@ static bool hpsa_cmd_dev_match(struct ctlr_info *h, struct CommandList *c,
}
static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
unsigned char *scsi3addr, u8 reset_type, int reply_queue)
u8 reset_type, int reply_queue)
{
int i;
int rc = 0;
/* We can really only handle one reset at a time */
@ -3149,38 +3158,14 @@ static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
return -EINTR;
}
BUG_ON(atomic_read(&dev->reset_cmds_out) != 0);
for (i = 0; i < h->nr_cmds; i++) {
struct CommandList *c = h->cmd_pool + i;
int refcount = atomic_inc_return(&c->refcount);
if (refcount > 1 && hpsa_cmd_dev_match(h, c, dev, scsi3addr)) {
unsigned long flags;
/*
* Mark the target command as having a reset pending,
* then lock a lock so that the command cannot complete
* while we're considering it. If the command is not
* idle then count it; otherwise revoke the event.
*/
c->reset_pending = dev;
spin_lock_irqsave(&h->lock, flags); /* Implied MB */
if (!hpsa_is_cmd_idle(c))
atomic_inc(&dev->reset_cmds_out);
else
c->reset_pending = NULL;
spin_unlock_irqrestore(&h->lock, flags);
}
cmd_free(h, c);
}
rc = hpsa_send_reset(h, scsi3addr, reset_type, reply_queue);
if (!rc)
rc = hpsa_send_reset(h, dev, reset_type, reply_queue);
if (!rc) {
/* incremented by sending the reset request */
atomic_dec(&dev->commands_outstanding);
wait_event(h->event_sync_wait_queue,
atomic_read(&dev->reset_cmds_out) == 0 ||
atomic_read(&dev->commands_outstanding) <= 0 ||
lockup_detected(h));
}
if (unlikely(lockup_detected(h))) {
dev_warn(&h->pdev->dev,
@ -3188,10 +3173,8 @@ static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
rc = -ENODEV;
}
if (unlikely(rc))
atomic_set(&dev->reset_cmds_out, 0);
else
rc = wait_for_device_to_become_ready(h, scsi3addr, 0);
if (!rc)
rc = wait_for_device_to_become_ready(h, dev->scsi3addr, 0);
mutex_unlock(&h->reset_mutex);
return rc;
@ -4820,6 +4803,9 @@ static int hpsa_scsi_ioaccel_direct_map(struct ctlr_info *h,
c->phys_disk = dev;
if (dev->in_reset)
return -1;
return hpsa_scsi_ioaccel_queue_command(h, c, dev->ioaccel_handle,
cmd->cmnd, cmd->cmd_len, dev->scsi3addr, dev);
}
@ -5010,6 +4996,11 @@ static int hpsa_scsi_ioaccel2_queue_command(struct ctlr_info *h,
} else
cp->sg_count = (u8) use_sg;
if (phys_disk->in_reset) {
cmd->result = DID_RESET << 16;
return -1;
}
enqueue_cmd_and_start_io(h, c);
return 0;
}
@ -5027,6 +5018,9 @@ static int hpsa_scsi_ioaccel_queue_command(struct ctlr_info *h,
if (!c->scsi_cmd->device->hostdata)
return -1;
if (phys_disk->in_reset)
return -1;
/* Try to honor the device's queue depth */
if (atomic_inc_return(&phys_disk->ioaccel_cmds_out) >
phys_disk->queue_depth) {
@ -5110,6 +5104,9 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h,
if (!dev)
return -1;
if (dev->in_reset)
return -1;
/* check for valid opcode, get LBA and block count */
switch (cmd->cmnd[0]) {
case WRITE_6:
@ -5414,13 +5411,13 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h,
*/
static int hpsa_ciss_submit(struct ctlr_info *h,
struct CommandList *c, struct scsi_cmnd *cmd,
unsigned char scsi3addr[])
struct hpsa_scsi_dev_t *dev)
{
cmd->host_scribble = (unsigned char *) c;
c->cmd_type = CMD_SCSI;
c->scsi_cmd = cmd;
c->Header.ReplyQueue = 0; /* unused in simple mode */
memcpy(&c->Header.LUN.LunAddrBytes[0], &scsi3addr[0], 8);
memcpy(&c->Header.LUN.LunAddrBytes[0], &dev->scsi3addr[0], 8);
c->Header.tag = cpu_to_le64((c->cmdindex << DIRECT_LOOKUP_SHIFT));
/* Fill in the request block... */
@ -5471,6 +5468,12 @@ static int hpsa_ciss_submit(struct ctlr_info *h,
hpsa_cmd_resolve_and_free(h, c);
return SCSI_MLQUEUE_HOST_BUSY;
}
if (dev->in_reset) {
hpsa_cmd_resolve_and_free(h, c);
return SCSI_MLQUEUE_HOST_BUSY;
}
enqueue_cmd_and_start_io(h, c);
/* the cmd'll come back via intr handler in complete_scsi_command() */
return 0;
@ -5522,8 +5525,7 @@ static inline void hpsa_cmd_partial_init(struct ctlr_info *h, int index,
}
static int hpsa_ioaccel_submit(struct ctlr_info *h,
struct CommandList *c, struct scsi_cmnd *cmd,
unsigned char *scsi3addr)
struct CommandList *c, struct scsi_cmnd *cmd)
{
struct hpsa_scsi_dev_t *dev = cmd->device->hostdata;
int rc = IO_ACCEL_INELIGIBLE;
@ -5531,6 +5533,12 @@ static int hpsa_ioaccel_submit(struct ctlr_info *h,
if (!dev)
return SCSI_MLQUEUE_HOST_BUSY;
if (dev->in_reset)
return SCSI_MLQUEUE_HOST_BUSY;
if (hpsa_simple_mode)
return IO_ACCEL_INELIGIBLE;
cmd->host_scribble = (unsigned char *) c;
if (dev->offload_enabled) {
@ -5563,8 +5571,12 @@ static void hpsa_command_resubmit_worker(struct work_struct *work)
cmd->result = DID_NO_CONNECT << 16;
return hpsa_cmd_free_and_done(c->h, c, cmd);
}
if (c->reset_pending)
if (dev->in_reset) {
cmd->result = DID_RESET << 16;
return hpsa_cmd_free_and_done(c->h, c, cmd);
}
if (c->cmd_type == CMD_IOACCEL2) {
struct ctlr_info *h = c->h;
struct io_accel2_cmd *c2 = &h->ioaccel2_cmd_pool[c->cmdindex];
@ -5572,7 +5584,7 @@ static void hpsa_command_resubmit_worker(struct work_struct *work)
if (c2->error_data.serv_response ==
IOACCEL2_STATUS_SR_TASK_COMP_SET_FULL) {
rc = hpsa_ioaccel_submit(h, c, cmd, dev->scsi3addr);
rc = hpsa_ioaccel_submit(h, c, cmd);
if (rc == 0)
return;
if (rc == SCSI_MLQUEUE_HOST_BUSY) {
@ -5588,7 +5600,7 @@ static void hpsa_command_resubmit_worker(struct work_struct *work)
}
}
hpsa_cmd_partial_init(c->h, c->cmdindex, c);
if (hpsa_ciss_submit(c->h, c, cmd, dev->scsi3addr)) {
if (hpsa_ciss_submit(c->h, c, cmd, dev)) {
/*
* If we get here, it means dma mapping failed. Try
* again via scsi mid layer, which will then get
@ -5607,7 +5619,6 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
{
struct ctlr_info *h;
struct hpsa_scsi_dev_t *dev;
unsigned char scsi3addr[8];
struct CommandList *c;
int rc = 0;
@ -5629,14 +5640,18 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
return 0;
}
memcpy(scsi3addr, dev->scsi3addr, sizeof(scsi3addr));
if (unlikely(lockup_detected(h))) {
cmd->result = DID_NO_CONNECT << 16;
cmd->scsi_done(cmd);
return 0;
}
if (dev->in_reset)
return SCSI_MLQUEUE_DEVICE_BUSY;
c = cmd_tagged_alloc(h, cmd);
if (c == NULL)
return SCSI_MLQUEUE_DEVICE_BUSY;
/*
* Call alternate submit routine for I/O accelerated commands.
@ -5645,7 +5660,7 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
if (likely(cmd->retries == 0 &&
!blk_rq_is_passthrough(cmd->request) &&
h->acciopath_status)) {
rc = hpsa_ioaccel_submit(h, c, cmd, scsi3addr);
rc = hpsa_ioaccel_submit(h, c, cmd);
if (rc == 0)
return 0;
if (rc == SCSI_MLQUEUE_HOST_BUSY) {
@ -5653,7 +5668,7 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
return SCSI_MLQUEUE_HOST_BUSY;
}
}
return hpsa_ciss_submit(h, c, cmd, scsi3addr);
return hpsa_ciss_submit(h, c, cmd, dev);
}
static void hpsa_scan_complete(struct ctlr_info *h)
@ -5935,8 +5950,9 @@ static int wait_for_device_to_become_ready(struct ctlr_info *h,
static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
{
int rc = SUCCESS;
int i;
struct ctlr_info *h;
struct hpsa_scsi_dev_t *dev;
struct hpsa_scsi_dev_t *dev = NULL;
u8 reset_type;
char msg[48];
unsigned long flags;
@ -6002,9 +6018,19 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
reset_type == HPSA_DEVICE_RESET_MSG ? "logical " : "physical ");
hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
/*
* wait to see if any commands will complete before sending reset
*/
dev->in_reset = true; /* block any new cmds from OS for this device */
for (i = 0; i < 10; i++) {
if (atomic_read(&dev->commands_outstanding) > 0)
msleep(1000);
else
break;
}
/* send a reset to the SCSI LUN which the command was sent to */
rc = hpsa_do_reset(h, dev, dev->scsi3addr, reset_type,
DEFAULT_REPLY_QUEUE);
rc = hpsa_do_reset(h, dev, reset_type, DEFAULT_REPLY_QUEUE);
if (rc == 0)
rc = SUCCESS;
else
@ -6018,6 +6044,8 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
return_reset_status:
spin_lock_irqsave(&h->reset_lock, flags);
h->reset_in_progress = 0;
if (dev)
dev->in_reset = false;
spin_unlock_irqrestore(&h->reset_lock, flags);
return rc;
}
@ -6043,7 +6071,6 @@ static struct CommandList *cmd_tagged_alloc(struct ctlr_info *h,
BUG();
}
atomic_inc(&c->refcount);
if (unlikely(!hpsa_is_cmd_idle(c))) {
/*
* We expect that the SCSI layer will hand us a unique tag
@ -6051,14 +6078,20 @@ static struct CommandList *cmd_tagged_alloc(struct ctlr_info *h,
* two requests...because if the selected command isn't idle
* then someone is going to be very disappointed.
*/
dev_err(&h->pdev->dev,
"tag collision (tag=%d) in cmd_tagged_alloc().\n",
idx);
if (c->scsi_cmd != NULL)
scsi_print_command(c->scsi_cmd);
scsi_print_command(scmd);
if (idx != h->last_collision_tag) { /* Print once per tag */
dev_warn(&h->pdev->dev,
"%s: tag collision (tag=%d)\n", __func__, idx);
if (c->scsi_cmd != NULL)
scsi_print_command(c->scsi_cmd);
if (scmd)
scsi_print_command(scmd);
h->last_collision_tag = idx;
}
return NULL;
}
atomic_inc(&c->refcount);
hpsa_cmd_partial_init(h, idx, c);
return c;
}
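The collision report is throttled so one misbehaving tag cannot flood the log: the warning fires once per tag, remembered in last_collision_tag. The shape of that once-per-key throttle, isolated as a runnable sketch:

#include <stdio.h>

static int last_warned = -1;	/* models h->last_collision_tag */

static void warn_collision(int tag)
{
	if (tag == last_warned)
		return;			/* already reported this tag */
	last_warned = tag;
	fprintf(stderr, "tag collision (tag=%d)\n", tag);
}

int main(void)
{
	warn_collision(7);
	warn_collision(7);	/* suppressed */
	warn_collision(9);
	return 0;
}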
@ -6126,6 +6159,7 @@ static struct CommandList *cmd_alloc(struct ctlr_info *h)
break; /* it's ours now. */
}
hpsa_cmd_partial_init(h, i, c);
c->device = NULL;
return c;
}
@ -6579,8 +6613,7 @@ static int hpsa_ioctl(struct scsi_device *dev, unsigned int cmd,
}
}
static void hpsa_send_host_reset(struct ctlr_info *h, unsigned char *scsi3addr,
u8 reset_type)
static void hpsa_send_host_reset(struct ctlr_info *h, u8 reset_type)
{
struct CommandList *c;
@ -7983,10 +8016,15 @@ clean_up:
static void hpsa_free_irqs(struct ctlr_info *h)
{
int i;
int irq_vector = 0;
if (hpsa_simple_mode)
irq_vector = h->intr_mode;
if (!h->msix_vectors || h->intr_mode != PERF_MODE_INT) {
/* Single reply queue, only one irq to free */
free_irq(pci_irq_vector(h->pdev, 0), &h->q[h->intr_mode]);
free_irq(pci_irq_vector(h->pdev, irq_vector),
&h->q[h->intr_mode]);
h->q[h->intr_mode] = 0;
return;
}
@ -8005,6 +8043,10 @@ static int hpsa_request_irqs(struct ctlr_info *h,
irqreturn_t (*intxhandler)(int, void *))
{
int rc, i;
int irq_vector = 0;
if (hpsa_simple_mode)
irq_vector = h->intr_mode;
/*
* initialize h->q[x] = x so that interrupt handlers know which
@ -8040,14 +8082,14 @@ static int hpsa_request_irqs(struct ctlr_info *h,
if (h->msix_vectors > 0 || h->pdev->msi_enabled) {
sprintf(h->intrname[0], "%s-msi%s", h->devname,
h->msix_vectors ? "x" : "");
rc = request_irq(pci_irq_vector(h->pdev, 0),
rc = request_irq(pci_irq_vector(h->pdev, irq_vector),
msixhandler, 0,
h->intrname[0],
&h->q[h->intr_mode]);
} else {
sprintf(h->intrname[h->intr_mode],
"%s-intx", h->devname);
rc = request_irq(pci_irq_vector(h->pdev, 0),
rc = request_irq(pci_irq_vector(h->pdev, irq_vector),
intxhandler, IRQF_SHARED,
h->intrname[0],
&h->q[h->intr_mode]);
@ -8055,7 +8097,7 @@ static int hpsa_request_irqs(struct ctlr_info *h,
}
if (rc) {
dev_err(&h->pdev->dev, "failed to get irq %d for %s\n",
pci_irq_vector(h->pdev, 0), h->devname);
pci_irq_vector(h->pdev, irq_vector), h->devname);
hpsa_free_irqs(h);
return -ENODEV;
}
@ -8065,7 +8107,7 @@ static int hpsa_request_irqs(struct ctlr_info *h,
static int hpsa_kdump_soft_reset(struct ctlr_info *h)
{
int rc;
hpsa_send_host_reset(h, RAID_CTLR_LUNID, HPSA_RESET_TYPE_CONTROLLER);
hpsa_send_host_reset(h, HPSA_RESET_TYPE_CONTROLLER);
dev_info(&h->pdev->dev, "Waiting for board to soft reset.\n");
rc = hpsa_wait_for_board_state(h->pdev, h->vaddr, BOARD_NOT_READY);
@ -8121,6 +8163,11 @@ static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
destroy_workqueue(h->rescan_ctlr_wq);
h->rescan_ctlr_wq = NULL;
}
if (h->monitor_ctlr_wq) {
destroy_workqueue(h->monitor_ctlr_wq);
h->monitor_ctlr_wq = NULL;
}
kfree(h); /* init_one 1 */
}
@ -8456,8 +8503,8 @@ static void hpsa_event_monitor_worker(struct work_struct *work)
spin_lock_irqsave(&h->lock, flags);
if (!h->remove_in_progress)
schedule_delayed_work(&h->event_monitor_work,
HPSA_EVENT_MONITOR_INTERVAL);
queue_delayed_work(h->monitor_ctlr_wq, &h->event_monitor_work,
HPSA_EVENT_MONITOR_INTERVAL);
spin_unlock_irqrestore(&h->lock, flags);
}
@ -8502,7 +8549,7 @@ static void hpsa_monitor_ctlr_worker(struct work_struct *work)
spin_lock_irqsave(&h->lock, flags);
if (!h->remove_in_progress)
schedule_delayed_work(&h->monitor_ctlr_work,
queue_delayed_work(h->monitor_ctlr_wq, &h->monitor_ctlr_work,
h->heartbeat_sample_interval);
spin_unlock_irqrestore(&h->lock, flags);
}
@ -8670,6 +8717,12 @@ reinit_after_soft_reset:
goto clean7; /* aer/h */
}
h->monitor_ctlr_wq = hpsa_create_controller_wq(h, "monitor");
if (!h->monitor_ctlr_wq) {
rc = -ENOMEM;
goto clean7;
}
/*
* At this point, the controller is ready to take commands.
* Now, if reset_devices and the hard reset didn't work, try
@ -8799,6 +8852,10 @@ clean1: /* wq/aer/h */
destroy_workqueue(h->rescan_ctlr_wq);
h->rescan_ctlr_wq = NULL;
}
if (h->monitor_ctlr_wq) {
destroy_workqueue(h->monitor_ctlr_wq);
h->monitor_ctlr_wq = NULL;
}
kfree(h);
return rc;
}
@ -8946,6 +9003,7 @@ static void hpsa_remove_one(struct pci_dev *pdev)
cancel_delayed_work_sync(&h->event_monitor_work);
destroy_workqueue(h->rescan_ctlr_wq);
destroy_workqueue(h->resubmit_wq);
destroy_workqueue(h->monitor_ctlr_wq);
hpsa_delete_sas_host(h);

View File

@ -65,6 +65,7 @@ struct hpsa_scsi_dev_t {
u8 physical_device : 1;
u8 expose_device;
u8 removed : 1; /* device is marked for death */
u8 was_removed : 1; /* device actually removed */
#define RAID_CTLR_LUNID "\0\0\0\0\0\0\0\0"
unsigned char device_id[16]; /* from inquiry pg. 0x83 */
u64 sas_address;
@ -75,11 +76,12 @@ struct hpsa_scsi_dev_t {
unsigned char raid_level; /* from inquiry page 0xC1 */
unsigned char volume_offline; /* discovered via TUR or VPD */
u16 queue_depth; /* max queue_depth for this device */
atomic_t reset_cmds_out; /* Count of commands to-be affected */
atomic_t commands_outstanding; /* track commands sent to device */
atomic_t ioaccel_cmds_out; /* Only used for physical devices
* counts commands sent to physical
* device via "ioaccel" path.
*/
bool in_reset;
u32 ioaccel_handle;
u8 active_path_index;
u8 path_map;
@ -174,6 +176,7 @@ struct ctlr_info {
struct CfgTable __iomem *cfgtable;
int interrupts_enabled;
int max_commands;
int last_collision_tag; /* tags are global */
atomic_t commands_outstanding;
# define PERF_MODE_INT 0
# define DOORBELL_INT 1
@ -300,6 +303,7 @@ struct ctlr_info {
int needs_abort_tags_swizzled;
struct workqueue_struct *resubmit_wq;
struct workqueue_struct *rescan_ctlr_wq;
struct workqueue_struct *monitor_ctlr_wq;
atomic_t abort_cmds_available;
wait_queue_head_t event_sync_wait_queue;
struct mutex reset_mutex;

View File

@ -448,7 +448,7 @@ struct CommandList {
struct hpsa_scsi_dev_t *phys_disk;
int abort_pending;
struct hpsa_scsi_dev_t *reset_pending;
struct hpsa_scsi_dev_t *device;
atomic_t refcount; /* Must be last to avoid memset in hpsa_cmd_init() */
} __aligned(COMMANDLIST_ALIGNMENT);

View File

@ -814,7 +814,7 @@ static void ibmvscsi_reset_host(struct ibmvscsi_host_data *hostdata)
atomic_set(&hostdata->request_limit, 0);
purge_requests(hostdata, DID_ERROR);
hostdata->reset_crq = 1;
hostdata->action = IBMVSCSI_HOST_ACTION_RESET;
wake_up(&hostdata->work_wait_q);
}
@ -1165,7 +1165,8 @@ static void login_rsp(struct srp_event_struct *evt_struct)
be32_to_cpu(evt_struct->xfer_iu->srp.login_rsp.req_lim_delta));
/* If we had any pending I/Os, kick them */
scsi_unblock_requests(hostdata->host);
hostdata->action = IBMVSCSI_HOST_ACTION_UNBLOCK;
wake_up(&hostdata->work_wait_q);
}
/**
@ -1783,7 +1784,7 @@ static void ibmvscsi_handle_crq(struct viosrp_crq *crq,
/* We need to re-setup the interpartition connection */
dev_info(hostdata->dev, "Re-enabling adapter!\n");
hostdata->client_migrated = 1;
hostdata->reenable_crq = 1;
hostdata->action = IBMVSCSI_HOST_ACTION_REENABLE;
purge_requests(hostdata, DID_REQUEUE);
wake_up(&hostdata->work_wait_q);
} else {
@ -2036,6 +2037,16 @@ static struct device_attribute ibmvscsi_host_config = {
.show = show_host_config,
};
static int ibmvscsi_host_reset(struct Scsi_Host *shost, int reset_type)
{
struct ibmvscsi_host_data *hostdata = shost_priv(shost);
dev_info(hostdata->dev, "Initiating adapter reset!\n");
ibmvscsi_reset_host(hostdata);
return 0;
}
static struct device_attribute *ibmvscsi_attrs[] = {
&ibmvscsi_host_vhost_loc,
&ibmvscsi_host_vhost_name,
@ -2062,6 +2073,7 @@ static struct scsi_host_template driver_template = {
.eh_host_reset_handler = ibmvscsi_eh_host_reset_handler,
.slave_configure = ibmvscsi_slave_configure,
.change_queue_depth = ibmvscsi_change_queue_depth,
.host_reset = ibmvscsi_host_reset,
.cmd_per_lun = IBMVSCSI_CMDS_PER_LUN_DEFAULT,
.can_queue = IBMVSCSI_MAX_REQUESTS_DEFAULT,
.this_id = -1,
@ -2091,48 +2103,75 @@ static unsigned long ibmvscsi_get_desired_dma(struct vio_dev *vdev)
static void ibmvscsi_do_work(struct ibmvscsi_host_data *hostdata)
{
unsigned long flags;
int rc;
char *action = "reset";
if (hostdata->reset_crq) {
smp_rmb();
hostdata->reset_crq = 0;
spin_lock_irqsave(hostdata->host->host_lock, flags);
switch (hostdata->action) {
case IBMVSCSI_HOST_ACTION_UNBLOCK:
rc = 0;
break;
case IBMVSCSI_HOST_ACTION_RESET:
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
rc = ibmvscsi_reset_crq_queue(&hostdata->queue, hostdata);
spin_lock_irqsave(hostdata->host->host_lock, flags);
if (!rc)
rc = ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0);
vio_enable_interrupts(to_vio_dev(hostdata->dev));
} else if (hostdata->reenable_crq) {
smp_rmb();
break;
case IBMVSCSI_HOST_ACTION_REENABLE:
action = "enable";
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
rc = ibmvscsi_reenable_crq_queue(&hostdata->queue, hostdata);
hostdata->reenable_crq = 0;
spin_lock_irqsave(hostdata->host->host_lock, flags);
if (!rc)
rc = ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0);
} else
break;
case IBMVSCSI_HOST_ACTION_NONE:
default:
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
return;
}
hostdata->action = IBMVSCSI_HOST_ACTION_NONE;
if (rc) {
atomic_set(&hostdata->request_limit, -1);
dev_err(hostdata->dev, "error after %s\n", action);
}
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
scsi_unblock_requests(hostdata->host);
}
static int ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata)
static int __ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata)
{
if (kthread_should_stop())
return 1;
else if (hostdata->reset_crq) {
smp_rmb();
return 1;
} else if (hostdata->reenable_crq) {
smp_rmb();
return 1;
switch (hostdata->action) {
case IBMVSCSI_HOST_ACTION_NONE:
return 0;
case IBMVSCSI_HOST_ACTION_RESET:
case IBMVSCSI_HOST_ACTION_REENABLE:
case IBMVSCSI_HOST_ACTION_UNBLOCK:
default:
break;
}
return 0;
return 1;
}
static int ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata)
{
unsigned long flags;
int rc;
spin_lock_irqsave(hostdata->host->host_lock, flags);
rc = __ibmvscsi_work_to_do(hostdata);
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
return rc;
}
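The reset_crq/reenable_crq flag pair is collapsed into a single action enum, read and written under host_lock, so the worker consumes exactly one pending action per pass. A single-threaded sketch of that dispatch shape (locking elided; the real driver holds host_lock around the reads and the ACTION_NONE store):

#include <stdio.h>

enum host_action {
	ACTION_NONE = 0,
	ACTION_RESET,
	ACTION_REENABLE,
	ACTION_UNBLOCK,
};

static void do_work(enum host_action *action)
{
	switch (*action) {
	case ACTION_RESET:
		printf("reset CRQ, resend init\n");
		break;
	case ACTION_REENABLE:
		printf("re-enable CRQ, resend init\n");
		break;
	case ACTION_UNBLOCK:
		break;			/* nothing to redo, just unblock */
	case ACTION_NONE:
	default:
		return;			/* no pending action */
	}
	*action = ACTION_NONE;		/* consume the action */
	printf("unblock queued requests\n");
}

int main(void)
{
	enum host_action a = ACTION_RESET;

	do_work(&a);
	do_work(&a);	/* second pass: action already consumed, no-op */
	return 0;
}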
static int ibmvscsi_work(void *data)

View File

@ -74,13 +74,19 @@ struct event_pool {
dma_addr_t iu_token;
};
enum ibmvscsi_host_action {
IBMVSCSI_HOST_ACTION_NONE = 0,
IBMVSCSI_HOST_ACTION_RESET,
IBMVSCSI_HOST_ACTION_REENABLE,
IBMVSCSI_HOST_ACTION_UNBLOCK,
};
/* all driver data associated with a host adapter */
struct ibmvscsi_host_data {
struct list_head host_list;
atomic_t request_limit;
int client_migrated;
int reset_crq;
int reenable_crq;
enum ibmvscsi_host_action action;
struct device *dev;
struct event_pool pool;
struct crq_queue queue;

View File

@ -1087,7 +1087,7 @@ static void sci_remote_device_ready_state_enter(struct sci_base_state_machine *s
if (dev->dev_type == SAS_SATA_DEV || (dev->tproto & SAS_PROTOCOL_SATA)) {
sci_change_state(&idev->sm, SCI_STP_DEV_IDLE);
} else if (dev_is_expander(dev)) {
} else if (dev_is_expander(dev->dev_type)) {
sci_change_state(&idev->sm, SCI_SMP_DEV_IDLE);
} else
isci_remote_device_ready(ihost, idev);
@ -1478,7 +1478,7 @@ static enum sci_status isci_remote_device_construct(struct isci_port *iport,
struct domain_device *dev = idev->domain_dev;
enum sci_status status;
if (dev->parent && dev_is_expander(dev->parent))
if (dev->parent && dev_is_expander(dev->parent->dev_type))
status = sci_remote_device_ea_construct(iport, idev);
else
status = sci_remote_device_da_construct(iport, idev);

View File

@ -295,11 +295,6 @@ static inline struct isci_remote_device *rnc_to_dev(struct sci_remote_node_conte
return idev;
}
static inline bool dev_is_expander(struct domain_device *dev)
{
return dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE;
}
static inline void sci_remote_device_decrement_request_count(struct isci_remote_device *idev)
{
/* XXX delete this voodoo when converting to the top-level device

View File

@ -224,7 +224,7 @@ static void scu_ssp_request_construct_task_context(
idev = ireq->target_device;
iport = idev->owning_port;
/* Fill in the TC with the its required data */
/* Fill in the TC with its required data */
task_context->abort = 0;
task_context->priority = 0;
task_context->initiator_request = 1;
@ -506,7 +506,7 @@ static void scu_sata_request_construct_task_context(
idev = ireq->target_device;
iport = idev->owning_port;
/* Fill in the TC with the its required data */
/* Fill in the TC with its required data */
task_context->abort = 0;
task_context->priority = SCU_TASK_PRIORITY_NORMAL;
task_context->initiator_request = 1;
@ -3101,7 +3101,7 @@ sci_io_request_construct(struct isci_host *ihost,
/* pass */;
else if (dev_is_sata(dev))
memset(&ireq->stp.cmd, 0, sizeof(ireq->stp.cmd));
else if (dev_is_expander(dev))
else if (dev_is_expander(dev->dev_type))
/* pass */;
else
return SCI_FAILURE_UNSUPPORTED_PROTOCOL;
@ -3235,7 +3235,7 @@ sci_io_request_construct_smp(struct device *dev,
iport = idev->owning_port;
/*
* Fill in the TC with the its required data
* Fill in the TC with its required data
* 00h
*/
task_context->priority = 0;

View File

@ -511,7 +511,7 @@ int isci_task_abort_task(struct sas_task *task)
"%s: dev = %p (%s%s), task = %p, old_request == %p\n",
__func__, idev,
(dev_is_sata(task->dev) ? "STP/SATA"
: ((dev_is_expander(task->dev))
: ((dev_is_expander(task->dev->dev_type))
? "SMP"
: "SSP")),
((idev) ? ((test_bit(IDEV_GONE, &idev->flags))

View File

@ -8,8 +8,6 @@
* Copyright (C) 2006 Red Hat, Inc. All rights reserved.
* maintained by open-iscsi@googlegroups.com
*
* See the file COPYING included with this distribution for more details.
*
* Credits:
* Christoph Hellwig
* FUJITA Tomonori

View File

@ -1,25 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Serial Attached SCSI (SAS) Discover process
*
* Copyright (C) 2005 Adaptec, Inc. All rights reserved.
* Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
*
* This file is licensed under GPLv2.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include <linux/scatterlist.h>
@ -309,7 +293,7 @@ void sas_free_device(struct kref *kref)
dev->phy = NULL;
/* remove the phys and ports, everything else should be gone */
if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
if (dev_is_expander(dev->dev_type))
kfree(dev->ex_dev.ex_phy);
if (dev_is_sata(dev) && dev->sata_dev.ap) {
@ -519,8 +503,7 @@ static void sas_revalidate_domain(struct work_struct *work)
pr_debug("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id,
task_pid_nr(current));
if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE ||
ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE))
if (ddev && dev_is_expander(ddev->dev_type))
res = sas_ex_revalidate_domain(ddev);
pr_debug("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n",

View File

@ -1,25 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Serial Attached SCSI (SAS) Event processing
*
* Copyright (C) 2005 Adaptec, Inc. All rights reserved.
* Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
*
* This file is licensed under GPLv2.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include <linux/export.h>

View File

@ -1,3 +1,4 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Serial Attached SCSI (SAS) Expander discovery and configuration
*
@ -5,21 +6,6 @@
* Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
*
* This file is licensed under GPLv2.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include <linux/scatterlist.h>
@ -1106,7 +1092,7 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
SAS_ADDR(dev->sas_addr),
phy_id);
sas_ex_disable_phy(dev, phy_id);
break;
return res;
} else
memcpy(dev->port->disc.fanout_sas_addr,
ex_phy->attached_sas_addr, SAS_ADDR_SIZE);
@ -1118,27 +1104,9 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
break;
}
if (child) {
int i;
for (i = 0; i < ex->num_phys; i++) {
if (ex->ex_phy[i].phy_state == PHY_VACANT ||
ex->ex_phy[i].phy_state == PHY_NOT_PRESENT)
continue;
/*
* Due to races, the phy might not get added to the
* wide port, so we add the phy to the wide port here.
*/
if (SAS_ADDR(ex->ex_phy[i].attached_sas_addr) ==
SAS_ADDR(child->sas_addr)) {
ex->ex_phy[i].phy_state= PHY_DEVICE_DISCOVERED;
if (sas_ex_join_wide_port(dev, i))
pr_debug("Attaching ex phy%02d to wide port %016llx\n",
i, SAS_ADDR(ex->ex_phy[i].attached_sas_addr));
}
}
}
if (!child)
pr_notice("ex %016llx phy%02d failed to discover\n",
SAS_ADDR(dev->sas_addr), phy_id);
return res;
}
@ -1154,8 +1122,7 @@ static int sas_find_sub_addr(struct domain_device *dev, u8 *sub_addr)
phy->phy_state == PHY_NOT_PRESENT)
continue;
if ((phy->attached_dev_type == SAS_EDGE_EXPANDER_DEVICE ||
phy->attached_dev_type == SAS_FANOUT_EXPANDER_DEVICE) &&
if (dev_is_expander(phy->attached_dev_type) &&
phy->routing_attr == SUBTRACTIVE_ROUTING) {
memcpy(sub_addr, phy->attached_sas_addr, SAS_ADDR_SIZE);
@ -1173,8 +1140,7 @@ static int sas_check_level_subtractive_boundary(struct domain_device *dev)
u8 sub_addr[SAS_ADDR_SIZE] = {0, };
list_for_each_entry(child, &ex->children, siblings) {
if (child->dev_type != SAS_EDGE_EXPANDER_DEVICE &&
child->dev_type != SAS_FANOUT_EXPANDER_DEVICE)
if (!dev_is_expander(child->dev_type))
continue;
if (sub_addr[0] == 0) {
sas_find_sub_addr(child, sub_addr);
@ -1259,8 +1225,7 @@ static int sas_check_ex_subtractive_boundary(struct domain_device *dev)
phy->phy_state == PHY_NOT_PRESENT)
continue;
if ((phy->attached_dev_type == SAS_FANOUT_EXPANDER_DEVICE ||
phy->attached_dev_type == SAS_EDGE_EXPANDER_DEVICE) &&
if (dev_is_expander(phy->attached_dev_type) &&
phy->routing_attr == SUBTRACTIVE_ROUTING) {
if (!sub_sas_addr)
@ -1356,8 +1321,7 @@ static int sas_check_parent_topology(struct domain_device *child)
if (!child->parent)
return 0;
if (child->parent->dev_type != SAS_EDGE_EXPANDER_DEVICE &&
child->parent->dev_type != SAS_FANOUT_EXPANDER_DEVICE)
if (!dev_is_expander(child->parent->dev_type))
return 0;
parent_ex = &child->parent->ex_dev;
@ -1653,8 +1617,7 @@ static int sas_ex_level_discovery(struct asd_sas_port *port, const int level)
struct domain_device *dev;
list_for_each_entry(dev, &port->dev_list, dev_list_node) {
if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
if (dev_is_expander(dev->dev_type)) {
struct sas_expander_device *ex =
rphy_to_expander_device(dev->rphy);
@ -1886,7 +1849,7 @@ static int sas_find_bcast_dev(struct domain_device *dev,
SAS_ADDR(dev->sas_addr));
}
list_for_each_entry(ch, &ex->children, siblings) {
if (ch->dev_type == SAS_EDGE_EXPANDER_DEVICE || ch->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
if (dev_is_expander(ch->dev_type)) {
res = sas_find_bcast_dev(ch, src_dev);
if (*src_dev)
return res;
@ -1903,8 +1866,7 @@ static void sas_unregister_ex_tree(struct asd_sas_port *port, struct domain_devi
list_for_each_entry_safe(child, n, &ex->children, siblings) {
set_bit(SAS_DEV_GONE, &child->state);
if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
if (dev_is_expander(child->dev_type))
sas_unregister_ex_tree(port, child);
else
sas_unregister_dev(port, child);
@ -1924,8 +1886,7 @@ static void sas_unregister_devs_sas_addr(struct domain_device *parent,
if (SAS_ADDR(child->sas_addr) ==
SAS_ADDR(phy->attached_sas_addr)) {
set_bit(SAS_DEV_GONE, &child->state);
if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
if (dev_is_expander(child->dev_type))
sas_unregister_ex_tree(parent->port, child);
else
sas_unregister_dev(parent->port, child);
@ -1954,8 +1915,7 @@ static int sas_discover_bfs_by_root_level(struct domain_device *root,
int res = 0;
list_for_each_entry(child, &ex_root->children, siblings) {
if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
child->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
if (dev_is_expander(child->dev_type)) {
struct sas_expander_device *ex =
rphy_to_expander_device(child->rphy);
@ -2008,8 +1968,7 @@ static int sas_discover_new(struct domain_device *dev, int phy_id)
list_for_each_entry(child, &dev->ex_dev.children, siblings) {
if (SAS_ADDR(child->sas_addr) ==
SAS_ADDR(ex_phy->attached_sas_addr)) {
if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
if (dev_is_expander(child->dev_type))
res = sas_discover_bfs_by_root(child);
break;
}

View File

@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0-or-later
// SPDX-License-Identifier: GPL-2.0-only
/*
* Serial Attached SCSI (SAS) Transport Layer initialization
*

View File

@ -1,4 +1,4 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Serial Attached SCSI (SAS) class internal header file
*

View File

@ -1,25 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Serial Attached SCSI (SAS) Phy class
*
* Copyright (C) 2005 Adaptec, Inc. All rights reserved.
* Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
*
* This file is licensed under GPLv2.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include "sas_internal.h"

View File

@ -1,25 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Serial Attached SCSI (SAS) Port class
*
* Copyright (C) 2005 Adaptec, Inc. All rights reserved.
* Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
*
* This file is licensed under GPLv2.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include "sas_internal.h"
@ -70,7 +54,7 @@ static void sas_resume_port(struct asd_sas_phy *phy)
continue;
}
if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
if (dev_is_expander(dev->dev_type)) {
dev->ex_dev.ex_change_count = -1;
for (i = 0; i < dev->ex_dev.num_phys; i++) {
struct ex_phy *phy = &dev->ex_dev.ex_phy[i];
@ -195,7 +179,7 @@ static void sas_form_port(struct asd_sas_phy *phy)
sas_discover_event(phy->port, DISCE_DISCOVER_DOMAIN);
/* Only insert a revalidate event after initial discovery */
if (port_dev && sas_dev_type_is_expander(port_dev->dev_type)) {
if (port_dev && dev_is_expander(port_dev->dev_type)) {
struct expander_device *ex_dev = &port_dev->ex_dev;
ex_dev->ex_change_count = -1;
@ -264,7 +248,7 @@ void sas_deform_port(struct asd_sas_phy *phy, int gone)
spin_unlock_irqrestore(&sas_ha->phy_port_lock, flags);
/* Only insert revalidate event if the port still has members */
if (port->port && dev && sas_dev_type_is_expander(dev->dev_type)) {
if (port->port && dev && dev_is_expander(dev->dev_type)) {
struct expander_device *ex_dev = &dev->ex_dev;
ex_dev->ex_change_count = -1;

View File

@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0-or-later
// SPDX-License-Identifier: GPL-2.0-only
/*
* Serial Attached SCSI (SAS) class SCSI Host glue.
*

View File

@ -4097,9 +4097,9 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr,
}
if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC ||
phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) &&
val != FLAGS_TOPOLOGY_MODE_PT_PT) {
val == 4) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
"3114 Only non-FC-AL mode is supported\n");
"3114 Loop mode not supported\n");
return -EINVAL;
}
phba->cfg_topology = val;
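
The val == 4 test reads less obviously than the symbolic name it replaces. For reference, the topology settings this sysfs attribute accepts are (quoted here from memory of lpfc_hw.h, so treat the exact names and values as an assumption):

#define FLAGS_TOPOLOGY_MODE_LOOP_PT	0x00	/* try loop, fall back to pt-pt */
#define FLAGS_TOPOLOGY_MODE_PT_PT	0x02	/* point-to-point only */
#define FLAGS_TOPOLOGY_MODE_LOOP	0x04	/* loop (FC-AL) only */
#define FLAGS_TOPOLOGY_MODE_PT_LOOP	0x06	/* try pt-pt, fall back to loop */

The old test rejected anything that was not pure pt-pt on G6/G7 adapters; the new one rejects only the loop-only setting (4), which is what the reworded "Loop mode not supported" message describes.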
@ -5180,7 +5180,8 @@ lpfc_cq_max_proc_limit_store(struct device *dev, struct device_attribute *attr,
/* set the values on the cq's */
for (i = 0; i < phba->cfg_irq_chann; i++) {
eq = phba->sli4_hba.hdwq[i].hba_eq;
/* Get the EQ corresponding to the IRQ vector */
eq = phba->sli4_hba.hba_eq_hdl[i].eq;
if (!eq)
continue;
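
This hunk switches the EQ lookup from the per-hardware-queue array (hdwq[i].hba_eq) to the per-IRQ-vector handle (hba_eq_hdl[i].eq), so the loop bound changes meaning from "one pass per hdwq" to "one pass per vector". A minimal sketch of the handle this assumes; the real structure has more fields, so treat this as illustrative only:

struct lpfc_hba_eq_hdl {
	uint32_t idx;		/* IRQ vector index */
	struct lpfc_hba *phba;
	struct lpfc_queue *eq;	/* EQ serviced by this vector */
};

Iterating 0..cfg_irq_chann-1 over this array visits each EQ exactly once, even when several hardware queues share a vector.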
@ -5301,35 +5302,44 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
len += scnprintf(
buf + len, PAGE_SIZE - len,
"CPU %02d hdwq None "
"physid %d coreid %d ht %d\n",
"physid %d coreid %d ht %d ua %d\n",
phba->sli4_hba.curr_disp_cpu,
cpup->phys_id,
cpup->core_id, cpup->hyper);
cpup->phys_id, cpup->core_id,
(cpup->flag & LPFC_CPU_MAP_HYPER),
(cpup->flag & LPFC_CPU_MAP_UNASSIGN));
else
len += scnprintf(
buf + len, PAGE_SIZE - len,
"CPU %02d EQ %04d hdwq %04d "
"physid %d coreid %d ht %d\n",
"physid %d coreid %d ht %d ua %d\n",
phba->sli4_hba.curr_disp_cpu,
cpup->eq, cpup->hdwq, cpup->phys_id,
cpup->core_id, cpup->hyper);
cpup->core_id,
(cpup->flag & LPFC_CPU_MAP_HYPER),
(cpup->flag & LPFC_CPU_MAP_UNASSIGN));
} else {
if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
len += scnprintf(
buf + len, PAGE_SIZE - len,
"CPU %02d hdwq None "
"physid %d coreid %d ht %d IRQ %d\n",
"physid %d coreid %d ht %d ua %d IRQ %d\n",
phba->sli4_hba.curr_disp_cpu,
cpup->phys_id,
cpup->core_id, cpup->hyper, cpup->irq);
cpup->core_id,
(cpup->flag & LPFC_CPU_MAP_HYPER),
(cpup->flag & LPFC_CPU_MAP_UNASSIGN),
cpup->irq);
else
len += scnprintf(
buf + len, PAGE_SIZE - len,
"CPU %02d EQ %04d hdwq %04d "
"physid %d coreid %d ht %d IRQ %d\n",
"physid %d coreid %d ht %d ua %d IRQ %d\n",
phba->sli4_hba.curr_disp_cpu,
cpup->eq, cpup->hdwq, cpup->phys_id,
cpup->core_id, cpup->hyper, cpup->irq);
cpup->core_id,
(cpup->flag & LPFC_CPU_MAP_HYPER),
(cpup->flag & LPFC_CPU_MAP_UNASSIGN),
cpup->irq);
}
phba->sli4_hba.curr_disp_cpu++;
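
The new ht and ua columns are derived from per-CPU flag bits instead of the old dedicated hyper field. A sketch of the flag definitions this assumes (bit values are illustrative; the real ones live in lpfc_sli4.h):

#define LPFC_CPU_MAP_HYPER	0x1	/* CPU is a hyperthread sibling */
#define LPFC_CPU_MAP_UNASSIGN	0x2	/* no IRQ affinitized by the kernel */
#define LPFC_CPU_FIRST_IRQ	0x4	/* first CPU bound to this IRQ vector */

Masking cpup->flag with each bit yields the zero/non-zero values printed in the ht and ua columns.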

View File

@ -5741,7 +5741,7 @@ lpfc_get_trunk_info(struct bsg_job *job)
event_reply->port_speed = phba->sli4_hba.link_state.speed / 1000;
event_reply->logical_speed =
phba->sli4_hba.link_state.logical_speed / 100;
phba->sli4_hba.link_state.logical_speed / 1000;
job_error:
bsg_reply->result = rc;
bsg_job_done(job, bsg_reply->result,
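
The divisor fix here is a unit correction: link_state.logical_speed is presumably kept in Mb/s, so a 32G logical link stored as 32000 now reports 32 after the /1000, where the old /100 would have reported 320.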

View File

@ -572,7 +572,8 @@ void lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba);
void lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba,
struct lpfc_sli_ring *pring, struct lpfc_iocbq *piocb);
void lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba, uint32_t idx,
struct rqb_dmabuf *nvmebuf, uint64_t isr_ts);
struct rqb_dmabuf *nvmebuf, uint64_t isr_ts,
uint8_t cqflag);
void lpfc_nvme_mod_param_dep(struct lpfc_hba *phba);
void lpfc_nvme_abort_fcreq_cmpl(struct lpfc_hba *phba,
struct lpfc_iocbq *cmdiocb,

View File

@ -2358,6 +2358,7 @@ static int
lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad)
{
struct lpfc_hba *phba = vport->phba;
struct lpfc_fdmi_attr_entry *ae;
uint32_t size;
@ -2366,9 +2367,13 @@ lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
if (vport->nvmei_support || vport->phba->nvmet_support)
ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
/* Check to see if Firmware supports NVME and on physical port */
if ((phba->sli_rev == LPFC_SLI_REV4) && (vport == phba->pport) &&
phba->sli4_hba.pc_sli4_params.nvme)
ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
size = FOURBYTES + 32;
ad->AttrLen = cpu_to_be16(size);
ad->AttrType = cpu_to_be16(RPRT_SUPPORTED_FC4_TYPES);
@ -2680,9 +2685,12 @@ lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
/* Check to see if NVME is configured or not */
if (vport->phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
ae->un.AttrTypes[6] = 0x1; /* Type 0x28 - NVME */
ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
size = FOURBYTES + 32;
ad->AttrLen = cpu_to_be16(size);
ad->AttrType = cpu_to_be16(RPRT_ACTIVE_FC4_TYPES);

View File

@ -4308,6 +4308,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
if ((rspiocb->iocb.ulpStatus == 0)
&& (ndlp->nlp_flag & NLP_ACC_REGLOGIN)) {
if (!lpfc_unreg_rpi(vport, ndlp) &&
(!(vport->fc_flag & FC_PT2PT)) &&
(ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
ndlp->nlp_state == NLP_STE_REG_LOGIN_ISSUE)) {
lpfc_printf_vlog(vport, KERN_INFO,

View File

@ -72,7 +72,7 @@ unsigned long _dump_buf_dif_order;
spinlock_t _dump_buf_lock;
/* Used when mapping IRQ vectors in a driver centric manner */
uint32_t lpfc_present_cpu;
static uint32_t lpfc_present_cpu;
static void lpfc_get_hba_model_desc(struct lpfc_hba *, uint8_t *, uint8_t *);
static int lpfc_post_rcv_buf(struct lpfc_hba *);
@ -93,8 +93,8 @@ static void lpfc_sli4_cq_event_release_all(struct lpfc_hba *);
static void lpfc_sli4_disable_intr(struct lpfc_hba *);
static uint32_t lpfc_sli4_enable_intr(struct lpfc_hba *, uint32_t);
static void lpfc_sli4_oas_verify(struct lpfc_hba *phba);
static uint16_t lpfc_find_eq_handle(struct lpfc_hba *, uint16_t);
static uint16_t lpfc_find_cpu_handle(struct lpfc_hba *, uint16_t, int);
static void lpfc_setup_bg(struct lpfc_hba *, struct Scsi_Host *);
static struct scsi_transport_template *lpfc_transport_template = NULL;
static struct scsi_transport_template *lpfc_vport_transport_template = NULL;
@ -1274,8 +1274,10 @@ lpfc_hb_eq_delay_work(struct work_struct *work)
if (!eqcnt)
goto requeue;
/* Loop thru all IRQ vectors */
for (i = 0; i < phba->cfg_irq_chann; i++) {
eq = phba->sli4_hba.hdwq[i].hba_eq;
/* Get the EQ corresponding to the IRQ vector */
eq = phba->sli4_hba.hba_eq_hdl[i].eq;
if (eq && eqcnt[eq->last_cpu] < 2)
eqcnt[eq->last_cpu]++;
continue;
@ -4114,14 +4116,13 @@ lpfc_new_io_buf(struct lpfc_hba *phba, int num_to_alloc)
* pci bus space for an I/O. The DMA buffer includes the
* number of SGE's necessary to support the sg_tablesize.
*/
lpfc_ncmd->data = dma_pool_alloc(phba->lpfc_sg_dma_buf_pool,
GFP_KERNEL,
&lpfc_ncmd->dma_handle);
lpfc_ncmd->data = dma_pool_zalloc(phba->lpfc_sg_dma_buf_pool,
GFP_KERNEL,
&lpfc_ncmd->dma_handle);
if (!lpfc_ncmd->data) {
kfree(lpfc_ncmd);
break;
}
memset(lpfc_ncmd->data, 0, phba->cfg_sg_dma_buf_size);
/*
* 4K Page alignment is CRITICAL to BlockGuard, double check
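
The allocation change folds the alloc-plus-memset pair into a single zeroing call: dma_pool_zalloc() returns memory that is already cleared. The generic pattern, with pool, buf_size and dma_handle as placeholders rather than lpfc names:

/* before: allocate, then clear by hand */
buf = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
if (buf)
	memset(buf, 0, buf_size);

/* after: one call, buffer arrives zeroed */
buf = dma_pool_zalloc(pool, GFP_KERNEL, &dma_handle);

Besides saving a call, this removes the window in which the DMA buffer exists uninitialized.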
@ -4347,6 +4348,9 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
timer_setup(&vport->delayed_disc_tmo, lpfc_delayed_disc_tmo, 0);
if (phba->sli3_options & LPFC_SLI3_BG_ENABLED)
lpfc_setup_bg(phba, shost);
error = scsi_add_host_with_dma(shost, dev, &phba->pcidev->dev);
if (error)
goto out_put_shost;
@ -5055,7 +5059,7 @@ lpfc_update_trunk_link_status(struct lpfc_hba *phba,
bf_get(lpfc_acqe_fc_la_speed, acqe_fc));
phba->sli4_hba.link_state.logical_speed =
bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc);
bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc) * 10;
/* We got FC link speed, convert to fc_linkspeed (READ_TOPOLOGY) */
phba->fc_linkspeed =
lpfc_async_link_speed_to_read_top(
@ -5158,8 +5162,14 @@ lpfc_sli4_async_fc_evt(struct lpfc_hba *phba, struct lpfc_acqe_fc_la *acqe_fc)
bf_get(lpfc_acqe_fc_la_port_number, acqe_fc);
phba->sli4_hba.link_state.fault =
bf_get(lpfc_acqe_link_fault, acqe_fc);
phba->sli4_hba.link_state.logical_speed =
if (bf_get(lpfc_acqe_fc_la_att_type, acqe_fc) ==
LPFC_FC_LA_TYPE_LINK_DOWN)
phba->sli4_hba.link_state.logical_speed = 0;
else if (!phba->sli4_hba.conf_trunk)
phba->sli4_hba.link_state.logical_speed =
bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc) * 10;
lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"2896 Async FC event - Speed:%dGBaud Topology:x%x "
"LA Type:x%x Port Type:%d Port Number:%d Logical speed:"
@ -6551,6 +6561,8 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
spin_lock_init(&phba->sli4_hba.abts_nvmet_buf_list_lock);
INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_nvmet_ctx_list);
INIT_LIST_HEAD(&phba->sli4_hba.lpfc_nvmet_io_wait_list);
spin_lock_init(&phba->sli4_hba.t_active_list_lock);
INIT_LIST_HEAD(&phba->sli4_hba.t_active_ctx_list);
}
/* This abort list used by worker thread */
@ -7660,8 +7672,6 @@ lpfc_post_init_setup(struct lpfc_hba *phba)
*/
shost = pci_get_drvdata(phba->pcidev);
shost->can_queue = phba->cfg_hba_queue_depth - 10;
if (phba->sli3_options & LPFC_SLI3_BG_ENABLED)
lpfc_setup_bg(phba, shost);
lpfc_host_attrib_init(shost);
@ -8740,8 +8750,10 @@ int
lpfc_sli4_queue_create(struct lpfc_hba *phba)
{
struct lpfc_queue *qdesc;
int idx, eqidx, cpu;
int idx, cpu, eqcpu;
struct lpfc_sli4_hdw_queue *qp;
struct lpfc_vector_map_info *cpup;
struct lpfc_vector_map_info *eqcpup;
struct lpfc_eq_intr_info *eqi;
/*
@ -8826,40 +8838,60 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
INIT_LIST_HEAD(&phba->sli4_hba.lpfc_wq_list);
/* Create HBA Event Queues (EQs) */
for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
/* determine EQ affinity */
eqidx = lpfc_find_eq_handle(phba, idx);
cpu = lpfc_find_cpu_handle(phba, eqidx, LPFC_FIND_BY_EQ);
/*
* If there are more Hardware Queues than available
* EQs, multiple Hardware Queues may share a common EQ.
for_each_present_cpu(cpu) {
/* We only want to create 1 EQ per vector, even though
* multiple CPUs might be using that vector, so only
* select the CPUs that are LPFC_CPU_FIRST_IRQ.
*/
if (idx >= phba->cfg_irq_chann) {
/* Share an existing EQ */
phba->sli4_hba.hdwq[idx].hba_eq =
phba->sli4_hba.hdwq[eqidx].hba_eq;
cpup = &phba->sli4_hba.cpu_map[cpu];
if (!(cpup->flag & LPFC_CPU_FIRST_IRQ))
continue;
}
/* Create an EQ */
/* Get a ptr to the Hardware Queue associated with this CPU */
qp = &phba->sli4_hba.hdwq[cpup->hdwq];
/* Allocate an EQ */
qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
phba->sli4_hba.eq_esize,
phba->sli4_hba.eq_ecount, cpu);
if (!qdesc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0497 Failed allocate EQ (%d)\n", idx);
"0497 Failed allocate EQ (%d)\n",
cpup->hdwq);
goto out_error;
}
qdesc->qe_valid = 1;
qdesc->hdwq = idx;
/* Save the CPU this EQ is affinitised to */
qdesc->chann = cpu;
phba->sli4_hba.hdwq[idx].hba_eq = qdesc;
qdesc->hdwq = cpup->hdwq;
qdesc->chann = cpu; /* First CPU this EQ is affinitised to */
qdesc->last_cpu = qdesc->chann;
/* Save the allocated EQ in the Hardware Queue */
qp->hba_eq = qdesc;
eqi = per_cpu_ptr(phba->sli4_hba.eq_info, qdesc->last_cpu);
list_add(&qdesc->cpu_list, &eqi->list);
}
/* Now we need to populate the other Hardware Queues that share
* an IRQ vector, with the associated EQ ptr.
*/
for_each_present_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
/* Check for EQ already allocated in previous loop */
if (cpup->flag & LPFC_CPU_FIRST_IRQ)
continue;
/* Check for multiple CPUs per hdwq */
qp = &phba->sli4_hba.hdwq[cpup->hdwq];
if (qp->hba_eq)
continue;
/* We need to share an EQ for this hdwq */
eqcpu = lpfc_find_cpu_handle(phba, cpup->eq, LPFC_FIND_BY_EQ);
eqcpup = &phba->sli4_hba.cpu_map[eqcpu];
qp->hba_eq = phba->sli4_hba.hdwq[eqcpup->hdwq].hba_eq;
}
/* Allocate SCSI SLI4 CQ/WQs */
for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
@ -9122,23 +9154,31 @@ static inline void
lpfc_sli4_release_hdwq(struct lpfc_hba *phba)
{
struct lpfc_sli4_hdw_queue *hdwq;
struct lpfc_queue *eq;
uint32_t idx;
hdwq = phba->sli4_hba.hdwq;
for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
if (idx < phba->cfg_irq_chann)
lpfc_sli4_queue_free(hdwq[idx].hba_eq);
hdwq[idx].hba_eq = NULL;
/* Loop thru all Hardware Queues */
for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
/* Free the CQ/WQ corresponding to the Hardware Queue */
lpfc_sli4_queue_free(hdwq[idx].fcp_cq);
lpfc_sli4_queue_free(hdwq[idx].nvme_cq);
lpfc_sli4_queue_free(hdwq[idx].fcp_wq);
lpfc_sli4_queue_free(hdwq[idx].nvme_wq);
hdwq[idx].hba_eq = NULL;
hdwq[idx].fcp_cq = NULL;
hdwq[idx].nvme_cq = NULL;
hdwq[idx].fcp_wq = NULL;
hdwq[idx].nvme_wq = NULL;
}
/* Loop thru all IRQ vectors */
for (idx = 0; idx < phba->cfg_irq_chann; idx++) {
/* Free the EQ corresponding to the IRQ vector */
eq = phba->sli4_hba.hba_eq_hdl[idx].eq;
lpfc_sli4_queue_free(eq);
phba->sli4_hba.hba_eq_hdl[idx].eq = NULL;
}
}
/**
@ -9316,16 +9356,17 @@ static void
lpfc_setup_cq_lookup(struct lpfc_hba *phba)
{
struct lpfc_queue *eq, *childq;
struct lpfc_sli4_hdw_queue *qp;
int qidx;
qp = phba->sli4_hba.hdwq;
memset(phba->sli4_hba.cq_lookup, 0,
(sizeof(struct lpfc_queue *) * (phba->sli4_hba.cq_max + 1)));
/* Loop thru all IRQ vectors */
for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
eq = qp[qidx].hba_eq;
/* Get the EQ corresponding to the IRQ vector */
eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
if (!eq)
continue;
/* Loop through all CQs associated with that EQ */
list_for_each_entry(childq, &eq->child_list, list) {
if (childq->queue_id > phba->sli4_hba.cq_max)
continue;
@ -9354,9 +9395,10 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
{
uint32_t shdr_status, shdr_add_status;
union lpfc_sli4_cfg_shdr *shdr;
struct lpfc_vector_map_info *cpup;
struct lpfc_sli4_hdw_queue *qp;
LPFC_MBOXQ_t *mboxq;
int qidx;
int qidx, cpu;
uint32_t length, usdelay;
int rc = -ENOMEM;
@ -9417,32 +9459,55 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
rc = -ENOMEM;
goto out_error;
}
/* Loop thru all IRQ vectors */
for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
if (!qp[qidx].hba_eq) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0522 Fast-path EQ (%d) not "
"allocated\n", qidx);
rc = -ENOMEM;
goto out_destroy;
/* Create HBA Event Queues (EQs) in order */
for_each_present_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
/* Look for the CPU that's using that vector with
* LPFC_CPU_FIRST_IRQ set.
*/
if (!(cpup->flag & LPFC_CPU_FIRST_IRQ))
continue;
if (qidx != cpup->eq)
continue;
/* Create an EQ for that vector */
rc = lpfc_eq_create(phba, qp[cpup->hdwq].hba_eq,
phba->cfg_fcp_imax);
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0523 Failed setup of fast-path"
" EQ (%d), rc = 0x%x\n",
cpup->eq, (uint32_t)rc);
goto out_destroy;
}
/* Save the EQ for that vector in the hba_eq_hdl */
phba->sli4_hba.hba_eq_hdl[cpup->eq].eq =
qp[cpup->hdwq].hba_eq;
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"2584 HBA EQ setup: queue[%d]-id=%d\n",
cpup->eq,
qp[cpup->hdwq].hba_eq->queue_id);
}
rc = lpfc_eq_create(phba, qp[qidx].hba_eq,
phba->cfg_fcp_imax);
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0523 Failed setup of fast-path EQ "
"(%d), rc = 0x%x\n", qidx,
(uint32_t)rc);
goto out_destroy;
}
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"2584 HBA EQ setup: queue[%d]-id=%d\n", qidx,
qp[qidx].hba_eq->queue_id);
}
/* Loop thru all Hardware Queues */
if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) {
for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
cpu = lpfc_find_cpu_handle(phba, qidx,
LPFC_FIND_BY_HDWQ);
cpup = &phba->sli4_hba.cpu_map[cpu];
/* Create the CQ/WQ corresponding to the
* Hardware Queue
*/
rc = lpfc_create_wq_cq(phba,
qp[qidx].hba_eq,
phba->sli4_hba.hdwq[cpup->hdwq].hba_eq,
qp[qidx].nvme_cq,
qp[qidx].nvme_wq,
&phba->sli4_hba.hdwq[qidx].nvme_cq_map,
@ -9458,8 +9523,12 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
}
for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
cpu = lpfc_find_cpu_handle(phba, qidx, LPFC_FIND_BY_HDWQ);
cpup = &phba->sli4_hba.cpu_map[cpu];
/* Create the CQ/WQ corresponding to the Hardware Queue */
rc = lpfc_create_wq_cq(phba,
qp[qidx].hba_eq,
phba->sli4_hba.hdwq[cpup->hdwq].hba_eq,
qp[qidx].fcp_cq,
qp[qidx].fcp_wq,
&phba->sli4_hba.hdwq[qidx].fcp_cq_map,
@ -9711,6 +9780,7 @@ void
lpfc_sli4_queue_unset(struct lpfc_hba *phba)
{
struct lpfc_sli4_hdw_queue *qp;
struct lpfc_queue *eq;
int qidx;
/* Unset mailbox command work queue */
@ -9762,14 +9832,20 @@ lpfc_sli4_queue_unset(struct lpfc_hba *phba)
/* Unset fast-path SLI4 queues */
if (phba->sli4_hba.hdwq) {
/* Loop thru all Hardware Queues */
for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
/* Destroy the CQ/WQ corresponding to Hardware Queue */
qp = &phba->sli4_hba.hdwq[qidx];
lpfc_wq_destroy(phba, qp->fcp_wq);
lpfc_wq_destroy(phba, qp->nvme_wq);
lpfc_cq_destroy(phba, qp->fcp_cq);
lpfc_cq_destroy(phba, qp->nvme_cq);
if (qidx < phba->cfg_irq_chann)
lpfc_eq_destroy(phba, qp->hba_eq);
}
/* Loop thru all IRQ vectors */
for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
/* Destroy the EQ corresponding to the IRQ vector */
eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
lpfc_eq_destroy(phba, eq);
}
}
@ -10559,11 +10635,12 @@ lpfc_sli_disable_intr(struct lpfc_hba *phba)
}
/**
* lpfc_find_cpu_handle - Find the CPU that corresponds to the specified EQ
* lpfc_find_cpu_handle - Find the CPU that corresponds to the specified Queue
* @phba: pointer to lpfc hba data structure.
* @id: EQ vector index or Hardware Queue index
* @match: LPFC_FIND_BY_EQ = match by EQ
* LPFC_FIND_BY_HDWQ = match by Hardware Queue
* Return the CPU that matches the selection criteria
*/
static uint16_t
lpfc_find_cpu_handle(struct lpfc_hba *phba, uint16_t id, int match)
@ -10571,40 +10648,27 @@ lpfc_find_cpu_handle(struct lpfc_hba *phba, uint16_t id, int match)
struct lpfc_vector_map_info *cpup;
int cpu;
/* Find the desired phys_id for the specified EQ */
/* Loop through all CPUs */
for_each_present_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
/* If we are matching by EQ, there may be multiple CPUs using
* the same vector, so select the one with
* LPFC_CPU_FIRST_IRQ set.
*/
if ((match == LPFC_FIND_BY_EQ) &&
(cpup->flag & LPFC_CPU_FIRST_IRQ) &&
(cpup->irq != LPFC_VECTOR_MAP_EMPTY) &&
(cpup->eq == id))
return cpu;
/* If matching by HDWQ, select the first CPU that matches */
if ((match == LPFC_FIND_BY_HDWQ) && (cpup->hdwq == id))
return cpu;
}
return 0;
}
/**
* lpfc_find_eq_handle - Find the EQ that corresponds to the specified
* Hardware Queue
* @phba: pointer to lpfc hba data structure.
* @hdwq: Hardware Queue index
*/
static uint16_t
lpfc_find_eq_handle(struct lpfc_hba *phba, uint16_t hdwq)
{
struct lpfc_vector_map_info *cpup;
int cpu;
/* Find the desired phys_id for the specified EQ */
for_each_present_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
if (cpup->hdwq == hdwq)
return cpup->eq;
}
return 0;
}
#ifdef CONFIG_X86
/**
* lpfc_find_hyper - Determine if the CPU map entry is hyper-threaded
@ -10645,24 +10709,31 @@ lpfc_find_hyper(struct lpfc_hba *phba, int cpu,
static void
lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
{
int i, cpu, idx;
int i, cpu, idx, new_cpu, start_cpu, first_cpu;
int max_phys_id, min_phys_id;
int max_core_id, min_core_id;
struct lpfc_vector_map_info *cpup;
struct lpfc_vector_map_info *new_cpup;
const struct cpumask *maskp;
#ifdef CONFIG_X86
struct cpuinfo_x86 *cpuinfo;
#endif
/* Init cpu_map array */
memset(phba->sli4_hba.cpu_map, 0xff,
(sizeof(struct lpfc_vector_map_info) *
phba->sli4_hba.num_possible_cpu));
for_each_possible_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
cpup->phys_id = LPFC_VECTOR_MAP_EMPTY;
cpup->core_id = LPFC_VECTOR_MAP_EMPTY;
cpup->hdwq = LPFC_VECTOR_MAP_EMPTY;
cpup->eq = LPFC_VECTOR_MAP_EMPTY;
cpup->irq = LPFC_VECTOR_MAP_EMPTY;
cpup->flag = 0;
}
max_phys_id = 0;
min_phys_id = 0xffff;
min_phys_id = LPFC_VECTOR_MAP_EMPTY;
max_core_id = 0;
min_core_id = 0xffff;
min_core_id = LPFC_VECTOR_MAP_EMPTY;
/* Update CPU map with physical id and core id of each CPU */
for_each_present_cpu(cpu) {
@ -10671,13 +10742,12 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
cpuinfo = &cpu_data(cpu);
cpup->phys_id = cpuinfo->phys_proc_id;
cpup->core_id = cpuinfo->cpu_core_id;
cpup->hyper = lpfc_find_hyper(phba, cpu,
cpup->phys_id, cpup->core_id);
if (lpfc_find_hyper(phba, cpu, cpup->phys_id, cpup->core_id))
cpup->flag |= LPFC_CPU_MAP_HYPER;
#else
/* No distinction between CPUs for other platforms */
cpup->phys_id = 0;
cpup->core_id = cpu;
cpup->hyper = 0;
#endif
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
@ -10703,23 +10773,216 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
eqi->icnt = 0;
}
/* This loop sets up all CPUs that are affinitized with an
* IRQ vector assigned to the driver. All affinitized CPUs
* will get a link to that vector's IRQ and EQ.
*/
for (idx = 0; idx < phba->cfg_irq_chann; idx++) {
/* Get a CPU mask for all CPUs affinitized to this vector */
maskp = pci_irq_get_affinity(phba->pcidev, idx);
if (!maskp)
continue;
i = 0;
/* Loop through all CPUs associated with vector idx */
for_each_cpu_and(cpu, maskp, cpu_present_mask) {
/* Set the EQ index and IRQ for that vector */
cpup = &phba->sli4_hba.cpu_map[cpu];
cpup->eq = idx;
cpup->hdwq = idx;
cpup->irq = pci_irq_vector(phba->pcidev, idx);
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"3336 Set Affinity: CPU %d "
"hdwq %d irq %d\n",
cpu, cpup->hdwq, cpup->irq);
"irq %d eq %d\n",
cpu, cpup->irq, cpup->eq);
/* If this is the first CPU that's assigned to this
* vector, set LPFC_CPU_FIRST_IRQ.
*/
if (!i)
cpup->flag |= LPFC_CPU_FIRST_IRQ;
i++;
}
}
/* After looking at each IRQ vector assigned to this pcidev, it's
* possible to see that not ALL CPUs have been accounted for.
* Next we will set any unassigned (unaffinitized) cpu map
* entries to an IRQ on the same phys_id.
*/
first_cpu = cpumask_first(cpu_present_mask);
start_cpu = first_cpu;
for_each_present_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
/* Is this CPU entry unassigned */
if (cpup->eq == LPFC_VECTOR_MAP_EMPTY) {
/* Mark CPU as IRQ not assigned by the kernel */
cpup->flag |= LPFC_CPU_MAP_UNASSIGN;
/* If so, find a new_cpup that's on the SAME
* phys_id as cpup. start_cpu will start where we
* left off so all unassigned entries don't get assigned
* the IRQ of the first entry.
*/
new_cpu = start_cpu;
for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) &&
(new_cpup->irq != LPFC_VECTOR_MAP_EMPTY) &&
(new_cpup->phys_id == cpup->phys_id))
goto found_same;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
if (new_cpu == nr_cpumask_bits)
new_cpu = first_cpu;
}
/* At this point, we leave the CPU as unassigned */
continue;
found_same:
/* We found a matching phys_id, so copy the IRQ info */
cpup->eq = new_cpup->eq;
cpup->irq = new_cpup->irq;
/* Bump start_cpu to the next slot to minimize the
* chance of having multiple unassigned CPU entries
* selecting the same IRQ.
*/
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (start_cpu == nr_cpumask_bits)
start_cpu = first_cpu;
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"3337 Set Affinity: CPU %d "
"irq %d from id %d same "
"phys_id (%d)\n",
cpu, cpup->irq, new_cpu, cpup->phys_id);
}
}
/* Set any unassigned cpu map entries to an IRQ on any phys_id */
start_cpu = first_cpu;
for_each_present_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
/* Is this entry unassigned */
if (cpup->eq == LPFC_VECTOR_MAP_EMPTY) {
/* Mark it as IRQ not assigned by the kernel */
cpup->flag |= LPFC_CPU_MAP_UNASSIGN;
/* If so, find a new_cpup that's on ANY phys_id
* as the cpup. start_cpu will start where we
* left off so all unassigned entries don't get
* assigned the IRQ of the first entry.
*/
new_cpu = start_cpu;
for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) &&
(new_cpup->irq != LPFC_VECTOR_MAP_EMPTY))
goto found_any;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
if (new_cpu == nr_cpumask_bits)
new_cpu = first_cpu;
}
/* We should never leave an entry unassigned */
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"3339 Set Affinity: CPU %d "
"irq %d UNASSIGNED\n",
cpup->hdwq, cpup->irq);
continue;
found_any:
/* We found an available entry, copy the IRQ info */
cpup->eq = new_cpup->eq;
cpup->irq = new_cpup->irq;
/* Bump start_cpu to the next slot to minimize the
* chance of having multiple unassigned CPU entries
* selecting the same IRQ.
*/
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (start_cpu == nr_cpumask_bits)
start_cpu = first_cpu;
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"3338 Set Affinity: CPU %d "
"irq %d from id %d (%d/%d)\n",
cpu, cpup->irq, new_cpu,
new_cpup->phys_id, new_cpup->core_id);
}
}
/* Finally we need to associate a hdwq with each cpu_map entry
* This will be 1 to 1 - hdwq to cpu, unless there are fewer
* hardware queues than CPUs. In that case we will just round-robin
* the available hardware queues as they get assigned to CPUs.
*/
idx = 0;
start_cpu = 0;
for_each_present_cpu(cpu) {
cpup = &phba->sli4_hba.cpu_map[cpu];
if (idx >= phba->cfg_hdw_queue) {
/* We need to reuse a Hardware Queue for another CPU,
* so be smart about it and pick one that has its
* IRQ/EQ mapped to the same phys_id (CPU package)
* and core_id.
*/
new_cpu = start_cpu;
for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
if ((new_cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) &&
(new_cpup->phys_id == cpup->phys_id) &&
(new_cpup->core_id == cpup->core_id))
goto found_hdwq;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
if (new_cpu == nr_cpumask_bits)
new_cpu = first_cpu;
}
/* If we can't match both phys_id and core_id,
* settle for just a phys_id match.
*/
new_cpu = start_cpu;
for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
new_cpup = &phba->sli4_hba.cpu_map[new_cpu];
if ((new_cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) &&
(new_cpup->phys_id == cpup->phys_id))
goto found_hdwq;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
if (new_cpu == nr_cpumask_bits)
new_cpu = first_cpu;
}
/* Otherwise just round robin on cfg_hdw_queue */
cpup->hdwq = idx % phba->cfg_hdw_queue;
goto logit;
found_hdwq:
/* We found an available entry, copy the IRQ info */
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (start_cpu == nr_cpumask_bits)
start_cpu = first_cpu;
cpup->hdwq = new_cpup->hdwq;
} else {
/* 1 to 1, CPU to hdwq */
cpup->hdwq = idx;
}
logit:
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"3335 Set Affinity: CPU %d (phys %d core %d): "
"hdwq %d eq %d irq %d flg x%x\n",
cpu, cpup->phys_id, cpup->core_id,
cpup->hdwq, cpup->eq, cpup->irq, cpup->flag);
idx++;
}
/* The cpu_map array will be used later during initialization
* when EQ / CQ / WQs are allocated and configured.
*/
return;
}
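
The affinity pass above is built on two generic PCI/MSI helpers. A self-contained sketch of their behaviour, independent of lpfc (the function and variable names here are hypothetical):

/* Walk each MSI-X vector and report the CPUs the kernel bound it to */
static void show_vector_affinity(struct pci_dev *pdev, int nvec)
{
	int idx, cpu;

	for (idx = 0; idx < nvec; idx++) {
		/* affinity mask the kernel assigned to vector idx */
		const struct cpumask *maskp = pci_irq_get_affinity(pdev, idx);

		if (!maskp)
			continue;
		for_each_cpu_and(cpu, maskp, cpu_present_mask)
			pr_info("vector %d (irq %d) -> CPU %d\n",
				idx, pci_irq_vector(pdev, idx), cpu);
	}
}

The three fallback loops in the function above then guarantee that every present CPU ends up with eq, irq and hdwq assignments even when the kernel's masks do not cover all CPUs.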
@ -11331,24 +11594,43 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
mbx_sli4_parameters);
phba->sli4_hba.extents_in_use = bf_get(cfg_ext, mbx_sli4_parameters);
phba->sli4_hba.rpi_hdrs_in_use = bf_get(cfg_hdrr, mbx_sli4_parameters);
phba->nvme_support = (bf_get(cfg_nvme, mbx_sli4_parameters) &&
bf_get(cfg_xib, mbx_sli4_parameters));
if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP) ||
!phba->nvme_support) {
phba->nvme_support = 0;
phba->nvmet_support = 0;
phba->cfg_nvmet_mrq = 0;
lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME,
"6101 Disabling NVME support: "
"Not supported by firmware: %d %d\n",
bf_get(cfg_nvme, mbx_sli4_parameters),
bf_get(cfg_xib, mbx_sli4_parameters));
/* Check for firmware nvme support */
rc = (bf_get(cfg_nvme, mbx_sli4_parameters) &&
bf_get(cfg_xib, mbx_sli4_parameters));
/* If firmware doesn't support NVME, just use SCSI support */
if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
return -ENODEV;
phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP;
if (rc) {
/* Save this to indicate the Firmware supports NVME */
sli4_params->nvme = 1;
/* Firmware NVME support, check driver FC4 NVME support */
if (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP) {
lpfc_printf_log(phba, KERN_INFO, LOG_INIT | LOG_NVME,
"6133 Disabling NVME support: "
"FC4 type not supported: x%x\n",
phba->cfg_enable_fc4_type);
goto fcponly;
}
} else {
/* No firmware NVME support, check driver FC4 NVME support */
sli4_params->nvme = 0;
if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME,
"6101 Disabling NVME support: Not "
"supported by firmware (%d %d) x%x\n",
bf_get(cfg_nvme, mbx_sli4_parameters),
bf_get(cfg_xib, mbx_sli4_parameters),
phba->cfg_enable_fc4_type);
fcponly:
phba->nvme_support = 0;
phba->nvmet_support = 0;
phba->cfg_nvmet_mrq = 0;
/* If no FC4 type support, move to just SCSI support */
if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
return -ENODEV;
phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP;
}
}
/* Only embed PBDE for if_type 6, PBDE support requires xib be set */

View File

@ -2143,7 +2143,9 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
struct completion *lport_unreg_cmp)
{
u32 wait_tmo;
int ret;
int ret, i, pending = 0;
struct lpfc_sli_ring *pring;
struct lpfc_hba *phba = vport->phba;
/* Host transport has to clean up and confirm requiring an indefinite
* wait. Print a message if a 10 second wait expires and renew the
@ -2153,10 +2155,18 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
while (true) {
ret = wait_for_completion_timeout(lport_unreg_cmp, wait_tmo);
if (unlikely(!ret)) {
pending = 0;
for (i = 0; i < phba->cfg_hdw_queue; i++) {
pring = phba->sli4_hba.hdwq[i].nvme_wq->pring;
if (!pring)
continue;
if (pring->txcmplq_cnt)
pending += pring->txcmplq_cnt;
}
lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR,
"6176 Lport %p Localport %p wait "
"timed out. Renewing.\n",
lport, vport->localport);
"timed out. Pending %d. Renewing.\n",
lport, vport->localport, pending);
continue;
}
break;

View File

@ -220,19 +220,68 @@ lpfc_nvmet_cmd_template(void)
/* Word 12, 13, 14, 15 - is zero */
}
#if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
static struct lpfc_nvmet_rcv_ctx *
lpfc_nvmet_get_ctx_for_xri(struct lpfc_hba *phba, u16 xri)
{
struct lpfc_nvmet_rcv_ctx *ctxp;
unsigned long iflag;
bool found = false;
spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
list_for_each_entry(ctxp, &phba->sli4_hba.t_active_ctx_list, list) {
if (ctxp->ctxbuf->sglq->sli4_xritag != xri)
continue;
found = true;
break;
}
spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
if (found)
return ctxp;
return NULL;
}
static struct lpfc_nvmet_rcv_ctx *
lpfc_nvmet_get_ctx_for_oxid(struct lpfc_hba *phba, u16 oxid, u32 sid)
{
struct lpfc_nvmet_rcv_ctx *ctxp;
unsigned long iflag;
bool found = false;
spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
list_for_each_entry(ctxp, &phba->sli4_hba.t_active_ctx_list, list) {
if (ctxp->oxid != oxid || ctxp->sid != sid)
continue;
found = true;
break;
}
spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
if (found)
return ctxp;
return NULL;
}
#endif
static void
lpfc_nvmet_defer_release(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp)
{
lockdep_assert_held(&ctxp->ctxlock);
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6313 NVMET Defer ctx release xri x%x flg x%x\n",
"6313 NVMET Defer ctx release oxid x%x flg x%x\n",
ctxp->oxid, ctxp->flag);
if (ctxp->flag & LPFC_NVMET_CTX_RLS)
return;
ctxp->flag |= LPFC_NVMET_CTX_RLS;
spin_lock(&phba->sli4_hba.t_active_list_lock);
list_del(&ctxp->list);
spin_unlock(&phba->sli4_hba.t_active_list_lock);
spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
list_add_tail(&ctxp->list, &phba->sli4_hba.lpfc_abts_nvmet_ctx_list);
spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
@ -343,16 +392,23 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
}
if (ctxp->rqb_buffer) {
nvmebuf = ctxp->rqb_buffer;
spin_lock_irqsave(&ctxp->ctxlock, iflag);
ctxp->rqb_buffer = NULL;
if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) {
ctxp->flag &= ~LPFC_NVMET_CTX_REUSE_WQ;
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
nvmebuf = ctxp->rqb_buffer;
/* check if freed in another path whilst acquiring lock */
if (nvmebuf) {
ctxp->rqb_buffer = NULL;
if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) {
ctxp->flag &= ~LPFC_NVMET_CTX_REUSE_WQ;
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
nvmebuf->hrq->rqbp->rqb_free_buffer(phba,
nvmebuf);
} else {
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
/* repost */
lpfc_rq_buf_free(phba, &nvmebuf->hbuf);
}
} else {
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
}
}
ctxp->state = LPFC_NVMET_STE_FREE;
@ -388,8 +444,9 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
spin_lock_init(&ctxp->ctxlock);
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
if (ctxp->ts_cmd_nvme) {
ctxp->ts_cmd_nvme = ktime_get_ns();
/* NOTE: isr time stamp is stale when context is re-assigned */
if (ctxp->ts_isr_cmd) {
ctxp->ts_cmd_nvme = 0;
ctxp->ts_nvme_data = 0;
ctxp->ts_data_wqput = 0;
ctxp->ts_isr_data = 0;
@ -402,9 +459,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
#endif
atomic_inc(&tgtp->rcv_fcp_cmd_in);
/* flag new work queued, replacement buffer has already
* been reposted
*/
/* Indicate that a replacement buffer has been posted */
spin_lock_irqsave(&ctxp->ctxlock, iflag);
ctxp->flag |= LPFC_NVMET_CTX_REUSE_WQ;
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
@ -433,6 +488,9 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
* Use the CPU context list, from the MRQ the IO was received on
* (ctxp->idx), to save context structure.
*/
spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
list_del_init(&ctxp->list);
spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
cpu = raw_smp_processor_id();
infop = lpfc_get_ctx_list(phba, cpu, ctxp->idx);
spin_lock_irqsave(&infop->nvmet_ctx_list_lock, iflag);
@ -700,8 +758,10 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
}
lpfc_printf_log(phba, KERN_INFO, logerr,
"6315 IO Error Cmpl xri x%x: %x/%x XBUSY:x%x\n",
ctxp->oxid, status, result, ctxp->flag);
"6315 IO Error Cmpl oxid: x%x xri: x%x %x/%x "
"XBUSY:x%x\n",
ctxp->oxid, ctxp->ctxbuf->sglq->sli4_xritag,
status, result, ctxp->flag);
} else {
rsp->fcp_error = NVME_SC_SUCCESS;
@ -849,7 +909,6 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
* before freeing ctxp and iocbq.
*/
lpfc_in_buf_free(phba, &nvmebuf->dbuf);
ctxp->rqb_buffer = 0;
atomic_inc(&nvmep->xmt_ls_rsp);
return 0;
}
@ -922,7 +981,7 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
(ctxp->state == LPFC_NVMET_STE_ABORT)) {
atomic_inc(&lpfc_nvmep->xmt_fcp_drop);
lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
"6102 IO xri x%x aborted\n",
"6102 IO oxid x%x aborted\n",
ctxp->oxid);
rc = -ENXIO;
goto aerr;
@ -1022,7 +1081,7 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
ctxp->hdwq = &phba->sli4_hba.hdwq[0];
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6103 NVMET Abort op: oxri x%x flg x%x ste %d\n",
"6103 NVMET Abort op: oxid x%x flg x%x ste %d\n",
ctxp->oxid, ctxp->flag, ctxp->state);
lpfc_nvmeio_data(phba, "NVMET FCP ABRT: xri x%x flg x%x ste x%x\n",
@ -1035,7 +1094,7 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
/* Since iaab/iaar are NOT set, we need to check
* if the firmware is in process of aborting IO
*/
if (ctxp->flag & LPFC_NVMET_XBUSY) {
if (ctxp->flag & (LPFC_NVMET_XBUSY | LPFC_NVMET_ABORT_OP)) {
spin_unlock_irqrestore(&ctxp->ctxlock, flags);
return;
}
@ -1098,6 +1157,7 @@ lpfc_nvmet_xmt_fcp_release(struct nvmet_fc_target_port *tgtport,
ctxp->state, aborting);
atomic_inc(&lpfc_nvmep->xmt_fcp_release);
ctxp->flag &= ~LPFC_NVMET_TNOTIFY;
if (aborting)
return;
@ -1122,7 +1182,7 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
if (!nvmebuf) {
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
"6425 Defer rcv: no buffer xri x%x: "
"6425 Defer rcv: no buffer oxid x%x: "
"flg %x ste %x\n",
ctxp->oxid, ctxp->flag, ctxp->state);
return;
@ -1514,10 +1574,12 @@ void
lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
struct sli4_wcqe_xri_aborted *axri)
{
#if (IS_ENABLED(CONFIG_NVME_TARGET_FC))
uint16_t xri = bf_get(lpfc_wcqe_xa_xri, axri);
uint16_t rxid = bf_get(lpfc_wcqe_xa_remote_xid, axri);
struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
struct lpfc_nvmet_tgtport *tgtp;
struct nvmefc_tgt_fcp_req *req = NULL;
struct lpfc_nodelist *ndlp;
unsigned long iflag = 0;
int rrq_empty = 0;
@ -1548,7 +1610,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
*/
if (ctxp->flag & LPFC_NVMET_CTX_RLS &&
!(ctxp->flag & LPFC_NVMET_ABORT_OP)) {
list_del(&ctxp->list);
list_del_init(&ctxp->list);
released = true;
}
ctxp->flag &= ~LPFC_NVMET_XBUSY;
@ -1568,7 +1630,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
}
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6318 XB aborted oxid %x flg x%x (%x)\n",
"6318 XB aborted oxid x%x flg x%x (%x)\n",
ctxp->oxid, ctxp->flag, released);
if (released)
lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
@ -1579,6 +1641,33 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
}
spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
spin_unlock_irqrestore(&phba->hbalock, iflag);
ctxp = lpfc_nvmet_get_ctx_for_xri(phba, xri);
if (ctxp) {
/*
* Abort already done by FW, so BA_ACC sent.
* However, the transport may be unaware.
*/
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6323 NVMET Rcv ABTS xri x%x ctxp state x%x "
"flag x%x oxid x%x rxid x%x\n",
xri, ctxp->state, ctxp->flag, ctxp->oxid,
rxid);
spin_lock_irqsave(&ctxp->ctxlock, iflag);
ctxp->flag |= LPFC_NVMET_ABTS_RCV;
ctxp->state = LPFC_NVMET_STE_ABORT;
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
lpfc_nvmeio_data(phba,
"NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
xri, raw_smp_processor_id(), 0);
req = &ctxp->ctx.fcp_req;
if (req)
nvmet_fc_rcv_fcp_abort(phba->targetport, req);
}
#endif
}
int
@ -1589,19 +1678,23 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
struct lpfc_hba *phba = vport->phba;
struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp;
struct nvmefc_tgt_fcp_req *rsp;
uint16_t xri;
uint32_t sid;
uint16_t oxid, xri;
unsigned long iflag = 0;
xri = be16_to_cpu(fc_hdr->fh_ox_id);
sid = sli4_sid_from_fc_hdr(fc_hdr);
oxid = be16_to_cpu(fc_hdr->fh_ox_id);
spin_lock_irqsave(&phba->hbalock, iflag);
spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
list_for_each_entry_safe(ctxp, next_ctxp,
&phba->sli4_hba.lpfc_abts_nvmet_ctx_list,
list) {
if (ctxp->ctxbuf->sglq->sli4_xritag != xri)
if (ctxp->oxid != oxid || ctxp->sid != sid)
continue;
xri = ctxp->ctxbuf->sglq->sli4_xritag;
spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
spin_unlock_irqrestore(&phba->hbalock, iflag);
@ -1626,11 +1719,93 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
spin_unlock_irqrestore(&phba->hbalock, iflag);
lpfc_nvmeio_data(phba, "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
xri, raw_smp_processor_id(), 1);
/* check the wait list */
if (phba->sli4_hba.nvmet_io_wait_cnt) {
struct rqb_dmabuf *nvmebuf;
struct fc_frame_header *fc_hdr_tmp;
u32 sid_tmp;
u16 oxid_tmp;
bool found = false;
spin_lock_irqsave(&phba->sli4_hba.nvmet_io_wait_lock, iflag);
/* match by oxid and s_id */
list_for_each_entry(nvmebuf,
&phba->sli4_hba.lpfc_nvmet_io_wait_list,
hbuf.list) {
fc_hdr_tmp = (struct fc_frame_header *)
(nvmebuf->hbuf.virt);
oxid_tmp = be16_to_cpu(fc_hdr_tmp->fh_ox_id);
sid_tmp = sli4_sid_from_fc_hdr(fc_hdr_tmp);
if (oxid_tmp != oxid || sid_tmp != sid)
continue;
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6321 NVMET Rcv ABTS oxid x%x from x%x "
"is waiting for a ctxp\n",
oxid, sid);
list_del_init(&nvmebuf->hbuf.list);
phba->sli4_hba.nvmet_io_wait_cnt--;
found = true;
break;
}
spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_wait_lock,
iflag);
/* free buffer since already posted a new DMA buffer to RQ */
if (found) {
nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
/* Respond with BA_ACC accordingly */
lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 1);
return 0;
}
}
/* check active list */
ctxp = lpfc_nvmet_get_ctx_for_oxid(phba, oxid, sid);
if (ctxp) {
xri = ctxp->ctxbuf->sglq->sli4_xritag;
spin_lock_irqsave(&ctxp->ctxlock, iflag);
ctxp->flag |= (LPFC_NVMET_ABTS_RCV | LPFC_NVMET_ABORT_OP);
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
lpfc_nvmeio_data(phba,
"NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
xri, raw_smp_processor_id(), 0);
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6322 NVMET Rcv ABTS:acc oxid x%x xri x%x "
"flag x%x state x%x\n",
ctxp->oxid, xri, ctxp->flag, ctxp->state);
if (ctxp->flag & LPFC_NVMET_TNOTIFY) {
/* Notify the transport */
nvmet_fc_rcv_fcp_abort(phba->targetport,
&ctxp->ctx.fcp_req);
} else {
cancel_work_sync(&ctxp->ctxbuf->defer_work);
spin_lock_irqsave(&ctxp->ctxlock, iflag);
lpfc_nvmet_defer_release(phba, ctxp);
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
}
if (ctxp->state == LPFC_NVMET_STE_RCV)
lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, ctxp->sid,
ctxp->oxid);
else
lpfc_nvmet_sol_fcp_issue_abort(phba, ctxp, ctxp->sid,
ctxp->oxid);
lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 1);
return 0;
}
lpfc_nvmeio_data(phba, "NVMET ABTS RCV: oxid x%x CPU %02x rjt %d\n",
oxid, raw_smp_processor_id(), 1);
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6320 NVMET Rcv ABTS:rjt xri x%x\n", xri);
"6320 NVMET Rcv ABTS:rjt oxid x%x\n", oxid);
/* Respond with BA_RJT accordingly */
lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 0);
@ -1714,6 +1889,18 @@ lpfc_nvmet_wqfull_process(struct lpfc_hba *phba,
spin_unlock_irqrestore(&pring->ring_lock, iflags);
return;
}
if (rc == WQE_SUCCESS) {
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
if (ctxp->ts_cmd_nvme) {
if (ctxp->ctx.fcp_req.op == NVMET_FCOP_RSP)
ctxp->ts_status_wqput = ktime_get_ns();
else
ctxp->ts_data_wqput = ktime_get_ns();
}
#endif
} else {
WARN_ON(rc);
}
}
wq->q_flag &= ~HBA_NVMET_WQFULL;
spin_unlock_irqrestore(&pring->ring_lock, iflags);
@ -1879,8 +2066,20 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
return;
}
if (ctxp->flag & LPFC_NVMET_ABTS_RCV) {
lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
"6324 IO oxid x%x aborted\n",
ctxp->oxid);
return;
}
payload = (uint32_t *)(nvmebuf->dbuf.virt);
tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
ctxp->flag |= LPFC_NVMET_TNOTIFY;
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
if (ctxp->ts_isr_cmd)
ctxp->ts_cmd_nvme = ktime_get_ns();
#endif
/*
* The calling sequence should be:
* nvmet_fc_rcv_fcp_req->lpfc_nvmet_xmt_fcp_op/cmp- req->done
@ -1930,6 +2129,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
phba->sli4_hba.nvmet_mrq_data[qno], 1, qno);
return;
}
ctxp->flag &= ~LPFC_NVMET_TNOTIFY;
atomic_inc(&tgtp->rcv_fcp_cmd_drop);
lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
"2582 FCP Drop IO x%x: err x%x: x%x x%x x%x\n",
@ -2019,6 +2219,8 @@ lpfc_nvmet_replenish_context(struct lpfc_hba *phba,
* @phba: pointer to lpfc hba data structure.
* @idx: relative index of MRQ vector
* @nvmebuf: pointer to lpfc nvme command HBQ data structure.
* @isr_timestamp: in jiffies.
* @cqflag: cq processing information regarding workload.
*
* This routine is used for processing the WQE associated with an unsolicited
* event. It first determines whether there is an existing ndlp that matches
@ -2031,7 +2233,8 @@ static void
lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
uint32_t idx,
struct rqb_dmabuf *nvmebuf,
uint64_t isr_timestamp)
uint64_t isr_timestamp,
uint8_t cqflag)
{
struct lpfc_nvmet_rcv_ctx *ctxp;
struct lpfc_nvmet_tgtport *tgtp;
@ -2118,6 +2321,9 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
sid = sli4_sid_from_fc_hdr(fc_hdr);
ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context;
spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag);
list_add_tail(&ctxp->list, &phba->sli4_hba.t_active_ctx_list);
spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag);
if (ctxp->state != LPFC_NVMET_STE_FREE) {
lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
"6414 NVMET Context corrupt %d %d oxid x%x\n",
@ -2140,24 +2346,41 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
spin_lock_init(&ctxp->ctxlock);
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
if (isr_timestamp) {
if (isr_timestamp)
ctxp->ts_isr_cmd = isr_timestamp;
ctxp->ts_cmd_nvme = ktime_get_ns();
ctxp->ts_nvme_data = 0;
ctxp->ts_data_wqput = 0;
ctxp->ts_isr_data = 0;
ctxp->ts_data_nvme = 0;
ctxp->ts_nvme_status = 0;
ctxp->ts_status_wqput = 0;
ctxp->ts_isr_status = 0;
ctxp->ts_status_nvme = 0;
} else {
ctxp->ts_cmd_nvme = 0;
}
ctxp->ts_cmd_nvme = 0;
ctxp->ts_nvme_data = 0;
ctxp->ts_data_wqput = 0;
ctxp->ts_isr_data = 0;
ctxp->ts_data_nvme = 0;
ctxp->ts_nvme_status = 0;
ctxp->ts_status_wqput = 0;
ctxp->ts_isr_status = 0;
ctxp->ts_status_nvme = 0;
#endif
atomic_inc(&tgtp->rcv_fcp_cmd_in);
lpfc_nvmet_process_rcv_fcp_req(ctx_buf);
/* check for cq processing load */
if (!cqflag) {
lpfc_nvmet_process_rcv_fcp_req(ctx_buf);
return;
}
if (!queue_work(phba->wq, &ctx_buf->defer_work)) {
atomic_inc(&tgtp->rcv_fcp_cmd_drop);
lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
"6325 Unable to queue work for oxid x%x. "
"FCP Drop IO [x%x x%x x%x]\n",
ctxp->oxid,
atomic_read(&tgtp->rcv_fcp_cmd_in),
atomic_read(&tgtp->rcv_fcp_cmd_out),
atomic_read(&tgtp->xmt_fcp_release));
spin_lock_irqsave(&ctxp->ctxlock, iflag);
lpfc_nvmet_defer_release(phba, ctxp);
spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, sid, oxid);
}
}
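
When cqflag is set, the CQ is under load and the receive is handed to phba->wq via the context buffer's defer_work item instead of being processed inline. The work handler itself is outside this diff; a hedged sketch of the setup that queue_work() assumes (the handler name here is hypothetical):

static void lpfc_nvmet_defer_work_fn(struct work_struct *work)
{
	struct lpfc_nvmet_ctxbuf *ctx_buf =
		container_of(work, struct lpfc_nvmet_ctxbuf, defer_work);

	lpfc_nvmet_process_rcv_fcp_req(ctx_buf);
}

/* once, when the context buffer is created: */
INIT_WORK(&ctx_buf->defer_work, lpfc_nvmet_defer_work_fn);

If queue_work() refuses the item (already queued), the error path above drops the IO and aborts the exchange rather than losing track of the buffer.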
/**
@ -2194,6 +2417,8 @@ lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
* @phba: pointer to lpfc hba data structure.
* @idx: relative index of MRQ vector
* @nvmebuf: pointer to received nvme data structure.
* @isr_timestamp: in jiffies.
* @cqflag: cq processing information regarding workload.
*
* This routine is used to process an unsolicited event received from a SLI
* (Service Level Interface) ring. The actual processing of the data buffer
@ -2205,14 +2430,14 @@ void
lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba,
uint32_t idx,
struct rqb_dmabuf *nvmebuf,
uint64_t isr_timestamp)
uint64_t isr_timestamp,
uint8_t cqflag)
{
if (phba->nvmet_support == 0) {
lpfc_rq_buf_free(phba, &nvmebuf->hbuf);
return;
}
lpfc_nvmet_unsol_fcp_buffer(phba, idx, nvmebuf,
isr_timestamp);
lpfc_nvmet_unsol_fcp_buffer(phba, idx, nvmebuf, isr_timestamp, cqflag);
}
/**
@ -2750,7 +2975,7 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
if ((ctxp->flag & LPFC_NVMET_CTX_RLS) &&
!(ctxp->flag & LPFC_NVMET_XBUSY)) {
spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
list_del(&ctxp->list);
list_del_init(&ctxp->list);
spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
released = true;
}
@ -2759,7 +2984,7 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
atomic_inc(&tgtp->xmt_abort_rsp);
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6165 ABORT cmpl: xri x%x flg x%x (%d) "
"6165 ABORT cmpl: oxid x%x flg x%x (%d) "
"WCQE: %08x %08x %08x %08x\n",
ctxp->oxid, ctxp->flag, released,
wcqe->word0, wcqe->total_data_placed,
@ -2834,7 +3059,7 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
if ((ctxp->flag & LPFC_NVMET_CTX_RLS) &&
!(ctxp->flag & LPFC_NVMET_XBUSY)) {
spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
list_del(&ctxp->list);
list_del_init(&ctxp->list);
spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
released = true;
}
@ -2843,7 +3068,7 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
atomic_inc(&tgtp->xmt_abort_rsp);
lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
"6316 ABTS cmpl xri x%x flg x%x (%x) "
"6316 ABTS cmpl oxid x%x flg x%x (%x) "
"WCQE: %08x %08x %08x %08x\n",
ctxp->oxid, ctxp->flag, released,
wcqe->word0, wcqe->total_data_placed,
@ -3214,7 +3439,7 @@ aerr:
spin_lock_irqsave(&ctxp->ctxlock, flags);
if (ctxp->flag & LPFC_NVMET_CTX_RLS) {
spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
list_del(&ctxp->list);
list_del_init(&ctxp->list);
spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
released = true;
}
@ -3223,8 +3448,9 @@ aerr:
atomic_inc(&tgtp->xmt_abort_rsp_error);
lpfc_printf_log(phba, KERN_ERR, LOG_NVME_ABTS,
"6135 Failed to Issue ABTS for oxid x%x. Status x%x\n",
ctxp->oxid, rc);
"6135 Failed to Issue ABTS for oxid x%x. Status x%x "
"(%x)\n",
ctxp->oxid, rc, released);
if (released)
lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
return 1;


@ -140,6 +140,7 @@ struct lpfc_nvmet_rcv_ctx {
#define LPFC_NVMET_ABTS_RCV 0x10 /* ABTS received on exchange */
#define LPFC_NVMET_CTX_REUSE_WQ 0x20 /* ctx reused via WQ */
#define LPFC_NVMET_DEFER_WQFULL 0x40 /* Waiting on a free WQE */
#define LPFC_NVMET_TNOTIFY 0x80 /* notify transport of abts */
struct rqb_dmabuf *rqb_buffer;
struct lpfc_nvmet_ctxbuf *ctxbuf;
struct lpfc_sli4_hdw_queue *hdwq;


@ -3879,10 +3879,8 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
*/
spin_lock(&lpfc_cmd->buf_lock);
lpfc_cmd->cur_iocbq.iocb_flag &= ~LPFC_DRIVER_ABORTED;
if (lpfc_cmd->waitq) {
if (lpfc_cmd->waitq)
wake_up(lpfc_cmd->waitq);
lpfc_cmd->waitq = NULL;
}
spin_unlock(&lpfc_cmd->buf_lock);
lpfc_release_scsi_buf(phba, lpfc_cmd);
@ -4718,6 +4716,9 @@ wait_for_cmpl:
iocb->sli4_xritag, ret,
cmnd->device->id, cmnd->device->lun);
}
lpfc_cmd->waitq = NULL;
spin_unlock(&lpfc_cmd->buf_lock);
goto out;
@ -4797,7 +4798,12 @@ lpfc_check_fcp_rsp(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd)
rsp_info,
rsp_len, rsp_info_code);
if ((fcprsp->rspStatus2&RSP_LEN_VALID) && (rsp_len == 8)) {
/* If FCP_RSP_LEN_VALID bit is one, then the FCP_RSP_LEN
* field specifies the number of valid bytes of FCP_RSP_INFO.
* The FCP_RSP_LEN field shall be set to 0x04 or 0x08
*/
if ((fcprsp->rspStatus2 & RSP_LEN_VALID) &&
((rsp_len == 8) || (rsp_len == 4))) {
switch (rsp_info_code) {
case RSP_NO_FAILURE:
lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
@ -5741,7 +5747,7 @@ lpfc_enable_oas_lun(struct lpfc_hba *phba, struct lpfc_name *vport_wwpn,
/* Create an lun info structure and add to list of luns */
lun_info = lpfc_create_device_data(phba, vport_wwpn, target_wwpn, lun,
pri, false);
pri, true);
if (lun_info) {
lun_info->oas_enabled = true;
lun_info->priority = pri;


@ -108,7 +108,7 @@ lpfc_get_iocb_from_iocbq(struct lpfc_iocbq *iocbq)
* endianness. This function can be called with or without
* lock.
**/
void
static void
lpfc_sli4_pcimem_bcopy(void *srcp, void *destp, uint32_t cnt)
{
uint64_t *src = srcp;
@ -5571,6 +5571,7 @@ lpfc_sli4_arm_cqeq_intr(struct lpfc_hba *phba)
int qidx;
struct lpfc_sli4_hba *sli4_hba = &phba->sli4_hba;
struct lpfc_sli4_hdw_queue *qp;
struct lpfc_queue *eq;
sli4_hba->sli4_write_cq_db(phba, sli4_hba->mbx_cq, 0, LPFC_QUEUE_REARM);
sli4_hba->sli4_write_cq_db(phba, sli4_hba->els_cq, 0, LPFC_QUEUE_REARM);
@ -5578,18 +5579,24 @@ lpfc_sli4_arm_cqeq_intr(struct lpfc_hba *phba)
sli4_hba->sli4_write_cq_db(phba, sli4_hba->nvmels_cq, 0,
LPFC_QUEUE_REARM);
qp = sli4_hba->hdwq;
if (sli4_hba->hdwq) {
/* Loop thru all Hardware Queues */
for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) {
sli4_hba->sli4_write_cq_db(phba, qp[qidx].fcp_cq, 0,
qp = &sli4_hba->hdwq[qidx];
/* ARM the corresponding CQ */
sli4_hba->sli4_write_cq_db(phba, qp->fcp_cq, 0,
LPFC_QUEUE_REARM);
sli4_hba->sli4_write_cq_db(phba, qp[qidx].nvme_cq, 0,
sli4_hba->sli4_write_cq_db(phba, qp->nvme_cq, 0,
LPFC_QUEUE_REARM);
}
for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++)
sli4_hba->sli4_write_eq_db(phba, qp[qidx].hba_eq,
0, LPFC_QUEUE_REARM);
/* Loop thru all IRQ vectors */
for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) {
eq = sli4_hba->hba_eq_hdl[qidx].eq;
/* ARM the corresponding EQ */
sli4_hba->sli4_write_eq_db(phba, eq,
0, LPFC_QUEUE_REARM);
}
}
if (phba->nvmet_support) {
@ -7875,26 +7882,28 @@ lpfc_sli4_mbox_completions_pending(struct lpfc_hba *phba)
* and will process all the completions associated with the eq for the
* mailbox completion queue.
**/
bool
static bool
lpfc_sli4_process_missed_mbox_completions(struct lpfc_hba *phba)
{
struct lpfc_sli4_hba *sli4_hba = &phba->sli4_hba;
uint32_t eqidx;
struct lpfc_queue *fpeq = NULL;
struct lpfc_queue *eq;
bool mbox_pending;
if (unlikely(!phba) || (phba->sli_rev != LPFC_SLI_REV4))
return false;
/* Find the eq associated with the mcq */
if (sli4_hba->hdwq)
for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++)
if (sli4_hba->hdwq[eqidx].hba_eq->queue_id ==
sli4_hba->mbx_cq->assoc_qid) {
fpeq = sli4_hba->hdwq[eqidx].hba_eq;
/* Find the EQ associated with the mbox CQ */
if (sli4_hba->hdwq) {
for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++) {
eq = phba->sli4_hba.hba_eq_hdl[eqidx].eq;
if (eq->queue_id == sli4_hba->mbx_cq->assoc_qid) {
fpeq = eq;
break;
}
}
}
if (!fpeq)
return false;
@ -13605,14 +13614,9 @@ __lpfc_sli4_process_cq(struct lpfc_hba *phba, struct lpfc_queue *cq,
goto rearm_and_exit;
/* Process all the entries to the CQ */
cq->q_flag = 0;
cqe = lpfc_sli4_cq_get(cq);
while (cqe) {
#if defined(CONFIG_SCSI_LPFC_DEBUG_FS) && defined(BUILD_NVME)
if (phba->ktime_on)
cq->isr_timestamp = ktime_get_ns();
else
cq->isr_timestamp = 0;
#endif
workposted |= handler(phba, cq, cqe);
__lpfc_sli4_consume_cqe(phba, cq, cqe);
@ -13626,6 +13630,9 @@ __lpfc_sli4_process_cq(struct lpfc_hba *phba, struct lpfc_queue *cq,
consumed = 0;
}
if (count == LPFC_NVMET_CQ_NOTIFY)
cq->q_flag |= HBA_NVMET_CQ_NOTIFY;
cqe = lpfc_sli4_cq_get(cq);
}
if (count >= phba->cfg_cq_poll_threshold) {
@ -13941,10 +13948,10 @@ lpfc_sli4_nvmet_handle_rcqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
goto drop;
if (fc_hdr->fh_type == FC_TYPE_FCP) {
dma_buf->bytes_recv = bf_get(lpfc_rcqe_length, rcqe);
dma_buf->bytes_recv = bf_get(lpfc_rcqe_length, rcqe);
lpfc_nvmet_unsol_fcp_event(
phba, idx, dma_buf,
cq->isr_timestamp);
phba, idx, dma_buf, cq->isr_timestamp,
cq->q_flag & HBA_NVMET_CQ_NOTIFY);
return false;
}
drop:
@ -14110,6 +14117,12 @@ process_cq:
}
work_cq:
#if defined(CONFIG_SCSI_LPFC_DEBUG_FS)
if (phba->ktime_on)
cq->isr_timestamp = ktime_get_ns();
else
cq->isr_timestamp = 0;
#endif
if (!queue_work_on(cq->chann, phba->wq, &cq->irqwork))
lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
"0363 Cannot schedule soft IRQ "
@ -14236,7 +14249,7 @@ lpfc_sli4_hba_intr_handler(int irq, void *dev_id)
return IRQ_NONE;
/* Get to the EQ struct associated with this vector */
fpeq = phba->sli4_hba.hdwq[hba_eqidx].hba_eq;
fpeq = phba->sli4_hba.hba_eq_hdl[hba_eqidx].eq;
if (unlikely(!fpeq))
return IRQ_NONE;
@ -14521,7 +14534,7 @@ lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq,
/* set values by EQ_DELAY register if supported */
if (phba->sli.sli_flag & LPFC_SLI_USE_EQDR) {
for (qidx = startq; qidx < phba->cfg_irq_chann; qidx++) {
eq = phba->sli4_hba.hdwq[qidx].hba_eq;
eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
if (!eq)
continue;
@ -14530,7 +14543,6 @@ lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq,
if (++cnt >= numq)
break;
}
return;
}
@ -14558,7 +14570,7 @@ lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq,
dmult = LPFC_DMULT_MAX;
for (qidx = startq; qidx < phba->cfg_irq_chann; qidx++) {
eq = phba->sli4_hba.hdwq[qidx].hba_eq;
eq = phba->sli4_hba.hba_eq_hdl[qidx].eq;
if (!eq)
continue;
eq->q_mode = usdelay;
@ -14660,8 +14672,10 @@ lpfc_eq_create(struct lpfc_hba *phba, struct lpfc_queue *eq, uint32_t imax)
lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
"0360 Unsupported EQ count. (%d)\n",
eq->entry_count);
if (eq->entry_count < 256)
return -EINVAL;
if (eq->entry_count < 256) {
status = -EINVAL;
goto out;
}
/* fall through - otherwise default to smallest count */
case 256:
bf_set(lpfc_eq_context_count, &eq_create->u.request.context,
@ -14713,7 +14727,7 @@ lpfc_eq_create(struct lpfc_hba *phba, struct lpfc_queue *eq, uint32_t imax)
eq->host_index = 0;
eq->notify_interval = LPFC_EQ_NOTIFY_INTRVL;
eq->max_proc_limit = LPFC_EQ_MAX_PROC_LIMIT;
out:
mempool_free(mbox, phba->mbox_mem_pool);
return status;
}


@ -197,6 +197,8 @@ struct lpfc_queue {
#define LPFC_DB_LIST_FORMAT 0x02
uint8_t q_flag;
#define HBA_NVMET_WQFULL 0x1 /* We hit WQ Full condition for NVMET */
#define HBA_NVMET_CQ_NOTIFY 0x2 /* LPFC_NVMET_CQ_NOTIFY CQEs this EQE */
#define LPFC_NVMET_CQ_NOTIFY 4
void __iomem *db_regaddr;
uint16_t dpp_enable;
uint16_t dpp_id;
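For context on how these two flags cooperate: __lpfc_sli4_process_cq() (in the lpfc_sli.c hunks above) sets HBA_NVMET_CQ_NOTIFY in q_flag once a single processing pass has consumed LPFC_NVMET_CQ_NOTIFY (4) NVMET CQEs, lpfc_sli4_nvmet_handle_rcqe() forwards that bit as the new cqflag argument, and lpfc_nvmet_unsol_fcp_buffer() uses it to decide between the inline upcall and the deferred workqueue path sketched earlier.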
@ -450,6 +452,7 @@ struct lpfc_hba_eq_hdl {
uint32_t idx;
char handler_name[LPFC_SLI4_HANDLER_NAME_SZ];
struct lpfc_hba *phba;
struct lpfc_queue *eq;
};
/*BB Credit recovery value*/
@ -512,6 +515,7 @@ struct lpfc_pc_sli4_params {
#define LPFC_WQ_SZ64_SUPPORT 1
#define LPFC_WQ_SZ128_SUPPORT 2
uint8_t wqpcnt;
uint8_t nvme;
};
#define LPFC_CQ_4K_PAGE_SZ 0x1
@ -546,7 +550,10 @@ struct lpfc_vector_map_info {
uint16_t irq;
uint16_t eq;
uint16_t hdwq;
uint16_t hyper;
uint16_t flag;
#define LPFC_CPU_MAP_HYPER 0x1
#define LPFC_CPU_MAP_UNASSIGN 0x2
#define LPFC_CPU_FIRST_IRQ 0x4
};
#define LPFC_VECTOR_MAP_EMPTY 0xffff
@ -843,6 +850,8 @@ struct lpfc_sli4_hba {
struct list_head lpfc_nvmet_sgl_list;
spinlock_t abts_nvmet_buf_list_lock; /* list of aborted NVMET IOs */
struct list_head lpfc_abts_nvmet_ctx_list;
spinlock_t t_active_list_lock; /* list of active NVMET IOs */
struct list_head t_active_ctx_list;
struct list_head lpfc_nvmet_io_wait_list;
struct lpfc_nvmet_ctx_info *nvmet_ctx_info;
struct lpfc_sglq **lpfc_sglq_active_list;


@ -20,7 +20,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "12.2.0.2"
#define LPFC_DRIVER_VERSION "12.2.0.3"
#define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */


@ -4,6 +4,8 @@
*
* Copyright 1998, Michael Schmitz <mschmitz@lbl.gov>
*
* Copyright 2019 Finn Thain
*
* derived in part from:
*/
/*
@ -12,6 +14,7 @@
* Copyright 1995, Russell King
*/
#include <linux/delay.h>
#include <linux/types.h>
#include <linux/module.h>
#include <linux/ioport.h>
@ -22,6 +25,7 @@
#include <asm/hwtest.h>
#include <asm/io.h>
#include <asm/macintosh.h>
#include <asm/macints.h>
#include <asm/setup.h>
@ -53,7 +57,7 @@ static int setup_cmd_per_lun = -1;
module_param(setup_cmd_per_lun, int, 0);
static int setup_sg_tablesize = -1;
module_param(setup_sg_tablesize, int, 0);
static int setup_use_pdma = -1;
static int setup_use_pdma = 512;
module_param(setup_use_pdma, int, 0);
static int setup_hostid = -1;
module_param(setup_hostid, int, 0);
@ -90,223 +94,318 @@ static int __init mac_scsi_setup(char *str)
__setup("mac5380=", mac_scsi_setup);
#endif /* !MODULE */
/* Pseudo DMA asm originally by Ove Edlund */
/*
* According to "Inside Macintosh: Devices", Mac OS requires disk drivers to
* specify the number of bytes between the delays expected from a SCSI target.
* This allows the operating system to "prevent bus errors when a target fails
* to deliver the next byte within the processor bus error timeout period."
* Linux SCSI drivers lack knowledge of the timing behaviour of SCSI targets
* so bus errors are unavoidable.
*
* If a MOVE.B instruction faults, we assume that zero bytes were transferred
* and simply retry. That assumption probably depends on target behaviour but
* seems to hold up okay. The NOP provides synchronization: without it the
* fault can sometimes occur after the program counter has moved past the
* offending instruction. Post-increment addressing can't be used.
*/
#define CP_IO_TO_MEM(s,d,n) \
__asm__ __volatile__ \
(" cmp.w #4,%2\n" \
" bls 8f\n" \
" move.w %1,%%d0\n" \
" neg.b %%d0\n" \
" and.w #3,%%d0\n" \
" sub.w %%d0,%2\n" \
" bra 2f\n" \
" 1: move.b (%0),(%1)+\n" \
" 2: dbf %%d0,1b\n" \
" move.w %2,%%d0\n" \
" lsr.w #5,%%d0\n" \
" bra 4f\n" \
" 3: move.l (%0),(%1)+\n" \
"31: move.l (%0),(%1)+\n" \
"32: move.l (%0),(%1)+\n" \
"33: move.l (%0),(%1)+\n" \
"34: move.l (%0),(%1)+\n" \
"35: move.l (%0),(%1)+\n" \
"36: move.l (%0),(%1)+\n" \
"37: move.l (%0),(%1)+\n" \
" 4: dbf %%d0,3b\n" \
" move.w %2,%%d0\n" \
" lsr.w #2,%%d0\n" \
" and.w #7,%%d0\n" \
" bra 6f\n" \
" 5: move.l (%0),(%1)+\n" \
" 6: dbf %%d0,5b\n" \
" and.w #3,%2\n" \
" bra 8f\n" \
" 7: move.b (%0),(%1)+\n" \
" 8: dbf %2,7b\n" \
" moveq.l #0, %2\n" \
" 9: \n" \
".section .fixup,\"ax\"\n" \
" .even\n" \
"91: moveq.l #1, %2\n" \
" jra 9b\n" \
"94: moveq.l #4, %2\n" \
" jra 9b\n" \
".previous\n" \
".section __ex_table,\"a\"\n" \
" .align 4\n" \
" .long 1b,91b\n" \
" .long 3b,94b\n" \
" .long 31b,94b\n" \
" .long 32b,94b\n" \
" .long 33b,94b\n" \
" .long 34b,94b\n" \
" .long 35b,94b\n" \
" .long 36b,94b\n" \
" .long 37b,94b\n" \
" .long 5b,94b\n" \
" .long 7b,91b\n" \
".previous" \
: "=a"(s), "=a"(d), "=d"(n) \
: "0"(s), "1"(d), "2"(n) \
: "d0")
#define MOVE_BYTE(operands) \
asm volatile ( \
"1: moveb " operands " \n" \
"11: nop \n" \
" addq #1,%0 \n" \
" subq #1,%1 \n" \
"40: \n" \
" \n" \
".section .fixup,\"ax\" \n" \
".even \n" \
"90: movel #1, %2 \n" \
" jra 40b \n" \
".previous \n" \
" \n" \
".section __ex_table,\"a\" \n" \
".align 4 \n" \
".long 1b,90b \n" \
".long 11b,90b \n" \
".previous \n" \
: "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
/*
* If a MOVE.W (or MOVE.L) instruction faults, it cannot be retried because
* the residual byte count would be uncertain. In that situation the MOVE_WORD
* macro clears n in the fixup section to abort the transfer.
*/
#define MOVE_WORD(operands) \
asm volatile ( \
"1: movew " operands " \n" \
"11: nop \n" \
" subq #2,%1 \n" \
"40: \n" \
" \n" \
".section .fixup,\"ax\" \n" \
".even \n" \
"90: movel #0, %1 \n" \
" movel #2, %2 \n" \
" jra 40b \n" \
".previous \n" \
" \n" \
".section __ex_table,\"a\" \n" \
".align 4 \n" \
".long 1b,90b \n" \
".long 11b,90b \n" \
".previous \n" \
: "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
#define MOVE_16_WORDS(operands) \
asm volatile ( \
"1: movew " operands " \n" \
"2: movew " operands " \n" \
"3: movew " operands " \n" \
"4: movew " operands " \n" \
"5: movew " operands " \n" \
"6: movew " operands " \n" \
"7: movew " operands " \n" \
"8: movew " operands " \n" \
"9: movew " operands " \n" \
"10: movew " operands " \n" \
"11: movew " operands " \n" \
"12: movew " operands " \n" \
"13: movew " operands " \n" \
"14: movew " operands " \n" \
"15: movew " operands " \n" \
"16: movew " operands " \n" \
"17: nop \n" \
" subl #32,%1 \n" \
"40: \n" \
" \n" \
".section .fixup,\"ax\" \n" \
".even \n" \
"90: movel #0, %1 \n" \
" movel #2, %2 \n" \
" jra 40b \n" \
".previous \n" \
" \n" \
".section __ex_table,\"a\" \n" \
".align 4 \n" \
".long 1b,90b \n" \
".long 2b,90b \n" \
".long 3b,90b \n" \
".long 4b,90b \n" \
".long 5b,90b \n" \
".long 6b,90b \n" \
".long 7b,90b \n" \
".long 8b,90b \n" \
".long 9b,90b \n" \
".long 10b,90b \n" \
".long 11b,90b \n" \
".long 12b,90b \n" \
".long 13b,90b \n" \
".long 14b,90b \n" \
".long 15b,90b \n" \
".long 16b,90b \n" \
".long 17b,90b \n" \
".previous \n" \
: "+a" (addr), "+r" (n), "+r" (result) : "a" (io))
#define MAC_PDMA_DELAY 32
static inline int mac_pdma_recv(void __iomem *io, unsigned char *start, int n)
{
unsigned char *addr = start;
int result = 0;
if (n >= 1) {
MOVE_BYTE("%3@,%0@");
if (result)
goto out;
}
if (n >= 1 && ((unsigned long)addr & 1)) {
MOVE_BYTE("%3@,%0@");
if (result)
goto out;
}
while (n >= 32)
MOVE_16_WORDS("%3@,%0@+");
while (n >= 2)
MOVE_WORD("%3@,%0@+");
if (result)
return start - addr; /* Negated to indicate uncertain length */
if (n == 1)
MOVE_BYTE("%3@,%0@");
out:
return addr - start;
}
static inline int mac_pdma_send(unsigned char *start, void __iomem *io, int n)
{
unsigned char *addr = start;
int result = 0;
if (n >= 1) {
MOVE_BYTE("%0@,%3@");
if (result)
goto out;
}
if (n >= 1 && ((unsigned long)addr & 1)) {
MOVE_BYTE("%0@,%3@");
if (result)
goto out;
}
while (n >= 32)
MOVE_16_WORDS("%0@+,%3@");
while (n >= 2)
MOVE_WORD("%0@+,%3@");
if (result)
return start - addr; /* Negated to indicate uncertain length */
if (n == 1)
MOVE_BYTE("%0@,%3@");
out:
return addr - start;
}
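The MOVE_* macros and the two helpers above share a return convention that macscsi_pread()/macscsi_pwrite() below rely on: a fault in a MOVE.B leaves the byte count exact, so the helper returns the (non-negative) number of bytes done and the caller may retry, while a fault in a MOVE.W makes the residual uncertain, which the helpers signal by negating the count. A hedged sketch of a caller consuming that tri-state result (io, buf and len assumed to be set up elsewhere):

	int done = mac_pdma_recv(io, buf, len);

	if (done < 0) {
		/* a word move faulted: roughly -done bytes arrived, but
		 * the residual is uncertain, so the transfer must fail */
	} else if (done < len) {
		/* a byte move faulted: exactly done bytes arrived and
		 * the caller may poll DRQ and retry the remainder */
	}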
/* The "SCSI DMA" chip on the IIfx implements this register. */
#define CTRL_REG 0x8
#define CTRL_INTERRUPTS_ENABLE BIT(1)
#define CTRL_HANDSHAKE_MODE BIT(3)
static inline void write_ctrl_reg(struct NCR5380_hostdata *hostdata, u32 value)
{
out_be32(hostdata->io + (CTRL_REG << 4), value);
}
static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
unsigned char *dst, int len)
{
u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4);
unsigned char *d = dst;
int n = len;
int transferred;
int result = 0;
hostdata->pdma_residual = len;
while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
BASR_DRQ | BASR_PHASE_MATCH,
BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
CP_IO_TO_MEM(s, d, n);
int bytes;
transferred = d - dst - n;
hostdata->pdma_residual = len - transferred;
if (macintosh_config->ident == MAC_MODEL_IIFX)
write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
CTRL_INTERRUPTS_ENABLE);
/* No bus error. */
if (n == 0)
return 0;
bytes = mac_pdma_recv(s, d, min(hostdata->pdma_residual, 512));
if (bytes > 0) {
d += bytes;
hostdata->pdma_residual -= bytes;
}
if (hostdata->pdma_residual == 0)
goto out;
/* Target changed phase early? */
if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0)
scmd_printk(KERN_ERR, hostdata->connected,
BUS_AND_STATUS_REG, BASR_ACK,
BASR_ACK, HZ / 64) < 0)
scmd_printk(KERN_DEBUG, hostdata->connected,
"%s: !REQ and !ACK\n", __func__);
if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
return 0;
goto out;
if (bytes == 0)
udelay(MAC_PDMA_DELAY);
if (bytes >= 0)
continue;
dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
"%s: bus error (%d/%d)\n", __func__, transferred, len);
"%s: bus error (%d/%d)\n", __func__, d - dst, len);
NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
d = dst + transferred;
n = len - transferred;
result = -1;
goto out;
}
scmd_printk(KERN_ERR, hostdata->connected,
"%s: phase mismatch or !DRQ\n", __func__);
NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
return -1;
result = -1;
out:
if (macintosh_config->ident == MAC_MODEL_IIFX)
write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
return result;
}
#define CP_MEM_TO_IO(s,d,n) \
__asm__ __volatile__ \
(" cmp.w #4,%2\n" \
" bls 8f\n" \
" move.w %0,%%d0\n" \
" neg.b %%d0\n" \
" and.w #3,%%d0\n" \
" sub.w %%d0,%2\n" \
" bra 2f\n" \
" 1: move.b (%0)+,(%1)\n" \
" 2: dbf %%d0,1b\n" \
" move.w %2,%%d0\n" \
" lsr.w #5,%%d0\n" \
" bra 4f\n" \
" 3: move.l (%0)+,(%1)\n" \
"31: move.l (%0)+,(%1)\n" \
"32: move.l (%0)+,(%1)\n" \
"33: move.l (%0)+,(%1)\n" \
"34: move.l (%0)+,(%1)\n" \
"35: move.l (%0)+,(%1)\n" \
"36: move.l (%0)+,(%1)\n" \
"37: move.l (%0)+,(%1)\n" \
" 4: dbf %%d0,3b\n" \
" move.w %2,%%d0\n" \
" lsr.w #2,%%d0\n" \
" and.w #7,%%d0\n" \
" bra 6f\n" \
" 5: move.l (%0)+,(%1)\n" \
" 6: dbf %%d0,5b\n" \
" and.w #3,%2\n" \
" bra 8f\n" \
" 7: move.b (%0)+,(%1)\n" \
" 8: dbf %2,7b\n" \
" moveq.l #0, %2\n" \
" 9: \n" \
".section .fixup,\"ax\"\n" \
" .even\n" \
"91: moveq.l #1, %2\n" \
" jra 9b\n" \
"94: moveq.l #4, %2\n" \
" jra 9b\n" \
".previous\n" \
".section __ex_table,\"a\"\n" \
" .align 4\n" \
" .long 1b,91b\n" \
" .long 3b,94b\n" \
" .long 31b,94b\n" \
" .long 32b,94b\n" \
" .long 33b,94b\n" \
" .long 34b,94b\n" \
" .long 35b,94b\n" \
" .long 36b,94b\n" \
" .long 37b,94b\n" \
" .long 5b,94b\n" \
" .long 7b,91b\n" \
".previous" \
: "=a"(s), "=a"(d), "=d"(n) \
: "0"(s), "1"(d), "2"(n) \
: "d0")
static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
unsigned char *src, int len)
{
unsigned char *s = src;
u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4);
int n = len;
int transferred;
int result = 0;
hostdata->pdma_residual = len;
while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
BASR_DRQ | BASR_PHASE_MATCH,
BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
CP_MEM_TO_IO(s, d, n);
int bytes;
transferred = s - src - n;
hostdata->pdma_residual = len - transferred;
if (macintosh_config->ident == MAC_MODEL_IIFX)
write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE |
CTRL_INTERRUPTS_ENABLE);
/* Target changed phase early? */
if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0)
scmd_printk(KERN_ERR, hostdata->connected,
"%s: !REQ and !ACK\n", __func__);
if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
return 0;
bytes = mac_pdma_send(s, d, min(hostdata->pdma_residual, 512));
/* No bus error. */
if (n == 0) {
if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
TCR_LAST_BYTE_SENT,
TCR_LAST_BYTE_SENT, HZ / 64) < 0)
scmd_printk(KERN_ERR, hostdata->connected,
"%s: Last Byte Sent timeout\n", __func__);
return 0;
if (bytes > 0) {
s += bytes;
hostdata->pdma_residual -= bytes;
}
if (hostdata->pdma_residual == 0) {
if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
TCR_LAST_BYTE_SENT,
TCR_LAST_BYTE_SENT,
HZ / 64) < 0) {
scmd_printk(KERN_ERR, hostdata->connected,
"%s: Last Byte Sent timeout\n", __func__);
result = -1;
}
goto out;
}
if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
BUS_AND_STATUS_REG, BASR_ACK,
BASR_ACK, HZ / 64) < 0)
scmd_printk(KERN_DEBUG, hostdata->connected,
"%s: !REQ and !ACK\n", __func__);
if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
goto out;
if (bytes == 0)
udelay(MAC_PDMA_DELAY);
if (bytes >= 0)
continue;
dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
"%s: bus error (%d/%d)\n", __func__, transferred, len);
"%s: bus error (%d/%d)\n", __func__, s - src, len);
NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
s = src + transferred;
n = len - transferred;
result = -1;
goto out;
}
scmd_printk(KERN_ERR, hostdata->connected,
"%s: phase mismatch or !DRQ\n", __func__);
NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
return -1;
result = -1;
out:
if (macintosh_config->ident == MAC_MODEL_IIFX)
write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE);
return result;
}
static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
struct scsi_cmnd *cmd)
{
if (hostdata->flags & FLAG_NO_PSEUDO_DMA ||
cmd->SCp.this_residual < 16)
cmd->SCp.this_residual < setup_use_pdma)
return 0;
return cmd->SCp.this_residual;


@ -79,6 +79,7 @@ config MEGARAID_LEGACY
config MEGARAID_SAS
tristate "LSI Logic MegaRAID SAS RAID Module"
depends on PCI && SCSI
select IRQ_POLL
help
Module for LSI Logic's SAS based RAID controllers.
To compile this driver as a module, choose 'm' here.


@ -3,4 +3,4 @@ obj-$(CONFIG_MEGARAID_MM) += megaraid_mm.o
obj-$(CONFIG_MEGARAID_MAILBOX) += megaraid_mbox.o
obj-$(CONFIG_MEGARAID_SAS) += megaraid_sas.o
megaraid_sas-objs := megaraid_sas_base.o megaraid_sas_fusion.o \
megaraid_sas_fp.o
megaraid_sas_fp.o megaraid_sas_debugfs.o


@ -21,8 +21,8 @@
/*
* MegaRAID SAS Driver meta data
*/
#define MEGASAS_VERSION "07.707.51.00-rc1"
#define MEGASAS_RELDATE "February 7, 2019"
#define MEGASAS_VERSION "07.710.06.00-rc1"
#define MEGASAS_RELDATE "June 18, 2019"
/*
* Device IDs
@ -52,6 +52,10 @@
#define PCI_DEVICE_ID_LSI_AERO_10E2 0x10e2
#define PCI_DEVICE_ID_LSI_AERO_10E5 0x10e5
#define PCI_DEVICE_ID_LSI_AERO_10E6 0x10e6
#define PCI_DEVICE_ID_LSI_AERO_10E0 0x10e0
#define PCI_DEVICE_ID_LSI_AERO_10E3 0x10e3
#define PCI_DEVICE_ID_LSI_AERO_10E4 0x10e4
#define PCI_DEVICE_ID_LSI_AERO_10E7 0x10e7
/*
* Intel HBA SSDIDs
@ -123,6 +127,8 @@
#define MFI_RESET_ADAPTER 0x00000002
#define MEGAMFI_FRAME_SIZE 64
#define MFI_STATE_FAULT_CODE 0x0FFF0000
#define MFI_STATE_FAULT_SUBCODE 0x0000FF00
/*
* During FW init, clear pending cmds & reset state using inbound_msg_0
*
@ -190,6 +196,7 @@ enum MFI_CMD_OP {
MFI_CMD_SMP = 0x7,
MFI_CMD_STP = 0x8,
MFI_CMD_NVME = 0x9,
MFI_CMD_TOOLBOX = 0xa,
MFI_CMD_OP_COUNT,
MFI_CMD_INVALID = 0xff
};
@ -1449,7 +1456,39 @@ struct megasas_ctrl_info {
u8 reserved6[64];
u32 rsvdForAdptOp[64];
struct {
#if defined(__BIG_ENDIAN_BITFIELD)
u32 reserved:19;
u32 support_pci_lane_margining: 1;
u32 support_psoc_update:1;
u32 support_force_personality_change:1;
u32 support_fde_type_mix:1;
u32 support_snap_dump:1;
u32 support_nvme_tm:1;
u32 support_oce_only:1;
u32 support_ext_mfg_vpd:1;
u32 support_pcie:1;
u32 support_cvhealth_info:1;
u32 support_profile_change:2;
u32 mr_config_ext2_supported:1;
#else
u32 mr_config_ext2_supported:1;
u32 support_profile_change:2;
u32 support_cvhealth_info:1;
u32 support_pcie:1;
u32 support_ext_mfg_vpd:1;
u32 support_oce_only:1;
u32 support_nvme_tm:1;
u32 support_snap_dump:1;
u32 support_fde_type_mix:1;
u32 support_force_personality_change:1;
u32 support_psoc_update:1;
u32 support_pci_lane_margining: 1;
u32 reserved:19;
#endif
} adapter_operations5;
u32 rsvdForAdptOp[63];
u8 reserved7[3];
@ -1483,7 +1522,9 @@ struct megasas_ctrl_info {
#define MEGASAS_FW_BUSY 1
/* Driver's internal Logging levels*/
#define OCR_LOGS (1 << 0)
#define OCR_DEBUG (1 << 0)
#define TM_DEBUG (1 << 1)
#define LD_PD_DEBUG (1 << 2)
#define SCAN_PD_CHANNEL 0x1
#define SCAN_VD_CHANNEL 0x2
@ -1559,6 +1600,7 @@ enum FW_BOOT_CONTEXT {
#define MFI_IO_TIMEOUT_SECS 180
#define MEGASAS_SRIOV_HEARTBEAT_INTERVAL_VF (5 * HZ)
#define MEGASAS_OCR_SETTLE_TIME_VF (1000 * 30)
#define MEGASAS_SRIOV_MAX_RESET_TRIES_VF 1
#define MEGASAS_ROUTINE_WAIT_TIME_VF 300
#define MFI_REPLY_1078_MESSAGE_INTERRUPT 0x80000000
#define MFI_REPLY_GEN2_MESSAGE_INTERRUPT 0x00000001
@ -1583,7 +1625,10 @@ enum FW_BOOT_CONTEXT {
#define MR_CAN_HANDLE_SYNC_CACHE_OFFSET 0X01000000
#define MR_ATOMIC_DESCRIPTOR_SUPPORT_OFFSET (1 << 24)
#define MR_CAN_HANDLE_64_BIT_DMA_OFFSET (1 << 25)
#define MR_INTR_COALESCING_SUPPORT_OFFSET (1 << 26)
#define MEGASAS_WATCHDOG_THREAD_INTERVAL 1000
#define MEGASAS_WAIT_FOR_NEXT_DMA_MSECS 20
@ -1762,7 +1807,7 @@ struct megasas_init_frame {
__le32 pad_0; /*0Ch */
__le16 flags; /*10h */
__le16 reserved_3; /*12h */
__le16 replyqueue_mask; /*12h */
__le32 data_xfer_len; /*14h */
__le32 queue_info_new_phys_addr_lo; /*18h */
@ -2160,6 +2205,10 @@ struct megasas_aen_event {
struct megasas_irq_context {
struct megasas_instance *instance;
u32 MSIxIndex;
u32 os_irq;
struct irq_poll irqpoll;
bool irq_poll_scheduled;
bool irq_line_enable;
};
struct MR_DRV_SYSTEM_INFO {
@ -2190,6 +2239,23 @@ enum MR_PD_TYPE {
#define MR_DEFAULT_NVME_MDTS_KB 128
#define MR_NVME_PAGE_SIZE_MASK 0x000000FF
/*Aero performance parameters*/
#define MR_HIGH_IOPS_QUEUE_COUNT 8
#define MR_DEVICE_HIGH_IOPS_DEPTH 8
#define MR_HIGH_IOPS_BATCH_COUNT 16
enum MR_PERF_MODE {
MR_BALANCED_PERF_MODE = 0,
MR_IOPS_PERF_MODE = 1,
MR_LATENCY_PERF_MODE = 2,
};
#define MEGASAS_PERF_MODE_2STR(mode) \
((mode) == MR_BALANCED_PERF_MODE ? "Balanced" : \
(mode) == MR_IOPS_PERF_MODE ? "IOPS" : \
(mode) == MR_LATENCY_PERF_MODE ? "Latency" : \
"Unknown")
struct megasas_instance {
unsigned int *reply_map;
@ -2246,6 +2312,7 @@ struct megasas_instance {
u32 secure_jbod_support;
u32 support_morethan256jbod; /* FW support for more than 256 PD/JBOD */
bool use_seqnum_jbod_fp; /* Added for PD sequence */
bool smp_affinity_enable;
spinlock_t crashdump_lock;
struct megasas_register_set __iomem *reg_set;
@ -2263,6 +2330,7 @@ struct megasas_instance {
u16 ldio_threshold;
u16 cur_can_queue;
u32 max_sectors_per_req;
bool msix_load_balance;
struct megasas_aen_event *ev;
struct megasas_cmd **cmd_list;
@ -2290,15 +2358,13 @@ struct megasas_instance {
struct pci_dev *pdev;
u32 unique_id;
u32 fw_support_ieee;
u32 threshold_reply_count;
atomic_t fw_outstanding;
atomic_t ldio_outstanding;
atomic_t fw_reset_no_pci_access;
atomic_t ieee_sgl;
atomic_t prp_sgl;
atomic_t sge_holes_type1;
atomic_t sge_holes_type2;
atomic_t sge_holes_type3;
atomic64_t total_io_count;
atomic64_t high_iops_outstanding;
struct megasas_instance_template *instancet;
struct tasklet_struct isr_tasklet;
@ -2366,8 +2432,18 @@ struct megasas_instance {
u8 task_abort_tmo;
u8 max_reset_tmo;
u8 snapdump_wait_time;
#ifdef CONFIG_DEBUG_FS
struct dentry *debugfs_root;
struct dentry *raidmap_dump;
#endif
u8 enable_fw_dev_list;
bool atomic_desc_support;
bool support_seqnum_jbod_fp;
bool support_pci_lane_margining;
u8 low_latency_index_start;
int perf_mode;
};
struct MR_LD_VF_MAP {
u32 size;
union MR_LD_REF ref;
@ -2623,4 +2699,9 @@ void megasas_fusion_stop_watchdog(struct megasas_instance *instance);
void megasas_set_dma_settings(struct megasas_instance *instance,
struct megasas_dcmd_frame *dcmd,
dma_addr_t dma_addr, u32 dma_len);
int megasas_adp_reset_wait_for_ready(struct megasas_instance *instance,
bool do_adp_reset,
int ocr_context);
int megasas_irqpoll(struct irq_poll *irqpoll, int budget);
void megasas_dump_fusion_io(struct scsi_cmnd *scmd);
#endif /*LSI_MEGARAID_SAS_H */

File diff suppressed because it is too large.


@ -0,0 +1,179 @@
/*
* Linux MegaRAID driver for SAS based RAID controllers
*
* Copyright (c) 2003-2018 LSI Corporation.
* Copyright (c) 2003-2018 Avago Technologies.
* Copyright (c) 2003-2018 Broadcom Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
* Authors: Broadcom Inc.
* Kashyap Desai <kashyap.desai@broadcom.com>
* Sumit Saxena <sumit.saxena@broadcom.com>
* Shivasharan S <shivasharan.srikanteshwara@broadcom.com>
*
* Send feedback to: megaraidlinux.pdl@broadcom.com
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/compat.h>
#include <linux/irq_poll.h>
#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include "megaraid_sas_fusion.h"
#include "megaraid_sas.h"
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
struct dentry *megasas_debugfs_root;
static ssize_t
megasas_debugfs_read(struct file *filp, char __user *ubuf, size_t cnt,
loff_t *ppos)
{
struct megasas_debugfs_buffer *debug = filp->private_data;
if (!debug || !debug->buf)
return 0;
return simple_read_from_buffer(ubuf, cnt, ppos, debug->buf, debug->len);
}
static int
megasas_debugfs_raidmap_open(struct inode *inode, struct file *file)
{
struct megasas_instance *instance = inode->i_private;
struct megasas_debugfs_buffer *debug;
struct fusion_context *fusion;
fusion = instance->ctrl_context;
debug = kzalloc(sizeof(struct megasas_debugfs_buffer), GFP_KERNEL);
if (!debug)
return -ENOMEM;
debug->buf = (void *)fusion->ld_drv_map[(instance->map_id & 1)];
debug->len = fusion->drv_map_sz;
file->private_data = debug;
return 0;
}
static int
megasas_debugfs_release(struct inode *inode, struct file *file)
{
struct megasas_debugfs_buffer *debug = file->private_data;
if (!debug)
return 0;
file->private_data = NULL;
kfree(debug);
return 0;
}
static const struct file_operations megasas_debugfs_raidmap_fops = {
.owner = THIS_MODULE,
.open = megasas_debugfs_raidmap_open,
.read = megasas_debugfs_read,
.release = megasas_debugfs_release,
};
/*
* megasas_init_debugfs : Create debugfs root for megaraid_sas driver
*/
void megasas_init_debugfs(void)
{
megasas_debugfs_root = debugfs_create_dir("megaraid_sas", NULL);
if (!megasas_debugfs_root)
pr_info("Cannot create debugfs root\n");
}
/*
* megasas_exit_debugfs : Remove debugfs root for megaraid_sas driver
*/
void megasas_exit_debugfs(void)
{
debugfs_remove_recursive(megasas_debugfs_root);
}
/*
* megasas_setup_debugfs : Setup debugfs per Fusion adapter
* instance: Soft instance of adapter
*/
void
megasas_setup_debugfs(struct megasas_instance *instance)
{
char name[64];
struct fusion_context *fusion;
fusion = instance->ctrl_context;
if (fusion) {
snprintf(name, sizeof(name),
"scsi_host%d", instance->host->host_no);
if (!instance->debugfs_root) {
instance->debugfs_root =
debugfs_create_dir(name, megasas_debugfs_root);
if (!instance->debugfs_root) {
dev_err(&instance->pdev->dev,
"Cannot create per adapter debugfs directory\n");
return;
}
}
snprintf(name, sizeof(name), "raidmap_dump");
instance->raidmap_dump =
debugfs_create_file(name, S_IRUGO,
instance->debugfs_root, instance,
&megasas_debugfs_raidmap_fops);
if (!instance->raidmap_dump) {
dev_err(&instance->pdev->dev,
"Cannot create raidmap debugfs file\n");
debugfs_remove(instance->debugfs_root);
return;
}
}
}
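Taken together with megasas_init_debugfs() above this yields one directory per adapter, so with debugfs mounted in its usual place the RAID map can be read back from /sys/kernel/debug/megaraid_sas/scsi_host<N>/raidmap_dump (path inferred from the directory and file names created above, not stated in the patch).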
/*
* megasas_destroy_debugfs : Destroy debugfs per Fusion adapter
* instance: Soft instance of adapter
*/
void megasas_destroy_debugfs(struct megasas_instance *instance)
{
debugfs_remove_recursive(instance->debugfs_root);
}
#else
void megasas_init_debugfs(void)
{
}
void megasas_exit_debugfs(void)
{
}
void megasas_setup_debugfs(struct megasas_instance *instance)
{
}
void megasas_destroy_debugfs(struct megasas_instance *instance)
{
}
#endif /*CONFIG_DEBUG_FS*/
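The empty !CONFIG_DEBUG_FS stubs follow the usual kernel pattern for optional infrastructure: callers elsewhere in the driver can invoke megasas_setup_debugfs() and friends unconditionally, and the whole feature compiles away to nothing when debugfs is disabled.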


@ -33,6 +33,7 @@
#include <linux/compat.h>
#include <linux/blkdev.h>
#include <linux/poll.h>
#include <linux/irq_poll.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
@ -45,7 +46,7 @@
#define LB_PENDING_CMDS_DEFAULT 4
static unsigned int lb_pending_cmds = LB_PENDING_CMDS_DEFAULT;
module_param(lb_pending_cmds, int, S_IRUGO);
module_param(lb_pending_cmds, int, 0444);
MODULE_PARM_DESC(lb_pending_cmds, "Change raid-1 load balancing outstanding "
"threshold. Valid Values are 1-128. Default: 4");
@ -888,6 +889,77 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
return retval;
}
/*
* mr_get_phy_params_r56_rmw - Calculate parameters for R56 CTIO write operation
* @instance: Adapter soft state
* @ld: LD index
* @stripNo: Strip Number
* @io_info: IO info structure pointer
* pRAID_Context: RAID context pointer
* map: RAID map pointer
*
* This routine calculates the logical arm, data Arm, row number and parity arm
* for R56 CTIO write operation.
*/
static void mr_get_phy_params_r56_rmw(struct megasas_instance *instance,
u32 ld, u64 stripNo,
struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT_G35 *pRAID_Context,
struct MR_DRV_RAID_MAP_ALL *map)
{
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
u8 span, dataArms, arms, dataArm, logArm;
s8 rightmostParityArm, PParityArm;
u64 rowNum;
u64 *pdBlock = &io_info->pdBlock;
dataArms = raid->rowDataSize;
arms = raid->rowSize;
rowNum = mega_div64_32(stripNo, dataArms);
/* parity disk arm, first arm is 0 */
rightmostParityArm = (arms - 1) - mega_mod64(rowNum, arms);
/* logical arm within row */
logArm = mega_mod64(stripNo, dataArms);
/* physical arm for data */
dataArm = mega_mod64((rightmostParityArm + 1 + logArm), arms);
if (raid->spanDepth == 1) {
span = 0;
} else {
span = (u8)MR_GetSpanBlock(ld, rowNum, pdBlock, map);
if (span == SPAN_INVALID)
return;
}
if (raid->level == 6) {
/* P Parity arm, note this can go negative adjust if negative */
PParityArm = (arms - 2) - mega_mod64(rowNum, arms);
if (PParityArm < 0)
PParityArm += arms;
/* rightmostParityArm is P-Parity for RAID 5 and Q-Parity for RAID */
pRAID_Context->flow_specific.r56_arm_map = rightmostParityArm;
pRAID_Context->flow_specific.r56_arm_map |=
(u16)(PParityArm << RAID_CTX_R56_P_ARM_SHIFT);
} else {
pRAID_Context->flow_specific.r56_arm_map |=
(u16)(rightmostParityArm << RAID_CTX_R56_P_ARM_SHIFT);
}
pRAID_Context->reg_lock_row_lba = cpu_to_le64(rowNum);
pRAID_Context->flow_specific.r56_arm_map |=
(u16)(logArm << RAID_CTX_R56_LOG_ARM_SHIFT);
cpu_to_le16s(&pRAID_Context->flow_specific.r56_arm_map);
pRAID_Context->span_arm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | dataArm;
pRAID_Context->raid_flags = (MR_RAID_FLAGS_IO_SUB_TYPE_R56_DIV_OFFLOAD <<
MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT);
return;
}
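To make the arm mapping above concrete, a worked example with an assumed geometry (the numbers are illustrative, not from the patch):

/*
 * RAID-6, single span: rowSize = 5 arms (3 data + P + Q), so
 * rowDataSize = 3.  For stripNo = 7:
 *
 *   rowNum             = 7 / 3             = 2
 *   rightmostParityArm = (5 - 1) - (2 % 5) = 2   (Q parity)
 *   logArm             = 7 % 3             = 1
 *   dataArm            = (2 + 1 + 1) % 5   = 4
 *   PParityArm         = (5 - 2) - (2 % 5) = 1
 *
 * r56_arm_map then carries Q (2) in bits 4:0, P (1) in bits 9:5 and
 * logArm (1) in bits 14:10, matching the RAID_CTX_R56_* masks added
 * to megaraid_sas_fusion.h below.
 */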
/*
******************************************************************************
*
@ -954,6 +1026,7 @@ MR_BuildRaidContext(struct megasas_instance *instance,
stripSize = 1 << raid->stripeShift;
stripe_mask = stripSize-1;
io_info->data_arms = raid->rowDataSize;
/*
* calculate starting row and stripe, and number of strips and rows
@ -1095,6 +1168,13 @@ MR_BuildRaidContext(struct megasas_instance *instance,
/* save pointer to raid->LUN array */
*raidLUN = raid->LUN;
/* Aero R5/6 Division Offload for WRITE */
if (fusion->r56_div_offload && (raid->level >= 5) && !isRead) {
mr_get_phy_params_r56_rmw(instance, ld, start_strip, io_info,
(struct RAID_CONTEXT_G35 *)pRAID_Context,
map);
return true;
}
/*Get Phy Params only if FP capable, or else leave it to MR firmware
to do the calculation.*/

File diff suppressed because it is too large.


@ -75,7 +75,8 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
MR_RAID_FLAGS_IO_SUB_TYPE_RMW_P = 3,
MR_RAID_FLAGS_IO_SUB_TYPE_RMW_Q = 4,
MR_RAID_FLAGS_IO_SUB_TYPE_CACHE_BYPASS = 6,
MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT = 7
MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT = 7,
MR_RAID_FLAGS_IO_SUB_TYPE_R56_DIV_OFFLOAD = 8
};
/*
@ -88,7 +89,6 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
#define MEGASAS_FP_CMD_LEN 16
#define MEGASAS_FUSION_IN_RESET 0
#define THRESHOLD_REPLY_COUNT 50
#define RAID_1_PEER_CMDS 2
#define JBOD_MAPS_COUNT 2
#define MEGASAS_REDUCE_QD_COUNT 64
@ -140,12 +140,15 @@ struct RAID_CONTEXT_G35 {
u16 timeout_value; /* 0x02 -0x03 */
u16 routing_flags; // 0x04 -0x05 routing flags
u16 virtual_disk_tgt_id; /* 0x06 -0x07 */
u64 reg_lock_row_lba; /* 0x08 - 0x0F */
__le64 reg_lock_row_lba; /* 0x08 - 0x0F */
u32 reg_lock_length; /* 0x10 - 0x13 */
union {
u16 next_lmid; /* 0x14 - 0x15 */
u16 peer_smid; /* used for the raid 1/10 fp writes */
} smid;
union { // flow specific
u16 rmw_op_index; /* 0x14 - 0x15, R5/6 RMW: rmw operation index*/
u16 peer_smid; /* 0x14 - 0x15, R1 Write: peer smid*/
u16 r56_arm_map; /* 0x14 - 0x15, Unused [15], LogArm[14:10], P-Arm[9:5], Q-Arm[4:0] */
} flow_specific;
u8 ex_status; /* 0x16 : OUT */
u8 status; /* 0x17 status */
u8 raid_flags; /* 0x18 resvd[7:6], ioSubType[5:4],
@ -236,6 +239,13 @@ union RAID_CONTEXT_UNION {
#define RAID_CTX_SPANARM_SPAN_SHIFT (5)
#define RAID_CTX_SPANARM_SPAN_MASK (0xE0)
/* LogArm[14:10], P-Arm[9:5], Q-Arm[4:0] */
#define RAID_CTX_R56_Q_ARM_MASK (0x1F)
#define RAID_CTX_R56_P_ARM_SHIFT (5)
#define RAID_CTX_R56_P_ARM_MASK (0x3E0)
#define RAID_CTX_R56_LOG_ARM_SHIFT (10)
#define RAID_CTX_R56_LOG_ARM_MASK (0x7C00)
/* number of bits per index in U32 TrackStream */
#define BITS_PER_INDEX_STREAM 4
#define INVALID_STREAM_NUM 16
@ -940,6 +950,7 @@ struct IO_REQUEST_INFO {
u8 pd_after_lb;
u16 r1_alt_dev_handle; /* raid 1/10 only */
bool ra_capable;
u8 data_arms;
};
struct MR_LD_TARGET_SYNC {
@ -1324,7 +1335,8 @@ struct fusion_context {
dma_addr_t ioc_init_request_phys;
struct MPI2_IOC_INIT_REQUEST *ioc_init_request;
struct megasas_cmd *ioc_init_cmd;
bool pcie_bw_limitation;
bool r56_div_offload;
};
union desc_value {
@ -1349,6 +1361,11 @@ struct MR_SNAPDUMP_PROPERTIES {
u8 reserved[12];
};
struct megasas_debugfs_buffer {
void *buf;
u32 len;
};
void megasas_free_cmds_fusion(struct megasas_instance *instance);
int megasas_ioc_init_fusion(struct megasas_instance *instance);
u8 megasas_get_map_info(struct megasas_instance *instance);


@ -1398,7 +1398,7 @@ typedef struct _MPI2_CONFIG_PAGE_IOC_1 {
U8 PCIBusNum; /*0x0E */
U8 PCIDomainSegment; /*0x0F */
U32 Reserved1; /*0x10 */
U32 Reserved2; /*0x14 */
U32 ProductSpecific; /* 0x14 */
} MPI2_CONFIG_PAGE_IOC_1,
*PTR_MPI2_CONFIG_PAGE_IOC_1,
Mpi2IOCPage1_t, *pMpi2IOCPage1_t;


@ -74,28 +74,28 @@ static MPT_CALLBACK mpt_callbacks[MPT_MAX_CALLBACKS];
#define MAX_HBA_QUEUE_DEPTH 30000
#define MAX_CHAIN_DEPTH 100000
static int max_queue_depth = -1;
module_param(max_queue_depth, int, 0);
module_param(max_queue_depth, int, 0444);
MODULE_PARM_DESC(max_queue_depth, " max controller queue depth ");
static int max_sgl_entries = -1;
module_param(max_sgl_entries, int, 0);
module_param(max_sgl_entries, int, 0444);
MODULE_PARM_DESC(max_sgl_entries, " max sg entries ");
static int msix_disable = -1;
module_param(msix_disable, int, 0);
module_param(msix_disable, int, 0444);
MODULE_PARM_DESC(msix_disable, " disable msix routed interrupts (default=0)");
static int smp_affinity_enable = 1;
module_param(smp_affinity_enable, int, S_IRUGO);
module_param(smp_affinity_enable, int, 0444);
MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Default: enable(1)");
static int max_msix_vectors = -1;
module_param(max_msix_vectors, int, 0);
module_param(max_msix_vectors, int, 0444);
MODULE_PARM_DESC(max_msix_vectors,
" max msix vectors");
static int irqpoll_weight = -1;
module_param(irqpoll_weight, int, 0);
module_param(irqpoll_weight, int, 0444);
MODULE_PARM_DESC(irqpoll_weight,
"irq poll weight (default= one fourth of HBA queue depth)");
@ -103,6 +103,26 @@ static int mpt3sas_fwfault_debug;
MODULE_PARM_DESC(mpt3sas_fwfault_debug,
" enable detection of firmware fault and halt firmware - (default=0)");
static int perf_mode = -1;
module_param(perf_mode, int, 0444);
MODULE_PARM_DESC(perf_mode,
"Performance mode (only for Aero/Sea Generation), options:\n\t\t"
"0 - balanced: high iops mode is enabled &\n\t\t"
"interrupt coalescing is enabled only on high iops queues,\n\t\t"
"1 - iops: high iops mode is disabled &\n\t\t"
"interrupt coalescing is enabled on all queues,\n\t\t"
"2 - latency: high iops mode is disabled &\n\t\t"
"interrupt coalescing is enabled on all queues with timeout value 0xA,\n"
"\t\tdefault - default perf_mode is 'balanced'"
);
enum mpt3sas_perf_mode {
MPT_PERF_MODE_DEFAULT = -1,
MPT_PERF_MODE_BALANCED = 0,
MPT_PERF_MODE_IOPS = 1,
MPT_PERF_MODE_LATENCY = 2,
};
static int
_base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc);
@ -1282,7 +1302,7 @@ _base_async_event(struct MPT3SAS_ADAPTER *ioc, u8 msix_index, u32 reply)
ack_request->EventContext = mpi_reply->EventContext;
ack_request->VF_ID = 0; /* TODO */
ack_request->VP_ID = 0;
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
out:
@ -2793,6 +2813,9 @@ _base_free_irq(struct MPT3SAS_ADAPTER *ioc)
list_for_each_entry_safe(reply_q, next, &ioc->reply_queue_list, list) {
list_del(&reply_q->list);
if (ioc->smp_affinity_enable)
irq_set_affinity_hint(pci_irq_vector(ioc->pdev,
reply_q->msix_index), NULL);
free_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index),
reply_q);
kfree(reply_q);
@ -2857,14 +2880,13 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
{
unsigned int cpu, nr_cpus, nr_msix, index = 0;
struct adapter_reply_queue *reply_q;
int local_numa_node;
if (!_base_is_controller_msix_enabled(ioc))
return;
ioc->msix_load_balance = false;
if (ioc->reply_queue_count < num_online_cpus()) {
ioc->msix_load_balance = true;
if (ioc->msix_load_balance)
return;
}
memset(ioc->cpu_msix_table, 0, ioc->cpu_msix_table_sz);
@ -2874,14 +2896,33 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
if (!nr_msix)
return;
if (smp_affinity_enable) {
if (ioc->smp_affinity_enable) {
/*
* set irq affinity to local numa node for those irqs
* corresponding to high iops queues.
*/
if (ioc->high_iops_queues) {
local_numa_node = dev_to_node(&ioc->pdev->dev);
for (index = 0; index < ioc->high_iops_queues;
index++) {
irq_set_affinity_hint(pci_irq_vector(ioc->pdev,
index), cpumask_of_node(local_numa_node));
}
}
list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
const cpumask_t *mask = pci_irq_get_affinity(ioc->pdev,
reply_q->msix_index);
const cpumask_t *mask;
if (reply_q->msix_index < ioc->high_iops_queues)
continue;
mask = pci_irq_get_affinity(ioc->pdev,
reply_q->msix_index);
if (!mask) {
ioc_warn(ioc, "no affinity for msi %x\n",
reply_q->msix_index);
continue;
goto fall_back;
}
for_each_cpu_and(cpu, mask, cpu_online_mask) {
@ -2892,12 +2933,18 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
}
return;
}
fall_back:
cpu = cpumask_first(cpu_online_mask);
nr_msix -= ioc->high_iops_queues;
index = 0;
list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
unsigned int i, group = nr_cpus / nr_msix;
if (reply_q->msix_index < ioc->high_iops_queues)
continue;
if (cpu >= nr_cpus)
break;
@ -2912,6 +2959,52 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)
}
}
/**
* _base_check_and_enable_high_iops_queues - enable high iops mode
* @ ioc - per adapter object
* @ hba_msix_vector_count - msix vectors supported by HBA
*
* Enable high iops queues only if
* - HBA is a SEA/AERO controller and
* - MSI-Xs vector supported by the HBA is 128 and
* - total CPU count in the system >=16 and
* - loaded driver with default max_msix_vectors module parameter and
* - system booted in non kdump mode
*
* returns nothing.
*/
static void
_base_check_and_enable_high_iops_queues(struct MPT3SAS_ADAPTER *ioc,
int hba_msix_vector_count)
{
u16 lnksta, speed;
if (perf_mode == MPT_PERF_MODE_IOPS ||
perf_mode == MPT_PERF_MODE_LATENCY) {
ioc->high_iops_queues = 0;
return;
}
if (perf_mode == MPT_PERF_MODE_DEFAULT) {
pcie_capability_read_word(ioc->pdev, PCI_EXP_LNKSTA, &lnksta);
speed = lnksta & PCI_EXP_LNKSTA_CLS;
if (speed < 0x4) {
ioc->high_iops_queues = 0;
return;
}
}
if (!reset_devices && ioc->is_aero_ioc &&
hba_msix_vector_count == MPT3SAS_GEN35_MAX_MSIX_QUEUES &&
num_online_cpus() >= MPT3SAS_HIGH_IOPS_REPLY_QUEUES &&
max_msix_vectors == -1)
ioc->high_iops_queues = MPT3SAS_HIGH_IOPS_REPLY_QUEUES;
else
ioc->high_iops_queues = 0;
}
/**
* _base_disable_msix - disables msix
* @ioc: per adapter object
@ -2922,10 +3015,37 @@ _base_disable_msix(struct MPT3SAS_ADAPTER *ioc)
{
if (!ioc->msix_enable)
return;
pci_disable_msix(ioc->pdev);
pci_free_irq_vectors(ioc->pdev);
ioc->msix_enable = 0;
}
/**
* _base_alloc_irq_vectors - allocate msix vectors
* @ioc: per adapter object
*
*/
static int
_base_alloc_irq_vectors(struct MPT3SAS_ADAPTER *ioc)
{
int i, irq_flags = PCI_IRQ_MSIX;
struct irq_affinity desc = { .pre_vectors = ioc->high_iops_queues };
struct irq_affinity *descp = &desc;
if (ioc->smp_affinity_enable)
irq_flags |= PCI_IRQ_AFFINITY;
else
descp = NULL;
ioc_info(ioc, " %d %d\n", ioc->high_iops_queues,
ioc->msix_vector_count);
i = pci_alloc_irq_vectors_affinity(ioc->pdev,
ioc->high_iops_queues,
ioc->msix_vector_count, irq_flags, descp);
return i;
}
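The .pre_vectors = ioc->high_iops_queues member is what keeps the two interrupt classes apart: the PCI core excludes the first pre_vectors MSI-X vectors from its automatic affinity spreading, leaving the driver free to pin them to the controller's local NUMA node in _base_assign_reply_queues() above, while the remaining vectors are spread across the online CPUs as usual.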
/**
* _base_enable_msix - enables msix, failback to io_apic
* @ioc: per adapter object
@ -2937,7 +3057,8 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
int r;
int i, local_max_msix_vectors;
u8 try_msix = 0;
unsigned int irq_flags = PCI_IRQ_MSIX;
ioc->msix_load_balance = false;
if (msix_disable == -1 || msix_disable == 0)
try_msix = 1;
@ -2948,12 +3069,16 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
if (_base_check_enable_msix(ioc) != 0)
goto try_ioapic;
ioc->reply_queue_count = min_t(int, ioc->cpu_count,
ioc_info(ioc, "MSI-X vectors supported: %d\n", ioc->msix_vector_count);
pr_info("\t no of cores: %d, max_msix_vectors: %d\n",
ioc->cpu_count, max_msix_vectors);
if (ioc->is_aero_ioc)
_base_check_and_enable_high_iops_queues(ioc,
ioc->msix_vector_count);
ioc->reply_queue_count =
min_t(int, ioc->cpu_count + ioc->high_iops_queues,
ioc->msix_vector_count);
ioc_info(ioc, "MSI-X vectors supported: %d, no of cores: %d, max_msix_vectors: %d\n",
ioc->msix_vector_count, ioc->cpu_count, max_msix_vectors);
if (!ioc->rdpq_array_enable && max_msix_vectors == -1)
local_max_msix_vectors = (reset_devices) ? 1 : 8;
else
@ -2965,14 +3090,23 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
else if (local_max_msix_vectors == 0)
goto try_ioapic;
if (ioc->msix_vector_count < ioc->cpu_count)
smp_affinity_enable = 0;
/*
* Enable msix_load_balance only if combined reply queue mode is
* disabled on SAS3 & above generation HBA devices.
*/
if (!ioc->combined_reply_queue &&
ioc->hba_mpi_version_belonged != MPI2_VERSION) {
ioc->msix_load_balance = true;
}
if (smp_affinity_enable)
irq_flags |= PCI_IRQ_AFFINITY;
/*
* smp affinity setting is not needed when msix load balance
* is enabled.
*/
if (ioc->msix_load_balance)
ioc->smp_affinity_enable = 0;
r = pci_alloc_irq_vectors(ioc->pdev, 1, ioc->reply_queue_count,
irq_flags);
r = _base_alloc_irq_vectors(ioc);
if (r < 0) {
dfailprintk(ioc,
ioc_info(ioc, "pci_alloc_irq_vectors failed (r=%d) !!!\n",
@ -2991,11 +3125,15 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
}
}
ioc_info(ioc, "High IOPs queues : %s\n",
ioc->high_iops_queues ? "enabled" : "disabled");
return 0;
/* failback to io_apic interrupt routing */
try_ioapic:
ioc->high_iops_queues = 0;
ioc_info(ioc, "High IOPs queues : disabled\n");
ioc->reply_queue_count = 1;
r = pci_alloc_irq_vectors(ioc->pdev, 1, 1, PCI_IRQ_LEGACY);
if (r < 0) {
@ -3265,8 +3403,18 @@ mpt3sas_base_get_reply_virt_addr(struct MPT3SAS_ADAPTER *ioc, u32 phys_addr)
return ioc->reply + (phys_addr - (u32)ioc->reply_dma);
}
/**
* _base_get_msix_index - get the msix index
* @ioc: per adapter object
* @scmd: scsi_cmnd object
*
* returns msix index of general reply queues,
* i.e. reply queue on which IO request's reply
* should be posted by the HBA firmware.
*/
static inline u8
_base_get_msix_index(struct MPT3SAS_ADAPTER *ioc)
_base_get_msix_index(struct MPT3SAS_ADAPTER *ioc,
struct scsi_cmnd *scmd)
{
/* Enables reply_queue load balancing */
if (ioc->msix_load_balance)
@ -3277,6 +3425,35 @@ _base_get_msix_index(struct MPT3SAS_ADAPTER *ioc)
return ioc->cpu_msix_table[raw_smp_processor_id()];
}
/**
* _base_get_high_iops_msix_index - get the msix index of
* high iops queues
* @ioc: per adapter object
* @scmd: scsi_cmnd object
*
* Returns: msix index of high iops reply queues.
* i.e. high iops reply queue on which IO request's
* reply should be posted by the HBA firmware.
*/
static inline u8
_base_get_high_iops_msix_index(struct MPT3SAS_ADAPTER *ioc,
struct scsi_cmnd *scmd)
{
/**
* Round robin the IO interrupts among the high iops
* reply queues in terms of batch count 16 when outstanding
* IOs on the target device is >=8.
*/
if (atomic_read(&scmd->device->device_busy) >
MPT3SAS_DEVICE_HIGH_IOPS_DEPTH)
return base_mod64((
atomic64_add_return(1, &ioc->high_iops_outstanding) /
MPT3SAS_HIGH_IOPS_BATCH_COUNT),
MPT3SAS_HIGH_IOPS_REPLY_QUEUES);
return _base_get_msix_index(ioc, scmd);
}
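Plugging numbers into the round robin above, assuming MPT3SAS_HIGH_IOPS_BATCH_COUNT is 16 (per the comment) and MPT3SAS_HIGH_IOPS_REPLY_QUEUES is 8:

/*
 * atomic64_add_return() yields the post-increment value, so:
 *
 *   high_iops_outstanding    1..15   -> (counter / 16) % 8 = queue 0
 *                           16..31   ->                      queue 1
 *                              ...
 *                          112..127  ->                      queue 7
 *                          128..143  ->                      queue 0  (wraps)
 *
 * i.e. the interrupt target advances one high iops queue per batch
 * of 16 submissions once the device has 8 or more IOs outstanding.
 */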
/**
* mpt3sas_base_get_smid - obtain a free smid from internal queue
* @ioc: per adapter object
@ -3325,8 +3502,8 @@ mpt3sas_base_get_smid_scsiio(struct MPT3SAS_ADAPTER *ioc, u8 cb_idx,
smid = tag + 1;
request->cb_idx = cb_idx;
request->msix_io = _base_get_msix_index(ioc);
request->smid = smid;
request->scmd = scmd;
INIT_LIST_HEAD(&request->chain_list);
return smid;
}
@ -3380,6 +3557,7 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc,
return;
st->cb_idx = 0xFF;
st->direct_io = 0;
st->scmd = NULL;
atomic_set(&ioc->chain_lookup[st->smid - 1].chain_offset, 0);
st->smid = 0;
}
@ -3478,6 +3656,29 @@ _base_writeq(__u64 b, volatile void __iomem *addr, spinlock_t *writeq_lock)
}
#endif
/**
* _base_set_and_get_msix_index - get the msix index and assign to msix_io
* variable of scsi tracker
* @ioc: per adapter object
* @smid: system request message index
*
* returns msix index.
*/
static u8
_base_set_and_get_msix_index(struct MPT3SAS_ADAPTER *ioc, u16 smid)
{
struct scsiio_tracker *st = NULL;
if (smid < ioc->hi_priority_smid)
st = _get_st_from_smid(ioc, smid);
if (st == NULL)
return _base_get_msix_index(ioc, NULL);
st->msix_io = ioc->get_msix_index_for_smlio(ioc, st->scmd);
return st->msix_io;
}
/**
* _base_put_smid_mpi_ep_scsi_io - send SCSI_IO request to firmware
* @ioc: per adapter object
@ -3485,7 +3686,8 @@ _base_writeq(__u64 b, volatile void __iomem *addr, spinlock_t *writeq_lock)
* @handle: device handle
*/
static void
_base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
_base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc,
u16 smid, u16 handle)
{
Mpi2RequestDescriptorUnion_t descriptor;
u64 *request = (u64 *)&descriptor;
@ -3498,7 +3700,7 @@ _base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
_base_clone_mpi_to_sys_mem(mpi_req_iomem, (void *)mfp,
ioc->request_sz);
descriptor.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc);
descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.SCSIIO.SMID = cpu_to_le16(smid);
descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
descriptor.SCSIIO.LMID = 0;
@ -3520,7 +3722,7 @@ _base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
descriptor.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc);
descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.SCSIIO.SMID = cpu_to_le16(smid);
descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
descriptor.SCSIIO.LMID = 0;
@ -3529,13 +3731,13 @@ _base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
}
/**
* mpt3sas_base_put_smid_fast_path - send fast path request to firmware
* _base_put_smid_fast_path - send fast path request to firmware
* @ioc: per adapter object
* @smid: system request message index
* @handle: device handle
*/
void
mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
static void
_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
u16 handle)
{
Mpi2RequestDescriptorUnion_t descriptor;
@ -3543,7 +3745,7 @@ mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
descriptor.SCSIIO.RequestFlags =
MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc);
descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.SCSIIO.SMID = cpu_to_le16(smid);
descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
descriptor.SCSIIO.LMID = 0;
@ -3552,13 +3754,13 @@ mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
}
/**
* mpt3sas_base_put_smid_hi_priority - send Task Management request to firmware
* _base_put_smid_hi_priority - send Task Management request to firmware
* @ioc: per adapter object
* @smid: system request message index
* @msix_task: msix_task will be same as msix of IO incase of task abort else 0.
*/
void
mpt3sas_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid,
static void
_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid,
u16 msix_task)
{
Mpi2RequestDescriptorUnion_t descriptor;
@ -3607,7 +3809,7 @@ mpt3sas_base_put_smid_nvme_encap(struct MPT3SAS_ADAPTER *ioc, u16 smid)
descriptor.Default.RequestFlags =
MPI26_REQ_DESCRIPT_FLAGS_PCIE_ENCAPSULATED;
descriptor.Default.MSIxIndex = _base_get_msix_index(ioc);
descriptor.Default.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.Default.SMID = cpu_to_le16(smid);
descriptor.Default.LMID = 0;
descriptor.Default.DescriptorTypeDependent = 0;
@ -3616,12 +3818,12 @@ mpt3sas_base_put_smid_nvme_encap(struct MPT3SAS_ADAPTER *ioc, u16 smid)
}
/**
* mpt3sas_base_put_smid_default - Default, primarily used for config pages
* _base_put_smid_default - Default, primarily used for config pages
* @ioc: per adapter object
* @smid: system request message index
*/
void
mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
static void
_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
{
Mpi2RequestDescriptorUnion_t descriptor;
void *mpi_req_iomem;
@ -3639,7 +3841,7 @@ mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
}
request = (u64 *)&descriptor;
descriptor.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
descriptor.Default.MSIxIndex = _base_get_msix_index(ioc);
descriptor.Default.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.Default.SMID = cpu_to_le16(smid);
descriptor.Default.LMID = 0;
descriptor.Default.DescriptorTypeDependent = 0;
@ -3652,6 +3854,95 @@ mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
&ioc->scsi_lookup_lock);
}
/**
* _base_put_smid_scsi_io_atomic - send SCSI_IO request to firmware using
* Atomic Request Descriptor
* @ioc: per adapter object
* @smid: system request message index
* @handle: device handle, unused in this function, for function type match
*
* Return nothing.
*/
static void
_base_put_smid_scsi_io_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
u16 handle)
{
Mpi26AtomicRequestDescriptor_t descriptor;
u32 *request = (u32 *)&descriptor;
descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.SMID = cpu_to_le16(smid);
writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
}
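Note the descriptor size: an Atomic Request Descriptor is posted with a single 32-bit writel() to AtomicRequestDescriptorPost, whereas the legacy descriptors above are 64 bits wide and go through _base_writeq() with its spinlock-guarded double-write fallback on 32-bit platforms, which is the main attraction of the atomic variants on controllers that support them.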
/**
* _base_put_smid_fast_path_atomic - send fast path request to firmware
* using Atomic Request Descriptor
* @ioc: per adapter object
* @smid: system request message index
* @handle: device handle, unused in this function, for function type match
* Return nothing
*/
static void
_base_put_smid_fast_path_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
u16 handle)
{
Mpi26AtomicRequestDescriptor_t descriptor;
u32 *request = (u32 *)&descriptor;
descriptor.RequestFlags = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.SMID = cpu_to_le16(smid);
writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
}
/**
* _base_put_smid_hi_priority_atomic - send Task Management request to
* firmware using Atomic Request Descriptor
* @ioc: per adapter object
* @smid: system request message index
* @msix_task: msix_task will be same as msix of IO incase of task abort else 0
*
* Return nothing.
*/
static void
_base_put_smid_hi_priority_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
u16 msix_task)
{
Mpi26AtomicRequestDescriptor_t descriptor;
u32 *request = (u32 *)&descriptor;
descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
descriptor.MSIxIndex = msix_task;
descriptor.SMID = cpu_to_le16(smid);
writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
}
/**
* _base_put_smid_default_atomic - Default, primarily used for config pages,
* using Atomic Request Descriptor
* @ioc: per adapter object
* @smid: system request message index
*
* Return nothing.
*/
static void
_base_put_smid_default_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid)
{
Mpi26AtomicRequestDescriptor_t descriptor;
u32 *request = (u32 *)&descriptor;
descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid);
descriptor.SMID = cpu_to_le16(smid);
writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
}
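The atomic variants above differ from their non-atomic counterparts mainly in the size of the post. A minimal sketch of the contrast, assuming the register layout visible elsewhere in this file (RequestDescriptorPostLow/High for the 64-bit path, AtomicRequestDescriptorPost for the 32-bit one):
/* Non-atomic path: the full 64-bit Request Descriptor is written as
 * two 32-bit halves (or one 64-bit write where writeq is available),
 * serialized by scsi_lookup_lock. */
_base_writeq(*(u64 *)&descriptor, &ioc->chip->RequestDescriptorPostLow,
	&ioc->scsi_lookup_lock);
/* Atomic path: only the 32-bit Atomic Request Descriptor is posted,
 * so a single writel() suffices and no lock is needed. */
writel(cpu_to_le32(*(u32 *)&descriptor),
	&ioc->chip->AtomicRequestDescriptorPost);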
/**
* _base_display_OEMs_branding - Display branding string
* @ioc: per adapter object
@ -3952,7 +4243,7 @@ _base_display_fwpkg_version(struct MPT3SAS_ADAPTER *ioc)
ioc->build_sg(ioc, &mpi_request->SGL, 0, 0, fwpkg_data_dma,
data_length);
init_completion(&ioc->base_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
/* Wait for 15 seconds */
wait_for_completion_timeout(&ioc->base_cmds.done,
FW_IMG_HDR_READ_TIMEOUT*HZ);
@ -4191,6 +4482,71 @@ out:
kfree(sas_iounit_pg1);
}
/**
* _base_update_ioc_page1_inlinewith_perf_mode - Update IOC Page1 fields
* according to performance mode.
* @ioc: per adapter object
*
* Return nothing.
*/
static void
_base_update_ioc_page1_inlinewith_perf_mode(struct MPT3SAS_ADAPTER *ioc)
{
Mpi2IOCPage1_t ioc_pg1;
Mpi2ConfigReply_t mpi_reply;
mpt3sas_config_get_ioc_pg1(ioc, &mpi_reply, &ioc->ioc_pg1_copy);
memcpy(&ioc_pg1, &ioc->ioc_pg1_copy, sizeof(Mpi2IOCPage1_t));
switch (perf_mode) {
case MPT_PERF_MODE_DEFAULT:
case MPT_PERF_MODE_BALANCED:
if (ioc->high_iops_queues) {
ioc_info(ioc,
"Enable interrupt coalescing only for first\t"
"%d reply queues\n",
MPT3SAS_HIGH_IOPS_REPLY_QUEUES);
/*
* If bit 31 is zero then interrupt coalescing is
* enabled for all reply descriptor post queues.
* If bit 31 is set then the user can enable/disable
* interrupt coalescing on a per reply descriptor
* post queue group (of 8) basis. So to enable
* interrupt coalescing only on the first reply
* descriptor post queue group, bit 31 and bit 0 are set.
*/
ioc_pg1.ProductSpecific = cpu_to_le32(0x80000000 |
((1 << MPT3SAS_HIGH_IOPS_REPLY_QUEUES/8) - 1));
mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
ioc_info(ioc, "performance mode: balanced\n");
return;
}
/* Fall through */
case MPT_PERF_MODE_LATENCY:
/*
* Enable interrupt coalescing on all reply queues
* with timeout value 0xA
*/
ioc_pg1.CoalescingTimeout = cpu_to_le32(0xa);
ioc_pg1.Flags |= cpu_to_le32(MPI2_IOCPAGE1_REPLY_COALESCING);
ioc_pg1.ProductSpecific = 0;
mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
ioc_info(ioc, "performance mode: latency\n");
break;
case MPT_PERF_MODE_IOPS:
/*
* Enable interrupt coalescing on all reply queues.
*/
ioc_info(ioc,
"performance mode: iops with coalescing timeout: 0x%x\n",
le32_to_cpu(ioc_pg1.CoalescingTimeout));
ioc_pg1.Flags |= cpu_to_le32(MPI2_IOCPAGE1_REPLY_COALESCING);
ioc_pg1.ProductSpecific = 0;
mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
break;
}
}
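Plugging in the constants defined later in mpt3sas_base.h (MPT3SAS_HIGH_IOPS_REPLY_QUEUES = 8, i.e. exactly one queue group of 8), the balanced-mode value works out to a single constant:
/* (1 << 8/8) - 1          = 0x00000001  -> enable group 0 only   */
/* 0x80000000 | 0x00000001 = 0x80000001  -> bit 31 selects        */
/*                                          per-group control     */
ioc_pg1.ProductSpecific = cpu_to_le32(0x80000001);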
/**
* _base_static_config_pages - static start of day config pages
* @ioc: per adapter object
@ -4258,6 +4614,8 @@ _base_static_config_pages(struct MPT3SAS_ADAPTER *ioc)
if (ioc->iounit_pg8.NumSensors)
ioc->temp_sensors_count = ioc->iounit_pg8.NumSensors;
if (ioc->is_aero_ioc)
_base_update_ioc_page1_inlinewith_perf_mode(ioc);
}
/**
@ -5431,7 +5789,7 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET)
ioc->ioc_link_reset_in_progress = 1;
init_completion(&ioc->base_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->base_cmds.done,
msecs_to_jiffies(10000));
if ((mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET ||
@ -5510,7 +5868,7 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
ioc->base_cmds.smid = smid;
memcpy(request, mpi_request, sizeof(Mpi2SepReply_t));
init_completion(&ioc->base_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->base_cmds.done,
msecs_to_jiffies(10000));
if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
@ -5693,6 +6051,9 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc)
if ((facts->IOCCapabilities &
MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE) && (!reset_devices))
ioc->rdpq_array_capable = 1;
if ((facts->IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ)
&& ioc->is_aero_ioc)
ioc->atomic_desc_capable = 1;
facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word);
facts->IOCRequestFrameSize =
le16_to_cpu(mpi_reply.IOCRequestFrameSize);
@ -5914,7 +6275,7 @@ _base_send_port_enable(struct MPT3SAS_ADAPTER *ioc)
mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE;
init_completion(&ioc->port_enable_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->port_enable_cmds.done, 300*HZ);
if (!(ioc->port_enable_cmds.status & MPT3_CMD_COMPLETE)) {
ioc_err(ioc, "%s: timeout\n", __func__);
@ -5973,7 +6334,7 @@ mpt3sas_port_enable(struct MPT3SAS_ADAPTER *ioc)
memset(mpi_request, 0, sizeof(Mpi2PortEnableRequest_t));
mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE;
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
return 0;
}
@ -6089,7 +6450,7 @@ _base_event_notification(struct MPT3SAS_ADAPTER *ioc)
mpi_request->EventMasks[i] =
cpu_to_le32(ioc->event_masks[i]);
init_completion(&ioc->base_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ);
if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
ioc_err(ioc, "%s: timeout\n", __func__);
@ -6549,6 +6910,8 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
}
}
ioc->smp_affinity_enable = smp_affinity_enable;
ioc->rdpq_array_enable_assigned = 0;
ioc->dma_mask = 0;
if (ioc->is_aero_ioc)
@ -6569,6 +6932,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
ioc->build_sg_scmd = &_base_build_sg_scmd;
ioc->build_sg = &_base_build_sg;
ioc->build_zero_len_sge = &_base_build_zero_len_sge;
ioc->get_msix_index_for_smlio = &_base_get_msix_index;
break;
case MPI25_VERSION:
case MPI26_VERSION:
@ -6583,15 +6947,30 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
ioc->build_nvme_prp = &_base_build_nvme_prp;
ioc->build_zero_len_sge = &_base_build_zero_len_sge_ieee;
ioc->sge_size_ieee = sizeof(Mpi2IeeeSgeSimple64_t);
if (ioc->high_iops_queues)
ioc->get_msix_index_for_smlio =
&_base_get_high_iops_msix_index;
else
ioc->get_msix_index_for_smlio = &_base_get_msix_index;
break;
}
if (ioc->is_mcpu_endpoint)
ioc->put_smid_scsi_io = &_base_put_smid_mpi_ep_scsi_io;
else
ioc->put_smid_scsi_io = &_base_put_smid_scsi_io;
if (ioc->atomic_desc_capable) {
ioc->put_smid_default = &_base_put_smid_default_atomic;
ioc->put_smid_scsi_io = &_base_put_smid_scsi_io_atomic;
ioc->put_smid_fast_path =
&_base_put_smid_fast_path_atomic;
ioc->put_smid_hi_priority =
&_base_put_smid_hi_priority_atomic;
} else {
ioc->put_smid_default = &_base_put_smid_default;
ioc->put_smid_fast_path = &_base_put_smid_fast_path;
ioc->put_smid_hi_priority = &_base_put_smid_hi_priority;
if (ioc->is_mcpu_endpoint)
ioc->put_smid_scsi_io =
&_base_put_smid_mpi_ep_scsi_io;
else
ioc->put_smid_scsi_io = &_base_put_smid_scsi_io;
}
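With the hooks wired up this way, every submission site in the driver (the config, ctl, scsih and transport changes below) becomes a one-line indirect call. A hypothetical caller, for illustration:
/* Resolves to _base_put_smid_default() on older IOCs, or to
 * _base_put_smid_default_atomic() on Aero IOCs that report
 * MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ in IOC Facts. */
ioc->put_smid_default(ioc, smid);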
/*
* These function pointers are for other requests that don't
* require IEEE scatter gather elements.


@ -76,8 +76,8 @@
#define MPT3SAS_DRIVER_NAME "mpt3sas"
#define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>"
#define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver"
#define MPT3SAS_DRIVER_VERSION "28.100.00.00"
#define MPT3SAS_MAJOR_VERSION 28
#define MPT3SAS_DRIVER_VERSION "29.100.00.00"
#define MPT3SAS_MAJOR_VERSION 29
#define MPT3SAS_MINOR_VERSION 100
#define MPT3SAS_BUILD_VERSION 0
#define MPT3SAS_RELEASE_VERSION 00
@ -355,6 +355,12 @@ struct mpt3sas_nvme_cmd {
#define VIRTUAL_IO_FAILED_RETRY (0x32010081)
/* High IOPs definitions */
#define MPT3SAS_DEVICE_HIGH_IOPS_DEPTH 8
#define MPT3SAS_HIGH_IOPS_REPLY_QUEUES 8
#define MPT3SAS_HIGH_IOPS_BATCH_COUNT 16
#define MPT3SAS_GEN35_MAX_MSIX_QUEUES 128
/* OEM Specific Flags will come from OEM specific header files */
struct Mpi2ManufacturingPage10_t {
MPI2_CONFIG_PAGE_HEADER Header; /* 00h */
@ -824,6 +830,7 @@ struct chain_lookup {
*/
struct scsiio_tracker {
u16 smid;
struct scsi_cmnd *scmd;
u8 cb_idx;
u8 direct_io;
struct pcie_sg_list pcie_sg_list;
@ -924,6 +931,12 @@ typedef void (*PUT_SMID_IO_FP_HIP) (struct MPT3SAS_ADAPTER *ioc, u16 smid,
u16 funcdep);
typedef void (*PUT_SMID_DEFAULT) (struct MPT3SAS_ADAPTER *ioc, u16 smid);
typedef u32 (*BASE_READ_REG) (const volatile void __iomem *addr);
/*
* Get the msix index of a high iops reply queue when high iops mode is
* enabled, else get the msix index of a general reply queue.
*/
typedef u8 (*GET_MSIX_INDEX) (struct MPT3SAS_ADAPTER *ioc,
struct scsi_cmnd *scmd);
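A sketch of how such a selector might behave for the high-IOPS case; the function name here is hypothetical and the body is illustrative rather than the exact driver implementation:
/* Illustrative only: spread submissions round-robin over the first
 * MPT3SAS_HIGH_IOPS_REPLY_QUEUES reply queues, advancing one queue
 * every MPT3SAS_HIGH_IOPS_BATCH_COUNT IOs. */
static u8 example_high_iops_msix_index(struct MPT3SAS_ADAPTER *ioc,
	struct scsi_cmnd *scmd)
{
	u64 n = atomic64_inc_return(&ioc->high_iops_outstanding);

	return (n / MPT3SAS_HIGH_IOPS_BATCH_COUNT) %
		MPT3SAS_HIGH_IOPS_REPLY_QUEUES;
}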
/* IOC Facts and Port Facts converted from little endian to cpu */
union mpi3_version_union {
@ -1025,6 +1038,8 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
* @cpu_msix_table: table for mapping cpus to msix index
* @cpu_msix_table_sz: table size
* @total_io_cnt: Gives total IO count, used to load balance the interrupts
* @high_iops_outstanding: used to load balance the interrupts
* within high iops reply queues
* @msix_load_balance: Enables load balancing of interrupts across
* the multiple MSIXs
* @schedule_dead_ioc_flush_running_cmds: callback to flush pending commands
@ -1147,6 +1162,8 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
* path functions resulting in a NULL pointer dereference followed by a
* kernel crash. To avoid the above race condition we use mutex
* synchronization, which ensures serialization between the cli/sysfs_show paths.
* @atomic_desc_capable: Atomic Request Descriptor support.
* @get_msix_index_for_smlio: Get the msix index of the high iops queues.
*/
struct MPT3SAS_ADAPTER {
struct list_head list;
@ -1206,8 +1223,10 @@ struct MPT3SAS_ADAPTER {
MPT3SAS_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds;
u32 non_operational_loop;
atomic64_t total_io_cnt;
atomic64_t high_iops_outstanding;
bool msix_load_balance;
u16 thresh_hold;
u8 high_iops_queues;
/* internal commands, callback index */
u8 scsi_io_cb_idx;
@ -1267,6 +1286,7 @@ struct MPT3SAS_ADAPTER {
Mpi2IOUnitPage0_t iounit_pg0;
Mpi2IOUnitPage1_t iounit_pg1;
Mpi2IOUnitPage8_t iounit_pg8;
Mpi2IOCPage1_t ioc_pg1_copy;
struct _boot_device req_boot_device;
struct _boot_device req_alt_boot_device;
@ -1385,6 +1405,7 @@ struct MPT3SAS_ADAPTER {
u8 combined_reply_queue;
u8 combined_reply_index_count;
u8 smp_affinity_enable;
/* reply post register index */
resource_size_t **replyPostRegisterIndex;
@ -1412,6 +1433,7 @@ struct MPT3SAS_ADAPTER {
u8 hide_drives;
spinlock_t diag_trigger_lock;
u8 diag_trigger_active;
u8 atomic_desc_capable;
BASE_READ_REG base_readl;
struct SL_WH_MASTER_TRIGGER_T diag_trigger_master;
struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event;
@ -1422,7 +1444,10 @@ struct MPT3SAS_ADAPTER {
u8 is_gen35_ioc;
u8 is_aero_ioc;
PUT_SMID_IO_FP_HIP put_smid_scsi_io;
PUT_SMID_IO_FP_HIP put_smid_fast_path;
PUT_SMID_IO_FP_HIP put_smid_hi_priority;
PUT_SMID_DEFAULT put_smid_default;
GET_MSIX_INDEX get_msix_index_for_smlio;
};
typedef u8 (*MPT_CALLBACK)(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
@ -1611,6 +1636,10 @@ int mpt3sas_config_get_sas_iounit_pg1(struct MPT3SAS_ADAPTER *ioc,
int mpt3sas_config_set_sas_iounit_pg1(struct MPT3SAS_ADAPTER *ioc,
Mpi2ConfigReply_t *mpi_reply, Mpi2SasIOUnitPage1_t *config_page,
u16 sz);
int mpt3sas_config_get_ioc_pg1(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
*mpi_reply, Mpi2IOCPage1_t *config_page);
int mpt3sas_config_set_ioc_pg1(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
*mpi_reply, Mpi2IOCPage1_t *config_page);
int mpt3sas_config_get_ioc_pg8(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
*mpi_reply, Mpi2IOCPage8_t *config_page);
int mpt3sas_config_get_expander_pg0(struct MPT3SAS_ADAPTER *ioc,


@ -380,7 +380,7 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t));
_config_display_some_debug(ioc, smid, "config_request", NULL);
init_completion(&ioc->config_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->config_cmds.done, timeout*HZ);
if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) {
mpt3sas_base_check_cmd_timeout(ioc,
@ -949,6 +949,77 @@ mpt3sas_config_get_ioc_pg8(struct MPT3SAS_ADAPTER *ioc,
out:
return r;
}
/**
* mpt3sas_config_get_ioc_pg1 - obtain ioc page 1
* @ioc: per adapter object
* @mpi_reply: reply mf payload returned from firmware
* @config_page: contents of the config page
* Context: sleep.
*
* Return: 0 for success, non-zero for failure.
*/
int
mpt3sas_config_get_ioc_pg1(struct MPT3SAS_ADAPTER *ioc,
Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage1_t *config_page)
{
Mpi2ConfigRequest_t mpi_request;
int r;
memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
mpi_request.Function = MPI2_FUNCTION_CONFIG;
mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IOC;
mpi_request.Header.PageNumber = 1;
mpi_request.Header.PageVersion = MPI2_IOCPAGE1_PAGEVERSION;
ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
r = _config_request(ioc, &mpi_request, mpi_reply,
MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
if (r)
goto out;
mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
r = _config_request(ioc, &mpi_request, mpi_reply,
MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
sizeof(*config_page));
out:
return r;
}
/**
* mpt3sas_config_set_ioc_pg1 - modify ioc page 1
* @ioc: per adapter object
* @mpi_reply: reply mf payload returned from firmware
* @config_page: contents of the config page
* Context: sleep.
*
* Return: 0 for success, non-zero for failure.
*/
int
mpt3sas_config_set_ioc_pg1(struct MPT3SAS_ADAPTER *ioc,
Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage1_t *config_page)
{
Mpi2ConfigRequest_t mpi_request;
int r;
memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
mpi_request.Function = MPI2_FUNCTION_CONFIG;
mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IOC;
mpi_request.Header.PageNumber = 1;
mpi_request.Header.PageVersion = MPI2_IOCPAGE1_PAGEVERSION;
ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
r = _config_request(ioc, &mpi_request, mpi_reply,
MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
if (r)
goto out;
mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT;
r = _config_request(ioc, &mpi_request, mpi_reply,
MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
sizeof(*config_page));
out:
return r;
}
/**
* mpt3sas_config_get_sas_device_pg0 - obtain sas device page 0


@ -822,7 +822,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST)
ioc->put_smid_scsi_io(ioc, smid, device_handle);
else
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
break;
}
case MPI2_FUNCTION_SCSI_TASK_MGMT:
@ -859,7 +859,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
tm_request->DevHandle));
ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
data_in_dma, data_in_sz);
mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
ioc->put_smid_hi_priority(ioc, smid, 0);
break;
}
case MPI2_FUNCTION_SMP_PASSTHROUGH:
@ -890,7 +890,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
}
ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
data_in_sz);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
break;
}
case MPI2_FUNCTION_SATA_PASSTHROUGH:
@ -905,7 +905,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
}
ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
data_in_sz);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
break;
}
case MPI2_FUNCTION_FW_DOWNLOAD:
@ -913,7 +913,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
{
ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
data_in_sz);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
break;
}
case MPI2_FUNCTION_TOOLBOX:
@ -928,7 +928,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
data_in_dma, data_in_sz);
}
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
break;
}
case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL:
@ -948,7 +948,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
default:
ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
data_in_dma, data_in_sz);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
break;
}
@ -1576,7 +1576,7 @@ _ctl_diag_register_2(struct MPT3SAS_ADAPTER *ioc,
cpu_to_le32(ioc->product_specific[buffer_type][i]);
init_completion(&ioc->ctl_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->ctl_cmds.done,
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
@ -1903,7 +1903,7 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
mpi_request->VP_ID = 0;
init_completion(&ioc->ctl_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->ctl_cmds.done,
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
@ -2151,7 +2151,7 @@ _ctl_diag_read_buffer(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
mpi_request->VP_ID = 0;
init_completion(&ioc->ctl_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->ctl_cmds.done,
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
@ -2319,6 +2319,10 @@ _ctl_ioctl_main(struct file *file, unsigned int cmd, void __user *arg,
break;
}
if (karg.hdr.ioc_number != ioctl_header.ioc_number) {
ret = -EINVAL;
break;
}
if (_IOC_SIZE(cmd) == sizeof(struct mpt3_ioctl_command)) {
uarg = arg;
ret = _ctl_do_mpt_command(ioc, karg, &uarg->mf);
@ -2453,7 +2457,7 @@ _ctl_mpt2_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
/* scsi host attributes */
/**
* _ctl_version_fw_show - firmware version
* version_fw_show - firmware version
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2461,7 +2465,7 @@ _ctl_mpt2_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_version_fw_show(struct device *cdev, struct device_attribute *attr,
version_fw_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2473,10 +2477,10 @@ _ctl_version_fw_show(struct device *cdev, struct device_attribute *attr,
(ioc->facts.FWVersion.Word & 0x0000FF00) >> 8,
ioc->facts.FWVersion.Word & 0x000000FF);
}
static DEVICE_ATTR(version_fw, S_IRUGO, _ctl_version_fw_show, NULL);
static DEVICE_ATTR_RO(version_fw);
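The rename from _ctl_version_fw_show() to version_fw_show() is not cosmetic: DEVICE_ATTR_RO() derives both the attribute variable and the show callback from its argument. Roughly, per the standard definitions in <linux/device.h>:
/* DEVICE_ATTR_RO(version_fw) expands to approximately: */
static struct device_attribute dev_attr_version_fw = {
	.attr = { .name = "version_fw", .mode = 0444 },
	.show = version_fw_show,	/* callback name is derived: <attr>_show */
};
DEVICE_ATTR_RW() works the same way with mode 0644 and a derived <attr>_store as well, which is why every read/write attribute below renames both handlers.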
/**
* _ctl_version_bios_show - bios version
* version_bios_show - bios version
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2484,7 +2488,7 @@ static DEVICE_ATTR(version_fw, S_IRUGO, _ctl_version_fw_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_version_bios_show(struct device *cdev, struct device_attribute *attr,
version_bios_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2498,10 +2502,10 @@ _ctl_version_bios_show(struct device *cdev, struct device_attribute *attr,
(version & 0x0000FF00) >> 8,
version & 0x000000FF);
}
static DEVICE_ATTR(version_bios, S_IRUGO, _ctl_version_bios_show, NULL);
static DEVICE_ATTR_RO(version_bios);
/**
* _ctl_version_mpi_show - MPI (message passing interface) version
* version_mpi_show - MPI (message passing interface) version
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2509,7 +2513,7 @@ static DEVICE_ATTR(version_bios, S_IRUGO, _ctl_version_bios_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_version_mpi_show(struct device *cdev, struct device_attribute *attr,
version_mpi_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2518,10 +2522,10 @@ _ctl_version_mpi_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "%03x.%02x\n",
ioc->facts.MsgVersion, ioc->facts.HeaderVersion >> 8);
}
static DEVICE_ATTR(version_mpi, S_IRUGO, _ctl_version_mpi_show, NULL);
static DEVICE_ATTR_RO(version_mpi);
/**
* _ctl_version_product_show - product name
* version_product_show - product name
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2529,7 +2533,7 @@ static DEVICE_ATTR(version_mpi, S_IRUGO, _ctl_version_mpi_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_version_product_show(struct device *cdev, struct device_attribute *attr,
version_product_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2537,10 +2541,10 @@ _ctl_version_product_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, 16, "%s\n", ioc->manu_pg0.ChipName);
}
static DEVICE_ATTR(version_product, S_IRUGO, _ctl_version_product_show, NULL);
static DEVICE_ATTR_RO(version_product);
/**
* _ctl_version_nvdata_persistent_show - nvdata persistent version
* version_nvdata_persistent_show - nvdata persistent version
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2548,7 +2552,7 @@ static DEVICE_ATTR(version_product, S_IRUGO, _ctl_version_product_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_version_nvdata_persistent_show(struct device *cdev,
version_nvdata_persistent_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2557,11 +2561,10 @@ _ctl_version_nvdata_persistent_show(struct device *cdev,
return snprintf(buf, PAGE_SIZE, "%08xh\n",
le32_to_cpu(ioc->iounit_pg0.NvdataVersionPersistent.Word));
}
static DEVICE_ATTR(version_nvdata_persistent, S_IRUGO,
_ctl_version_nvdata_persistent_show, NULL);
static DEVICE_ATTR_RO(version_nvdata_persistent);
/**
* _ctl_version_nvdata_default_show - nvdata default version
* version_nvdata_default_show - nvdata default version
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2569,7 +2572,7 @@ static DEVICE_ATTR(version_nvdata_persistent, S_IRUGO,
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_version_nvdata_default_show(struct device *cdev, struct device_attribute
version_nvdata_default_show(struct device *cdev, struct device_attribute
*attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2578,11 +2581,10 @@ _ctl_version_nvdata_default_show(struct device *cdev, struct device_attribute
return snprintf(buf, PAGE_SIZE, "%08xh\n",
le32_to_cpu(ioc->iounit_pg0.NvdataVersionDefault.Word));
}
static DEVICE_ATTR(version_nvdata_default, S_IRUGO,
_ctl_version_nvdata_default_show, NULL);
static DEVICE_ATTR_RO(version_nvdata_default);
/**
* _ctl_board_name_show - board name
* board_name_show - board name
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2590,7 +2592,7 @@ static DEVICE_ATTR(version_nvdata_default, S_IRUGO,
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_board_name_show(struct device *cdev, struct device_attribute *attr,
board_name_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2598,10 +2600,10 @@ _ctl_board_name_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardName);
}
static DEVICE_ATTR(board_name, S_IRUGO, _ctl_board_name_show, NULL);
static DEVICE_ATTR_RO(board_name);
/**
* _ctl_board_assembly_show - board assembly name
* board_assembly_show - board assembly name
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2609,7 +2611,7 @@ static DEVICE_ATTR(board_name, S_IRUGO, _ctl_board_name_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_board_assembly_show(struct device *cdev, struct device_attribute *attr,
board_assembly_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2617,10 +2619,10 @@ _ctl_board_assembly_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardAssembly);
}
static DEVICE_ATTR(board_assembly, S_IRUGO, _ctl_board_assembly_show, NULL);
static DEVICE_ATTR_RO(board_assembly);
/**
* _ctl_board_tracer_show - board tracer number
* board_tracer_show - board tracer number
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2628,7 +2630,7 @@ static DEVICE_ATTR(board_assembly, S_IRUGO, _ctl_board_assembly_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_board_tracer_show(struct device *cdev, struct device_attribute *attr,
board_tracer_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2636,10 +2638,10 @@ _ctl_board_tracer_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardTracerNumber);
}
static DEVICE_ATTR(board_tracer, S_IRUGO, _ctl_board_tracer_show, NULL);
static DEVICE_ATTR_RO(board_tracer);
/**
* _ctl_io_delay_show - io missing delay
* io_delay_show - io missing delay
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2650,7 +2652,7 @@ static DEVICE_ATTR(board_tracer, S_IRUGO, _ctl_board_tracer_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_io_delay_show(struct device *cdev, struct device_attribute *attr,
io_delay_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2658,10 +2660,10 @@ _ctl_io_delay_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->io_missing_delay);
}
static DEVICE_ATTR(io_delay, S_IRUGO, _ctl_io_delay_show, NULL);
static DEVICE_ATTR_RO(io_delay);
/**
* _ctl_device_delay_show - device missing delay
* device_delay_show - device missing delay
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2672,7 +2674,7 @@ static DEVICE_ATTR(io_delay, S_IRUGO, _ctl_io_delay_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_device_delay_show(struct device *cdev, struct device_attribute *attr,
device_delay_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2680,10 +2682,10 @@ _ctl_device_delay_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->device_missing_delay);
}
static DEVICE_ATTR(device_delay, S_IRUGO, _ctl_device_delay_show, NULL);
static DEVICE_ATTR_RO(device_delay);
/**
* _ctl_fw_queue_depth_show - global credits
* fw_queue_depth_show - global credits
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2693,7 +2695,7 @@ static DEVICE_ATTR(device_delay, S_IRUGO, _ctl_device_delay_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2701,10 +2703,10 @@ _ctl_fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->facts.RequestCredit);
}
static DEVICE_ATTR(fw_queue_depth, S_IRUGO, _ctl_fw_queue_depth_show, NULL);
static DEVICE_ATTR_RO(fw_queue_depth);
/**
* _ctl_sas_address_show - sas address
* host_sas_address_show - sas address
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2714,7 +2716,7 @@ static DEVICE_ATTR(fw_queue_depth, S_IRUGO, _ctl_fw_queue_depth_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_host_sas_address_show(struct device *cdev, struct device_attribute *attr,
host_sas_address_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
@ -2724,11 +2726,10 @@ _ctl_host_sas_address_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
(unsigned long long)ioc->sas_hba.sas_address);
}
static DEVICE_ATTR(host_sas_address, S_IRUGO,
_ctl_host_sas_address_show, NULL);
static DEVICE_ATTR_RO(host_sas_address);
/**
* _ctl_logging_level_show - logging level
* logging_level_show - logging level
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2736,7 +2737,7 @@ static DEVICE_ATTR(host_sas_address, S_IRUGO,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_logging_level_show(struct device *cdev, struct device_attribute *attr,
logging_level_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2745,7 +2746,7 @@ _ctl_logging_level_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "%08xh\n", ioc->logging_level);
}
static ssize_t
_ctl_logging_level_store(struct device *cdev, struct device_attribute *attr,
logging_level_store(struct device *cdev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2760,11 +2761,10 @@ _ctl_logging_level_store(struct device *cdev, struct device_attribute *attr,
ioc->logging_level);
return strlen(buf);
}
static DEVICE_ATTR(logging_level, S_IRUGO | S_IWUSR, _ctl_logging_level_show,
_ctl_logging_level_store);
static DEVICE_ATTR_RW(logging_level);
/**
* _ctl_fwfault_debug_show - show/store fwfault_debug
* fwfault_debug_show - show/store fwfault_debug
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2773,7 +2773,7 @@ static DEVICE_ATTR(logging_level, S_IRUGO | S_IWUSR, _ctl_logging_level_show,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2782,7 +2782,7 @@ _ctl_fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "%d\n", ioc->fwfault_debug);
}
static ssize_t
_ctl_fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2797,11 +2797,10 @@ _ctl_fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
ioc->fwfault_debug);
return strlen(buf);
}
static DEVICE_ATTR(fwfault_debug, S_IRUGO | S_IWUSR,
_ctl_fwfault_debug_show, _ctl_fwfault_debug_store);
static DEVICE_ATTR_RW(fwfault_debug);
/**
* _ctl_ioc_reset_count_show - ioc reset count
* ioc_reset_count_show - ioc reset count
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2811,7 +2810,7 @@ static DEVICE_ATTR(fwfault_debug, S_IRUGO | S_IWUSR,
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2819,10 +2818,10 @@ _ctl_ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "%d\n", ioc->ioc_reset_count);
}
static DEVICE_ATTR(ioc_reset_count, S_IRUGO, _ctl_ioc_reset_count_show, NULL);
static DEVICE_ATTR_RO(ioc_reset_count);
/**
* _ctl_ioc_reply_queue_count_show - number of reply queues
* reply_queue_count_show - number of reply queues
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2832,7 +2831,7 @@ static DEVICE_ATTR(ioc_reset_count, S_IRUGO, _ctl_ioc_reset_count_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_ioc_reply_queue_count_show(struct device *cdev,
reply_queue_count_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
u8 reply_queue_count;
@ -2847,11 +2846,10 @@ _ctl_ioc_reply_queue_count_show(struct device *cdev,
return snprintf(buf, PAGE_SIZE, "%d\n", reply_queue_count);
}
static DEVICE_ATTR(reply_queue_count, S_IRUGO, _ctl_ioc_reply_queue_count_show,
NULL);
static DEVICE_ATTR_RO(reply_queue_count);
/**
* _ctl_BRM_status_show - Backup Rail Monitor Status
* BRM_status_show - Backup Rail Monitor Status
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2861,7 +2859,7 @@ static DEVICE_ATTR(reply_queue_count, S_IRUGO, _ctl_ioc_reply_queue_count_show,
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_BRM_status_show(struct device *cdev, struct device_attribute *attr,
BRM_status_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2923,7 +2921,7 @@ _ctl_BRM_status_show(struct device *cdev, struct device_attribute *attr,
mutex_unlock(&ioc->pci_access_mutex);
return rc;
}
static DEVICE_ATTR(BRM_status, S_IRUGO, _ctl_BRM_status_show, NULL);
static DEVICE_ATTR_RO(BRM_status);
struct DIAG_BUFFER_START {
__le32 Size;
@ -2936,7 +2934,7 @@ struct DIAG_BUFFER_START {
};
/**
* _ctl_host_trace_buffer_size_show - host buffer size (trace only)
* host_trace_buffer_size_show - host buffer size (trace only)
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2944,7 +2942,7 @@ struct DIAG_BUFFER_START {
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_host_trace_buffer_size_show(struct device *cdev,
host_trace_buffer_size_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -2976,11 +2974,10 @@ _ctl_host_trace_buffer_size_show(struct device *cdev,
ioc->ring_buffer_sz = size;
return snprintf(buf, PAGE_SIZE, "%d\n", size);
}
static DEVICE_ATTR(host_trace_buffer_size, S_IRUGO,
_ctl_host_trace_buffer_size_show, NULL);
static DEVICE_ATTR_RO(host_trace_buffer_size);
/**
* _ctl_host_trace_buffer_show - firmware ring buffer (trace only)
* host_trace_buffer_show - firmware ring buffer (trace only)
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -2992,7 +2989,7 @@ static DEVICE_ATTR(host_trace_buffer_size, S_IRUGO,
* offset to the same attribute, it will move the pointer.
*/
static ssize_t
_ctl_host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3024,7 +3021,7 @@ _ctl_host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
}
static ssize_t
_ctl_host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3037,14 +3034,13 @@ _ctl_host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
ioc->ring_buffer_offset = val;
return strlen(buf);
}
static DEVICE_ATTR(host_trace_buffer, S_IRUGO | S_IWUSR,
_ctl_host_trace_buffer_show, _ctl_host_trace_buffer_store);
static DEVICE_ATTR_RW(host_trace_buffer);
/*****************************************/
/**
* _ctl_host_trace_buffer_enable_show - firmware ring buffer (trace only)
* host_trace_buffer_enable_show - firmware ring buffer (trace only)
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3054,7 +3050,7 @@ static DEVICE_ATTR(host_trace_buffer, S_IRUGO | S_IWUSR,
* This is a mechanism to post/release host_trace_buffers
*/
static ssize_t
_ctl_host_trace_buffer_enable_show(struct device *cdev,
host_trace_buffer_enable_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3072,7 +3068,7 @@ _ctl_host_trace_buffer_enable_show(struct device *cdev,
}
static ssize_t
_ctl_host_trace_buffer_enable_store(struct device *cdev,
host_trace_buffer_enable_store(struct device *cdev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3122,14 +3118,12 @@ _ctl_host_trace_buffer_enable_store(struct device *cdev,
out:
return strlen(buf);
}
static DEVICE_ATTR(host_trace_buffer_enable, S_IRUGO | S_IWUSR,
_ctl_host_trace_buffer_enable_show,
_ctl_host_trace_buffer_enable_store);
static DEVICE_ATTR_RW(host_trace_buffer_enable);
/*********** diagnostic trigger support *********************************/
/**
* _ctl_diag_trigger_master_show - show the diag_trigger_master attribute
* diag_trigger_master_show - show the diag_trigger_master attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3137,7 +3131,7 @@ static DEVICE_ATTR(host_trace_buffer_enable, S_IRUGO | S_IWUSR,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_master_show(struct device *cdev,
diag_trigger_master_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
@ -3154,7 +3148,7 @@ _ctl_diag_trigger_master_show(struct device *cdev,
}
/**
* _ctl_diag_trigger_master_store - store the diag_trigger_master attribute
* diag_trigger_master_store - store the diag_trigger_master attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3163,7 +3157,7 @@ _ctl_diag_trigger_master_show(struct device *cdev,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_master_store(struct device *cdev,
diag_trigger_master_store(struct device *cdev,
struct device_attribute *attr, const char *buf, size_t count)
{
@ -3182,12 +3176,11 @@ _ctl_diag_trigger_master_store(struct device *cdev,
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return rc;
}
static DEVICE_ATTR(diag_trigger_master, S_IRUGO | S_IWUSR,
_ctl_diag_trigger_master_show, _ctl_diag_trigger_master_store);
static DEVICE_ATTR_RW(diag_trigger_master);
/**
* _ctl_diag_trigger_event_show - show the diag_trigger_event attribute
* diag_trigger_event_show - show the diag_trigger_event attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3195,7 +3188,7 @@ static DEVICE_ATTR(diag_trigger_master, S_IRUGO | S_IWUSR,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_event_show(struct device *cdev,
diag_trigger_event_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3211,7 +3204,7 @@ _ctl_diag_trigger_event_show(struct device *cdev,
}
/**
* _ctl_diag_trigger_event_store - store the diag_trigger_event attribute
* diag_trigger_event_store - store the diag_trigger_event attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3220,7 +3213,7 @@ _ctl_diag_trigger_event_show(struct device *cdev,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_event_store(struct device *cdev,
diag_trigger_event_store(struct device *cdev,
struct device_attribute *attr, const char *buf, size_t count)
{
@ -3239,12 +3232,11 @@ _ctl_diag_trigger_event_store(struct device *cdev,
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return sz;
}
static DEVICE_ATTR(diag_trigger_event, S_IRUGO | S_IWUSR,
_ctl_diag_trigger_event_show, _ctl_diag_trigger_event_store);
static DEVICE_ATTR_RW(diag_trigger_event);
/**
* _ctl_diag_trigger_scsi_show - show the diag_trigger_scsi attribute
* diag_trigger_scsi_show - show the diag_trigger_scsi attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3252,7 +3244,7 @@ static DEVICE_ATTR(diag_trigger_event, S_IRUGO | S_IWUSR,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_scsi_show(struct device *cdev,
diag_trigger_scsi_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3268,7 +3260,7 @@ _ctl_diag_trigger_scsi_show(struct device *cdev,
}
/**
* _ctl_diag_trigger_scsi_store - store the diag_trigger_scsi attribute
* diag_trigger_scsi_store - store the diag_trigger_scsi attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3277,7 +3269,7 @@ _ctl_diag_trigger_scsi_show(struct device *cdev,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_scsi_store(struct device *cdev,
diag_trigger_scsi_store(struct device *cdev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3295,12 +3287,11 @@ _ctl_diag_trigger_scsi_store(struct device *cdev,
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return sz;
}
static DEVICE_ATTR(diag_trigger_scsi, S_IRUGO | S_IWUSR,
_ctl_diag_trigger_scsi_show, _ctl_diag_trigger_scsi_store);
static DEVICE_ATTR_RW(diag_trigger_scsi);
/**
* _ctl_diag_trigger_mpi_show - show the diag_trigger_mpi attribute
* diag_trigger_mpi_show - show the diag_trigger_mpi attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3308,7 +3299,7 @@ static DEVICE_ATTR(diag_trigger_scsi, S_IRUGO | S_IWUSR,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_mpi_show(struct device *cdev,
diag_trigger_mpi_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3324,7 +3315,7 @@ _ctl_diag_trigger_mpi_show(struct device *cdev,
}
/**
* _ctl_diag_trigger_mpi_store - store the diag_trigger_mpi attribute
* diag_trigger_mpi_store - store the diag_trigger_mpi attribute
* @cdev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3333,7 +3324,7 @@ _ctl_diag_trigger_mpi_show(struct device *cdev,
* A sysfs 'read/write' shost attribute.
*/
static ssize_t
_ctl_diag_trigger_mpi_store(struct device *cdev,
diag_trigger_mpi_store(struct device *cdev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct Scsi_Host *shost = class_to_shost(cdev);
@ -3352,8 +3343,7 @@ _ctl_diag_trigger_mpi_store(struct device *cdev,
return sz;
}
static DEVICE_ATTR(diag_trigger_mpi, S_IRUGO | S_IWUSR,
_ctl_diag_trigger_mpi_show, _ctl_diag_trigger_mpi_store);
static DEVICE_ATTR_RW(diag_trigger_mpi);
/*********** diagnostic trigger support *** END ****************************/
@ -3391,7 +3381,7 @@ struct device_attribute *mpt3sas_host_attrs[] = {
/* device attributes */
/**
* _ctl_device_sas_address_show - sas address
* sas_address_show - sas address
* @dev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3401,7 +3391,7 @@ struct device_attribute *mpt3sas_host_attrs[] = {
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_device_sas_address_show(struct device *dev, struct device_attribute *attr,
sas_address_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct scsi_device *sdev = to_scsi_device(dev);
@ -3410,10 +3400,10 @@ _ctl_device_sas_address_show(struct device *dev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
(unsigned long long)sas_device_priv_data->sas_target->sas_address);
}
static DEVICE_ATTR(sas_address, S_IRUGO, _ctl_device_sas_address_show, NULL);
static DEVICE_ATTR_RO(sas_address);
/**
* _ctl_device_handle_show - device handle
* sas_device_handle_show - device handle
* @dev: pointer to embedded class device
* @attr: ?
* @buf: the buffer returned
@ -3423,7 +3413,7 @@ static DEVICE_ATTR(sas_address, S_IRUGO, _ctl_device_sas_address_show, NULL);
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
sas_device_handle_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct scsi_device *sdev = to_scsi_device(dev);
@ -3432,10 +3422,10 @@ _ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
return snprintf(buf, PAGE_SIZE, "0x%04x\n",
sas_device_priv_data->sas_target->handle);
}
static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
static DEVICE_ATTR_RO(sas_device_handle);
/**
* _ctl_device_ncq_prio_enable_show - send prioritized io commands to device
* sas_ncq_prio_enable_show - send prioritized io commands to device
* @dev: pointer to embedded device
* @attr: ?
* @buf: the buffer returned
@ -3443,7 +3433,7 @@ static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
* A sysfs 'read/write' sdev attribute, only works with SATA
*/
static ssize_t
_ctl_device_ncq_prio_enable_show(struct device *dev,
sas_ncq_prio_enable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scsi_device *sdev = to_scsi_device(dev);
@ -3454,7 +3444,7 @@ _ctl_device_ncq_prio_enable_show(struct device *dev,
}
static ssize_t
_ctl_device_ncq_prio_enable_store(struct device *dev,
sas_ncq_prio_enable_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -3471,9 +3461,7 @@ _ctl_device_ncq_prio_enable_store(struct device *dev,
sas_device_priv_data->ncq_prio_enable = ncq_prio_enable;
return strlen(buf);
}
static DEVICE_ATTR(sas_ncq_prio_enable, S_IRUGO | S_IWUSR,
_ctl_device_ncq_prio_enable_show,
_ctl_device_ncq_prio_enable_store);
static DEVICE_ATTR_RW(sas_ncq_prio_enable);
struct device_attribute *mpt3sas_dev_attrs[] = {
&dev_attr_sas_address,


@ -113,22 +113,22 @@ MODULE_PARM_DESC(logging_level,
static ushort max_sectors = 0xFFFF;
module_param(max_sectors, ushort, 0);
module_param(max_sectors, ushort, 0444);
MODULE_PARM_DESC(max_sectors, "max sectors, range 64 to 32767 default=32767");
static int missing_delay[2] = {-1, -1};
module_param_array(missing_delay, int, NULL, 0);
module_param_array(missing_delay, int, NULL, 0444);
MODULE_PARM_DESC(missing_delay, " device missing delay, io missing delay");
/* scsi-mid layer global parameter is max_report_luns, which is 511 */
#define MPT3SAS_MAX_LUN (16895)
static u64 max_lun = MPT3SAS_MAX_LUN;
module_param(max_lun, ullong, 0);
module_param(max_lun, ullong, 0444);
MODULE_PARM_DESC(max_lun, " max lun, default=16895 ");
static ushort hbas_to_enumerate;
module_param(hbas_to_enumerate, ushort, 0);
module_param(hbas_to_enumerate, ushort, 0444);
MODULE_PARM_DESC(hbas_to_enumerate,
" 0 - enumerates both SAS 2.0 & SAS 3.0 generation HBAs\n \
1 - enumerates only SAS 2.0 generation HBAs\n \
@ -142,17 +142,17 @@ MODULE_PARM_DESC(hbas_to_enumerate,
* Either bit can be set, or both
*/
static int diag_buffer_enable = -1;
module_param(diag_buffer_enable, int, 0);
module_param(diag_buffer_enable, int, 0444);
MODULE_PARM_DESC(diag_buffer_enable,
" post diag buffers (TRACE=1/SNAPSHOT=2/EXTENDED=4/default=0)");
static int disable_discovery = -1;
module_param(disable_discovery, int, 0);
module_param(disable_discovery, int, 0444);
MODULE_PARM_DESC(disable_discovery, " disable discovery ");
/* permit overriding the host protection capabilities mask (EEDP/T10 PI) */
static int prot_mask = -1;
module_param(prot_mask, int, 0);
module_param(prot_mask, int, 0444);
MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=7 ");
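Changing the third module_param() argument from 0 to 0444 is what makes these parameters visible at run time: a permission of 0 suppresses the sysfs entry entirely, while 0444 creates a world-readable file. For example:
/* Creates /sys/module/mpt3sas/parameters/prot_mask, mode r--r--r-- */
module_param(prot_mask, int, 0444);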
@ -2685,7 +2685,7 @@ mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, u64 lun,
int_to_scsilun(lun, (struct scsi_lun *)mpi_request->LUN);
mpt3sas_scsih_set_tm_flag(ioc, handle);
init_completion(&ioc->tm_cmds.done);
mpt3sas_base_put_smid_hi_priority(ioc, smid, msix_task);
ioc->put_smid_hi_priority(ioc, smid, msix_task);
wait_for_completion_timeout(&ioc->tm_cmds.done, timeout*HZ);
if (!(ioc->tm_cmds.status & MPT3_CMD_COMPLETE)) {
if (mpt3sas_base_check_cmd_timeout(ioc,
@ -3659,7 +3659,7 @@ _scsih_tm_tr_send(struct MPT3SAS_ADAPTER *ioc, u16 handle)
mpi_request->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
mpi_request->MsgFlags = tr_method;
set_bit(handle, ioc->device_remove_in_progress);
mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
ioc->put_smid_hi_priority(ioc, smid, 0);
mpt3sas_trigger_master(ioc, MASTER_TRIGGER_DEVICE_REMOVAL);
out:
@ -3755,7 +3755,7 @@ _scsih_tm_tr_complete(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
mpi_request->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
mpi_request->Operation = MPI2_SAS_OP_REMOVE_DEVICE;
mpi_request->DevHandle = mpi_request_tm->DevHandle;
mpt3sas_base_put_smid_default(ioc, smid_sas_ctrl);
ioc->put_smid_default(ioc, smid_sas_ctrl);
return _scsih_check_for_pending_tm(ioc, smid);
}
@ -3881,7 +3881,7 @@ _scsih_tm_tr_volume_send(struct MPT3SAS_ADAPTER *ioc, u16 handle)
mpi_request->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
mpi_request->DevHandle = cpu_to_le16(handle);
mpi_request->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
ioc->put_smid_hi_priority(ioc, smid, 0);
}
/**
@ -3970,7 +3970,7 @@ _scsih_issue_delayed_event_ack(struct MPT3SAS_ADAPTER *ioc, u16 smid, U16 event,
ack_request->EventContext = event_context;
ack_request->VF_ID = 0; /* TODO */
ack_request->VP_ID = 0;
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
}
/**
@ -4026,7 +4026,7 @@ _scsih_issue_delayed_sas_io_unit_ctrl(struct MPT3SAS_ADAPTER *ioc,
mpi_request->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
mpi_request->Operation = MPI2_SAS_OP_REMOVE_DEVICE;
mpi_request->DevHandle = cpu_to_le16(handle);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
}
/**
@ -4734,12 +4734,12 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
if (sas_target_priv_data->flags & MPT_TARGET_FASTPATH_IO) {
mpi_request->IoFlags = cpu_to_le16(scmd->cmd_len |
MPI25_SCSIIO_IOFLAGS_FAST_PATH);
mpt3sas_base_put_smid_fast_path(ioc, smid, handle);
ioc->put_smid_fast_path(ioc, smid, handle);
} else
ioc->put_smid_scsi_io(ioc, smid,
le16_to_cpu(mpi_request->DevHandle));
} else
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
return 0;
out:
@ -5210,6 +5210,7 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
((ioc_status & MPI2_IOCSTATUS_MASK)
!= MPI2_IOCSTATUS_SCSI_TASK_TERMINATED)) {
st->direct_io = 0;
st->scmd = scmd;
memcpy(mpi_request->CDB.CDB32, scmd->cmnd, scmd->cmd_len);
mpi_request->DevHandle =
cpu_to_le16(sas_device_priv_data->sas_target->handle);
@ -7601,7 +7602,7 @@ _scsih_ir_fastpath(struct MPT3SAS_ADAPTER *ioc, u16 handle, u8 phys_disk_num)
handle, phys_disk_num));
init_completion(&ioc->scsih_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);
if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
@ -9633,7 +9634,7 @@ _scsih_ir_shutdown(struct MPT3SAS_ADAPTER *ioc)
if (!ioc->hide_ir_msg)
ioc_info(ioc, "IR shutdown (sending)\n");
init_completion(&ioc->scsih_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);
if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
@ -9670,6 +9671,7 @@ static void scsih_remove(struct pci_dev *pdev)
struct _pcie_device *pcie_device, *pcienext;
struct workqueue_struct *wq;
unsigned long flags;
Mpi2ConfigReply_t mpi_reply;
ioc->remove_host = 1;
@ -9684,7 +9686,13 @@ static void scsih_remove(struct pci_dev *pdev)
spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
if (wq)
destroy_workqueue(wq);
/*
* Copy back the unmodified ioc page1, so that on next driver load,
* current modified changes on ioc page1 won't take effect.
*/
if (ioc->is_aero_ioc)
mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply,
&ioc->ioc_pg1_copy);
/* release all the volumes */
_scsih_ir_shutdown(ioc);
sas_remove_host(shost);
@ -9747,6 +9755,7 @@ scsih_shutdown(struct pci_dev *pdev)
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
struct workqueue_struct *wq;
unsigned long flags;
Mpi2ConfigReply_t mpi_reply;
ioc->remove_host = 1;
@ -9761,6 +9770,13 @@ scsih_shutdown(struct pci_dev *pdev)
spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
if (wq)
destroy_workqueue(wq);
/*
* Copy back the unmodified ioc page1 so that on next driver load,
* current modified changes on ioc page1 won't take effect.
*/
if (ioc->is_aero_ioc)
mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply,
&ioc->ioc_pg1_copy);
_scsih_ir_shutdown(ioc);
mpt3sas_base_detach(ioc);


@ -367,7 +367,7 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc,
ioc_info(ioc, "report_manufacture - send to sas_addr(0x%016llx)\n",
(u64)sas_address));
init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
@ -1139,7 +1139,7 @@ _transport_get_expander_phy_error_log(struct MPT3SAS_ADAPTER *ioc,
(u64)phy->identify.sas_address,
phy->number));
init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
@ -1434,7 +1434,7 @@ _transport_expander_phy_control(struct MPT3SAS_ADAPTER *ioc,
(u64)phy->identify.sas_address,
phy->number, phy_operation));
init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
@ -1911,7 +1911,7 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
ioc_info(ioc, "%s: sending smp request\n", __func__));
init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
ioc->put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {


@ -1193,7 +1193,7 @@ static int mvs_dev_found_notify(struct domain_device *dev, int lock)
mvi_device->dev_type = dev->dev_type;
mvi_device->mvi_info = mvi;
mvi_device->sas_device = dev;
if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
int phy_id;
u8 phy_num = parent_dev->ex_dev.num_phys;
struct ex_phy *phy;


@ -50,9 +50,6 @@ extern struct mvs_info *tgt_mvi;
extern const struct mvs_dispatch mvs_64xx_dispatch;
extern const struct mvs_dispatch mvs_94xx_dispatch;
#define DEV_IS_EXPANDER(type) \
((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE))
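The removed driver-local macro is superseded by the dev_is_expander() helper that this series adds to libsas; paraphrased, the helper looks roughly like:
/* include/scsi/libsas.h (paraphrased) */
static inline bool dev_is_expander(enum sas_device_type type)
{
	return type == SAS_EDGE_EXPANDER_DEVICE ||
	       type == SAS_FANOUT_EXPANDER_DEVICE;
}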
#define bit(n) ((u64)1 << n)
#define for_each_phy(__lseq_mask, __mc, __lseq) \


@ -1,651 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* $Header: /cvsroot/osst/Driver/osst.h,v 1.16 2005/01/01 21:13:35 wriede Exp $
*/
#include <asm/byteorder.h>
#include <linux/completion.h>
#include <linux/mutex.h>
/* FIXME - rename and use the following two types or delete them!
* and the types really should go to st.h anyway...
* INQUIRY packet command - Data Format (From Table 6-8 of QIC-157C)
*/
typedef struct {
unsigned device_type :5; /* Peripheral Device Type */
unsigned reserved0_765 :3; /* Peripheral Qualifier - Reserved */
unsigned reserved1_6t0 :7; /* Reserved */
unsigned rmb :1; /* Removable Medium Bit */
unsigned ansi_version :3; /* ANSI Version */
unsigned ecma_version :3; /* ECMA Version */
unsigned iso_version :2; /* ISO Version */
unsigned response_format :4; /* Response Data Format */
unsigned reserved3_45 :2; /* Reserved */
unsigned reserved3_6 :1; /* TrmIOP - Reserved */
unsigned reserved3_7 :1; /* AENC - Reserved */
u8 additional_length; /* Additional Length (total_length-4) */
u8 rsv5, rsv6, rsv7; /* Reserved */
u8 vendor_id[8]; /* Vendor Identification */
u8 product_id[16]; /* Product Identification */
u8 revision_level[4]; /* Revision Level */
u8 vendor_specific[20]; /* Vendor Specific - Optional */
u8 reserved56t95[40]; /* Reserved - Optional */
/* Additional information may be returned */
} idetape_inquiry_result_t;
/*
* READ POSITION packet command - Data Format (From Table 6-57)
*/
typedef struct {
unsigned reserved0_10 :2; /* Reserved */
unsigned bpu :1; /* Block Position Unknown */
unsigned reserved0_543 :3; /* Reserved */
unsigned eop :1; /* End Of Partition */
unsigned bop :1; /* Beginning Of Partition */
u8 partition; /* Partition Number */
u8 reserved2, reserved3; /* Reserved */
u32 first_block; /* First Block Location */
u32 last_block; /* Last Block Location (Optional) */
u8 reserved12; /* Reserved */
u8 blocks_in_buffer[3]; /* Blocks In Buffer - (Optional) */
u32 bytes_in_buffer; /* Bytes In Buffer (Optional) */
} idetape_read_position_result_t;
/*
* Follows structures which are related to the SELECT SENSE / MODE SENSE
* packet commands.
*/
#define COMPRESSION_PAGE 0x0f
#define COMPRESSION_PAGE_LENGTH 16
#define CAPABILITIES_PAGE 0x2a
#define CAPABILITIES_PAGE_LENGTH 20
#define TAPE_PARAMTR_PAGE 0x2b
#define TAPE_PARAMTR_PAGE_LENGTH 16
#define NUMBER_RETRIES_PAGE 0x2f
#define NUMBER_RETRIES_PAGE_LENGTH 4
#define BLOCK_SIZE_PAGE 0x30
#define BLOCK_SIZE_PAGE_LENGTH 4
#define BUFFER_FILLING_PAGE 0x33
#define BUFFER_FILLING_PAGE_LENGTH 4
#define VENDOR_IDENT_PAGE 0x36
#define VENDOR_IDENT_PAGE_LENGTH 8
#define LOCATE_STATUS_PAGE 0x37
#define LOCATE_STATUS_PAGE_LENGTH 0
#define MODE_HEADER_LENGTH 4
/*
* REQUEST SENSE packet command result - Data Format.
*/
typedef struct {
unsigned error_code :7; /* Current or deferred errors */
unsigned valid :1; /* The information field conforms to QIC-157C */
u8 reserved1 :8; /* Segment Number - Reserved */
unsigned sense_key :4; /* Sense Key */
unsigned reserved2_4 :1; /* Reserved */
unsigned ili :1; /* Incorrect Length Indicator */
unsigned eom :1; /* End Of Medium */
unsigned filemark :1; /* Filemark */
u32 information __attribute__ ((packed));
u8 asl; /* Additional sense length (n-7) */
u32 command_specific; /* Additional command specific information */
u8 asc; /* Additional Sense Code */
u8 ascq; /* Additional Sense Code Qualifier */
u8 replaceable_unit_code; /* Field Replaceable Unit Code */
unsigned sk_specific1 :7; /* Sense Key Specific */
unsigned sksv :1; /* Sense Key Specific information is valid */
u8 sk_specific2; /* Sense Key Specific */
u8 sk_specific3; /* Sense Key Specific */
u8 pad[2]; /* Padding to 20 bytes */
} idetape_request_sense_result_t;
/*
* Mode Parameter Header for the MODE SENSE packet command
*/
typedef struct {
u8 mode_data_length; /* Length of the following data transfer */
u8 medium_type; /* Medium Type */
u8 dsp; /* Device Specific Parameter */
u8 bdl; /* Block Descriptor Length */
} osst_mode_parameter_header_t;
/*
* Mode Parameter Block Descriptor the MODE SENSE packet command
*
* Support for block descriptors is optional.
*/
typedef struct {
u8 density_code; /* Medium density code */
u8 blocks[3]; /* Number of blocks */
u8 reserved4; /* Reserved */
u8 length[3]; /* Block Length */
} osst_parameter_block_descriptor_t;
/*
* The Data Compression Page, as returned by the MODE SENSE packet command.
*/
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned ps :1;
unsigned reserved0 :1; /* Reserved */
unsigned page_code :6; /* Page Code - Should be 0xf */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned page_code :6; /* Page Code - Should be 0xf */
unsigned reserved0 :1; /* Reserved */
unsigned ps :1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
u8 page_length; /* Page Length - Should be 14 */
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned dce :1; /* Data Compression Enable */
unsigned dcc :1; /* Data Compression Capable */
unsigned reserved2 :6; /* Reserved */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned reserved2 :6; /* Reserved */
unsigned dcc :1; /* Data Compression Capable */
unsigned dce :1; /* Data Compression Enable */
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned dde :1; /* Data Decompression Enable */
unsigned red :2; /* Report Exception on Decompression */
unsigned reserved3 :5; /* Reserved */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned reserved3 :5; /* Reserved */
unsigned red :2; /* Report Exception on Decompression */
unsigned dde :1; /* Data Decompression Enable */
#else
#error "Please fix <asm/byteorder.h>"
#endif
u32 ca; /* Compression Algorithm */
u32 da; /* Decompression Algorithm */
u8 reserved[4]; /* Reserved */
} osst_data_compression_page_t;
/*
* The Medium Partition Page, as returned by the MODE SENSE packet command.
*/
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned ps :1;
unsigned reserved1_6 :1; /* Reserved */
unsigned page_code :6; /* Page Code - Should be 0x11 */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned page_code :6; /* Page Code - Should be 0x11 */
unsigned reserved1_6 :1; /* Reserved */
unsigned ps :1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
u8 page_length; /* Page Length - Should be 6 */
u8 map; /* Maximum Additional Partitions - Should be 0 */
u8 apd; /* Additional Partitions Defined - Should be 0 */
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned fdp :1; /* Fixed Data Partitions */
unsigned sdp :1; /* Should be 0 */
unsigned idp :1; /* Should be 0 */
unsigned psum :2; /* Should be 0 */
unsigned reserved4_012 :3; /* Reserved */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned reserved4_012 :3; /* Reserved */
unsigned psum :2; /* Should be 0 */
unsigned idp :1; /* Should be 0 */
unsigned sdp :1; /* Should be 0 */
unsigned fdp :1; /* Fixed Data Partitions */
#else
#error "Please fix <asm/byteorder.h>"
#endif
u8 mfr; /* Medium Format Recognition */
u8 reserved[2]; /* Reserved */
} osst_medium_partition_page_t;
/*
* Capabilities and Mechanical Status Page
*/
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned reserved1_67 :2;
unsigned page_code :6; /* Page code - Should be 0x2a */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned page_code :6; /* Page code - Should be 0x2a */
unsigned reserved1_67 :2;
#else
#error "Please fix <asm/byteorder.h>"
#endif
u8 page_length; /* Page Length - Should be 0x12 */
u8 reserved2, reserved3;
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned reserved4_67 :2;
unsigned sprev :1; /* Supports SPACE in the reverse direction */
unsigned reserved4_1234 :4;
unsigned ro :1; /* Read Only Mode */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned ro :1; /* Read Only Mode */
unsigned reserved4_1234 :4;
unsigned sprev :1; /* Supports SPACE in the reverse direction */
unsigned reserved4_67 :2;
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned reserved5_67 :2;
unsigned qfa :1; /* Supports the QFA two partition formats */
unsigned reserved5_4 :1;
unsigned efmt :1; /* Supports ERASE command initiated formatting */
unsigned reserved5_012 :3;
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned reserved5_012 :3;
unsigned efmt :1; /* Supports ERASE command initiated formatting */
unsigned reserved5_4 :1;
unsigned qfa :1; /* Supports the QFA two partition formats */
unsigned reserved5_67 :2;
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned cmprs :1; /* Supports data compression */
unsigned ecc :1; /* Supports error correction */
unsigned reserved6_45 :2; /* Reserved */
unsigned eject :1; /* The device can eject the volume */
unsigned prevent :1; /* The device defaults in the prevent state after power up */
unsigned locked :1; /* The volume is locked */
unsigned lock :1; /* Supports locking the volume */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned lock :1; /* Supports locking the volume */
unsigned locked :1; /* The volume is locked */
unsigned prevent :1; /* The device defaults in the prevent state after power up */
unsigned eject :1; /* The device can eject the volume */
unsigned reserved6_45 :2; /* Reserved */
unsigned ecc :1; /* Supports error correction */
unsigned cmprs :1; /* Supports data compression */
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned blk32768 :1; /* slowb - the device restricts the byte count for PIO */
/* transfers for slow buffer memory ??? */
/* Also 32768 block size in some cases */
unsigned reserved7_3_6 :4;
unsigned blk1024 :1; /* Supports 1024 bytes block size */
unsigned blk512 :1; /* Supports 512 bytes block size */
unsigned reserved7_0 :1;
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned reserved7_0 :1;
unsigned blk512 :1; /* Supports 512 bytes block size */
unsigned blk1024 :1; /* Supports 1024 bytes block size */
unsigned reserved7_3_6 :4;
unsigned blk32768 :1; /* slowb - the device restricts the byte count for PIO */
/* transfers for slow buffer memory ??? */
/* Also 32768 block size in some cases */
#else
#error "Please fix <asm/byteorder.h>"
#endif
__be16 max_speed; /* Maximum speed supported in KBps */
u8 reserved10, reserved11;
__be16 ctl; /* Continuous Transfer Limit in blocks */
__be16 speed; /* Current Speed, in KBps */
__be16 buffer_size; /* Buffer Size, in 512 bytes */
u8 reserved18, reserved19;
} osst_capabilities_page_t;
/*
* Block Size Page
*/
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned ps :1;
unsigned reserved1_6 :1;
unsigned page_code :6; /* Page code - Should be 0x30 */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned page_code :6; /* Page code - Should be 0x30 */
unsigned reserved1_6 :1;
unsigned ps :1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
u8 page_length; /* Page Length - Should be 2 */
u8 reserved2;
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned one :1;
unsigned reserved2_6 :1;
unsigned record32_5 :1;
unsigned record32 :1;
unsigned reserved2_23 :2;
unsigned play32_5 :1;
unsigned play32 :1;
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned play32 :1;
unsigned play32_5 :1;
unsigned reserved2_23 :2;
unsigned record32 :1;
unsigned record32_5 :1;
unsigned reserved2_6 :1;
unsigned one :1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
} osst_block_size_page_t;
/*
* Tape Parameters Page
*/
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
unsigned ps :1;
unsigned reserved1_6 :1;
unsigned page_code :6; /* Page code - Should be 0x2b */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
unsigned page_code :6; /* Page code - Should be 0x2b */
unsigned reserved1_6 :1;
unsigned ps :1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
u8 reserved2;
u8 density;
u8 reserved3,reserved4;
__be16 segtrk;
__be16 trks;
u8 reserved5,reserved6,reserved7,reserved8,reserved9,reserved10;
} osst_tape_paramtr_page_t;
/* OnStream definitions */
#define OS_CONFIG_PARTITION (0xff)
#define OS_DATA_PARTITION (0)
#define OS_PARTITION_VERSION (1)
/*
* partition
*/
typedef struct os_partition_s {
__u8 partition_num;
__u8 par_desc_ver;
__be16 wrt_pass_cntr;
__be32 first_frame_ppos;
__be32 last_frame_ppos;
__be32 eod_frame_ppos;
} os_partition_t;
/*
* DAT entry
*/
typedef struct os_dat_entry_s {
__be32 blk_sz;
__be16 blk_cnt;
__u8 flags;
__u8 reserved;
} os_dat_entry_t;
/*
* DAT
*/
#define OS_DAT_FLAGS_DATA (0xc)
#define OS_DAT_FLAGS_MARK (0x1)
typedef struct os_dat_s {
__u8 dat_sz;
__u8 reserved1;
__u8 entry_cnt;
__u8 reserved3;
os_dat_entry_t dat_list[16];
} os_dat_t;
/*
* Frame types
*/
#define OS_FRAME_TYPE_FILL (0)
#define OS_FRAME_TYPE_EOD (1 << 0)
#define OS_FRAME_TYPE_MARKER (1 << 1)
#define OS_FRAME_TYPE_HEADER (1 << 3)
#define OS_FRAME_TYPE_DATA (1 << 7)
/*
* AUX
*/
typedef struct os_aux_s {
__be32 format_id; /* hardware compatibility AUX is based on */
char application_sig[4]; /* driver used to write this media */
__be32 hdwr; /* reserved */
__be32 update_frame_cntr; /* for configuration frame */
__u8 frame_type;
__u8 frame_type_reserved;
__u8 reserved_18_19[2];
os_partition_t partition;
__u8 reserved_36_43[8];
__be32 frame_seq_num;
__be32 logical_blk_num_high;
__be32 logical_blk_num;
os_dat_t dat;
__u8 reserved188_191[4];
__be32 filemark_cnt;
__be32 phys_fm;
__be32 last_mark_ppos;
__u8 reserved204_223[20];
/*
* __u8 app_specific[32];
*
* Linux specific fields:
*/
__be32 next_mark_ppos; /* when known, points to next marker */
__be32 last_mark_lbn; /* storing log_blk_num of last mark extends the ADR spec */
__u8 linux_specific[24];
__u8 reserved_256_511[256];
} os_aux_t;
#define OS_FM_TAB_MAX 1024
typedef struct os_fm_tab_s {
__u8 fm_part_num;
__u8 reserved_1;
__u8 fm_tab_ent_sz;
__u8 reserved_3;
__be16 fm_tab_ent_cnt;
__u8 reserved6_15[10];
__be32 fm_tab_ent[OS_FM_TAB_MAX];
} os_fm_tab_t;
typedef struct os_ext_trk_ey_s {
__u8 et_part_num;
__u8 fmt;
__be16 fm_tab_off;
__u8 reserved4_7[4];
__be32 last_hlb_hi;
__be32 last_hlb;
__be32 last_pp;
__u8 reserved20_31[12];
} os_ext_trk_ey_t;
typedef struct os_ext_trk_tb_s {
__u8 nr_stream_part;
__u8 reserved_1;
__u8 et_ent_sz;
__u8 reserved3_15[13];
os_ext_trk_ey_t dat_ext_trk_ey;
os_ext_trk_ey_t qfa_ext_trk_ey;
} os_ext_trk_tb_t;
typedef struct os_header_s {
char ident_str[8];
__u8 major_rev;
__u8 minor_rev;
__be16 ext_trk_tb_off;
__u8 reserved12_15[4];
__u8 pt_par_num;
__u8 pt_reserved1_3[3];
os_partition_t partition[16];
__be32 cfg_col_width;
__be32 dat_col_width;
__be32 qfa_col_width;
__u8 cartridge[16];
__u8 reserved304_511[208];
__be32 old_filemark_list[16680/4]; /* in ADR 1.4 __u8 track_table[16680] */
os_ext_trk_tb_t ext_track_tb;
__u8 reserved17272_17735[464];
os_fm_tab_t dat_fm_tab;
os_fm_tab_t qfa_fm_tab;
__u8 reserved25960_32767[6808];
} os_header_t;
/*
* OnStream ADRL frame
*/
#define OS_FRAME_SIZE (32 * 1024 + 512)
#define OS_DATA_SIZE (32 * 1024)
#define OS_AUX_SIZE (512)
//#define OSST_MAX_SG 2
/* The OnStream tape buffer descriptor. */
struct osst_buffer {
unsigned char in_use;
unsigned char dma; /* DMA-able buffer */
int buffer_size;
int buffer_blocks;
int buffer_bytes;
int read_pointer;
int writing;
int midlevel_result;
int syscall_result;
struct osst_request *last_SRpnt;
struct st_cmdstatus cmdstat;
struct rq_map_data map_data;
unsigned char *b_data;
os_aux_t *aux; /* onstream AUX structure at end of each block */
unsigned short use_sg; /* zero or number of s/g segments for this adapter */
unsigned short sg_segs; /* number of segments in s/g list */
unsigned short orig_sg_segs; /* number of segments allocated at first try */
struct scatterlist sg[1]; /* MUST BE last item */
} ;
/* The OnStream tape drive descriptor */
struct osst_tape {
struct scsi_driver *driver;
unsigned capacity;
struct scsi_device *device;
struct mutex lock; /* for serialization */
struct completion wait; /* for SCSI commands */
struct osst_buffer * buffer;
/* Drive characteristics */
unsigned char omit_blklims;
unsigned char do_auto_lock;
unsigned char can_bsr;
unsigned char can_partitions;
unsigned char two_fm;
unsigned char fast_mteom;
unsigned char restr_dma;
unsigned char scsi2_logical;
unsigned char default_drvbuffer; /* 0xff = don't touch, value 3 bits */
unsigned char pos_unknown; /* after reset position unknown */
int write_threshold;
int timeout; /* timeout for normal commands */
int long_timeout; /* timeout for commands known to take long time*/
/* Mode characteristics */
struct st_modedef modes[ST_NBR_MODES];
int current_mode;
/* Status variables */
int partition;
int new_partition;
int nbr_partitions; /* zero until partition support enabled */
struct st_partstat ps[ST_NBR_PARTITIONS];
unsigned char dirty;
unsigned char ready;
unsigned char write_prot;
unsigned char drv_write_prot;
unsigned char in_use;
unsigned char blksize_changed;
unsigned char density_changed;
unsigned char compression_changed;
unsigned char drv_buffer;
unsigned char density;
unsigned char door_locked;
unsigned char rew_at_close;
unsigned char inited;
int block_size;
int min_block;
int max_block;
int recover_count; /* from tape opening */
int abort_count;
int write_count;
int read_count;
int recover_erreg; /* from last status call */
/*
* OnStream specific data
*/
int os_fw_rev; /* the firmware revision * 10000 */
unsigned char raw; /* flag OnStream raw access (32.5KB block size) */
unsigned char poll; /* flag that this drive needs polling (IDE|firmware) */
unsigned char frame_in_buffer; /* flag that the frame as per frame_seq_number
* has been read into STp->buffer and is valid */
int frame_seq_number; /* logical frame number */
int logical_blk_num; /* logical block number */
unsigned first_frame_position; /* physical frame to be transferred to/from host */
unsigned last_frame_position; /* physical frame to be transferred to/from tape */
int cur_frames; /* current number of frames in internal buffer */
int max_frames; /* max number of frames in internal buffer */
char application_sig[5]; /* application signature */
unsigned char fast_open; /* flag that reminds us we didn't check headers at open */
unsigned short wrt_pass_cntr; /* write pass counter */
int update_frame_cntr; /* update frame counter */
int onstream_write_error; /* write error recovery active */
int header_ok; /* header frame verified ok */
int linux_media; /* reading linux-specific media */
int linux_media_version;
os_header_t * header_cache; /* cache is kept for filemark positions */
int filemark_cnt;
int first_mark_ppos;
int last_mark_ppos;
int last_mark_lbn; /* storing log_blk_num of last mark extends the ADR spec */
int first_data_ppos;
int eod_frame_ppos;
int eod_frame_lfa;
int write_type; /* used in write error recovery */
int read_error_frame; /* used in read error recovery */
unsigned long cmd_start_time;
unsigned long max_cmd_time;
#if DEBUG
unsigned char write_pending;
int nbr_finished;
int nbr_waits;
unsigned char last_cmnd[6];
unsigned char last_sense[16];
#endif
struct gendisk *drive;
} ;
/* scsi tape command */
struct osst_request {
unsigned char cmd[MAX_COMMAND_SIZE];
unsigned char sense[SCSI_SENSE_BUFFERSIZE];
int result;
struct osst_tape *stp;
struct completion *waiting;
struct bio *bio;
};
/* Values of write_type */
#define OS_WRITE_DATA 0
#define OS_WRITE_EOD 1
#define OS_WRITE_NEW_MARK 2
#define OS_WRITE_LAST_MARK 3
#define OS_WRITE_HEADER 4
#define OS_WRITE_FILLER 5
/* Additional rw state */
#define OS_WRITING_COMPLETE 3
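The removed osst.h above maps wire-format mode pages straight onto C bitfields, which is why every field group is declared twice under __BIG_ENDIAN_BITFIELD / __LITTLE_ENDIAN_BITFIELD: C leaves bitfield allocation order to the ABI, so the same on-the-wire byte needs both orderings. A minimal sketch of the hazard, using the compiler's byte-order macro as a stand-in for the kernel's explicit config symbols (dcp_byte and the test value are illustrative):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* models byte 0 of the data compression page declared above */
struct dcp_byte {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	uint8_t ps        : 1;
	uint8_t reserved0 : 1;
	uint8_t page_code : 6;	/* should read back as 0x0f */
#else
	uint8_t page_code : 6;	/* same byte, allocation order reversed */
	uint8_t reserved0 : 1;
	uint8_t ps        : 1;
#endif
};

int main(void)
{
	uint8_t wire = 0x8f;	/* PS bit set, page code 0x0f, as on the wire */
	struct dcp_byte b;

	memcpy(&b, &wire, sizeof(b));
	printf("ps=%u page_code=0x%02x\n", b.ps, b.page_code);
	return 0;
}
```

Drop either branch and the struct silently parses garbage on half the architectures, which is exactly what the doubled declarations guard against.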


@@ -1,7 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#define SIGS_FROM_OSST \
{"OnStream", "SC-", "", "osst"}, \
{"OnStream", "DI-", "", "osst"}, \
{"OnStream", "DP-", "", "osst"}, \
{"OnStream", "FW-", "", "osst"}, \
{"OnStream", "USB", "", "osst"}


@@ -1,107 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
The compile-time configurable defaults for the Linux SCSI tape driver.
Copyright 1995 Kai Makisara.
Last modified: Wed Sep 2 21:24:07 1998 by root@home
Changed (and renamed) for OnStream SCSI drives garloff@suse.de
2000-06-21
$Header: /cvsroot/osst/Driver/osst_options.h,v 1.6 2003/12/23 14:22:12 wriede Exp $
*/
#ifndef _OSST_OPTIONS_H
#define _OSST_OPTIONS_H
/* The minimum limit for the number of SCSI tape devices is determined by
OSST_MAX_TAPES. If the number of tape devices and the "slack" defined by
OSST_EXTRA_DEVS exceeds OSST_MAX_TAPES, the larger number is used. */
#define OSST_MAX_TAPES 4
/* If OSST_IN_FILE_POS is nonzero, the driver positions the tape after the
record has been read by the user program even if the tape has moved further
because of buffered reads. Should be set to zero also to support drives
that can't space backwards over records. NOTE: The tape will be
spaced backwards over an "accidentally" crossed filemark in any case. */
#define OSST_IN_FILE_POS 1
/* The tape driver buffer size in kilobytes. */
/* Don't change, as this is the HW blocksize */
#define OSST_BUFFER_BLOCKS 32
/* The number of kilobytes of data in the buffer that triggers an
asynchronous write in fixed block mode. See also OSST_ASYNC_WRITES
below. */
#define OSST_WRITE_THRESHOLD_BLOCKS 32
/* OSST_EOM_RESERVE defines the number of frames that are kept in reserve for
* write error recovery when writing near end of medium. ENOSPC is returned
* when write() is called and the tape write position is within this number
* of blocks from the tape capacity. */
#define OSST_EOM_RESERVE 300
/* The maximum number of tape buffers the driver allocates. The number
is also constrained by the number of drives detected. Determines the
maximum number of concurrently active tape drives. */
#define OSST_MAX_BUFFERS OSST_MAX_TAPES
/* Maximum number of scatter/gather segments */
/* Fit one buffer in pages and add one for the AUX header */
#define OSST_MAX_SG (((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE) + 1)
/* The number of scatter/gather segments to allocate at first try (must be
smaller or equal to the maximum). */
#define OSST_FIRST_SG ((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE)
/* The size of the first scatter/gather segments (determines the maximum block
size for SCSI adapters not supporting scatter/gather). The default is set
to try to allocate the buffer as one chunk. */
#define OSST_FIRST_ORDER (15-PAGE_SHIFT)
/* The following lines define defaults for properties that can be set
separately for each drive using the MTSTOPTIONS ioctl. */
/* If OSST_TWO_FM is non-zero, the driver writes two filemarks after a
file being written. Some drives can't handle two filemarks at the
end of data. */
#define OSST_TWO_FM 0
/* If OSST_BUFFER_WRITES is non-zero, writes in fixed block mode are
buffered until the driver buffer is full or asynchronous write is
triggered. */
#define OSST_BUFFER_WRITES 1
/* If OSST_ASYNC_WRITES is non-zero, the SCSI write command may be started
without waiting for it to finish. May cause problems in multiple
tape backups. */
#define OSST_ASYNC_WRITES 1
/* If OSST_READ_AHEAD is non-zero, blocks are read ahead in fixed block
mode. */
#define OSST_READ_AHEAD 1
/* If OSST_AUTO_LOCK is non-zero, the drive door is locked at the first
read or write command after the device is opened. The door is opened
when the device is closed. */
#define OSST_AUTO_LOCK 0
/* If OSST_FAST_MTEOM is non-zero, the MTEOM ioctl is done using the
direct SCSI command. The file number status is lost but this method
is fast with some drives. Otherwise MTEOM is done by spacing over
files and the file number status is retained. */
#define OSST_FAST_MTEOM 0
/* If OSST_SCSI2LOGICAL is nonzero, the logical block addresses are used for
MTIOCPOS and MTSEEK by default. Vendor addresses are used if OSST_SCSI2LOGICAL
is zero. */
#define OSST_SCSI2LOGICAL 0
/* If OSST_SYSV is non-zero, the tape behaves according to the SYS V semantics.
The default is BSD semantics. */
#define OSST_SYSV 0
#endif
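The scatter/gather sizing above is simple arithmetic over the fixed 32 KB hardware block: fit the buffer in pages, then reserve one extra segment for the AUX header. A worked example, assuming a 4 KiB page (PAGE_SHIFT 12 is an assumption for illustration, not part of the header):

```c
#include <stdio.h>

#define PAGE_SHIFT 12			/* assumption: 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

#define OSST_BUFFER_BLOCKS 32		/* fixed 32 KiB hardware blocksize */
#define OSST_FIRST_SG  ((OSST_BUFFER_BLOCKS * 1024) / PAGE_SIZE)
#define OSST_MAX_SG    (OSST_FIRST_SG + 1)	/* +1 segment for the AUX header */
#define OSST_FIRST_ORDER (15 - PAGE_SHIFT)

int main(void)
{
	printf("first try: %lu segments, max: %lu segments\n",
	       OSST_FIRST_SG, OSST_MAX_SG);
	printf("first-order chunk: 2^%d pages = %lu bytes\n",
	       OSST_FIRST_ORDER, PAGE_SIZE << OSST_FIRST_ORDER);
	return 0;
}
```

With these values the program prints 8 first-try segments, 9 maximum, and a 32768-byte first-order chunk, i.e. the driver first tries to grab the whole 32 KB buffer as one contiguous allocation.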


@@ -20,6 +20,16 @@ config PCMCIA_AHA152X
To compile this driver as a module, choose M here: the
module will be called aha152x_cs.
+config PCMCIA_FDOMAIN
+tristate "Future Domain PCMCIA support"
+select SCSI_FDOMAIN
+help
+Say Y here if you intend to attach this type of PCMCIA SCSI host
+adapter to your computer.
+To compile this driver as a module, choose M here: the
+module will be called fdomain_cs.
config PCMCIA_NINJA_SCSI
tristate "NinjaSCSI-3 / NinjaSCSI-32Bi (16bit) PCMCIA support"
depends on !64BIT


@@ -4,6 +4,7 @@ ccflags-y := -I $(srctree)/drivers/scsi
# 16-bit client drivers
obj-$(CONFIG_PCMCIA_QLOGIC) += qlogic_cs.o
+obj-$(CONFIG_PCMCIA_FDOMAIN) += fdomain_cs.o
obj-$(CONFIG_PCMCIA_AHA152X) += aha152x_cs.o
obj-$(CONFIG_PCMCIA_NINJA_SCSI) += nsp_cs.o
obj-$(CONFIG_PCMCIA_SYM53C500) += sym53c500_cs.o


@@ -0,0 +1,95 @@
// SPDX-License-Identifier: (GPL-2.0 OR MPL-1.1)
/*
* Driver for Future Domain-compatible PCMCIA SCSI cards
* Copyright 2019 Ondrej Zary
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <scsi/scsi_host.h>
#include <pcmcia/cistpl.h>
#include <pcmcia/ds.h>
#include "fdomain.h"
MODULE_AUTHOR("Ondrej Zary, David Hinds");
MODULE_DESCRIPTION("Future Domain PCMCIA SCSI driver");
MODULE_LICENSE("Dual MPL/GPL");
static int fdomain_config_check(struct pcmcia_device *p_dev, void *priv_data)
{
p_dev->io_lines = 10;
p_dev->resource[0]->end = FDOMAIN_REGION_SIZE;
p_dev->resource[0]->flags &= ~IO_DATA_PATH_WIDTH;
p_dev->resource[0]->flags |= IO_DATA_PATH_WIDTH_AUTO;
return pcmcia_request_io(p_dev);
}
static int fdomain_probe(struct pcmcia_device *link)
{
int ret;
struct Scsi_Host *sh;
link->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IO;
link->config_regs = PRESENT_OPTION;
ret = pcmcia_loop_config(link, fdomain_config_check, NULL);
if (ret)
return ret;
ret = pcmcia_enable_device(link);
if (ret)
goto fail_disable;
if (!request_region(link->resource[0]->start, FDOMAIN_REGION_SIZE,
"fdomain_cs"))
goto fail_disable;
sh = fdomain_create(link->resource[0]->start, link->irq, 7, &link->dev);
if (!sh) {
dev_err(&link->dev, "Controller initialization failed");
ret = -ENODEV;
goto fail_release;
}
link->priv = sh;
return 0;
fail_release:
release_region(link->resource[0]->start, FDOMAIN_REGION_SIZE);
fail_disable:
pcmcia_disable_device(link);
return ret;
}
static void fdomain_remove(struct pcmcia_device *link)
{
fdomain_destroy(link->priv);
release_region(link->resource[0]->start, FDOMAIN_REGION_SIZE);
pcmcia_disable_device(link);
}
static const struct pcmcia_device_id fdomain_ids[] = {
PCMCIA_DEVICE_PROD_ID12("IBM Corp.", "SCSI PCMCIA Card", 0xe3736c88,
0x859cad20),
PCMCIA_DEVICE_PROD_ID1("SCSI PCMCIA Adapter Card", 0x8dacb57e),
PCMCIA_DEVICE_PROD_ID12(" SIMPLE TECHNOLOGY Corporation",
"SCSI PCMCIA Credit Card Controller",
0x182bdafe, 0xc80d106f),
PCMCIA_DEVICE_NULL,
};
MODULE_DEVICE_TABLE(pcmcia, fdomain_ids);
static struct pcmcia_driver fdomain_cs_driver = {
.owner = THIS_MODULE,
.name = "fdomain_cs",
.probe = fdomain_probe,
.remove = fdomain_remove,
.id_table = fdomain_ids,
};
module_pcmcia_driver(fdomain_cs_driver);
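fdomain_probe() above follows the classic goto-unwind shape: each acquisition that can fail jumps to a label that releases only what was already taken (fail_release frees the I/O region, fail_disable undoes the PCMCIA enable), and fdomain_remove() tears everything down in the same reverse order. A userspace model of that shape (acquire_a/acquire_b stand in for pcmcia_enable_device() and request_region()):

```c
#include <stdbool.h>
#include <stdio.h>

static bool acquire_a(void) { puts("A acquired"); return true; }
static void release_a(void) { puts("A released"); }
static bool acquire_b(void) { puts("B failed"); return false; } /* simulated failure */

static int probe(void)
{
	if (!acquire_a())
		return -1;		/* nothing to undo yet */
	if (!acquire_b())
		goto fail_release_a;	/* undo only what already succeeded */
	return 0;

fail_release_a:
	release_a();
	return -1;
}

int main(void)
{
	printf("probe -> %d\n", probe());
	return 0;
}
```

The payoff is that every exit path releases exactly the resources acquired before the failure point, with no duplicated cleanup code.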


@@ -461,6 +461,24 @@ static ssize_t pm8001_ctl_bios_version_show(struct device *cdev,
return str - buf;
}
static DEVICE_ATTR(bios_version, S_IRUGO, pm8001_ctl_bios_version_show, NULL);
/**
* event_log_size_show - event log size
* @cdev: pointer to embedded class device
* @buf: the buffer returned
*
* A sysfs read shost attribute.
*/
static ssize_t event_log_size_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
return snprintf(buf, PAGE_SIZE, "%d\n",
pm8001_ha->main_cfg_tbl.pm80xx_tbl.event_log_size);
}
static DEVICE_ATTR_RO(event_log_size);
/**
* pm8001_ctl_aap_log_show - IOP event log
* @cdev: pointer to embedded class device
@@ -474,25 +492,26 @@ static ssize_t pm8001_ctl_iop_log_show(struct device *cdev,
struct Scsi_Host *shost = class_to_shost(cdev);
struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
#define IOP_MEMMAP(r, c) \
(*(u32 *)((u8*)pm8001_ha->memoryMap.region[IOP].virt_ptr + (r) * 32 \
+ (c)))
int i;
char *str = buf;
int max = 2;
for (i = 0; i < max; i++) {
str += sprintf(str, "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x"
"0x%08x 0x%08x\n",
IOP_MEMMAP(i, 0),
IOP_MEMMAP(i, 4),
IOP_MEMMAP(i, 8),
IOP_MEMMAP(i, 12),
IOP_MEMMAP(i, 16),
IOP_MEMMAP(i, 20),
IOP_MEMMAP(i, 24),
IOP_MEMMAP(i, 28));
u32 read_size =
pm8001_ha->main_cfg_tbl.pm80xx_tbl.event_log_size / 1024;
static u32 start, end, count;
u32 max_read_times = 32;
u32 max_count = (read_size * 1024) / (max_read_times * 4);
u32 *temp = (u32 *)pm8001_ha->memoryMap.region[IOP].virt_ptr;
if ((count % max_count) == 0) {
start = 0;
end = max_read_times;
count = 0;
} else {
start = end;
end = end + max_read_times;
}
for (; start < end; start++)
str += sprintf(str, "%08x ", *(temp+start));
count++;
return str - buf;
}
static DEVICE_ATTR(iop_log, S_IRUGO, pm8001_ctl_iop_log_show, NULL);
@@ -796,6 +815,7 @@ struct device_attribute *pm8001_host_attrs[] = {
&dev_attr_max_sg_list,
&dev_attr_sas_spec_support,
&dev_attr_logging_level,
+&dev_attr_event_log_size,
&dev_attr_host_sas_address,
&dev_attr_bios_version,
&dev_attr_ib_log,
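The rewritten pm8001_ctl_iop_log_show() above pages through the IOP event log across successive sysfs reads: static cursors remember where the previous read stopped, each call emits the next window of 32 words, and the cursors wrap once a full pass over the log is complete. A userspace model of that windowed-read logic (log size and contents are fabricated for illustration):

```c
#include <stdint.h>
#include <stdio.h>

#define LOG_WORDS 128			/* pretend event log: 128 u32 entries */
#define WINDOW    32			/* words emitted per read, as above */

static uint32_t log_buf[LOG_WORDS];

static void show(char *buf, size_t len)
{
	static unsigned int start, end, count;
	unsigned int max_count = LOG_WORDS / WINDOW;
	size_t off = 0;

	if ((count % max_count) == 0) {	/* begin a new pass over the log */
		start = 0;
		end = WINDOW;
		count = 0;
	} else {			/* resume where the last read stopped */
		start = end;
		end += WINDOW;
	}
	for (; start < end && off < len; start++)
		off += snprintf(buf + off, len - off, "%08x ", log_buf[start]);
	count++;
	printf("window: %.40s...\n", buf);
}

int main(void)
{
	char buf[WINDOW * 9 + 1];
	unsigned int i;

	for (i = 0; i < LOG_WORDS; i++)
		log_buf[i] = i;		/* fabricated log contents */
	for (i = 0; i < 5; i++)		/* the fifth read wraps to the start */
		show(buf, sizeof(buf));
	return 0;
}
```

The static cursors make the attribute stateful across reads, mirroring the driver's use of function-local statics in the hunk above.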


@@ -2356,7 +2356,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
(status != IO_UNDERFLOW)) {
if (!((t->dev->parent) &&
-(DEV_IS_EXPANDER(t->dev->parent->dev_type)))) {
+(dev_is_expander(t->dev->parent->dev_type)))) {
for (i = 0 , j = 4; j <= 7 && i <= 3; i++ , j++)
sata_addr_low[i] = pm8001_ha->sas_addr[j];
for (i = 0 , j = 0; j <= 3 && i <= 3; i++ , j++)
@@ -4560,7 +4560,7 @@ static int pm8001_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
pm8001_dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
stp_sspsmp_sata = 0x01; /*ssp or smp*/
}
-if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+if (parent_dev && dev_is_expander(parent_dev->dev_type))
phy_id = parent_dev->ex_dev.ex_phy->phy_id;
else
phy_id = pm8001_dev->attached_phy;


@@ -634,7 +634,7 @@ static int pm8001_dev_found_notify(struct domain_device *dev)
dev->lldd_dev = pm8001_device;
pm8001_device->dev_type = dev->dev_type;
pm8001_device->dcompletion = &completion;
-if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
+if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
int phy_id;
struct ex_phy *phy;
for (phy_id = 0; phy_id < parent_dev->ex_dev.num_phys;
@@ -1181,7 +1181,7 @@ int pm8001_query_task(struct sas_task *task)
return rc;
}
-/* mandatory SAM-3, still need free task/ccb info, abord the specified task */
+/* mandatory SAM-3, still need free task/ccb info, abort the specified task */
int pm8001_abort_task(struct sas_task *task)
{
unsigned long flags;


@@ -103,7 +103,6 @@ do { \
#define PM8001_READ_VPD
-#define DEV_IS_EXPANDER(type) ((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE))
#define IS_SPCV_12G(dev) ((dev->device == 0X8074) \
|| (dev->device == 0X8076) \
|| (dev->device == 0X8077) \


@@ -2066,7 +2066,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) &&
(status != IO_UNDERFLOW)) {
if (!((t->dev->parent) &&
-(DEV_IS_EXPANDER(t->dev->parent->dev_type)))) {
+(dev_is_expander(t->dev->parent->dev_type)))) {
for (i = 0 , j = 4; i <= 3 && j <= 7; i++ , j++)
sata_addr_low[i] = pm8001_ha->sas_addr[j];
for (i = 0 , j = 0; i <= 3 && j <= 3; i++ , j++)
@@ -4561,7 +4561,7 @@ static int pm80xx_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
pm8001_dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
stp_sspsmp_sata = 0x01; /*ssp or smp*/
}
-if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+if (parent_dev && dev_is_expander(parent_dev->dev_type))
phy_id = parent_dev->ex_dev.ex_phy->phy_id;
else
phy_id = pm8001_dev->attached_phy;


@@ -532,6 +532,8 @@ typedef struct srb {
uint8_t cmd_type;
uint8_t pad[3];
atomic_t ref_count;
+struct kref cmd_kref; /* need to migrate ref_count over to this */
+void *priv;
wait_queue_head_t nvme_ls_waitq;
struct fc_port *fcport;
struct scsi_qla_host *vha;
@@ -554,6 +556,7 @@ } u;
} u;
void (*done)(void *, int);
void (*free)(void *);
+void (*put_fn)(struct kref *kref);
} srb_t;
#define GET_CMD_SP(sp) (sp->u.scmd.cmd)
@@ -2336,7 +2339,6 @@ typedef struct fc_port {
unsigned int id_changed:1;
unsigned int scan_needed:1;
-struct work_struct nvme_del_work;
struct completion nvme_del_done;
uint32_t nvme_prli_service_param;
#define NVME_PRLI_SP_CONF BIT_7
@@ -4376,7 +4378,6 @@ typedef struct scsi_qla_host {
struct nvme_fc_local_port *nvme_local_port;
struct completion nvme_del_done;
-struct list_head nvme_rport_list;
uint16_t fcoe_vlan_id;
uint16_t fcoe_fcf_idx;

@@ -908,4 +908,6 @@ void qlt_clr_qp_table(struct scsi_qla_host *vha);
void qlt_set_mode(struct scsi_qla_host *);
int qla2x00_set_data_rate(scsi_qla_host_t *vha, uint16_t mode);
+/* nvme.c */
+void qla_nvme_unregister_remote_port(struct fc_port *fcport);
#endif /* _QLA_GBL_H */


@@ -5403,7 +5403,6 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
fcport->deleted = 0;
fcport->logout_on_delete = 1;
-fcport->login_retry = vha->hw->login_retry_count;
fcport->n2n_chip_reset = fcport->n2n_link_reset_cnt = 0;
switch (vha->hw->current_topology) {


@@ -12,8 +12,6 @@
static struct nvme_fc_port_template qla_nvme_fc_transport;
-static void qla_nvme_unregister_remote_port(struct work_struct *);
int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
{
struct qla_nvme_rport *rport;
@@ -38,7 +36,6 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
(fcport->nvme_flag & NVME_FLAG_REGISTERED))
return 0;
-INIT_WORK(&fcport->nvme_del_work, qla_nvme_unregister_remote_port);
fcport->nvme_flag &= ~NVME_FLAG_RESETTING;
memset(&req, 0, sizeof(struct nvme_fc_port_info));
@@ -74,7 +71,6 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
rport = fcport->nvme_remote_port->private;
rport->fcport = fcport;
-list_add_tail(&rport->list, &vha->nvme_rport_list);
fcport->nvme_flag |= NVME_FLAG_REGISTERED;
return 0;
@@ -124,53 +120,91 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
return 0;
}
static void qla_nvme_sp_ls_done(void *ptr, int res)
static void qla_nvme_release_fcp_cmd_kref(struct kref *kref)
{
srb_t *sp = ptr;
struct srb_iocb *nvme;
struct nvmefc_ls_req *fd;
struct nvme_private *priv;
if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
return;
atomic_dec(&sp->ref_count);
if (res)
res = -EINVAL;
nvme = &sp->u.iocb_cmd;
fd = nvme->u.nvme.desc;
priv = fd->private;
priv->comp_status = res;
schedule_work(&priv->ls_work);
/* work schedule doesn't need the sp */
qla2x00_rel_sp(sp);
}
static void qla_nvme_sp_done(void *ptr, int res)
{
srb_t *sp = ptr;
struct srb_iocb *nvme;
struct srb *sp = container_of(kref, struct srb, cmd_kref);
struct nvme_private *priv = (struct nvme_private *)sp->priv;
struct nvmefc_fcp_req *fd;
struct srb_iocb *nvme;
unsigned long flags;
if (!priv)
goto out;
nvme = &sp->u.iocb_cmd;
fd = nvme->u.nvme.desc;
if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
return;
atomic_dec(&sp->ref_count);
if (res == QLA_SUCCESS) {
spin_lock_irqsave(&priv->cmd_lock, flags);
priv->sp = NULL;
sp->priv = NULL;
if (priv->comp_status == QLA_SUCCESS) {
fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len;
} else {
fd->rcv_rsplen = 0;
fd->transferred_length = 0;
}
fd->status = 0;
spin_unlock_irqrestore(&priv->cmd_lock, flags);
fd->done(fd);
out:
qla2xxx_rel_qpair_sp(sp->qpair, sp);
}
static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
{
struct srb *sp = container_of(kref, struct srb, cmd_kref);
struct nvme_private *priv = (struct nvme_private *)sp->priv;
struct nvmefc_ls_req *fd;
unsigned long flags;
if (!priv)
goto out;
spin_lock_irqsave(&priv->cmd_lock, flags);
priv->sp = NULL;
sp->priv = NULL;
spin_unlock_irqrestore(&priv->cmd_lock, flags);
fd = priv->fd;
fd->done(fd, priv->comp_status);
out:
qla2x00_rel_sp(sp);
}
static void qla_nvme_ls_complete(struct work_struct *work)
{
struct nvme_private *priv =
container_of(work, struct nvme_private, ls_work);
kref_put(&priv->sp->cmd_kref, qla_nvme_release_ls_cmd_kref);
}
static void qla_nvme_sp_ls_done(void *ptr, int res)
{
srb_t *sp = ptr;
struct nvme_private *priv;
if (WARN_ON_ONCE(kref_read(&sp->cmd_kref) == 0))
return;
if (res)
res = -EINVAL;
priv = (struct nvme_private *)sp->priv;
priv->comp_status = res;
INIT_WORK(&priv->ls_work, qla_nvme_ls_complete);
schedule_work(&priv->ls_work);
}
/* it is assumed that the QPair lock is held. */
static void qla_nvme_sp_done(void *ptr, int res)
{
srb_t *sp = ptr;
struct nvme_private *priv = (struct nvme_private *)sp->priv;
priv->comp_status = res;
kref_put(&sp->cmd_kref, qla_nvme_release_fcp_cmd_kref);
return;
}
@@ -189,44 +223,50 @@ static void qla_nvme_abort_work(struct work_struct *work)
__func__, sp, sp->handle, fcport, fcport->deleted);
if (!ha->flags.fw_started && (fcport && fcport->deleted))
return;
goto out;
if (ha->flags.host_shutting_down) {
ql_log(ql_log_info, sp->fcport->vha, 0xffff,
"%s Calling done on sp: %p, type: 0x%x, sp->ref_count: 0x%x\n",
__func__, sp, sp->type, atomic_read(&sp->ref_count));
sp->done(sp, 0);
return;
goto out;
}
if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
return;
rval = ha->isp_ops->abort_command(sp);
ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
"%s: %s command for sp=%p, handle=%x on fcport=%p rval=%x\n",
__func__, (rval != QLA_SUCCESS) ? "Failed to abort" : "Aborted",
sp, sp->handle, fcport, rval);
out:
/* kref_get was done before work was schedule. */
kref_put(&sp->cmd_kref, sp->put_fn);
}
static void qla_nvme_ls_abort(struct nvme_fc_local_port *lport,
struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd)
{
struct nvme_private *priv = fd->private;
unsigned long flags;
spin_lock_irqsave(&priv->cmd_lock, flags);
if (!priv->sp) {
spin_unlock_irqrestore(&priv->cmd_lock, flags);
return;
}
if (!kref_get_unless_zero(&priv->sp->cmd_kref)) {
spin_unlock_irqrestore(&priv->cmd_lock, flags);
return;
}
spin_unlock_irqrestore(&priv->cmd_lock, flags);
INIT_WORK(&priv->abort_work, qla_nvme_abort_work);
schedule_work(&priv->abort_work);
}
static void qla_nvme_ls_complete(struct work_struct *work)
{
struct nvme_private *priv =
container_of(work, struct nvme_private, ls_work);
struct nvmefc_ls_req *fd = priv->fd;
fd->done(fd, priv->comp_status);
}
static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd)
@@ -240,8 +280,16 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
struct qla_hw_data *ha;
srb_t *sp;
if (!fcport || (fcport && fcport->deleted))
return rval;
vha = fcport->vha;
ha = vha->hw;
if (!ha->flags.fw_started)
return rval;
/* Alloc SRB structure */
sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
if (!sp)
@@ -250,11 +298,13 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
sp->type = SRB_NVME_LS;
sp->name = "nvme_ls";
sp->done = qla_nvme_sp_ls_done;
atomic_set(&sp->ref_count, 1);
nvme = &sp->u.iocb_cmd;
sp->put_fn = qla_nvme_release_ls_cmd_kref;
sp->priv = (void *)priv;
priv->sp = sp;
kref_init(&sp->cmd_kref);
spin_lock_init(&priv->cmd_lock);
nvme = &sp->u.iocb_cmd;
priv->fd = fd;
INIT_WORK(&priv->ls_work, qla_nvme_ls_complete);
nvme->u.nvme.desc = fd;
nvme->u.nvme.dir = 0;
nvme->u.nvme.dl = 0;
@@ -271,8 +321,10 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
if (rval != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x700e,
"qla2x00_start_sp failed = %d\n", rval);
atomic_dec(&sp->ref_count);
wake_up(&sp->nvme_ls_waitq);
sp->priv = NULL;
priv->sp = NULL;
qla2x00_rel_sp(sp);
return rval;
}
@@ -284,6 +336,18 @@ static void qla_nvme_fcp_abort(struct nvme_fc_local_port *lport,
struct nvmefc_fcp_req *fd)
{
struct nvme_private *priv = fd->private;
unsigned long flags;
spin_lock_irqsave(&priv->cmd_lock, flags);
if (!priv->sp) {
spin_unlock_irqrestore(&priv->cmd_lock, flags);
return;
}
if (!kref_get_unless_zero(&priv->sp->cmd_kref)) {
spin_unlock_irqrestore(&priv->cmd_lock, flags);
return;
}
spin_unlock_irqrestore(&priv->cmd_lock, flags);
INIT_WORK(&priv->abort_work, qla_nvme_abort_work);
schedule_work(&priv->abort_work);
@@ -487,11 +551,11 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
fcport = qla_rport->fcport;
vha = fcport->vha;
if (test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags))
if (!qpair || !fcport || (qpair && !qpair->fw_started) ||
(fcport && fcport->deleted))
return rval;
vha = fcport->vha;
/*
* If we know the dev is going away while the transport is still sending
* IO's return busy back to stall the IO Q. This happens when the
@@ -507,12 +571,15 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
if (!sp)
return -EBUSY;
atomic_set(&sp->ref_count, 1);
init_waitqueue_head(&sp->nvme_ls_waitq);
kref_init(&sp->cmd_kref);
spin_lock_init(&priv->cmd_lock);
sp->priv = (void *)priv;
priv->sp = sp;
sp->type = SRB_NVME_CMD;
sp->name = "nvme_cmd";
sp->done = qla_nvme_sp_done;
sp->put_fn = qla_nvme_release_fcp_cmd_kref;
sp->qpair = qpair;
sp->vha = vha;
nvme = &sp->u.iocb_cmd;
@@ -522,8 +589,10 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
if (rval != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x212d,
"qla2x00_start_nvme_mq failed = %d\n", rval);
atomic_dec(&sp->ref_count);
wake_up(&sp->nvme_ls_waitq);
sp->priv = NULL;
priv->sp = NULL;
qla2xxx_rel_qpair_sp(sp->qpair, sp);
}
return rval;
@@ -542,29 +611,16 @@ static void qla_nvme_localport_delete(struct nvme_fc_local_port *lport)
static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport)
{
fc_port_t *fcport;
struct qla_nvme_rport *qla_rport = rport->private, *trport;
struct qla_nvme_rport *qla_rport = rport->private;
fcport = qla_rport->fcport;
fcport->nvme_remote_port = NULL;
fcport->nvme_flag &= ~NVME_FLAG_REGISTERED;
list_for_each_entry_safe(qla_rport, trport,
&fcport->vha->nvme_rport_list, list) {
if (qla_rport->fcport == fcport) {
list_del(&qla_rport->list);
break;
}
}
complete(&fcport->nvme_del_done);
if (!test_bit(UNLOADING, &fcport->vha->dpc_flags)) {
INIT_WORK(&fcport->free_work, qlt_free_session_done);
schedule_work(&fcport->free_work);
}
fcport->nvme_flag &= ~NVME_FLAG_DELETING;
ql_log(ql_log_info, fcport->vha, 0x2110,
"remoteport_delete of %p completed.\n", fcport);
"remoteport_delete of %p %8phN completed.\n",
fcport, fcport->port_name);
complete(&fcport->nvme_del_done);
}
static struct nvme_fc_port_template qla_nvme_fc_transport = {
@@ -586,35 +642,25 @@ static struct nvme_fc_port_template qla_nvme_fc_transport = {
.fcprqst_priv_sz = sizeof(struct nvme_private),
};
static void qla_nvme_unregister_remote_port(struct work_struct *work)
void qla_nvme_unregister_remote_port(struct fc_port *fcport)
{
struct fc_port *fcport = container_of(work, struct fc_port,
nvme_del_work);
struct qla_nvme_rport *qla_rport, *trport;
int ret;
if (!IS_ENABLED(CONFIG_NVME_FC))
return;
ql_log(ql_log_warn, NULL, 0x2112,
"%s: unregister remoteport on %p\n",__func__, fcport);
"%s: unregister remoteport on %p %8phN\n",
__func__, fcport, fcport->port_name);
list_for_each_entry_safe(qla_rport, trport,
&fcport->vha->nvme_rport_list, list) {
if (qla_rport->fcport == fcport) {
ql_log(ql_log_info, fcport->vha, 0x2113,
"%s: fcport=%p\n", __func__, fcport);
nvme_fc_set_remoteport_devloss
(fcport->nvme_remote_port, 0);
init_completion(&fcport->nvme_del_done);
if (nvme_fc_unregister_remoteport
(fcport->nvme_remote_port))
ql_log(ql_log_info, fcport->vha, 0x2114,
"%s: Failed to unregister nvme_remote_port\n",
__func__);
wait_for_completion(&fcport->nvme_del_done);
break;
}
}
nvme_fc_set_remoteport_devloss(fcport->nvme_remote_port, 0);
init_completion(&fcport->nvme_del_done);
ret = nvme_fc_unregister_remoteport(fcport->nvme_remote_port);
if (ret)
ql_log(ql_log_info, fcport->vha, 0x2114,
"%s: Failed to unregister nvme_remote_port (%d)\n",
__func__, ret);
wait_for_completion(&fcport->nvme_del_done);
}
void qla_nvme_delete(struct scsi_qla_host *vha)
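The qla2xxx NVMe hunks above replace the bare atomic_t ref_count with a struct kref plus per-type release callbacks (qla_nvme_release_fcp_cmd_kref() / qla_nvme_release_ls_cmd_kref()), and the abort paths only pin a command via kref_get_unless_zero() under priv->cmd_lock, so a command already on its way to being freed can no longer be resurrected. A userspace model of the pattern, with a simplified kref built on C11 atomics (struct cmd and the function bodies are illustrative, not the driver's):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct kref { atomic_int refcount; };

static void kref_init(struct kref *k) { atomic_store(&k->refcount, 1); }

/* only succeeds while somebody else still holds a reference */
static bool kref_get_unless_zero(struct kref *k)
{
	int v = atomic_load(&k->refcount);

	while (v != 0)
		if (atomic_compare_exchange_weak(&k->refcount, &v, v + 1))
			return true;
	return false;
}

/* the last put, wherever it happens, runs the release callback */
static void kref_put(struct kref *k, void (*release)(struct kref *k))
{
	if (atomic_fetch_sub(&k->refcount, 1) == 1)
		release(k);
}

struct cmd {
	struct kref cmd_kref;		/* mirrors srb->cmd_kref above */
	void (*put_fn)(struct kref *);	/* mirrors srb->put_fn above */
};

static void cmd_release(struct kref *k)
{
	/* the kernel uses container_of(); kref is the first member here */
	free((struct cmd *)k);
	puts("released by last put");
}

int main(void)
{
	struct cmd *c = malloc(sizeof(*c));

	kref_init(&c->cmd_kref);			/* submit path owns one ref */
	c->put_fn = cmd_release;
	if (kref_get_unless_zero(&c->cmd_kref))		/* abort path pins the cmd */
		kref_put(&c->cmd_kref, c->put_fn);	/* abort work finished */
	kref_put(&c->cmd_kref, c->put_fn);		/* completion path: frees */
	return 0;
}
```

kref_get_unless_zero() is what lets the abort side lose the race safely: if the completion path already dropped the last reference, the abort simply sees zero and backs out, which is how qla_nvme_ls_abort() and qla_nvme_fcp_abort() above bail out under priv->cmd_lock.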


@@ -34,10 +34,10 @@ struct nvme_private {
struct work_struct ls_work;
struct work_struct abort_work;
int comp_status;
+spinlock_t cmd_lock;
};
struct qla_nvme_rport {
-struct list_head list;
struct fc_port *fcport;
};


@@ -4789,7 +4789,6 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
INIT_LIST_HEAD(&vha->plogi_ack_list);
INIT_LIST_HEAD(&vha->qp_list);
INIT_LIST_HEAD(&vha->gnl.fcports);
-INIT_LIST_HEAD(&vha->nvme_rport_list);
INIT_LIST_HEAD(&vha->gpnid_list);
INIT_WORK(&vha->iocb_work, qla2x00_iocb_work_fn);


@@ -1004,6 +1004,12 @@ void qlt_free_session_done(struct work_struct *work)
else
logout_started = true;
}
} /* if sess->logout_on_delete */
if (sess->nvme_flag & NVME_FLAG_REGISTERED &&
!(sess->nvme_flag & NVME_FLAG_DELETING)) {
sess->nvme_flag |= NVME_FLAG_DELETING;
qla_nvme_unregister_remote_port(sess);
}
}
@@ -1155,14 +1161,8 @@ void qlt_unreg_sess(struct fc_port *sess)
sess->last_rscn_gen = sess->rscn_gen;
sess->last_login_gen = sess->login_gen;
-if (sess->nvme_flag & NVME_FLAG_REGISTERED &&
-!(sess->nvme_flag & NVME_FLAG_DELETING)) {
-sess->nvme_flag |= NVME_FLAG_DELETING;
-schedule_work(&sess->nvme_del_work);
-} else {
-INIT_WORK(&sess->free_work, qlt_free_session_done);
-schedule_work(&sess->free_work);
-}
+INIT_WORK(&sess->free_work, qlt_free_session_done);
+schedule_work(&sess->free_work);
}
EXPORT_SYMBOL(qlt_unreg_sess);


@@ -86,15 +86,10 @@ unsigned int scsi_logging_level;
EXPORT_SYMBOL(scsi_logging_level);
#endif
-/* sd, scsi core and power management need to coordinate flushing async actions */
-ASYNC_DOMAIN(scsi_sd_probe_domain);
-EXPORT_SYMBOL(scsi_sd_probe_domain);
/*
-* Separate domain (from scsi_sd_probe_domain) to maximize the benefit of
-* asynchronous system resume operations. It is marked 'exclusive' to avoid
-* being included in the async_synchronize_full() that is invoked by
-* dpm_resume()
+* Domain for asynchronous system resume operations. It is marked 'exclusive'
+* to avoid being included in the async_synchronize_full() that is invoked by
+* dpm_resume().
*/
ASYNC_DOMAIN_EXCLUSIVE(scsi_sd_pm_domain);
EXPORT_SYMBOL(scsi_sd_pm_domain);
@@ -821,7 +816,6 @@ static void __exit exit_scsi(void)
scsi_exit_devinfo();
scsi_exit_procfs();
scsi_exit_queue();
-async_unregister_domain(&scsi_sd_probe_domain);
}
subsys_initcall(init_scsi);


@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0 */
struct request;
struct seq_file;


@@ -1055,7 +1055,7 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, unsigned char *cmnd,
struct scsi_device *sdev = scmd->device;
struct Scsi_Host *shost = sdev->host;
DECLARE_COMPLETION_ONSTACK(done);
-unsigned long timeleft = timeout;
+unsigned long timeleft = timeout, delay;
struct scsi_eh_save ses;
const unsigned long stall_for = msecs_to_jiffies(100);
int rtn;
@@ -1066,7 +1066,29 @@ retry:
scsi_log_send(scmd);
scmd->scsi_done = scsi_eh_done;
rtn = shost->hostt->queuecommand(shost, scmd);
/*
* Lock sdev->state_mutex to avoid that scsi_device_quiesce() can
* change the SCSI device state after we have examined it and before
* .queuecommand() is called.
*/
mutex_lock(&sdev->state_mutex);
while (sdev->sdev_state == SDEV_BLOCK && timeleft > 0) {
mutex_unlock(&sdev->state_mutex);
SCSI_LOG_ERROR_RECOVERY(5, sdev_printk(KERN_DEBUG, sdev,
"%s: state %d <> %d\n", __func__, sdev->sdev_state,
SDEV_BLOCK));
delay = min(timeleft, stall_for);
timeleft -= delay;
msleep(jiffies_to_msecs(delay));
mutex_lock(&sdev->state_mutex);
}
if (sdev->sdev_state != SDEV_BLOCK)
rtn = shost->hostt->queuecommand(shost, scmd);
else
rtn = SCSI_MLQUEUE_DEVICE_BUSY;
mutex_unlock(&sdev->state_mutex);
if (rtn) {
if (timeleft > stall_for) {
scsi_eh_restore_cmnd(scmd, &ses);
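The new loop in scsi_send_eh_cmnd() above waits out SDEV_BLOCK by sleeping in 100 ms slices, charging every slice against the command's own timeout budget and re-checking the device state under state_mutex each iteration; if the budget runs out while the device is still blocked, the command completes as SCSI_MLQUEUE_DEVICE_BUSY instead of being queued. A minimal model of that budgeted-polling shape (timings in milliseconds; the unblock threshold is fabricated):

```c
#include <stdbool.h>
#include <stdio.h>

#define STALL_FOR_MS 100

static bool device_blocked(unsigned int elapsed_ms)
{
	return elapsed_ms < 350;	/* pretend the device unblocks after 350 ms */
}

int main(void)
{
	unsigned int timeleft = 1000, elapsed = 0;

	while (device_blocked(elapsed) && timeleft > 0) {
		unsigned int delay =
			timeleft < STALL_FOR_MS ? timeleft : STALL_FOR_MS;

		timeleft -= delay;	/* charge the sleep to the budget */
		elapsed += delay;	/* stands in for msleep(delay) */
		printf("blocked, slept %u ms, %u ms left\n", delay, timeleft);
	}
	printf(device_blocked(elapsed) ? "still blocked -> report busy\n"
				       : "unblocked -> issue the command\n");
	return 0;
}
```

Bounding each sleep by min(timeleft, stall_for) guarantees the wait can never exceed the command timeout, which is why the removed "To do" comment in scsi_lib.c below became obsolete.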


@@ -2616,10 +2616,6 @@ EXPORT_SYMBOL_GPL(scsi_internal_device_block_nowait);
* a legal transition). When the device is in this state, command processing
* is paused until the device leaves the SDEV_BLOCK state. See also
* scsi_internal_device_unblock().
-*
-* To do: avoid that scsi_send_eh_cmnd() calls queuecommand() after
-* scsi_internal_device_block() has blocked a SCSI device and also
-* remove the rport mutex lock and unlock calls from srp_queuecommand().
*/
static int scsi_internal_device_block(struct scsi_device *sdev)
{


@@ -176,11 +176,7 @@ static int scsi_bus_resume_common(struct device *dev,
static int scsi_bus_prepare(struct device *dev)
{
-if (scsi_is_sdev_device(dev)) {
-/* sd probing uses async_schedule. Wait until it finishes. */
-async_synchronize_full_domain(&scsi_sd_probe_domain);
-} else if (scsi_is_host_device(dev)) {
+if (scsi_is_host_device(dev)) {
/* Wait until async scanning is finished */
scsi_complete_async_scans();
}
}


@@ -175,7 +175,6 @@ static inline void scsi_autopm_put_host(struct Scsi_Host *h) {}
#endif /* CONFIG_PM */
extern struct async_domain scsi_sd_pm_domain;
-extern struct async_domain scsi_sd_probe_domain;
/* scsi_dh.c */
#ifdef CONFIG_SCSI_DH

Some files were not shown because too many files have changed in this diff.