
Merge git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (204 commits)
  [SCSI] qla4xxx: export address/port of connection (fix udev disk names)
  [SCSI] ipr: Fix BUG on adapter dump timeout
  [SCSI] megaraid_sas: Fix instance access in megasas_reset_timer
  [SCSI] hpsa: change confusing message to be more clear
  [SCSI] iscsi class: fix vlan configuration
  [SCSI] qla4xxx: fix data alignment and use nl helpers
  [SCSI] iscsi class: fix link local mispelling
  [SCSI] iscsi class: Replace iscsi_get_next_target_id with IDA
  [SCSI] aacraid: use lower snprintf() limit
  [SCSI] lpfc 8.3.27: Change driver version to 8.3.27
  [SCSI] lpfc 8.3.27: T10 additions for SLI4
  [SCSI] lpfc 8.3.27: Fix queue allocation failure recovery
  [SCSI] lpfc 8.3.27: Change algorithm for getting physical port name
  [SCSI] lpfc 8.3.27: Changed worst case mailbox timeout
  [SCSI] lpfc 8.3.27: Miscellanous logic and interface fixes
  [SCSI] megaraid_sas: Changelog and version update
  [SCSI] megaraid_sas: Add driver workaround for PERC5/1068 kdump kernel panic
  [SCSI] megaraid_sas: Add multiple MSI-X vector/multiple reply queue support
  [SCSI] megaraid_sas: Add support for MegaRAID 9360/9380 12GB/s controllers
  [SCSI] megaraid_sas: Clear FUSION_IN_RESET before enabling interrupts
  ...
Linus Torvalds 2011-10-28 16:44:18 -07:00
commit ec7ae51753
162 changed files with 14151 additions and 3361 deletions

View File

@ -206,3 +206,16 @@ Description:
when a discarded area is read the discard_zeroes_data
parameter will be set to one. Otherwise it will be 0 and
the result of reading a discarded area is undefined.
What: /sys/block/<disk>/alias
Date: Aug 2011
Contact: Nao Nishijima <nao.nishijima.xt@hitachi.com>
Description:
A raw device name of a disk does not always point to the same disk
across boot-ups. Therefore, users have to use persistent device
names, which udev creates when the kernel finds a disk, instead of
the raw device name. However, the kernel does not show those
persistent names in its messages (e.g. dmesg).
This file can store an alias of the disk, which appears in kernel
messages once it is set. A disk can have an alias up to 255 bytes
long. Alphanumeric characters, "-" and "_" are allowed in the
alias name. This file is write-once.

View File

@ -28,6 +28,8 @@ LICENSE.FlashPoint
- Licence of the Flashpoint driver
LICENSE.qla2xxx
- License for QLogic Linux Fibre Channel HBA Driver firmware.
LICENSE.qla4xxx
- License for QLogic Linux iSCSI HBA Driver.
Mylex.txt
- info on driver for Mylex adapters
NinjaSCSI.txt

View File

@ -1,3 +1,18 @@
Release Date : Wed. Oct 5, 2011 17:00:00 PST -
(email-id:megaraidlinux@lsi.com)
Adam Radford
Current Version : 00.00.06.12-rc1
Old Version : 00.00.05.40-rc1
1. Continue booting immediately if FW in FAULT at driver load time.
2. Increase default cmds per lun to 256.
3. Fix mismatch in megasas_reset_fusion() mutex lock-unlock.
4. Remove some unnecessary code.
5. Clear state change interrupts for Fusion/Invader.
6. Clear FUSION_IN_RESET before enabling interrupts.
7. Add support for MegaRAID 9360/9380 12GB/s controllers.
8. Add multiple MSI-X vector/multiple reply queue support.
9. Add driver workaround for PERC5/1068 kdump kernel panic.
-------------------------------------------------------------------------------
Release Date : Tue. Jul 26, 2011 17:00:00 PST -
(email-id:megaraidlinux@lsi.com)
Adam Radford

View File

@ -0,0 +1,310 @@
Copyright (c) 2003-2011 QLogic Corporation
QLogic Linux iSCSI HBA Driver
This program includes a device driver for Linux 3.x.
You may modify and redistribute the device driver code under the
GNU General Public License (a copy of which is attached hereto as
Exhibit A) published by the Free Software Foundation (version 2).
REGARDLESS OF WHAT LICENSING MECHANISM IS USED OR APPLICABLE,
THIS PROGRAM IS PROVIDED BY QLOGIC CORPORATION "AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
USER ACKNOWLEDGES AND AGREES THAT USE OF THIS PROGRAM WILL NOT
CREATE OR GIVE GROUNDS FOR A LICENSE BY IMPLICATION, ESTOPPEL, OR
OTHERWISE IN ANY INTELLECTUAL PROPERTY RIGHTS (PATENT, COPYRIGHT,
TRADE SECRET, MASK WORK, OR OTHER PROPRIETARY RIGHT) EMBODIED IN
ANY OTHER QLOGIC HARDWARE OR SOFTWARE EITHER SOLELY OR IN
COMBINATION WITH THIS PROGRAM.
EXHIBIT A
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

View File

@ -0,0 +1,75 @@
Operating FCoE using bnx2fc
===========================
Broadcom FCoE offload through bnx2fc is a fully stateful hardware offload that
cooperates with all interfaces provided by the Linux ecosystem for FC/FCoE and
SCSI controllers. As such, FCoE functionality, once enabled, is largely
transparent. Devices discovered on the SAN will be registered and unregistered
automatically with the upper storage layers.
Although Broadcom's FCoE solution is fully offloaded, it does depend on the
state of the network interfaces to operate. As such, the network interface
(e.g. eth0) associated with the FCoE offload initiator must be 'up'. It is
recommended that the network interfaces be configured to be brought up
automatically at boot time.
Furthermore, the Broadcom FCoE offload solution creates VLAN interfaces to
support the VLANs that have been discovered for FCoE operation (e.g.
eth0.1001-fcoe). Do not delete or disable these interfaces, or FCoE operation
will be disrupted.
Driver Usage Model:
===================
1. Ensure that the fcoe-utils package is installed.
2. Configure the interfaces on which the bnx2fc driver has to operate.
Here are the steps to configure:
a. cd /etc/fcoe
b. Copy cfg-ethx to cfg-eth5 if FCoE has to be enabled on eth5.
c. Repeat this for all the interfaces where FCoE has to be enabled.
d. Edit all the cfg-eth files to set "no" for the DCB_REQUIRED** field and
"yes" for AUTO_VLAN (a sample cfg file is sketched at the end of this
document).
e. Other configuration parameters should be left at their defaults.
3. Ensure that "bnx2fc" is in the SUPPORTED_DRIVERS list in /etc/fcoe/config.
4. Start the fcoe service (service fcoe start). If Broadcom devices are present
in the system, the bnx2fc driver will automatically claim the interfaces, start
VLAN discovery, and log into the targets.
5. The "Symbolic Name" in 'fcoeadm -i' output shows whether bnx2fc has claimed
the interface.
Eg:
[root@bh2 ~]# fcoeadm -i
Description: NetXtreme II BCM57712 10 Gigabit Ethernet
Revision: 01
Manufacturer: Broadcom Corporation
Serial Number: 0010186FD558
Driver: bnx2x 1.70.00-0
Number of Ports: 2
Symbolic Name: bnx2fc v1.0.5 over eth5.4
OS Device Name: host11
Node Name: 0x10000010186FD559
Port Name: 0x20000010186FD559
FabricName: 0x2001000DECB3B681
Speed: 10 Gbit
Supported Speed: 10 Gbit
MaxFrameSize: 2048
FC-ID (Port ID): 0x0F0377
State: Online
6. Verify that VLAN discovery has been performed by running ifconfig and
noting that <INTERFACE>.<VLAN>-fcoe interfaces are created automatically.
Refer to the fcoeadm man page for more information on fcoeadm operations to
create/destroy interfaces or to display lun/target information.
NOTE:
====
** Broadcom FCoE capable devices implement a DCBX/LLDP client on-chip. Only one
LLDP client is allowed per interface. For proper operation, all host software
based DCBX/LLDP clients (e.g. lldpad) must be disabled. To disable lldpad on a
given interface, run the following command:
lldptool set-lldp -i <interface_name> adminStatus=disabled
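Sample cfg file (for step 2d above): a minimal /etc/fcoe/cfg-eth5 sketch,
assuming a recent fcoe-utils package; key names and defaults may vary by
version:
FCOE_ENABLE="yes"
DCB_REQUIRED="no"
AUTO_VLAN="yes"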

View File

@ -46,6 +46,8 @@ struct qdesfmt0 {
u32 : 16;
} __attribute__ ((packed));
#define QDR_AC_MULTI_BUFFER_ENABLE 0x01
/**
* struct qdr - queue description record (QDR)
* @qfmt: queue format
@ -256,6 +258,8 @@ struct slsb {
u8 val[QDIO_MAX_BUFFERS_PER_Q];
} __attribute__ ((packed, aligned(256)));
#define CHSC_AC2_MULTI_BUFFER_AVAILABLE 0x0080
#define CHSC_AC2_MULTI_BUFFER_ENABLED 0x0040
#define CHSC_AC2_DATA_DIV_AVAILABLE 0x0010
#define CHSC_AC2_DATA_DIV_ENABLED 0x0002
@ -357,6 +361,7 @@ typedef void qdio_handler_t(struct ccw_device *, unsigned int, int,
struct qdio_initialize {
struct ccw_device *cdev;
unsigned char q_format;
unsigned char qdr_ac;
unsigned char adapter_name[8];
unsigned int qib_param_field_format;
unsigned char *qib_param_field;

View File

@ -19,6 +19,7 @@
#include <linux/mutex.h>
#include <linux/idr.h>
#include <linux/log2.h>
#include <linux/ctype.h>
#include "blk.h"
@ -909,6 +910,74 @@ static int __init genhd_device_init(void)
subsys_initcall(genhd_device_init);
static ssize_t alias_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct gendisk *disk = dev_to_disk(dev);
ssize_t ret = 0;
if (disk->alias)
ret = snprintf(buf, ALIAS_LEN, "%s\n", disk->alias);
return ret;
}
static ssize_t alias_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct gendisk *disk = dev_to_disk(dev);
char *alias;
char *envp[] = { NULL, NULL };
unsigned char c;
int i;
ssize_t ret = count;
if (!count)
return -EINVAL;
if (count >= ALIAS_LEN) {
printk(KERN_ERR "alias: alias is too long\n");
return -EINVAL;
}
/* Validation check */
for (i = 0; i < count; i++) {
c = buf[i];
if (i == count - 1 && c == '\n')
break;
if (!isalnum(c) && c != '_' && c != '-') {
printk(KERN_ERR "alias: invalid alias\n");
return -EINVAL;
}
}
if (disk->alias) {
printk(KERN_INFO "alias: %s is already assigned (%s)\n",
disk->disk_name, disk->alias);
return -EINVAL;
}
alias = kasprintf(GFP_KERNEL, "%s", buf);
if (!alias)
return -ENOMEM;
if (alias[count - 1] == '\n')
alias[count - 1] = '\0';
envp[0] = kasprintf(GFP_KERNEL, "ALIAS=%s", alias);
if (!envp[0]) {
kfree(alias);
return -ENOMEM;
}
disk->alias = alias;
printk(KERN_INFO "alias: assigned %s to %s\n", alias, disk->disk_name);
kobject_uevent_env(&dev->kobj, KOBJ_ADD, envp);
kfree(envp[0]);
return ret;
}
static ssize_t disk_range_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@ -968,6 +1037,7 @@ static ssize_t disk_discard_alignment_show(struct device *dev,
return sprintf(buf, "%d\n", queue_discard_alignment(disk->queue));
}
static DEVICE_ATTR(alias, S_IRUGO|S_IWUSR, alias_show, alias_store);
static DEVICE_ATTR(range, S_IRUGO, disk_range_show, NULL);
static DEVICE_ATTR(ext_range, S_IRUGO, disk_ext_range_show, NULL);
static DEVICE_ATTR(removable, S_IRUGO, disk_removable_show, NULL);
@ -990,6 +1060,7 @@ static struct device_attribute dev_attr_fail_timeout =
#endif
static struct attribute *disk_attrs[] = {
&dev_attr_alias.attr,
&dev_attr_range.attr,
&dev_attr_ext_range.attr,
&dev_attr_removable.attr,

View File

@ -6713,6 +6713,7 @@ EXPORT_SYMBOL_GPL(ata_scsi_queuecmd);
EXPORT_SYMBOL_GPL(ata_scsi_slave_config);
EXPORT_SYMBOL_GPL(ata_scsi_slave_destroy);
EXPORT_SYMBOL_GPL(ata_scsi_change_queue_depth);
EXPORT_SYMBOL_GPL(__ata_change_queue_depth);
EXPORT_SYMBOL_GPL(sata_scr_valid);
EXPORT_SYMBOL_GPL(sata_scr_read);
EXPORT_SYMBOL_GPL(sata_scr_write);

View File

@ -1215,25 +1215,15 @@ void ata_scsi_slave_destroy(struct scsi_device *sdev)
}
/**
* ata_scsi_change_queue_depth - SCSI callback for queue depth config
* @sdev: SCSI device to configure queue depth for
* @queue_depth: new queue depth
* @reason: calling context
* __ata_change_queue_depth - helper for ata_scsi_change_queue_depth
*
* This is libata standard hostt->change_queue_depth callback.
* SCSI will call into this callback when user tries to set queue
* depth via sysfs.
* libsas and libata have different approaches for associating a sdev to
* its ata_port.
*
* LOCKING:
* SCSI layer (we don't care)
*
* RETURNS:
* Newly configured queue depth.
*/
int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth,
int reason)
int __ata_change_queue_depth(struct ata_port *ap, struct scsi_device *sdev,
int queue_depth, int reason)
{
struct ata_port *ap = ata_shost_to_port(sdev->host);
struct ata_device *dev;
unsigned long flags;
@ -1268,6 +1258,30 @@ int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth,
return queue_depth;
}
/**
* ata_scsi_change_queue_depth - SCSI callback for queue depth config
* @sdev: SCSI device to configure queue depth for
* @queue_depth: new queue depth
* @reason: calling context
*
* This is libata standard hostt->change_queue_depth callback.
* SCSI will call into this callback when user tries to set queue
* depth via sysfs.
*
* LOCKING:
* SCSI layer (we don't care)
*
* RETURNS:
* Newly configured queue depth.
*/
int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth,
int reason)
{
struct ata_port *ap = ata_shost_to_port(sdev->host);
return __ata_change_queue_depth(ap, sdev, queue_depth, reason);
}
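/* Usage sketch (hypothetical LLD): since this is the standard
 * hostt->change_queue_depth callback, a low-level driver wires it into
 * its SCSI host template so sysfs queue_depth writes reach libata:
 *
 *	static struct scsi_host_template my_sht = {
 *		.change_queue_depth	= ata_scsi_change_queue_depth,
 *	};
 */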
/**
* ata_scsi_start_stop_xlat - Translate SCSI START STOP UNIT command
* @qc: Storage for translated ATA taskfile

View File

@ -632,6 +632,59 @@ iscsi_iser_ep_disconnect(struct iscsi_endpoint *ep)
iser_conn_terminate(ib_conn);
}
static mode_t iser_attr_is_visible(int param_type, int param)
{
switch (param_type) {
case ISCSI_HOST_PARAM:
switch (param) {
case ISCSI_HOST_PARAM_NETDEV_NAME:
case ISCSI_HOST_PARAM_HWADDRESS:
case ISCSI_HOST_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
case ISCSI_PARAM:
switch (param) {
case ISCSI_PARAM_MAX_RECV_DLENGTH:
case ISCSI_PARAM_MAX_XMIT_DLENGTH:
case ISCSI_PARAM_HDRDGST_EN:
case ISCSI_PARAM_DATADGST_EN:
case ISCSI_PARAM_CONN_ADDRESS:
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_EXP_STATSN:
case ISCSI_PARAM_PERSISTENT_ADDRESS:
case ISCSI_PARAM_PERSISTENT_PORT:
case ISCSI_PARAM_PING_TMO:
case ISCSI_PARAM_RECV_TMO:
case ISCSI_PARAM_INITIAL_R2T_EN:
case ISCSI_PARAM_MAX_R2T:
case ISCSI_PARAM_IMM_DATA_EN:
case ISCSI_PARAM_FIRST_BURST:
case ISCSI_PARAM_MAX_BURST:
case ISCSI_PARAM_PDU_INORDER_EN:
case ISCSI_PARAM_DATASEQ_INORDER_EN:
case ISCSI_PARAM_TARGET_NAME:
case ISCSI_PARAM_TPGT:
case ISCSI_PARAM_USERNAME:
case ISCSI_PARAM_PASSWORD:
case ISCSI_PARAM_USERNAME_IN:
case ISCSI_PARAM_PASSWORD_IN:
case ISCSI_PARAM_FAST_ABORT:
case ISCSI_PARAM_ABORT_TMO:
case ISCSI_PARAM_LU_RESET_TMO:
case ISCSI_PARAM_TGT_RESET_TMO:
case ISCSI_PARAM_IFACE_NAME:
case ISCSI_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
}
return 0;
}
static struct scsi_host_template iscsi_iser_sht = {
.module = THIS_MODULE,
.name = "iSCSI Initiator over iSER, v." DRV_VER,
@ -653,32 +706,6 @@ static struct iscsi_transport iscsi_iser_transport = {
.owner = THIS_MODULE,
.name = "iser",
.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T,
.param_mask = ISCSI_MAX_RECV_DLENGTH |
ISCSI_MAX_XMIT_DLENGTH |
ISCSI_HDRDGST_EN |
ISCSI_DATADGST_EN |
ISCSI_INITIAL_R2T_EN |
ISCSI_MAX_R2T |
ISCSI_IMM_DATA_EN |
ISCSI_FIRST_BURST |
ISCSI_MAX_BURST |
ISCSI_PDU_INORDER_EN |
ISCSI_DATASEQ_INORDER_EN |
ISCSI_CONN_PORT |
ISCSI_CONN_ADDRESS |
ISCSI_EXP_STATSN |
ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
ISCSI_LU_RESET_TMO | ISCSI_TGT_RESET_TMO |
ISCSI_PING_TMO | ISCSI_RECV_TMO |
ISCSI_IFACE_NAME | ISCSI_INITIATOR_NAME,
.host_param_mask = ISCSI_HOST_HWADDRESS |
ISCSI_HOST_NETDEV_NAME |
ISCSI_HOST_INITIATOR_NAME,
/* session management */
.create_session = iscsi_iser_session_create,
.destroy_session = iscsi_iser_session_destroy,
@ -686,6 +713,7 @@ static struct iscsi_transport iscsi_iser_transport = {
.create_conn = iscsi_iser_conn_create,
.bind_conn = iscsi_iser_conn_bind,
.destroy_conn = iscsi_iser_conn_destroy,
.attr_is_visible = iser_attr_is_visible,
.set_param = iscsi_iser_set_param,
.get_conn_param = iscsi_conn_get_param,
.get_ep_param = iscsi_iser_get_ep_param,

View File

@ -63,6 +63,8 @@
#ifdef CONFIG_MTRR
#include <asm/mtrr.h>
#endif
#include <linux/kthread.h>
#include <scsi/scsi_host.h>
#include "mptbase.h"
#include "lsi/mpi_log_fc.h"
@ -323,6 +325,32 @@ mpt_is_discovery_complete(MPT_ADAPTER *ioc)
return rc;
}
/**
* mpt_remove_dead_ioc_func - kthread context to remove dead ioc
* @arg: input argument, used to derive ioc
*
* Returns 0 if the controller was removed from the PCI subsystem,
* -1 otherwise.
*/
static int mpt_remove_dead_ioc_func(void *arg)
{
MPT_ADAPTER *ioc = (MPT_ADAPTER *)arg;
struct pci_dev *pdev;
if (ioc == NULL)
return -1;
pdev = ioc->pcidev;
if (pdev == NULL)
return -1;
pci_remove_bus_device(pdev);
return 0;
}
/**
* mpt_fault_reset_work - work performed on workq after ioc fault
* @work: input argument, used to derive ioc
@ -336,12 +364,45 @@ mpt_fault_reset_work(struct work_struct *work)
u32 ioc_raw_state;
int rc;
unsigned long flags;
MPT_SCSI_HOST *hd;
struct task_struct *p;
if (ioc->ioc_reset_in_progress || !ioc->active)
goto out;
ioc_raw_state = mpt_GetIocState(ioc, 0);
if ((ioc_raw_state & MPI_IOC_STATE_MASK) == MPI_IOC_STATE_FAULT) {
if ((ioc_raw_state & MPI_IOC_STATE_MASK) == MPI_IOC_STATE_MASK) {
printk(MYIOC_s_INFO_FMT "%s: IOC is non-operational !!!!\n",
ioc->name, __func__);
/*
* Call mptscsih_flush_pending_cmds callback so that we
* flush all pending commands back to OS.
* This call is required to avoid a deadlock at the block layer.
* Dead IOC will fail to do diag reset, and this call is safe
* since dead ioc will never return any command back from HW.
*/
hd = shost_priv(ioc->sh);
ioc->schedule_dead_ioc_flush_running_cmds(hd);
/* Remove the dead host */
p = kthread_run(mpt_remove_dead_ioc_func, ioc,
"mpt_dead_ioc_%d", ioc->id);
if (IS_ERR(p)) {
printk(MYIOC_s_ERR_FMT
"%s: Running mpt_dead_ioc thread failed !\n",
ioc->name, __func__);
} else {
printk(MYIOC_s_WARN_FMT
"%s: Running mpt_dead_ioc thread success !\n",
ioc->name, __func__);
}
return; /* don't rearm timer */
}
if ((ioc_raw_state & MPI_IOC_STATE_MASK)
== MPI_IOC_STATE_FAULT) {
printk(MYIOC_s_WARN_FMT "IOC is in FAULT state (%04xh)!!!\n",
ioc->name, ioc_raw_state & MPI_DOORBELL_DATA_MASK);
printk(MYIOC_s_WARN_FMT "Issuing HardReset from %s!!\n",
@ -6413,8 +6474,19 @@ mpt_config(MPT_ADAPTER *ioc, CONFIGPARMS *pCfg)
pReq->Action, ioc->mptbase_cmds.status, timeleft));
if (ioc->mptbase_cmds.status & MPT_MGMT_STATUS_DID_IOCRESET)
goto out;
if (!timeleft)
if (!timeleft) {
spin_lock_irqsave(&ioc->taskmgmt_lock, flags);
if (ioc->ioc_reset_in_progress) {
spin_unlock_irqrestore(&ioc->taskmgmt_lock,
flags);
printk(MYIOC_s_INFO_FMT "%s: host reset in"
" progress, mpt_config timed out!!\n",
ioc->name, __func__);
return -EFAULT;
}
spin_unlock_irqrestore(&ioc->taskmgmt_lock, flags);
issue_hard_reset = 1;
}
goto out;
}
@ -7128,7 +7200,18 @@ mpt_HardResetHandler(MPT_ADAPTER *ioc, int sleepFlag)
spin_lock_irqsave(&ioc->taskmgmt_lock, flags);
if (ioc->ioc_reset_in_progress) {
spin_unlock_irqrestore(&ioc->taskmgmt_lock, flags);
return 0;
ioc->wait_on_reset_completion = 1;
do {
ssleep(1);
} while (ioc->ioc_reset_in_progress == 1);
ioc->wait_on_reset_completion = 0;
return ioc->reset_status;
}
if (ioc->wait_on_reset_completion) {
spin_unlock_irqrestore(&ioc->taskmgmt_lock, flags);
rc = 0;
time_count = jiffies;
goto exit;
}
ioc->ioc_reset_in_progress = 1;
if (ioc->alt_ioc)
@ -7165,6 +7248,7 @@ mpt_HardResetHandler(MPT_ADAPTER *ioc, int sleepFlag)
ioc->ioc_reset_in_progress = 0;
ioc->taskmgmt_quiesce_io = 0;
ioc->taskmgmt_in_progress = 0;
ioc->reset_status = rc;
if (ioc->alt_ioc) {
ioc->alt_ioc->ioc_reset_in_progress = 0;
ioc->alt_ioc->taskmgmt_quiesce_io = 0;
@ -7180,7 +7264,7 @@ mpt_HardResetHandler(MPT_ADAPTER *ioc, int sleepFlag)
ioc->alt_ioc, MPT_IOC_POST_RESET);
}
}
exit:
dtmprintk(ioc,
printk(MYIOC_s_DEBUG_FMT
"HardResetHandler: completed (%d seconds): %s\n", ioc->name,

View File

@ -76,8 +76,8 @@
#define COPYRIGHT "Copyright (c) 1999-2008 " MODULEAUTHOR
#endif
#define MPT_LINUX_VERSION_COMMON "3.04.19"
#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.04.19"
#define MPT_LINUX_VERSION_COMMON "3.04.20"
#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.04.20"
#define WHAT_MAGIC_STRING "@" "(" "#" ")"
#define show_mptmod_ver(s,ver) \
@ -554,10 +554,47 @@ struct mptfc_rport_info
u8 flags;
};
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
/*
* MPT_SCSI_HOST defines - Used by the IOCTL and the SCSI drivers
* Private to the driver.
*/
#define MPT_HOST_BUS_UNKNOWN (0xFF)
#define MPT_HOST_TOO_MANY_TM (0x05)
#define MPT_HOST_NVRAM_INVALID (0xFFFFFFFF)
#define MPT_HOST_NO_CHAIN (0xFFFFFFFF)
#define MPT_NVRAM_MASK_TIMEOUT (0x000000FF)
#define MPT_NVRAM_SYNC_MASK (0x0000FF00)
#define MPT_NVRAM_SYNC_SHIFT (8)
#define MPT_NVRAM_DISCONNECT_ENABLE (0x00010000)
#define MPT_NVRAM_ID_SCAN_ENABLE (0x00020000)
#define MPT_NVRAM_LUN_SCAN_ENABLE (0x00040000)
#define MPT_NVRAM_TAG_QUEUE_ENABLE (0x00080000)
#define MPT_NVRAM_WIDE_DISABLE (0x00100000)
#define MPT_NVRAM_BOOT_CHOICE (0x00200000)
typedef enum {
FC,
SPI,
SAS
} BUS_TYPE;
typedef struct _MPT_SCSI_HOST {
struct _MPT_ADAPTER *ioc;
ushort sel_timeout[MPT_MAX_FC_DEVICES];
char *info_kbuf;
long last_queue_full;
u16 spi_pending;
struct list_head target_reset_list;
} MPT_SCSI_HOST;
typedef void (*MPT_ADD_SGE)(void *pAddr, u32 flagslength, dma_addr_t dma_addr);
typedef void (*MPT_ADD_CHAIN)(void *pAddr, u8 next, u16 length,
dma_addr_t dma_addr);
typedef void (*MPT_SCHEDULE_TARGET_RESET)(void *ioc);
typedef void (*MPT_FLUSH_RUNNING_CMDS)(MPT_SCSI_HOST *hd);
/*
* Adapter Structure - pci_dev specific. Maximum: MPT_MAX_ADAPTERS
@ -716,7 +753,10 @@ typedef struct _MPT_ADAPTER
int taskmgmt_in_progress;
u8 taskmgmt_quiesce_io;
u8 ioc_reset_in_progress;
u8 reset_status;
u8 wait_on_reset_completion;
MPT_SCHEDULE_TARGET_RESET schedule_target_reset;
MPT_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds;
struct work_struct sas_persist_task;
struct work_struct fc_setup_reset_work;
@ -830,19 +870,6 @@ typedef struct _MPT_LOCAL_REPLY {
u32 pad;
} MPT_LOCAL_REPLY;
#define MPT_HOST_BUS_UNKNOWN (0xFF)
#define MPT_HOST_TOO_MANY_TM (0x05)
#define MPT_HOST_NVRAM_INVALID (0xFFFFFFFF)
#define MPT_HOST_NO_CHAIN (0xFFFFFFFF)
#define MPT_NVRAM_MASK_TIMEOUT (0x000000FF)
#define MPT_NVRAM_SYNC_MASK (0x0000FF00)
#define MPT_NVRAM_SYNC_SHIFT (8)
#define MPT_NVRAM_DISCONNECT_ENABLE (0x00010000)
#define MPT_NVRAM_ID_SCAN_ENABLE (0x00020000)
#define MPT_NVRAM_LUN_SCAN_ENABLE (0x00040000)
#define MPT_NVRAM_TAG_QUEUE_ENABLE (0x00080000)
#define MPT_NVRAM_WIDE_DISABLE (0x00100000)
#define MPT_NVRAM_BOOT_CHOICE (0x00200000)
/* The TM_STATE variable is used to provide strict single threading of TM
* requests as well as communicate TM error conditions.
@ -851,21 +878,6 @@ typedef struct _MPT_LOCAL_REPLY {
#define TM_STATE_IN_PROGRESS (1)
#define TM_STATE_ERROR (2)
typedef enum {
FC,
SPI,
SAS
} BUS_TYPE;
typedef struct _MPT_SCSI_HOST {
MPT_ADAPTER *ioc;
ushort sel_timeout[MPT_MAX_FC_DEVICES];
char *info_kbuf;
long last_queue_full;
u16 spi_pending;
struct list_head target_reset_list;
} MPT_SCSI_HOST;
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
/*
* More Dynamic Multi-Pathing stuff...

View File

@ -92,6 +92,11 @@ static int max_lun = MPTSAS_MAX_LUN;
module_param(max_lun, int, 0);
MODULE_PARM_DESC(max_lun, " max lun, default=16895 ");
static int mpt_loadtime_max_sectors = 8192;
module_param(mpt_loadtime_max_sectors, int, 0);
MODULE_PARM_DESC(mpt_loadtime_max_sectors,
" Maximum sector define for Host Bus Adaptor.Range 64 to 8192 default=8192");
static u8 mptsasDoneCtx = MPT_MAX_PROTOCOL_DRIVERS;
static u8 mptsasTaskCtx = MPT_MAX_PROTOCOL_DRIVERS;
static u8 mptsasInternalCtx = MPT_MAX_PROTOCOL_DRIVERS; /* Used only for internal commands */
@ -285,10 +290,11 @@ mptsas_add_fw_event(MPT_ADAPTER *ioc, struct fw_event_work *fw_event,
spin_lock_irqsave(&ioc->fw_event_lock, flags);
list_add_tail(&fw_event->list, &ioc->fw_event_list);
INIT_DELAYED_WORK(&fw_event->work, mptsas_firmware_event_work);
devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT "%s: add (fw_event=0x%p)\n",
ioc->name, __func__, fw_event));
queue_delayed_work(ioc->fw_event_q, &fw_event->work,
delay);
devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT "%s: add (fw_event=0x%p)"
"on cpuid %d\n", ioc->name, __func__,
fw_event, smp_processor_id()));
queue_delayed_work_on(smp_processor_id(), ioc->fw_event_q,
&fw_event->work, delay);
spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
}
@ -300,10 +306,11 @@ mptsas_requeue_fw_event(MPT_ADAPTER *ioc, struct fw_event_work *fw_event,
unsigned long flags;
spin_lock_irqsave(&ioc->fw_event_lock, flags);
devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT "%s: reschedule task "
"(fw_event=0x%p)\n", ioc->name, __func__, fw_event));
"(fw_event=0x%p)on cpuid %d\n", ioc->name, __func__,
fw_event, smp_processor_id()));
fw_event->retries++;
queue_delayed_work(ioc->fw_event_q, &fw_event->work,
msecs_to_jiffies(delay));
queue_delayed_work_on(smp_processor_id(), ioc->fw_event_q,
&fw_event->work, msecs_to_jiffies(delay));
spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
}
@ -1943,6 +1950,15 @@ static enum blk_eh_timer_return mptsas_eh_timed_out(struct scsi_cmnd *sc)
goto done;
}
/* If the IOC is in reset from an internal context,
* do not execute EH for the same IOC; the SML should reset the timer.
*/
if (ioc->ioc_reset_in_progress) {
dtmprintk(ioc, printk(MYIOC_s_WARN_FMT ": %s: ioc is in reset,"
"SML need to reset the timer (sc=%p)\n",
ioc->name, __func__, sc));
rc = BLK_EH_RESET_TIMER;
}
vdevice = sc->device->hostdata;
if (vdevice && vdevice->vtarget && (vdevice->vtarget->inDMD
|| vdevice->vtarget->deleted)) {
@ -5142,6 +5158,8 @@ mptsas_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ioc->TaskCtx = mptsasTaskCtx;
ioc->InternalCtx = mptsasInternalCtx;
ioc->schedule_target_reset = &mptsas_schedule_target_reset;
ioc->schedule_dead_ioc_flush_running_cmds =
&mptscsih_flush_running_cmds;
/* Added sanity check on readiness of the MPT adapter.
*/
if (ioc->last_state != MPI_IOC_STATE_OPERATIONAL) {
@ -5239,6 +5257,21 @@ mptsas_probe(struct pci_dev *pdev, const struct pci_device_id *id)
sh->sg_tablesize = numSGE;
}
if (mpt_loadtime_max_sectors) {
if (mpt_loadtime_max_sectors < 64 ||
mpt_loadtime_max_sectors > 8192) {
printk(MYIOC_s_INFO_FMT "Invalid value passed for"
"mpt_loadtime_max_sectors %d."
"Range from 64 to 8192\n", ioc->name,
mpt_loadtime_max_sectors);
}
mpt_loadtime_max_sectors &= 0xFFFFFFFE;
dprintk(ioc, printk(MYIOC_s_DEBUG_FMT
"Resetting max sector to %d from %d\n",
ioc->name, mpt_loadtime_max_sectors, sh->max_sectors));
sh->max_sectors = mpt_loadtime_max_sectors;
}
hd = shost_priv(sh);
hd->ioc = ioc;

View File

@ -830,7 +830,8 @@ mptscsih_io_done(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *mr)
if ((pScsiReq->CDB[0] == READ_6 && ((pScsiReq->CDB[1] & 0x02) == 0)) ||
pScsiReq->CDB[0] == READ_10 ||
pScsiReq->CDB[0] == READ_12 ||
pScsiReq->CDB[0] == READ_16 ||
(pScsiReq->CDB[0] == READ_16 &&
((pScsiReq->CDB[1] & 0x02) == 0)) ||
pScsiReq->CDB[0] == VERIFY ||
pScsiReq->CDB[0] == VERIFY_16) {
if (scsi_bufflen(sc) !=
@ -1024,7 +1025,7 @@ out:
*
* Must be called while new I/Os are being queued.
*/
static void
void
mptscsih_flush_running_cmds(MPT_SCSI_HOST *hd)
{
MPT_ADAPTER *ioc = hd->ioc;
@ -1055,6 +1056,7 @@ mptscsih_flush_running_cmds(MPT_SCSI_HOST *hd)
sc->scsi_done(sc);
}
}
EXPORT_SYMBOL(mptscsih_flush_running_cmds);
/*
* mptscsih_search_running_cmds - Delete any commands associated
@ -1629,7 +1631,13 @@ mptscsih_IssueTaskMgmt(MPT_SCSI_HOST *hd, u8 type, u8 channel, u8 id, int lun,
return 0;
}
if (ioc_raw_state & MPI_DOORBELL_ACTIVE) {
/* DOORBELL ACTIVE check is not required if
* MPI_IOCFACTS_CAPABILITY_HIGH_PRI_Q is supported.
*/
if (!((ioc->facts.IOCCapabilities & MPI_IOCFACTS_CAPABILITY_HIGH_PRI_Q)
&& (ioc->facts.MsgVersion >= MPI_VERSION_01_05)) &&
(ioc_raw_state & MPI_DOORBELL_ACTIVE)) {
printk(MYIOC_s_WARN_FMT
"TaskMgmt type=%x: ioc_state: "
"DOORBELL_ACTIVE (0x%x)!\n",
@ -1728,7 +1736,9 @@ mptscsih_IssueTaskMgmt(MPT_SCSI_HOST *hd, u8 type, u8 channel, u8 id, int lun,
printk(MYIOC_s_WARN_FMT
"Issuing Reset from %s!! doorbell=0x%08x\n",
ioc->name, __func__, mpt_GetIocState(ioc, 0));
retval = mpt_Soft_Hard_ResetHandler(ioc, CAN_SLEEP);
retval = (ioc->bus_type == SAS) ?
mpt_HardResetHandler(ioc, CAN_SLEEP) :
mpt_Soft_Hard_ResetHandler(ioc, CAN_SLEEP);
mpt_free_msg_frame(ioc, mf);
}

View File

@ -135,3 +135,4 @@ extern int mptscsih_is_phys_disk(MPT_ADAPTER *ioc, u8 channel, u8 id);
extern struct device_attribute *mptscsih_host_attrs[];
extern struct scsi_cmnd *mptscsih_get_scsi_lookup(MPT_ADAPTER *ioc, int i);
extern void mptscsih_taskmgmt_response_code(MPT_ADAPTER *ioc, u8 response_code);
extern void mptscsih_flush_running_cmds(MPT_SCSI_HOST *hd);

View File

@ -160,7 +160,8 @@ again:
DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
q->handler(q->irq_ptr->cdev,
QDIO_ERROR_ACTIVATE_CHECK_CONDITION,
0, -1, -1, q->irq_ptr->int_parm);
q->nr, q->first_to_kick, count,
q->irq_ptr->int_parm);
return 0;
}
return count - tmp_count;
@ -206,7 +207,8 @@ again:
DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
q->handler(q->irq_ptr->cdev,
QDIO_ERROR_ACTIVATE_CHECK_CONDITION,
0, -1, -1, q->irq_ptr->int_parm);
q->nr, q->first_to_kick, count,
q->irq_ptr->int_parm);
return 0;
}
WARN_ON(tmp_count);
@ -1070,6 +1072,7 @@ static void qdio_handle_activate_check(struct ccw_device *cdev,
{
struct qdio_irq *irq_ptr = cdev->private->qdio_data;
struct qdio_q *q;
int count;
DBF_ERROR("%4x ACT CHECK", irq_ptr->schid.sch_no);
DBF_ERROR("intp :%lx", intparm);
@ -1083,8 +1086,10 @@ static void qdio_handle_activate_check(struct ccw_device *cdev,
dump_stack();
goto no_handler;
}
count = sub_buf(q->first_to_check, q->first_to_kick);
q->handler(q->irq_ptr->cdev, QDIO_ERROR_ACTIVATE_CHECK_CONDITION,
0, -1, -1, irq_ptr->int_parm);
q->nr, q->first_to_kick, count, irq_ptr->int_parm);
no_handler:
qdio_set_state(irq_ptr, QDIO_IRQ_STATE_STOPPED);
}

View File

@ -381,6 +381,7 @@ static void setup_qdr(struct qdio_irq *irq_ptr,
int i;
irq_ptr->qdr->qfmt = qdio_init->q_format;
irq_ptr->qdr->ac = qdio_init->qdr_ac;
irq_ptr->qdr->iqdcnt = qdio_init->no_input_qs;
irq_ptr->qdr->oqdcnt = qdio_init->no_output_qs;
irq_ptr->qdr->iqdsz = sizeof(struct qdesfmt0) / 4; /* size in words */

View File

@ -163,6 +163,42 @@ void zfcp_dbf_hba_bit_err(char *tag, struct zfcp_fsf_req *req)
spin_unlock_irqrestore(&dbf->hba_lock, flags);
}
/**
* zfcp_dbf_hba_def_err - trace event for deferred error messages
* @adapter: pointer to struct zfcp_adapter
* @req_id: request id which caused the deferred error message
* @scount: number of sbals incl. the signaling sbal
* @pl: array of all involved sbals
*/
void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount,
void **pl)
{
struct zfcp_dbf *dbf = adapter->dbf;
struct zfcp_dbf_pay *payload = &dbf->pay_buf;
unsigned long flags;
u16 length;
if (!pl)
return;
spin_lock_irqsave(&dbf->pay_lock, flags);
memset(payload, 0, sizeof(*payload));
memcpy(payload->area, "def_err", 7);
payload->fsf_req_id = req_id;
payload->counter = 0;
length = min((u16)sizeof(struct qdio_buffer),
(u16)ZFCP_DBF_PAY_MAX_REC);
while ((char *)pl[payload->counter] && payload->counter < scount) {
memcpy(payload->data, (char *)pl[payload->counter], length);
debug_event(dbf->pay, 1, payload, zfcp_dbf_plen(length));
payload->counter++;
}
spin_unlock_irqrestore(&dbf->pay_lock, flags);
}
static void zfcp_dbf_set_common(struct zfcp_dbf_rec *rec,
struct zfcp_adapter *adapter,
struct zfcp_port *port,

View File

@ -72,6 +72,7 @@ struct zfcp_reqlist;
#define ZFCP_STATUS_COMMON_NOESC 0x00200000
/* adapter status */
#define ZFCP_STATUS_ADAPTER_MB_ACT 0x00000001
#define ZFCP_STATUS_ADAPTER_QDIOUP 0x00000002
#define ZFCP_STATUS_ADAPTER_SIOSL_ISSUED 0x00000004
#define ZFCP_STATUS_ADAPTER_XCONFIG_OK 0x00000008
@ -314,4 +315,10 @@ struct zfcp_fsf_req {
void (*handler)(struct zfcp_fsf_req *);
};
static inline
int zfcp_adapter_multi_buffer_active(struct zfcp_adapter *adapter)
{
return atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_MB_ACT;
}
#endif /* ZFCP_DEF_H */

View File

@ -53,6 +53,7 @@ extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_fsf_res(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_bit_err(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_berr(struct zfcp_dbf *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_def_err(struct zfcp_adapter *, u64, u16, void **);
extern void zfcp_dbf_san_req(char *, struct zfcp_fsf_req *, u32);
extern void zfcp_dbf_san_res(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_san_in_els(char *, struct zfcp_fsf_req *);

View File

@ -936,39 +936,47 @@ static int zfcp_fsf_setup_ct_els_sbals(struct zfcp_fsf_req *req,
struct scatterlist *sg_resp)
{
struct zfcp_adapter *adapter = req->adapter;
struct zfcp_qdio *qdio = adapter->qdio;
struct fsf_qtcb *qtcb = req->qtcb;
u32 feat = adapter->adapter_features;
int bytes;
if (!(feat & FSF_FEATURE_ELS_CT_CHAINED_SBALS)) {
if (!zfcp_qdio_sg_one_sbale(sg_req) ||
!zfcp_qdio_sg_one_sbale(sg_resp))
return -EOPNOTSUPP;
if (zfcp_adapter_multi_buffer_active(adapter)) {
if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_req))
return -EIO;
if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_resp))
return -EIO;
zfcp_fsf_setup_ct_els_unchained(adapter->qdio, &req->qdio_req,
sg_req, sg_resp);
zfcp_qdio_set_data_div(qdio, &req->qdio_req,
zfcp_qdio_sbale_count(sg_req));
zfcp_qdio_set_sbale_last(qdio, &req->qdio_req);
zfcp_qdio_set_scount(qdio, &req->qdio_req);
return 0;
}
/* use single, unchained SBAL if it can hold the request */
if (zfcp_qdio_sg_one_sbale(sg_req) && zfcp_qdio_sg_one_sbale(sg_resp)) {
zfcp_fsf_setup_ct_els_unchained(adapter->qdio, &req->qdio_req,
zfcp_fsf_setup_ct_els_unchained(qdio, &req->qdio_req,
sg_req, sg_resp);
return 0;
}
bytes = zfcp_qdio_sbals_from_sg(adapter->qdio, &req->qdio_req, sg_req);
if (bytes <= 0)
return -EIO;
zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
req->qtcb->bottom.support.req_buf_length = bytes;
zfcp_qdio_skip_to_last_sbale(&req->qdio_req);
if (!(feat & FSF_FEATURE_ELS_CT_CHAINED_SBALS))
return -EOPNOTSUPP;
bytes = zfcp_qdio_sbals_from_sg(adapter->qdio, &req->qdio_req,
sg_resp);
req->qtcb->bottom.support.resp_buf_length = bytes;
if (bytes <= 0)
if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_req))
return -EIO;
zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
qtcb->bottom.support.req_buf_length = zfcp_qdio_real_bytes(sg_req);
zfcp_qdio_set_sbale_last(qdio, &req->qdio_req);
zfcp_qdio_skip_to_last_sbale(qdio, &req->qdio_req);
if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_resp))
return -EIO;
qtcb->bottom.support.resp_buf_length = zfcp_qdio_real_bytes(sg_resp);
zfcp_qdio_set_sbale_last(qdio, &req->qdio_req);
return 0;
}
@ -1119,7 +1127,8 @@ int zfcp_fsf_send_els(struct zfcp_adapter *adapter, u32 d_id,
req->status |= ZFCP_STATUS_FSFREQ_CLEANUP;
zfcp_qdio_sbal_limit(qdio, &req->qdio_req, 2);
if (!zfcp_adapter_multi_buffer_active(adapter))
zfcp_qdio_sbal_limit(qdio, &req->qdio_req, 2);
ret = zfcp_fsf_setup_ct_els(req, els->req, els->resp, timeout);
@ -2162,7 +2171,7 @@ int zfcp_fsf_fcp_cmnd(struct scsi_cmnd *scsi_cmnd)
struct zfcp_fsf_req *req;
struct fcp_cmnd *fcp_cmnd;
u8 sbtype = SBAL_SFLAGS0_TYPE_READ;
int real_bytes, retval = -EIO, dix_bytes = 0;
int retval = -EIO;
struct scsi_device *sdev = scsi_cmnd->device;
struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
struct zfcp_adapter *adapter = zfcp_sdev->port->adapter;
@ -2207,7 +2216,8 @@ int zfcp_fsf_fcp_cmnd(struct scsi_cmnd *scsi_cmnd)
io->ref_tag_value = scsi_get_lba(scsi_cmnd) & 0xFFFFFFFF;
}
zfcp_fsf_set_data_dir(scsi_cmnd, &io->data_direction);
if (zfcp_fsf_set_data_dir(scsi_cmnd, &io->data_direction))
goto failed_scsi_cmnd;
fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd;
zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd, 0);
@ -2215,18 +2225,22 @@ int zfcp_fsf_fcp_cmnd(struct scsi_cmnd *scsi_cmnd)
if (scsi_prot_sg_count(scsi_cmnd)) {
zfcp_qdio_set_data_div(qdio, &req->qdio_req,
scsi_prot_sg_count(scsi_cmnd));
dix_bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
retval = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
scsi_prot_sglist(scsi_cmnd));
if (retval)
goto failed_scsi_cmnd;
io->prot_data_length = zfcp_qdio_real_bytes(
scsi_prot_sglist(scsi_cmnd));
io->prot_data_length = dix_bytes;
}
real_bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
scsi_sglist(scsi_cmnd));
if (unlikely(real_bytes < 0) || unlikely(dix_bytes < 0))
retval = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
scsi_sglist(scsi_cmnd));
if (unlikely(retval))
goto failed_scsi_cmnd;
zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
if (zfcp_adapter_multi_buffer_active(adapter))
zfcp_qdio_set_scount(qdio, &req->qdio_req);
retval = zfcp_fsf_req_send(req);
if (unlikely(retval))
@ -2328,7 +2342,7 @@ struct zfcp_fsf_req *zfcp_fsf_control_file(struct zfcp_adapter *adapter,
struct zfcp_qdio *qdio = adapter->qdio;
struct zfcp_fsf_req *req = NULL;
struct fsf_qtcb_bottom_support *bottom;
int retval = -EIO, bytes;
int retval = -EIO;
u8 direction;
if (!(adapter->adapter_features & FSF_FEATURE_CFDC))
@ -2361,13 +2375,17 @@ struct zfcp_fsf_req *zfcp_fsf_control_file(struct zfcp_adapter *adapter,
bottom->operation_subtype = FSF_CFDC_OPERATION_SUBTYPE;
bottom->option = fsf_cfdc->option;
bytes = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, fsf_cfdc->sg);
retval = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, fsf_cfdc->sg);
if (bytes != ZFCP_CFDC_MAX_SIZE) {
if (retval ||
(zfcp_qdio_real_bytes(fsf_cfdc->sg) != ZFCP_CFDC_MAX_SIZE)) {
zfcp_fsf_req_free(req);
retval = -EIO;
goto out;
}
zfcp_qdio_set_sbale_last(adapter->qdio, &req->qdio_req);
zfcp_qdio_set_sbale_last(qdio, &req->qdio_req);
if (zfcp_adapter_multi_buffer_active(adapter))
zfcp_qdio_set_scount(qdio, &req->qdio_req);
zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
retval = zfcp_fsf_req_send(req);

View File

@ -15,6 +15,10 @@
#define QBUFF_PER_PAGE (PAGE_SIZE / sizeof(struct qdio_buffer))
static bool enable_multibuffer;
module_param_named(datarouter, enable_multibuffer, bool, 0400);
MODULE_PARM_DESC(datarouter, "Enable hardware data router support");
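/* Usage sketch (assumption: zfcp loaded as a module; the 0400 perms
 * make the parameter settable only at load time):
 *	modprobe zfcp datarouter=1
 */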
static int zfcp_qdio_buffers_enqueue(struct qdio_buffer **sbal)
{
int pos;
@ -37,8 +41,11 @@ static void zfcp_qdio_handler_error(struct zfcp_qdio *qdio, char *id,
dev_warn(&adapter->ccw_device->dev, "A QDIO problem occurred\n");
if (qdio_err & QDIO_ERROR_SLSB_STATE)
if (qdio_err & QDIO_ERROR_SLSB_STATE) {
zfcp_qdio_siosl(adapter);
zfcp_erp_adapter_shutdown(adapter, 0, id);
return;
}
zfcp_erp_adapter_reopen(adapter,
ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED |
ZFCP_STATUS_COMMON_ERP_FAILED, id);
@ -93,9 +100,27 @@ static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
unsigned long parm)
{
struct zfcp_qdio *qdio = (struct zfcp_qdio *) parm;
int sbal_idx, sbal_no;
struct zfcp_adapter *adapter = qdio->adapter;
struct qdio_buffer_element *sbale;
int sbal_no, sbal_idx;
void *pl[ZFCP_QDIO_MAX_SBALS_PER_REQ + 1];
u64 req_id;
u8 scount;
if (unlikely(qdio_err)) {
memset(pl, 0, ZFCP_QDIO_MAX_SBALS_PER_REQ * sizeof(void *));
if (zfcp_adapter_multi_buffer_active(adapter)) {
sbale = qdio->res_q[idx]->element;
req_id = (u64) sbale->addr;
scount = sbale->scount + 1; /* incl. signaling SBAL */
for (sbal_no = 0; sbal_no < scount; sbal_no++) {
sbal_idx = (idx + sbal_no) %
QDIO_MAX_BUFFERS_PER_Q;
pl[sbal_no] = qdio->res_q[sbal_idx];
}
zfcp_dbf_hba_def_err(adapter, req_id, scount, pl);
}
zfcp_qdio_handler_error(qdio, "qdires1", qdio_err);
return;
}
@ -155,7 +180,7 @@ zfcp_qdio_sbal_chain(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
static struct qdio_buffer_element *
zfcp_qdio_sbale_next(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
{
if (q_req->sbale_curr == ZFCP_QDIO_LAST_SBALE_PER_SBAL)
if (q_req->sbale_curr == qdio->max_sbale_per_sbal - 1)
return zfcp_qdio_sbal_chain(qdio, q_req);
q_req->sbale_curr++;
return zfcp_qdio_sbale_curr(qdio, q_req);
@ -167,13 +192,12 @@ zfcp_qdio_sbale_next(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
* @q_req: pointer to struct zfcp_qdio_req
* @sg: scatter-gather list
* @max_sbals: upper bound for number of SBALs to be used
* Returns: number of bytes, or error (negativ)
* Returns: zero or -EINVAL on error
*/
int zfcp_qdio_sbals_from_sg(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req,
struct scatterlist *sg)
{
struct qdio_buffer_element *sbale;
int bytes = 0;
/* set storage-block type for this request */
sbale = zfcp_qdio_sbale_req(qdio, q_req);
@ -187,14 +211,10 @@ int zfcp_qdio_sbals_from_sg(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req,
q_req->sbal_number);
return -EINVAL;
}
sbale->addr = sg_virt(sg);
sbale->length = sg->length;
bytes += sg->length;
}
return bytes;
return 0;
}
static int zfcp_qdio_sbal_check(struct zfcp_qdio *qdio)
@ -283,6 +303,8 @@ static void zfcp_qdio_setup_init_data(struct qdio_initialize *id,
memcpy(id->adapter_name, dev_name(&id->cdev->dev), 8);
ASCEBC(id->adapter_name, 8);
id->qib_rflags = QIB_RFLAGS_ENABLE_DATA_DIV;
if (enable_multibuffer)
id->qdr_ac |= QDR_AC_MULTI_BUFFER_ENABLE;
id->no_input_qs = 1;
id->no_output_qs = 1;
id->input_handler = zfcp_qdio_int_resp;
@ -378,6 +400,17 @@ int zfcp_qdio_open(struct zfcp_qdio *qdio)
atomic_set_mask(ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED,
&qdio->adapter->status);
if (ssqd.qdioac2 & CHSC_AC2_MULTI_BUFFER_ENABLED) {
atomic_set_mask(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status);
qdio->max_sbale_per_sbal = QDIO_MAX_ELEMENTS_PER_BUFFER;
} else {
atomic_clear_mask(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status);
qdio->max_sbale_per_sbal = QDIO_MAX_ELEMENTS_PER_BUFFER - 1;
}
qdio->max_sbale_per_req =
ZFCP_QDIO_MAX_SBALS_PER_REQ * qdio->max_sbale_per_sbal
- 2;
if (qdio_activate(cdev))
goto failed_qdio;
@ -397,6 +430,11 @@ int zfcp_qdio_open(struct zfcp_qdio *qdio)
atomic_set(&qdio->req_q_free, QDIO_MAX_BUFFERS_PER_Q);
atomic_set_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status);
if (adapter->scsi_host) {
adapter->scsi_host->sg_tablesize = qdio->max_sbale_per_req;
adapter->scsi_host->max_sectors = qdio->max_sbale_per_req * 8;
}
return 0;
failed_qdio:

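For reference, a minimal sketch (not part of the patch) of the request sizing the hunk above introduces. It assumes QDIO_MAX_ELEMENTS_PER_BUFFER is 16, i.e. 16 SBALEs per SBAL; treat that value as an assumption, not something this diff states:

	/* Illustration only: how max_sbale_per_req is derived.
	 * Assumption: QDIO_MAX_ELEMENTS_PER_BUFFER == 16.
	 */
	#include <stdio.h>

	#define SBALS_PER_REQ		36	/* ZFCP_QDIO_MAX_SBALS_PER_REQ */
	#define ELEMENTS_PER_BUFFER	16	/* assumed QDIO value */

	static unsigned int sbale_per_req(int multibuffer_active)
	{
		/* without multibuffer mode the last SBALE of each SBAL is
		 * reserved (DMQ bug workaround), so one element is lost */
		unsigned int per_sbal = multibuffer_active ?
			ELEMENTS_PER_BUFFER : ELEMENTS_PER_BUFFER - 1;

		/* request ID and QTCB occupy two SBALEs of the first SBAL */
		return SBALS_PER_REQ * per_sbal - 2;
	}

	int main(void)
	{
		printf("multibuffer: %u SBALEs -> max_sectors %u\n",
		       sbale_per_req(1), sbale_per_req(1) * 8);
		printf("classic:     %u SBALEs -> max_sectors %u\n",
		       sbale_per_req(0), sbale_per_req(0) * 8);
		return 0;	/* prints 574/4592 and 538/4304 */
	}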
View File

@ -13,20 +13,9 @@
#define ZFCP_QDIO_SBALE_LEN PAGE_SIZE
/* DMQ bug workaround: don't use last SBALE */
#define ZFCP_QDIO_MAX_SBALES_PER_SBAL (QDIO_MAX_ELEMENTS_PER_BUFFER - 1)
/* index of last SBALE (with respect to DMQ bug workaround) */
#define ZFCP_QDIO_LAST_SBALE_PER_SBAL (ZFCP_QDIO_MAX_SBALES_PER_SBAL - 1)
/* Max SBALS for chaining */
#define ZFCP_QDIO_MAX_SBALS_PER_REQ 36
/* max. number of (data buffer) SBALEs in largest SBAL chain
* request ID + QTCB in SBALE 0 + 1 of first SBAL in chain */
#define ZFCP_QDIO_MAX_SBALES_PER_REQ \
(ZFCP_QDIO_MAX_SBALS_PER_REQ * ZFCP_QDIO_MAX_SBALES_PER_SBAL - 2)
/**
* struct zfcp_qdio - basic qdio data structure
* @res_q: response queue
@ -53,6 +42,8 @@ struct zfcp_qdio {
atomic_t req_q_full;
wait_queue_head_t req_q_wq;
struct zfcp_adapter *adapter;
u16 max_sbale_per_sbal;
u16 max_sbale_per_req;
};
/**
@ -155,7 +146,7 @@ void zfcp_qdio_fill_next(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req,
{
struct qdio_buffer_element *sbale;
BUG_ON(q_req->sbale_curr == ZFCP_QDIO_LAST_SBALE_PER_SBAL);
BUG_ON(q_req->sbale_curr == qdio->max_sbale_per_sbal - 1);
q_req->sbale_curr++;
sbale = zfcp_qdio_sbale_curr(qdio, q_req);
sbale->addr = data;
@ -195,9 +186,10 @@ int zfcp_qdio_sg_one_sbale(struct scatterlist *sg)
* @q_req: The current zfcp_qdio_req
*/
static inline
void zfcp_qdio_skip_to_last_sbale(struct zfcp_qdio_req *q_req)
void zfcp_qdio_skip_to_last_sbale(struct zfcp_qdio *qdio,
struct zfcp_qdio_req *q_req)
{
q_req->sbale_curr = ZFCP_QDIO_LAST_SBALE_PER_SBAL;
q_req->sbale_curr = qdio->max_sbale_per_sbal - 1;
}
/**
@ -228,8 +220,52 @@ void zfcp_qdio_set_data_div(struct zfcp_qdio *qdio,
{
struct qdio_buffer_element *sbale;
sbale = &qdio->req_q[q_req->sbal_first]->element[0];
sbale = qdio->req_q[q_req->sbal_first]->element;
sbale->length = count;
}
/**
* zfcp_qdio_sbale_count - count sbale used
* @sg: pointer to struct scatterlist
*/
static inline
unsigned int zfcp_qdio_sbale_count(struct scatterlist *sg)
{
unsigned int count = 0;
for (; sg; sg = sg_next(sg))
count++;
return count;
}
/**
* zfcp_qdio_real_bytes - count bytes used
* @sg: pointer to struct scatterlist
*/
static inline
unsigned int zfcp_qdio_real_bytes(struct scatterlist *sg)
{
unsigned int real_bytes = 0;
for (; sg; sg = sg_next(sg))
real_bytes += sg->length;
return real_bytes;
}
/**
* zfcp_qdio_set_scount - set SBAL count value
* @qdio: pointer to struct zfcp_qdio
* @q_req: The current zfcp_qdio_req
*/
static inline
void zfcp_qdio_set_scount(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
{
struct qdio_buffer_element *sbale;
sbale = qdio->req_q[q_req->sbal_first]->element;
sbale->scount = q_req->sbal_number - 1;
}
#endif /* ZFCP_QDIO_H */

View File

@ -24,11 +24,8 @@ module_param_named(queue_depth, default_depth, uint, 0600);
MODULE_PARM_DESC(queue_depth, "Default queue depth for new SCSI devices");
static bool enable_dif;
#ifdef CONFIG_ZFCP_DIF
module_param_named(dif, enable_dif, bool, 0600);
module_param_named(dif, enable_dif, bool, 0400);
MODULE_PARM_DESC(dif, "Enable DIF/DIX data integrity support");
#endif
static bool allow_lun_scan = 1;
module_param(allow_lun_scan, bool, 0600);
@ -309,8 +306,8 @@ static struct scsi_host_template zfcp_scsi_host_template = {
.proc_name = "zfcp",
.can_queue = 4096,
.this_id = -1,
.sg_tablesize = ZFCP_QDIO_MAX_SBALES_PER_REQ,
.max_sectors = (ZFCP_QDIO_MAX_SBALES_PER_REQ * 8),
.sg_tablesize = 1, /* adjusted later */
.max_sectors = 8, /* adjusted later */
.dma_boundary = ZFCP_QDIO_SBALE_LEN - 1,
.cmd_per_lun = 1,
.use_clustering = 1,
@ -668,9 +665,9 @@ void zfcp_scsi_set_prot(struct zfcp_adapter *adapter)
adapter->adapter_features & FSF_FEATURE_DIX_PROT_TCPIP) {
mask |= SHOST_DIX_TYPE1_PROTECTION;
scsi_host_set_guard(shost, SHOST_DIX_GUARD_IP);
shost->sg_prot_tablesize = ZFCP_QDIO_MAX_SBALES_PER_REQ / 2;
shost->sg_tablesize = ZFCP_QDIO_MAX_SBALES_PER_REQ / 2;
shost->max_sectors = ZFCP_QDIO_MAX_SBALES_PER_REQ * 8 / 2;
shost->sg_prot_tablesize = adapter->qdio->max_sbale_per_req / 2;
shost->sg_tablesize = adapter->qdio->max_sbale_per_req / 2;
shost->max_sectors = shost->sg_tablesize * 8;
}
scsi_host_set_prot(shost, mask);

View File

@ -309,6 +309,7 @@ config SCSI_FC_TGT_ATTRS
config SCSI_ISCSI_ATTRS
tristate "iSCSI Transport Attributes"
depends on SCSI && NET
select BLK_DEV_BSGLIB
help
If you wish to export transport-specific information about
each attached iSCSI device to sysfs, say Y.
@ -559,6 +560,15 @@ source "drivers/scsi/aic7xxx/Kconfig.aic79xx"
source "drivers/scsi/aic94xx/Kconfig"
source "drivers/scsi/mvsas/Kconfig"
config SCSI_MVUMI
tristate "Marvell UMI driver"
depends on SCSI && PCI
help
Module for the Marvell Universal Message Interface (UMI) driver.
To compile this driver as a module, choose M here: the
module will be called mvumi.
config SCSI_DPT_I2O
tristate "Adaptec I2O RAID support "
depends on SCSI && PCI && VIRT_TO_BUS
@ -1872,10 +1882,6 @@ config ZFCP
called zfcp. If you want to compile it as a module, say M here
and read <file:Documentation/kbuild/modules.txt>.
config ZFCP_DIF
tristate "T10 DIF/DIX support for the zfcp driver (EXPERIMENTAL)"
depends on ZFCP && EXPERIMENTAL
config SCSI_PMCRAID
tristate "PMC SIERRA Linux MaxRAID adapter support"
depends on PCI && SCSI && NET

View File

@ -134,6 +134,7 @@ obj-$(CONFIG_SCSI_IBMVFC) += ibmvscsi/
obj-$(CONFIG_SCSI_HPTIOP) += hptiop.o
obj-$(CONFIG_SCSI_STEX) += stex.o
obj-$(CONFIG_SCSI_MVSAS) += mvsas/
obj-$(CONFIG_SCSI_MVUMI) += mvumi.o
obj-$(CONFIG_PS3_ROM) += ps3rom.o
obj-$(CONFIG_SCSI_CXGB3_ISCSI) += libiscsi.o libiscsi_tcp.o cxgbi/
obj-$(CONFIG_SCSI_CXGB4_ISCSI) += libiscsi.o libiscsi_tcp.o cxgbi/

View File

@ -894,16 +894,17 @@ static ssize_t aac_show_serial_number(struct device *device,
int len = 0;
if (le32_to_cpu(dev->adapter_info.serial[0]) != 0xBAD0)
len = snprintf(buf, PAGE_SIZE, "%06X\n",
len = snprintf(buf, 16, "%06X\n",
le32_to_cpu(dev->adapter_info.serial[0]));
if (len &&
!memcmp(&dev->supplement_adapter_info.MfgPcbaSerialNo[
sizeof(dev->supplement_adapter_info.MfgPcbaSerialNo)-len],
buf, len-1))
len = snprintf(buf, PAGE_SIZE, "%.*s\n",
len = snprintf(buf, 16, "%.*s\n",
(int)sizeof(dev->supplement_adapter_info.MfgPcbaSerialNo),
dev->supplement_adapter_info.MfgPcbaSerialNo);
return len;
return min(len, 16);
}
static ssize_t aac_show_max_channel(struct device *device,

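The min() at the end matters because snprintf() returns the length it would have written, not the number of bytes that fit; a small stand-alone illustration:

	/* Illustration only: snprintf() reports the untruncated length. */
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char buf[16];
		int len = snprintf(buf, sizeof(buf), "%s\n",
				   "0123456789ABCDEFGHIJ");

		/* len is 21 although only 15 characters plus NUL fit,
		 * which is why the show routine above clamps its return
		 * value with min(len, 16) */
		printf("len=%d stored=%zu\n", len, strlen(buf));
		return 0;
	}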
View File

@ -906,6 +906,7 @@ int asd_control_phy(struct asd_sas_phy *phy, enum phy_func func, void *arg)
switch (func) {
case PHY_FUNC_CLEAR_ERROR_LOG:
case PHY_FUNC_GET_EVENTS:
return -ENOSYS;
case PHY_FUNC_SET_LINK_RATE:
rates = arg;

View File

@ -660,6 +660,7 @@ int beiscsi_cmd_mccq_create(struct beiscsi_hba *phba,
spin_lock(&phba->ctrl.mbox_lock);
ctrl = &phba->ctrl;
wrb = wrb_from_mbox(&ctrl->mbox_mem);
memset(wrb, 0, sizeof(*wrb));
req = embedded_payload(wrb);
ctxt = &req->context;
@ -868,3 +869,22 @@ error:
beiscsi_cmd_q_destroy(ctrl, NULL, QTYPE_SGL);
return status;
}
int beiscsi_cmd_reset_function(struct beiscsi_hba *phba)
{
struct be_ctrl_info *ctrl = &phba->ctrl;
struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
struct be_post_sgl_pages_req *req = embedded_payload(wrb);
int status;
spin_lock(&ctrl->mbox_lock);
req = embedded_payload(wrb);
be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
OPCODE_COMMON_FUNCTION_RESET, sizeof(*req));
status = be_mbox_notify_wait(phba);
spin_unlock(&ctrl->mbox_lock);
return status;
}

View File

@ -561,6 +561,8 @@ int be_cmd_iscsi_post_sgl_pages(struct be_ctrl_info *ctrl,
struct be_dma_mem *q_mem, u32 page_offset,
u32 num_pages);
int beiscsi_cmd_reset_function(struct beiscsi_hba *phba);
int be_cmd_wrbq_create(struct be_ctrl_info *ctrl, struct be_dma_mem *q_mem,
struct be_queue_info *wrbq);

View File

@ -177,9 +177,8 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
{
struct iscsi_conn *conn = cls_conn->dd_data;
struct beiscsi_conn *beiscsi_conn = conn->dd_data;
struct Scsi_Host *shost =
(struct Scsi_Host *)iscsi_session_to_shost(cls_session);
struct beiscsi_hba *phba = (struct beiscsi_hba *)iscsi_host_priv(shost);
struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
struct beiscsi_hba *phba = iscsi_host_priv(shost);
struct beiscsi_endpoint *beiscsi_ep;
struct iscsi_endpoint *ep;
@ -290,7 +289,7 @@ int beiscsi_set_param(struct iscsi_cls_conn *cls_conn,
int beiscsi_get_host_param(struct Scsi_Host *shost,
enum iscsi_host_param param, char *buf)
{
struct beiscsi_hba *phba = (struct beiscsi_hba *)iscsi_host_priv(shost);
struct beiscsi_hba *phba = iscsi_host_priv(shost);
int status = 0;
SE_DEBUG(DBG_LVL_8, "In beiscsi_get_host_param, param= %d\n", param);
@ -733,3 +732,56 @@ void beiscsi_ep_disconnect(struct iscsi_endpoint *ep)
beiscsi_unbind_conn_to_cid(phba, beiscsi_ep->ep_cid);
iscsi_destroy_endpoint(beiscsi_ep->openiscsi_ep);
}
mode_t be2iscsi_attr_is_visible(int param_type, int param)
{
switch (param_type) {
case ISCSI_HOST_PARAM:
switch (param) {
case ISCSI_HOST_PARAM_HWADDRESS:
case ISCSI_HOST_PARAM_IPADDRESS:
case ISCSI_HOST_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
case ISCSI_PARAM:
switch (param) {
case ISCSI_PARAM_MAX_RECV_DLENGTH:
case ISCSI_PARAM_MAX_XMIT_DLENGTH:
case ISCSI_PARAM_HDRDGST_EN:
case ISCSI_PARAM_DATADGST_EN:
case ISCSI_PARAM_CONN_ADDRESS:
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_EXP_STATSN:
case ISCSI_PARAM_PERSISTENT_ADDRESS:
case ISCSI_PARAM_PERSISTENT_PORT:
case ISCSI_PARAM_PING_TMO:
case ISCSI_PARAM_RECV_TMO:
case ISCSI_PARAM_INITIAL_R2T_EN:
case ISCSI_PARAM_MAX_R2T:
case ISCSI_PARAM_IMM_DATA_EN:
case ISCSI_PARAM_FIRST_BURST:
case ISCSI_PARAM_MAX_BURST:
case ISCSI_PARAM_PDU_INORDER_EN:
case ISCSI_PARAM_DATASEQ_INORDER_EN:
case ISCSI_PARAM_ERL:
case ISCSI_PARAM_TARGET_NAME:
case ISCSI_PARAM_TPGT:
case ISCSI_PARAM_USERNAME:
case ISCSI_PARAM_PASSWORD:
case ISCSI_PARAM_USERNAME_IN:
case ISCSI_PARAM_PASSWORD_IN:
case ISCSI_PARAM_FAST_ABORT:
case ISCSI_PARAM_ABORT_TMO:
case ISCSI_PARAM_LU_RESET_TMO:
case ISCSI_PARAM_IFACE_NAME:
case ISCSI_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
}
return 0;
}

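This hunk follows a pattern repeated across this series: the static param_mask bitmap is dropped and the transport answers a per-attribute visibility query instead. A hedged sketch of the consumer side; the helper and its callsite are assumptions, since the transport-class half is not shown in this diff:

	/* Illustration only: how a visibility callback is typically used.
	 * The helper name is hypothetical. */
	static mode_t session_attr_mode(struct iscsi_transport *t, int param)
	{
		/* 0 hides the sysfs attribute, S_IRUGO exposes it read-only */
		return t->attr_is_visible ?
			t->attr_is_visible(ISCSI_PARAM, param) : 0;
	}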
View File

@ -26,6 +26,8 @@
#define BE2_IPV4 0x1
#define BE2_IPV6 0x10
mode_t be2iscsi_attr_is_visible(int param_type, int param);
void beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn,
struct beiscsi_offload_params *params);

View File

@ -822,33 +822,47 @@ static int beiscsi_init_irqs(struct beiscsi_hba *phba)
struct hwi_controller *phwi_ctrlr;
struct hwi_context_memory *phwi_context;
int ret, msix_vec, i, j;
char desc[32];
phwi_ctrlr = phba->phwi_ctrlr;
phwi_context = phwi_ctrlr->phwi_ctxt;
if (phba->msix_enabled) {
for (i = 0; i < phba->num_cpus; i++) {
sprintf(desc, "beiscsi_msix_%04x", i);
phba->msi_name[i] = kzalloc(BEISCSI_MSI_NAME,
GFP_KERNEL);
if (!phba->msi_name[i]) {
ret = -ENOMEM;
goto free_msix_irqs;
}
sprintf(phba->msi_name[i], "beiscsi_%02x_%02x",
phba->shost->host_no, i);
msix_vec = phba->msix_entries[i].vector;
ret = request_irq(msix_vec, be_isr_msix, 0, desc,
ret = request_irq(msix_vec, be_isr_msix, 0,
phba->msi_name[i],
&phwi_context->be_eq[i]);
if (ret) {
shost_printk(KERN_ERR, phba->shost,
"beiscsi_init_irqs-Failed to"
"register msix for i = %d\n", i);
if (!i)
return ret;
kfree(phba->msi_name[i]);
goto free_msix_irqs;
}
}
phba->msi_name[i] = kzalloc(BEISCSI_MSI_NAME, GFP_KERNEL);
if (!phba->msi_name[i]) {
ret = -ENOMEM;
goto free_msix_irqs;
}
sprintf(phba->msi_name[i], "beiscsi_mcc_%02x",
phba->shost->host_no);
msix_vec = phba->msix_entries[i].vector;
ret = request_irq(msix_vec, be_isr_mcc, 0, "beiscsi_msix_mcc",
ret = request_irq(msix_vec, be_isr_mcc, 0, phba->msi_name[i],
&phwi_context->be_eq[i]);
if (ret) {
shost_printk(KERN_ERR, phba->shost, "beiscsi_init_irqs-"
"Failed to register beiscsi_msix_mcc\n");
i++;
kfree(phba->msi_name[i]);
goto free_msix_irqs;
}
@ -863,8 +877,11 @@ static int beiscsi_init_irqs(struct beiscsi_hba *phba)
}
return 0;
free_msix_irqs:
for (j = i - 1; j == 0; j++)
for (j = i - 1; j >= 0; j--) {
kfree(phba->msi_name[j]);
msix_vec = phba->msix_entries[j].vector;
free_irq(msix_vec, &phwi_context->be_eq[j]);
}
return ret;
}
@ -1106,7 +1123,12 @@ be_complete_io(struct beiscsi_conn *beiscsi_conn,
& SOL_STS_MASK) >> 8);
flags = ((psol->dw[offsetof(struct amap_sol_cqe, i_flags) / 32]
& SOL_FLAGS_MASK) >> 24) | 0x80;
if (!task->sc) {
if (io_task->scsi_cmnd)
scsi_dma_unmap(io_task->scsi_cmnd);
return;
}
task->sc->result = (DID_OK << 16) | status;
if (rsp != ISCSI_STATUS_CMD_COMPLETED) {
task->sc->result = DID_ERROR << 16;
@ -4027,11 +4049,11 @@ static int beiscsi_mtask(struct iscsi_task *task)
TGT_DM_CMD);
AMAP_SET_BITS(struct amap_iscsi_wrb, cmdsn_itt,
pwrb, 0);
AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 0);
AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 1);
} else {
AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb,
INI_RD_CMD);
AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 1);
AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 0);
}
hwi_write_buffer(pwrb, task);
break;
@ -4102,9 +4124,8 @@ static int beiscsi_task_xmit(struct iscsi_task *task)
return beiscsi_iotask(task, sg, num_sg, xferlen, writedir);
}
static void beiscsi_remove(struct pci_dev *pcidev)
static void beiscsi_quiesce(struct beiscsi_hba *phba)
{
struct beiscsi_hba *phba = NULL;
struct hwi_controller *phwi_ctrlr;
struct hwi_context_memory *phwi_context;
struct be_eq_obj *pbe_eq;
@ -4112,12 +4133,6 @@ static void beiscsi_remove(struct pci_dev *pcidev)
u8 *real_offset = 0;
u32 value = 0;
phba = (struct beiscsi_hba *)pci_get_drvdata(pcidev);
if (!phba) {
dev_err(&pcidev->dev, "beiscsi_remove called with no phba\n");
return;
}
phwi_ctrlr = phba->phwi_ctrlr;
phwi_context = phwi_ctrlr->phwi_ctxt;
hwi_disable_intr(phba);
@ -4125,6 +4140,7 @@ static void beiscsi_remove(struct pci_dev *pcidev)
for (i = 0; i <= phba->num_cpus; i++) {
msix_vec = phba->msix_entries[i].vector;
free_irq(msix_vec, &phwi_context->be_eq[i]);
kfree(phba->msi_name[i]);
}
} else
if (phba->pcidev->irq)
@ -4152,10 +4168,40 @@ static void beiscsi_remove(struct pci_dev *pcidev)
phba->ctrl.mbox_mem_alloced.size,
phba->ctrl.mbox_mem_alloced.va,
phba->ctrl.mbox_mem_alloced.dma);
}
static void beiscsi_remove(struct pci_dev *pcidev)
{
struct beiscsi_hba *phba = NULL;
phba = pci_get_drvdata(pcidev);
if (!phba) {
dev_err(&pcidev->dev, "beiscsi_remove called with no phba\n");
return;
}
beiscsi_quiesce(phba);
iscsi_boot_destroy_kset(phba->boot_kset);
iscsi_host_remove(phba->shost);
pci_dev_put(phba->pcidev);
iscsi_host_free(phba->shost);
pci_disable_device(pcidev);
}
static void beiscsi_shutdown(struct pci_dev *pcidev)
{
struct beiscsi_hba *phba = NULL;
phba = (struct beiscsi_hba *)pci_get_drvdata(pcidev);
if (!phba) {
dev_err(&pcidev->dev, "beiscsi_shutdown called with no phba\n");
return;
}
beiscsi_quiesce(phba);
pci_disable_device(pcidev);
}
static void beiscsi_msix_enable(struct beiscsi_hba *phba)
@ -4235,7 +4281,7 @@ static int __devinit beiscsi_dev_probe(struct pci_dev *pcidev,
gcrashmode++;
shost_printk(KERN_ERR, phba->shost,
"Loading Driver in crashdump mode\n");
ret = beiscsi_pci_soft_reset(phba);
ret = beiscsi_cmd_reset_function(phba);
if (ret) {
shost_printk(KERN_ERR, phba->shost,
"Reset Failed. Aborting Crashdump\n");
@ -4364,37 +4410,12 @@ struct iscsi_transport beiscsi_iscsi_transport = {
.name = DRV_NAME,
.caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_TEXT_NEGO |
CAP_MULTI_R2T | CAP_DATADGST | CAP_DATA_PATH_OFFLOAD,
.param_mask = ISCSI_MAX_RECV_DLENGTH |
ISCSI_MAX_XMIT_DLENGTH |
ISCSI_HDRDGST_EN |
ISCSI_DATADGST_EN |
ISCSI_INITIAL_R2T_EN |
ISCSI_MAX_R2T |
ISCSI_IMM_DATA_EN |
ISCSI_FIRST_BURST |
ISCSI_MAX_BURST |
ISCSI_PDU_INORDER_EN |
ISCSI_DATASEQ_INORDER_EN |
ISCSI_ERL |
ISCSI_CONN_PORT |
ISCSI_CONN_ADDRESS |
ISCSI_EXP_STATSN |
ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
ISCSI_LU_RESET_TMO |
ISCSI_PING_TMO | ISCSI_RECV_TMO |
ISCSI_IFACE_NAME | ISCSI_INITIATOR_NAME,
.host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
ISCSI_HOST_INITIATOR_NAME,
.create_session = beiscsi_session_create,
.destroy_session = beiscsi_session_destroy,
.create_conn = beiscsi_conn_create,
.bind_conn = beiscsi_conn_bind,
.destroy_conn = iscsi_conn_teardown,
.attr_is_visible = be2iscsi_attr_is_visible,
.set_param = beiscsi_set_param,
.get_conn_param = iscsi_conn_get_param,
.get_session_param = iscsi_session_get_param,
@ -4418,6 +4439,7 @@ static struct pci_driver beiscsi_pci_driver = {
.name = DRV_NAME,
.probe = beiscsi_dev_probe,
.remove = beiscsi_remove,
.shutdown = beiscsi_shutdown,
.id_table = beiscsi_pci_id_table
};

View File

@ -34,7 +34,7 @@
#include "be.h"
#define DRV_NAME "be2iscsi"
#define BUILD_STR "2.103.298.0"
#define BUILD_STR "4.1.239.0"
#define BE_NAME "ServerEngines BladeEngine2" \
"Linux iSCSI Driver version" BUILD_STR
#define DRV_DESC BE_NAME " " "Driver"
@ -162,6 +162,8 @@ do { \
#define PAGES_REQUIRED(x) \
((x < PAGE_SIZE) ? 1 : ((x + PAGE_SIZE - 1) / PAGE_SIZE))
#define BEISCSI_MSI_NAME 20 /* size of msi_name string */
enum be_mem_enum {
HWI_MEM_ADDN_CONTEXT,
HWI_MEM_WRB,
@ -287,6 +289,7 @@ struct beiscsi_hba {
unsigned int num_cpus;
unsigned int nxt_cqid;
struct msix_entry msix_entries[MAX_CPUS + 1];
char *msi_name[MAX_CPUS + 1];
bool msix_enabled;
struct be_mem_descriptor *init_mem;

View File

@ -62,7 +62,7 @@
#include "bnx2fc_constants.h"
#define BNX2FC_NAME "bnx2fc"
#define BNX2FC_VERSION "1.0.4"
#define BNX2FC_VERSION "1.0.8"
#define PFX "bnx2fc: "
@ -224,6 +224,7 @@ struct bnx2fc_interface {
struct fcoe_ctlr ctlr;
u8 vlan_enabled;
int vlan_id;
bool enabled;
};
#define bnx2fc_from_ctlr(fip) container_of(fip, struct bnx2fc_interface, ctlr)

View File

@ -391,18 +391,6 @@ void bnx2fc_rec_compl(struct bnx2fc_els_cb_arg *cb_arg)
BNX2FC_IO_DBG(rec_req, "rec_compl: orig xid = 0x%x", orig_io_req->xid);
tgt = orig_io_req->tgt;
if (test_bit(BNX2FC_FLAG_IO_COMPL, &orig_io_req->req_flags)) {
BNX2FC_IO_DBG(rec_req, "completed"
"orig_io - 0x%x\n",
orig_io_req->xid);
goto rec_compl_done;
}
if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &orig_io_req->req_flags)) {
BNX2FC_IO_DBG(rec_req, "abts in prog "
"orig_io - 0x%x\n",
orig_io_req->xid);
goto rec_compl_done;
}
/* Handle REC timeout case */
if (test_and_clear_bit(BNX2FC_FLAG_ELS_TIMEOUT, &rec_req->req_flags)) {
BNX2FC_IO_DBG(rec_req, "timed out, abort "
@ -433,6 +421,20 @@ void bnx2fc_rec_compl(struct bnx2fc_els_cb_arg *cb_arg)
}
goto rec_compl_done;
}
if (test_bit(BNX2FC_FLAG_IO_COMPL, &orig_io_req->req_flags)) {
BNX2FC_IO_DBG(rec_req, "completed"
"orig_io - 0x%x\n",
orig_io_req->xid);
goto rec_compl_done;
}
if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &orig_io_req->req_flags)) {
BNX2FC_IO_DBG(rec_req, "abts in prog "
"orig_io - 0x%x\n",
orig_io_req->xid);
goto rec_compl_done;
}
mp_req = &(rec_req->mp_req);
fc_hdr = &(mp_req->resp_fc_hdr);
resp_len = mp_req->resp_len;

View File

@ -22,7 +22,7 @@ DEFINE_PER_CPU(struct bnx2fc_percpu_s, bnx2fc_percpu);
#define DRV_MODULE_NAME "bnx2fc"
#define DRV_MODULE_VERSION BNX2FC_VERSION
#define DRV_MODULE_RELDATE "Jun 23, 2011"
#define DRV_MODULE_RELDATE "Oct 02, 2011"
static char version[] __devinitdata =
@ -56,6 +56,7 @@ static struct scsi_host_template bnx2fc_shost_template;
static struct fc_function_template bnx2fc_transport_function;
static struct fc_function_template bnx2fc_vport_xport_function;
static int bnx2fc_create(struct net_device *netdev, enum fip_state fip_mode);
static void __bnx2fc_destroy(struct bnx2fc_interface *interface);
static int bnx2fc_destroy(struct net_device *net_device);
static int bnx2fc_enable(struct net_device *netdev);
static int bnx2fc_disable(struct net_device *netdev);
@ -64,7 +65,6 @@ static void bnx2fc_recv_frame(struct sk_buff *skb);
static void bnx2fc_start_disc(struct bnx2fc_interface *interface);
static int bnx2fc_shost_config(struct fc_lport *lport, struct device *dev);
static int bnx2fc_net_config(struct fc_lport *lp);
static int bnx2fc_lport_config(struct fc_lport *lport);
static int bnx2fc_em_config(struct fc_lport *lport);
static int bnx2fc_bind_adapter_devices(struct bnx2fc_hba *hba);
@ -78,6 +78,7 @@ static void bnx2fc_destroy_work(struct work_struct *work);
static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device *phys_dev);
static struct bnx2fc_interface *bnx2fc_interface_lookup(struct net_device
*phys_dev);
static inline void bnx2fc_interface_put(struct bnx2fc_interface *interface);
static struct bnx2fc_hba *bnx2fc_find_hba_for_cnic(struct cnic_dev *cnic);
static int bnx2fc_fw_init(struct bnx2fc_hba *hba);
@ -98,6 +99,25 @@ static struct notifier_block bnx2fc_cpu_notifier = {
.notifier_call = bnx2fc_cpu_callback,
};
static inline struct net_device *bnx2fc_netdev(const struct fc_lport *lport)
{
return ((struct bnx2fc_interface *)
((struct fcoe_port *)lport_priv(lport))->priv)->netdev;
}
/**
* bnx2fc_get_lesb() - Fill the FCoE Link Error Status Block
* @lport: the local port
* @fc_lesb: the link error status block
*/
static void bnx2fc_get_lesb(struct fc_lport *lport,
struct fc_els_lesb *fc_lesb)
{
struct net_device *netdev = bnx2fc_netdev(lport);
__fcoe_get_lesb(lport, fc_lesb, netdev);
}
static void bnx2fc_clean_rx_queue(struct fc_lport *lp)
{
struct fcoe_percpu_s *bg;
@ -545,6 +565,14 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
break;
}
}
if (fh->fh_r_ctl == FC_RCTL_BA_ABTS) {
/* Drop incoming ABTS */
put_cpu();
kfree_skb(skb);
return;
}
if (le32_to_cpu(fr_crc(fp)) !=
~crc32(~0, skb->data, fr_len)) {
if (stats->InvalidCRCCount < 5)
@ -727,7 +755,7 @@ void bnx2fc_get_link_state(struct bnx2fc_hba *hba)
clear_bit(ADAPTER_STATE_LINK_DOWN, &hba->adapter_state);
}
static int bnx2fc_net_config(struct fc_lport *lport)
static int bnx2fc_net_config(struct fc_lport *lport, struct net_device *netdev)
{
struct bnx2fc_hba *hba;
struct bnx2fc_interface *interface;
@ -753,11 +781,16 @@ static int bnx2fc_net_config(struct fc_lport *lport)
bnx2fc_link_speed_update(lport);
if (!lport->vport) {
wwnn = fcoe_wwn_from_mac(interface->ctlr.ctl_src_addr, 1, 0);
if (fcoe_get_wwn(netdev, &wwnn, NETDEV_FCOE_WWNN))
wwnn = fcoe_wwn_from_mac(interface->ctlr.ctl_src_addr,
1, 0);
BNX2FC_HBA_DBG(lport, "WWNN = 0x%llx\n", wwnn);
fc_set_wwnn(lport, wwnn);
wwpn = fcoe_wwn_from_mac(interface->ctlr.ctl_src_addr, 2, 0);
if (fcoe_get_wwn(netdev, &wwpn, NETDEV_FCOE_WWPN))
wwpn = fcoe_wwn_from_mac(interface->ctlr.ctl_src_addr,
2, 0);
BNX2FC_HBA_DBG(lport, "WWPN = 0x%llx\n", wwpn);
fc_set_wwpn(lport, wwpn);
}
@ -769,8 +802,8 @@ static void bnx2fc_destroy_timer(unsigned long data)
{
struct bnx2fc_hba *hba = (struct bnx2fc_hba *)data;
BNX2FC_MISC_DBG("ERROR:bnx2fc_destroy_timer - "
"Destroy compl not received!!\n");
printk(KERN_ERR PFX "ERROR:bnx2fc_destroy_timer - "
"Destroy compl not received!!\n");
set_bit(BNX2FC_FLAG_DESTROY_CMPL, &hba->flags);
wake_up_interruptible(&hba->destroy_wait);
}
@ -783,7 +816,7 @@ static void bnx2fc_destroy_timer(unsigned long data)
* @vlan_id: vlan id - associated vlan id with this event
*
* Handles NETDEV_UP, NETDEV_DOWN, NETDEV_GOING_DOWN,NETDEV_CHANGE and
* NETDEV_CHANGE_MTU events
* NETDEV_CHANGE_MTU events. Handle NETDEV_UNREGISTER only for vlans.
*/
static void bnx2fc_indicate_netevent(void *context, unsigned long event,
u16 vlan_id)
@ -791,12 +824,11 @@ static void bnx2fc_indicate_netevent(void *context, unsigned long event,
struct bnx2fc_hba *hba = (struct bnx2fc_hba *)context;
struct fc_lport *lport;
struct fc_lport *vport;
struct bnx2fc_interface *interface;
struct bnx2fc_interface *interface, *tmp;
int wait_for_upload = 0;
u32 link_possible = 1;
/* Ignore vlans for now */
if (vlan_id != 0)
if (vlan_id != 0 && event != NETDEV_UNREGISTER)
return;
switch (event) {
@ -820,6 +852,18 @@ static void bnx2fc_indicate_netevent(void *context, unsigned long event,
case NETDEV_CHANGE:
break;
case NETDEV_UNREGISTER:
if (!vlan_id)
return;
mutex_lock(&bnx2fc_dev_lock);
list_for_each_entry_safe(interface, tmp, &if_list, list) {
if (interface->hba == hba &&
interface->vlan_id == (vlan_id & VLAN_VID_MASK))
__bnx2fc_destroy(interface);
}
mutex_unlock(&bnx2fc_dev_lock);
return;
default:
printk(KERN_ERR PFX "Unkonwn netevent %ld", event);
return;
@ -838,8 +882,15 @@ static void bnx2fc_indicate_netevent(void *context, unsigned long event,
bnx2fc_link_speed_update(lport);
if (link_possible && !bnx2fc_link_ok(lport)) {
printk(KERN_ERR "indicate_netevent: ctlr_link_up\n");
fcoe_ctlr_link_up(&interface->ctlr);
/* Reset max recv frame size to default */
fc_set_mfs(lport, BNX2FC_MFS);
/*
* ctlr link up will only be handled during
* enable to avoid sending discovery solicitation
* on a stale vlan
*/
if (interface->enabled)
fcoe_ctlr_link_up(&interface->ctlr);
} else if (fcoe_ctlr_link_down(&interface->ctlr)) {
mutex_lock(&lport->lp_mutex);
list_for_each_entry(vport, &lport->vports, list)
@ -995,6 +1046,17 @@ static int bnx2fc_vport_create(struct fc_vport *vport, bool disabled)
struct bnx2fc_interface *interface = port->priv;
struct net_device *netdev = interface->netdev;
struct fc_lport *vn_port;
int rc;
char buf[32];
rc = fcoe_validate_vport_create(vport);
if (rc) {
fcoe_wwn_to_str(vport->port_name, buf, sizeof(buf));
printk(KERN_ERR PFX "Failed to create vport, "
"WWPN (0x%s) already exists\n",
buf);
return rc;
}
if (!test_bit(BNX2FC_FLAG_FW_INIT_DONE, &interface->hba->flags)) {
printk(KERN_ERR PFX "vn ports cannot be created on"
@ -1024,16 +1086,46 @@ static int bnx2fc_vport_create(struct fc_vport *vport, bool disabled)
return 0;
}
static void bnx2fc_free_vport(struct bnx2fc_hba *hba, struct fc_lport *lport)
{
struct bnx2fc_lport *blport, *tmp;
spin_lock_bh(&hba->hba_lock);
list_for_each_entry_safe(blport, tmp, &hba->vports, list) {
if (blport->lport == lport) {
list_del(&blport->list);
kfree(blport);
}
}
spin_unlock_bh(&hba->hba_lock);
}
static int bnx2fc_vport_destroy(struct fc_vport *vport)
{
struct Scsi_Host *shost = vport_to_shost(vport);
struct fc_lport *n_port = shost_priv(shost);
struct fc_lport *vn_port = vport->dd_data;
struct fcoe_port *port = lport_priv(vn_port);
struct bnx2fc_interface *interface = port->priv;
struct fc_lport *v_port;
bool found = false;
mutex_lock(&n_port->lp_mutex);
list_for_each_entry(v_port, &n_port->vports, list)
if (v_port->vport == vport) {
found = true;
break;
}
if (!found) {
mutex_unlock(&n_port->lp_mutex);
return -ENOENT;
}
list_del(&vn_port->list);
mutex_unlock(&n_port->lp_mutex);
bnx2fc_free_vport(interface->hba, port->lport);
bnx2fc_port_shutdown(port->lport);
bnx2fc_interface_put(interface);
queue_work(bnx2fc_wq, &port->destroy_work);
return 0;
}
@ -1054,7 +1146,7 @@ static int bnx2fc_vport_disable(struct fc_vport *vport, bool disable)
}
static int bnx2fc_netdev_setup(struct bnx2fc_interface *interface)
static int bnx2fc_interface_setup(struct bnx2fc_interface *interface)
{
struct net_device *netdev = interface->netdev;
struct net_device *physdev = interface->hba->phys_dev;
@ -1252,7 +1344,7 @@ struct bnx2fc_interface *bnx2fc_interface_create(struct bnx2fc_hba *hba,
interface->ctlr.get_src_addr = bnx2fc_get_src_mac;
set_bit(BNX2FC_CTLR_INIT_DONE, &interface->if_flags);
rc = bnx2fc_netdev_setup(interface);
rc = bnx2fc_interface_setup(interface);
if (!rc)
return interface;
@ -1318,7 +1410,7 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
fc_set_wwpn(lport, vport->port_name);
}
/* Configure netdev and networking properties of the lport */
rc = bnx2fc_net_config(lport);
rc = bnx2fc_net_config(lport, interface->netdev);
if (rc) {
printk(KERN_ERR PFX "Error on bnx2fc_net_config\n");
goto lp_config_err;
@ -1372,7 +1464,7 @@ free_blport:
return NULL;
}
static void bnx2fc_netdev_cleanup(struct bnx2fc_interface *interface)
static void bnx2fc_net_cleanup(struct bnx2fc_interface *interface)
{
/* Dont listen for Ethernet packets anymore */
__dev_remove_pack(&interface->fcoe_packet_type);
@ -1380,10 +1472,11 @@ static void bnx2fc_netdev_cleanup(struct bnx2fc_interface *interface)
synchronize_net();
}
static void bnx2fc_if_destroy(struct fc_lport *lport, struct bnx2fc_hba *hba)
static void bnx2fc_interface_cleanup(struct bnx2fc_interface *interface)
{
struct fc_lport *lport = interface->ctlr.lp;
struct fcoe_port *port = lport_priv(lport);
struct bnx2fc_lport *blport, *tmp;
struct bnx2fc_hba *hba = interface->hba;
/* Stop the transmit retry timer */
del_timer_sync(&port->timer);
@ -1391,6 +1484,14 @@ static void bnx2fc_if_destroy(struct fc_lport *lport, struct bnx2fc_hba *hba)
/* Free existing transmit skbs */
fcoe_clean_pending_queue(lport);
bnx2fc_net_cleanup(interface);
bnx2fc_free_vport(hba, lport);
}
static void bnx2fc_if_destroy(struct fc_lport *lport)
{
/* Free queued packets for the receive thread */
bnx2fc_clean_rx_queue(lport);
@ -1407,19 +1508,22 @@ static void bnx2fc_if_destroy(struct fc_lport *lport, struct bnx2fc_hba *hba)
/* Free memory used by statistical counters */
fc_lport_free_stats(lport);
spin_lock_bh(&hba->hba_lock);
list_for_each_entry_safe(blport, tmp, &hba->vports, list) {
if (blport->lport == lport) {
list_del(&blport->list);
kfree(blport);
}
}
spin_unlock_bh(&hba->hba_lock);
/* Release Scsi_Host */
scsi_host_put(lport->host);
}
static void __bnx2fc_destroy(struct bnx2fc_interface *interface)
{
struct fc_lport *lport = interface->ctlr.lp;
struct fcoe_port *port = lport_priv(lport);
bnx2fc_interface_cleanup(interface);
bnx2fc_stop(interface);
list_del(&interface->list);
bnx2fc_interface_put(interface);
queue_work(bnx2fc_wq, &port->destroy_work);
}
/**
* bnx2fc_destroy - Destroy a bnx2fc FCoE interface
*
@ -1433,8 +1537,6 @@ static void bnx2fc_if_destroy(struct fc_lport *lport, struct bnx2fc_hba *hba)
static int bnx2fc_destroy(struct net_device *netdev)
{
struct bnx2fc_interface *interface = NULL;
struct bnx2fc_hba *hba;
struct fc_lport *lport;
int rc = 0;
rtnl_lock();
@ -1447,15 +1549,9 @@ static int bnx2fc_destroy(struct net_device *netdev)
goto netdev_err;
}
hba = interface->hba;
bnx2fc_netdev_cleanup(interface);
lport = interface->ctlr.lp;
bnx2fc_stop(interface);
list_del(&interface->list);
destroy_workqueue(interface->timer_work_queue);
bnx2fc_interface_put(interface);
bnx2fc_if_destroy(lport, hba);
__bnx2fc_destroy(interface);
netdev_err:
mutex_unlock(&bnx2fc_dev_lock);
@ -1467,22 +1563,13 @@ static void bnx2fc_destroy_work(struct work_struct *work)
{
struct fcoe_port *port;
struct fc_lport *lport;
struct bnx2fc_interface *interface;
struct bnx2fc_hba *hba;
port = container_of(work, struct fcoe_port, destroy_work);
lport = port->lport;
interface = port->priv;
hba = interface->hba;
BNX2FC_HBA_DBG(lport, "Entered bnx2fc_destroy_work\n");
bnx2fc_port_shutdown(lport);
rtnl_lock();
mutex_lock(&bnx2fc_dev_lock);
bnx2fc_if_destroy(lport, hba);
mutex_unlock(&bnx2fc_dev_lock);
rtnl_unlock();
bnx2fc_if_destroy(lport);
}
static void bnx2fc_unbind_adapter_devices(struct bnx2fc_hba *hba)
@ -1661,6 +1748,7 @@ static void bnx2fc_fw_destroy(struct bnx2fc_hba *hba)
wait_event_interruptible(hba->destroy_wait,
test_bit(BNX2FC_FLAG_DESTROY_CMPL,
&hba->flags));
clear_bit(BNX2FC_FLAG_DESTROY_CMPL, &hba->flags);
/* This should never happen */
if (signal_pending(current))
flush_signals(current);
@ -1723,7 +1811,7 @@ static void bnx2fc_start_disc(struct bnx2fc_interface *interface)
lport = interface->ctlr.lp;
BNX2FC_HBA_DBG(lport, "calling fc_fabric_login\n");
if (!bnx2fc_link_ok(lport)) {
if (!bnx2fc_link_ok(lport) && interface->enabled) {
BNX2FC_HBA_DBG(lport, "ctlr_link_up\n");
fcoe_ctlr_link_up(&interface->ctlr);
fc_host_port_type(lport->host) = FC_PORTTYPE_NPORT;
@ -1737,6 +1825,11 @@ static void bnx2fc_start_disc(struct bnx2fc_interface *interface)
if (++wait_cnt > 12)
break;
}
/* Reset max receive frame size to default */
if (fc_set_mfs(lport, BNX2FC_MFS))
return;
fc_lport_init(lport);
fc_fabric_login(lport);
}
@ -1800,6 +1893,7 @@ static int bnx2fc_disable(struct net_device *netdev)
rc = -ENODEV;
printk(KERN_ERR PFX "bnx2fc_disable: interface or lport not found\n");
} else {
interface->enabled = false;
fcoe_ctlr_link_down(&interface->ctlr);
fcoe_clean_pending_queue(interface->ctlr.lp);
}
@ -1822,8 +1916,10 @@ static int bnx2fc_enable(struct net_device *netdev)
if (!interface || !interface->ctlr.lp) {
rc = -ENODEV;
printk(KERN_ERR PFX "bnx2fc_enable: interface or lport not found\n");
} else if (!bnx2fc_link_ok(interface->ctlr.lp))
} else if (!bnx2fc_link_ok(interface->ctlr.lp)) {
fcoe_ctlr_link_up(&interface->ctlr);
interface->enabled = true;
}
mutex_unlock(&bnx2fc_dev_lock);
rtnl_unlock();
@ -1923,7 +2019,6 @@ static int bnx2fc_create(struct net_device *netdev, enum fip_state fip_mode)
if (!lport) {
printk(KERN_ERR PFX "Failed to create interface (%s)\n",
netdev->name);
bnx2fc_netdev_cleanup(interface);
rc = -EINVAL;
goto if_create_err;
}
@ -1936,8 +2031,15 @@ static int bnx2fc_create(struct net_device *netdev, enum fip_state fip_mode)
/* Make this master N_port */
interface->ctlr.lp = lport;
if (!bnx2fc_link_ok(lport)) {
fcoe_ctlr_link_up(&interface->ctlr);
fc_host_port_type(lport->host) = FC_PORTTYPE_NPORT;
set_bit(ADAPTER_STATE_READY, &interface->hba->adapter_state);
}
BNX2FC_HBA_DBG(lport, "create: START DISC\n");
bnx2fc_start_disc(interface);
interface->enabled = true;
/*
* Release from kref_init in bnx2fc_interface_setup, on success
* lport should be holding a reference taken in bnx2fc_if_create
@ -1951,6 +2053,7 @@ static int bnx2fc_create(struct net_device *netdev, enum fip_state fip_mode)
if_create_err:
destroy_workqueue(interface->timer_work_queue);
ifput_err:
bnx2fc_net_cleanup(interface);
bnx2fc_interface_put(interface);
netdev_err:
module_put(THIS_MODULE);
@ -2017,7 +2120,6 @@ static void bnx2fc_ulp_exit(struct cnic_dev *dev)
{
struct bnx2fc_hba *hba;
struct bnx2fc_interface *interface, *tmp;
struct fc_lport *lport;
BNX2FC_MISC_DBG("Entered bnx2fc_ulp_exit\n");
@ -2039,18 +2141,10 @@ static void bnx2fc_ulp_exit(struct cnic_dev *dev)
list_del_init(&hba->list);
adapter_count--;
list_for_each_entry_safe(interface, tmp, &if_list, list) {
list_for_each_entry_safe(interface, tmp, &if_list, list)
/* destroy not called yet, move to quiesced list */
if (interface->hba == hba) {
bnx2fc_netdev_cleanup(interface);
bnx2fc_stop(interface);
list_del(&interface->list);
lport = interface->ctlr.lp;
bnx2fc_interface_put(interface);
bnx2fc_if_destroy(lport, hba);
}
}
if (interface->hba == hba)
__bnx2fc_destroy(interface);
mutex_unlock(&bnx2fc_dev_lock);
bnx2fc_ulp_stop(hba);
@ -2119,7 +2213,7 @@ static void bnx2fc_percpu_thread_create(unsigned int cpu)
(void *)p,
"bnx2fc_thread/%d", cpu);
/* bind thread to the cpu */
if (likely(!IS_ERR(p->iothread))) {
if (likely(!IS_ERR(thread))) {
kthread_bind(thread, cpu);
p->iothread = thread;
wake_up_process(thread);
@ -2131,7 +2225,6 @@ static void bnx2fc_percpu_thread_destroy(unsigned int cpu)
struct bnx2fc_percpu_s *p;
struct task_struct *thread;
struct bnx2fc_work *work, *tmp;
LIST_HEAD(work_list);
BNX2FC_MISC_DBG("destroying io thread for CPU %d\n", cpu);
@ -2143,7 +2236,7 @@ static void bnx2fc_percpu_thread_destroy(unsigned int cpu)
/* Free all work in the list */
list_for_each_entry_safe(work, tmp, &work_list, list) {
list_for_each_entry_safe(work, tmp, &p->work_list, list) {
list_del_init(&work->list);
bnx2fc_process_cq_compl(work->tgt, work->wqe);
kfree(work);
@ -2376,6 +2469,7 @@ static struct fc_function_template bnx2fc_transport_function = {
.vport_create = bnx2fc_vport_create,
.vport_delete = bnx2fc_vport_destroy,
.vport_disable = bnx2fc_vport_disable,
.bsg_request = fc_lport_bsg_request,
};
static struct fc_function_template bnx2fc_vport_xport_function = {
@ -2409,6 +2503,7 @@ static struct fc_function_template bnx2fc_vport_xport_function = {
.get_fc_host_stats = fc_get_host_stats,
.issue_fc_host_lip = bnx2fc_fcoe_reset,
.terminate_rport_io = fc_rport_terminate_io,
.bsg_request = fc_lport_bsg_request,
};
/**
@ -2438,6 +2533,7 @@ static struct libfc_function_template bnx2fc_libfc_fcn_templ = {
.elsct_send = bnx2fc_elsct_send,
.fcp_abort_io = bnx2fc_abort_io,
.fcp_cleanup = bnx2fc_cleanup,
.get_lesb = bnx2fc_get_lesb,
.rport_event_callback = bnx2fc_rport_event_handler,
};

View File

@ -1009,6 +1009,7 @@ int bnx2fc_process_new_cqes(struct bnx2fc_rport *tgt)
u32 cq_cons;
struct fcoe_cqe *cqe;
u32 num_free_sqes = 0;
u32 num_cqes = 0;
u16 wqe;
/*
@ -1058,10 +1059,11 @@ unlock:
wake_up_process(fps->iothread);
else
bnx2fc_process_cq_compl(tgt, wqe);
num_free_sqes++;
}
cqe++;
tgt->cq_cons_idx++;
num_free_sqes++;
num_cqes++;
if (tgt->cq_cons_idx == BNX2FC_CQ_WQES_MAX) {
tgt->cq_cons_idx = 0;
@ -1070,8 +1072,10 @@ unlock:
1 - tgt->cq_curr_toggle_bit;
}
}
if (num_free_sqes) {
bnx2fc_arm_cq(tgt);
if (num_cqes) {
/* Arm CQ only if doorbell is mapped */
if (tgt->ctx_base)
bnx2fc_arm_cq(tgt);
atomic_add(num_free_sqes, &tgt->free_sqes);
}
spin_unlock_bh(&tgt->cq_lock);
@ -1739,11 +1743,13 @@ void bnx2fc_init_task(struct bnx2fc_cmd *io_req,
/* Init state to NORMAL */
task->txwr_rxrd.const_ctx.init_flags |= task_type <<
FCOE_TCE_TX_WR_RX_RD_CONST_TASK_TYPE_SHIFT;
if (dev_type == TYPE_TAPE)
if (dev_type == TYPE_TAPE) {
task->txwr_rxrd.const_ctx.init_flags |=
FCOE_TASK_DEV_TYPE_TAPE <<
FCOE_TCE_TX_WR_RX_RD_CONST_DEV_TYPE_SHIFT;
else
io_req->rec_retry = 0;
io_req->rec_retry = 0;
} else
task->txwr_rxrd.const_ctx.init_flags |=
FCOE_TASK_DEV_TYPE_DISK <<
FCOE_TCE_TX_WR_RX_RD_CONST_DEV_TYPE_SHIFT;

View File

@ -17,7 +17,7 @@
static int bnx2fc_split_bd(struct bnx2fc_cmd *io_req, u64 addr, int sg_len,
int bd_index);
static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req);
static void bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req);
static int bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req);
static void bnx2fc_unmap_sg_list(struct bnx2fc_cmd *io_req);
static void bnx2fc_free_mp_resc(struct bnx2fc_cmd *io_req);
static void bnx2fc_parse_fcp_rsp(struct bnx2fc_cmd *io_req,
@ -1251,7 +1251,6 @@ void bnx2fc_process_seq_cleanup_compl(struct bnx2fc_cmd *seq_clnp_req,
seq_clnp_req->xid);
goto free_cb_arg;
}
kref_get(&orig_io_req->refcount);
spin_unlock_bh(&tgt->tgt_lock);
rc = bnx2fc_send_srr(orig_io_req, offset, r_ctl);
@ -1569,6 +1568,8 @@ static int bnx2fc_split_bd(struct bnx2fc_cmd *io_req, u64 addr, int sg_len,
static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req)
{
struct bnx2fc_interface *interface = io_req->port->priv;
struct bnx2fc_hba *hba = interface->hba;
struct scsi_cmnd *sc = io_req->sc_cmd;
struct fcoe_bd_ctx *bd = io_req->bd_tbl->bd_tbl;
struct scatterlist *sg;
@ -1580,7 +1581,8 @@ static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req)
u64 addr;
int i;
sg_count = scsi_dma_map(sc);
sg_count = dma_map_sg(&hba->pcidev->dev, scsi_sglist(sc),
scsi_sg_count(sc), sc->sc_data_direction);
scsi_for_each_sg(sc, sg, sg_count, i) {
sg_len = sg_dma_len(sg);
addr = sg_dma_address(sg);
@ -1605,20 +1607,24 @@ static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req)
return bd_count;
}
static void bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req)
static int bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req)
{
struct scsi_cmnd *sc = io_req->sc_cmd;
struct fcoe_bd_ctx *bd = io_req->bd_tbl->bd_tbl;
int bd_count;
if (scsi_sg_count(sc))
if (scsi_sg_count(sc)) {
bd_count = bnx2fc_map_sg(io_req);
else {
if (bd_count == 0)
return -ENOMEM;
} else {
bd_count = 0;
bd[0].buf_addr_lo = bd[0].buf_addr_hi = 0;
bd[0].buf_len = bd[0].flags = 0;
}
io_req->bd_tbl->bd_valid = bd_count;
return 0;
}
static void bnx2fc_unmap_sg_list(struct bnx2fc_cmd *io_req)
@ -1790,12 +1796,6 @@ int bnx2fc_queuecommand(struct Scsi_Host *host,
tgt = (struct bnx2fc_rport *)&rp[1];
if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) {
if (test_bit(BNX2FC_FLAG_UPLD_REQ_COMPL, &tgt->flags)) {
sc_cmd->result = DID_NO_CONNECT << 16;
sc_cmd->scsi_done(sc_cmd);
return 0;
}
/*
* Session is not offloaded yet. Let SCSI-ml retry
* the command.
@ -1946,7 +1946,13 @@ int bnx2fc_post_io_req(struct bnx2fc_rport *tgt,
xid = io_req->xid;
/* Build buffer descriptor list for firmware from sg list */
bnx2fc_build_bd_list_from_sg(io_req);
if (bnx2fc_build_bd_list_from_sg(io_req)) {
printk(KERN_ERR PFX "BD list creation failed\n");
spin_lock_bh(&tgt->tgt_lock);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
spin_unlock_bh(&tgt->tgt_lock);
return -EAGAIN;
}
task_idx = xid / BNX2FC_TASKS_PER_PAGE;
index = xid % BNX2FC_TASKS_PER_PAGE;

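In effect the BD-list error path now retries instead of failing the I/O. The contract, condensed (editorial note, not text from the patch):

	/* bnx2fc_build_bd_list_from_sg(): 0 on success, -ENOMEM when
	 * dma_map_sg() maps nothing. bnx2fc_post_io_req() converts that
	 * into -EAGAIN, so the SCSI midlayer requeues the command rather
	 * than completing it with an error. */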
View File

@ -76,7 +76,7 @@ static void bnx2fc_offload_session(struct fcoe_port *port,
if (rval) {
printk(KERN_ERR PFX "Failed to allocate conn id for "
"port_id (%6x)\n", rport->port_id);
goto ofld_err;
goto tgt_init_err;
}
/* Allocate session resources */
@ -134,18 +134,17 @@ retry_ofld:
/* upload will take care of cleaning up sess resc */
lport->tt.rport_logoff(rdata);
}
/* Arm CQ */
bnx2fc_arm_cq(tgt);
return;
ofld_err:
/* couldn't offload the session. log off from this rport */
BNX2FC_TGT_DBG(tgt, "bnx2fc_offload_session - offload error\n");
lport->tt.rport_logoff(rdata);
/* Free session resources */
bnx2fc_free_session_resc(hba, tgt);
tgt_init_err:
if (tgt->fcoe_conn_id != -1)
bnx2fc_free_conn_id(hba, tgt->fcoe_conn_id);
lport->tt.rport_logoff(rdata);
}
void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
@ -624,7 +623,6 @@ static void bnx2fc_free_conn_id(struct bnx2fc_hba *hba, u32 conn_id)
/* called with hba mutex held */
spin_lock_bh(&hba->hba_lock);
hba->tgt_ofld_list[conn_id] = NULL;
hba->next_conn_id = conn_id;
spin_unlock_bh(&hba->hba_lock);
}
@ -791,8 +789,6 @@ static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba,
return 0;
mem_alloc_failure:
bnx2fc_free_session_resc(hba, tgt);
bnx2fc_free_conn_id(hba, tgt->fcoe_conn_id);
return -ENOMEM;
}
@ -807,14 +803,14 @@ mem_alloc_failure:
static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba,
struct bnx2fc_rport *tgt)
{
void __iomem *ctx_base_ptr;
BNX2FC_TGT_DBG(tgt, "Freeing up session resources\n");
if (tgt->ctx_base) {
iounmap(tgt->ctx_base);
tgt->ctx_base = NULL;
}
spin_lock_bh(&tgt->cq_lock);
ctx_base_ptr = tgt->ctx_base;
tgt->ctx_base = NULL;
/* Free LCQ */
if (tgt->lcq) {
dma_free_coherent(&hba->pcidev->dev, tgt->lcq_mem_size,
@ -868,4 +864,7 @@ static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba,
tgt->sq = NULL;
}
spin_unlock_bh(&tgt->cq_lock);
if (ctx_base_ptr)
iounmap(ctx_base_ptr);
}

View File

@ -2177,6 +2177,59 @@ static int bnx2i_nl_set_path(struct Scsi_Host *shost, struct iscsi_path *params)
return 0;
}
static mode_t bnx2i_attr_is_visible(int param_type, int param)
{
switch (param_type) {
case ISCSI_HOST_PARAM:
switch (param) {
case ISCSI_HOST_PARAM_NETDEV_NAME:
case ISCSI_HOST_PARAM_HWADDRESS:
case ISCSI_HOST_PARAM_IPADDRESS:
return S_IRUGO;
default:
return 0;
}
case ISCSI_PARAM:
switch (param) {
case ISCSI_PARAM_MAX_RECV_DLENGTH:
case ISCSI_PARAM_MAX_XMIT_DLENGTH:
case ISCSI_PARAM_HDRDGST_EN:
case ISCSI_PARAM_DATADGST_EN:
case ISCSI_PARAM_CONN_ADDRESS:
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_EXP_STATSN:
case ISCSI_PARAM_PERSISTENT_ADDRESS:
case ISCSI_PARAM_PERSISTENT_PORT:
case ISCSI_PARAM_PING_TMO:
case ISCSI_PARAM_RECV_TMO:
case ISCSI_PARAM_INITIAL_R2T_EN:
case ISCSI_PARAM_MAX_R2T:
case ISCSI_PARAM_IMM_DATA_EN:
case ISCSI_PARAM_FIRST_BURST:
case ISCSI_PARAM_MAX_BURST:
case ISCSI_PARAM_PDU_INORDER_EN:
case ISCSI_PARAM_DATASEQ_INORDER_EN:
case ISCSI_PARAM_ERL:
case ISCSI_PARAM_TARGET_NAME:
case ISCSI_PARAM_TPGT:
case ISCSI_PARAM_USERNAME:
case ISCSI_PARAM_PASSWORD:
case ISCSI_PARAM_USERNAME_IN:
case ISCSI_PARAM_PASSWORD_IN:
case ISCSI_PARAM_FAST_ABORT:
case ISCSI_PARAM_ABORT_TMO:
case ISCSI_PARAM_LU_RESET_TMO:
case ISCSI_PARAM_TGT_RESET_TMO:
case ISCSI_PARAM_IFACE_NAME:
case ISCSI_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
}
return 0;
}
/*
* 'Scsi_Host_Template' structure and 'iscsi_tranport' structure template
@ -2207,37 +2260,12 @@ struct iscsi_transport bnx2i_iscsi_transport = {
CAP_MULTI_R2T | CAP_DATADGST |
CAP_DATA_PATH_OFFLOAD |
CAP_TEXT_NEGO,
.param_mask = ISCSI_MAX_RECV_DLENGTH |
ISCSI_MAX_XMIT_DLENGTH |
ISCSI_HDRDGST_EN |
ISCSI_DATADGST_EN |
ISCSI_INITIAL_R2T_EN |
ISCSI_MAX_R2T |
ISCSI_IMM_DATA_EN |
ISCSI_FIRST_BURST |
ISCSI_MAX_BURST |
ISCSI_PDU_INORDER_EN |
ISCSI_DATASEQ_INORDER_EN |
ISCSI_ERL |
ISCSI_CONN_PORT |
ISCSI_CONN_ADDRESS |
ISCSI_EXP_STATSN |
ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
ISCSI_LU_RESET_TMO | ISCSI_TGT_RESET_TMO |
ISCSI_PING_TMO | ISCSI_RECV_TMO |
ISCSI_IFACE_NAME | ISCSI_INITIATOR_NAME,
.host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
ISCSI_HOST_NETDEV_NAME,
.create_session = bnx2i_session_create,
.destroy_session = bnx2i_session_destroy,
.create_conn = bnx2i_conn_create,
.bind_conn = bnx2i_conn_bind,
.destroy_conn = bnx2i_conn_destroy,
.attr_is_visible = bnx2i_attr_is_visible,
.set_param = iscsi_set_param,
.get_conn_param = iscsi_conn_get_param,
.get_session_param = iscsi_session_get_param,

View File

@ -105,25 +105,7 @@ static struct iscsi_transport cxgb3i_iscsi_transport = {
.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST
| CAP_DATADGST | CAP_DIGEST_OFFLOAD |
CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO,
.param_mask = ISCSI_MAX_RECV_DLENGTH | ISCSI_MAX_XMIT_DLENGTH |
ISCSI_HDRDGST_EN | ISCSI_DATADGST_EN |
ISCSI_INITIAL_R2T_EN | ISCSI_MAX_R2T |
ISCSI_IMM_DATA_EN | ISCSI_FIRST_BURST |
ISCSI_MAX_BURST | ISCSI_PDU_INORDER_EN |
ISCSI_DATASEQ_INORDER_EN | ISCSI_ERL |
ISCSI_CONN_PORT | ISCSI_CONN_ADDRESS |
ISCSI_EXP_STATSN | ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
ISCSI_LU_RESET_TMO | ISCSI_TGT_RESET_TMO |
ISCSI_PING_TMO | ISCSI_RECV_TMO |
ISCSI_IFACE_NAME | ISCSI_INITIATOR_NAME,
.host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
ISCSI_HOST_INITIATOR_NAME |
ISCSI_HOST_NETDEV_NAME,
.attr_is_visible = cxgbi_attr_is_visible,
.get_host_param = cxgbi_get_host_param,
.set_host_param = cxgbi_set_host_param,
/* session management */

View File

@ -106,25 +106,7 @@ static struct iscsi_transport cxgb4i_iscsi_transport = {
.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST |
CAP_DATADGST | CAP_DIGEST_OFFLOAD |
CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO,
.param_mask = ISCSI_MAX_RECV_DLENGTH | ISCSI_MAX_XMIT_DLENGTH |
ISCSI_HDRDGST_EN | ISCSI_DATADGST_EN |
ISCSI_INITIAL_R2T_EN | ISCSI_MAX_R2T |
ISCSI_IMM_DATA_EN | ISCSI_FIRST_BURST |
ISCSI_MAX_BURST | ISCSI_PDU_INORDER_EN |
ISCSI_DATASEQ_INORDER_EN | ISCSI_ERL |
ISCSI_CONN_PORT | ISCSI_CONN_ADDRESS |
ISCSI_EXP_STATSN | ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
ISCSI_LU_RESET_TMO | ISCSI_TGT_RESET_TMO |
ISCSI_PING_TMO | ISCSI_RECV_TMO |
ISCSI_IFACE_NAME | ISCSI_INITIATOR_NAME,
.host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
ISCSI_HOST_INITIATOR_NAME |
ISCSI_HOST_NETDEV_NAME,
.attr_is_visible = cxgbi_attr_is_visible,
.get_host_param = cxgbi_get_host_param,
.set_host_param = cxgbi_set_host_param,
/* session management */

View File

@ -2568,6 +2568,62 @@ void cxgbi_iscsi_cleanup(struct iscsi_transport *itp,
}
EXPORT_SYMBOL_GPL(cxgbi_iscsi_cleanup);
mode_t cxgbi_attr_is_visible(int param_type, int param)
{
switch (param_type) {
case ISCSI_HOST_PARAM:
switch (param) {
case ISCSI_HOST_PARAM_NETDEV_NAME:
case ISCSI_HOST_PARAM_HWADDRESS:
case ISCSI_HOST_PARAM_IPADDRESS:
case ISCSI_HOST_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
case ISCSI_PARAM:
switch (param) {
case ISCSI_PARAM_MAX_RECV_DLENGTH:
case ISCSI_PARAM_MAX_XMIT_DLENGTH:
case ISCSI_PARAM_HDRDGST_EN:
case ISCSI_PARAM_DATADGST_EN:
case ISCSI_PARAM_CONN_ADDRESS:
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_EXP_STATSN:
case ISCSI_PARAM_PERSISTENT_ADDRESS:
case ISCSI_PARAM_PERSISTENT_PORT:
case ISCSI_PARAM_PING_TMO:
case ISCSI_PARAM_RECV_TMO:
case ISCSI_PARAM_INITIAL_R2T_EN:
case ISCSI_PARAM_MAX_R2T:
case ISCSI_PARAM_IMM_DATA_EN:
case ISCSI_PARAM_FIRST_BURST:
case ISCSI_PARAM_MAX_BURST:
case ISCSI_PARAM_PDU_INORDER_EN:
case ISCSI_PARAM_DATASEQ_INORDER_EN:
case ISCSI_PARAM_ERL:
case ISCSI_PARAM_TARGET_NAME:
case ISCSI_PARAM_TPGT:
case ISCSI_PARAM_USERNAME:
case ISCSI_PARAM_PASSWORD:
case ISCSI_PARAM_USERNAME_IN:
case ISCSI_PARAM_PASSWORD_IN:
case ISCSI_PARAM_FAST_ABORT:
case ISCSI_PARAM_ABORT_TMO:
case ISCSI_PARAM_LU_RESET_TMO:
case ISCSI_PARAM_TGT_RESET_TMO:
case ISCSI_PARAM_IFACE_NAME:
case ISCSI_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
}
return 0;
}
EXPORT_SYMBOL_GPL(cxgbi_attr_is_visible);
static int __init libcxgbi_init_module(void)
{
sw_tag_idx_bits = (__ilog2_u32(ISCSI_ITT_MASK)) + 1;

View File

@ -709,6 +709,7 @@ int cxgbi_conn_xmit_pdu(struct iscsi_task *);
void cxgbi_cleanup_task(struct iscsi_task *task);
mode_t cxgbi_attr_is_visible(int param_type, int param);
void cxgbi_get_conn_stats(struct iscsi_cls_conn *, struct iscsi_stats *);
int cxgbi_set_conn_param(struct iscsi_cls_conn *,
enum iscsi_param, char *, int);

View File

@ -59,6 +59,46 @@ static struct scsi_device_handler *get_device_handler_by_idx(int idx)
return found;
}
/*
* device_handler_match_function - Match a device handler to a device
* @sdev - SCSI device to be tested
*
* Tests @sdev against the match function of all registered device_handler.
* Returns the found device handler or NULL if not found.
*/
static struct scsi_device_handler *
device_handler_match_function(struct scsi_device *sdev)
{
struct scsi_device_handler *tmp_dh, *found_dh = NULL;
spin_lock(&list_lock);
list_for_each_entry(tmp_dh, &scsi_dh_list, list) {
if (tmp_dh->match && tmp_dh->match(sdev)) {
found_dh = tmp_dh;
break;
}
}
spin_unlock(&list_lock);
return found_dh;
}
/*
* device_handler_match_devlist - Match a device handler to a device
* @sdev - SCSI device to be tested
*
* Tests @sdev against all device_handler registered in the devlist.
* Returns the found device handler or NULL if not found.
*/
static struct scsi_device_handler *
device_handler_match_devlist(struct scsi_device *sdev)
{
int idx;
idx = scsi_get_device_flags_keyed(sdev, sdev->vendor, sdev->model,
SCSI_DEVINFO_DH);
return get_device_handler_by_idx(idx);
}
/*
* device_handler_match - Attach a device handler to a device
* @scsi_dh - The device handler to match against or NULL
@ -72,12 +112,11 @@ static struct scsi_device_handler *
device_handler_match(struct scsi_device_handler *scsi_dh,
struct scsi_device *sdev)
{
struct scsi_device_handler *found_dh = NULL;
int idx;
struct scsi_device_handler *found_dh;
idx = scsi_get_device_flags_keyed(sdev, sdev->vendor, sdev->model,
SCSI_DEVINFO_DH);
found_dh = get_device_handler_by_idx(idx);
found_dh = device_handler_match_function(sdev);
if (!found_dh)
found_dh = device_handler_match_devlist(sdev);
if (scsi_dh && found_dh != scsi_dh)
found_dh = NULL;
@ -151,6 +190,10 @@ store_dh_state(struct device *dev, struct device_attribute *attr,
struct scsi_device_handler *scsi_dh;
int err = -EINVAL;
if (sdev->sdev_state == SDEV_CANCEL ||
sdev->sdev_state == SDEV_DEL)
return -ENODEV;
if (!sdev->scsi_dh_data) {
/*
* Attach to a device handler
@ -327,7 +370,7 @@ int scsi_register_device_handler(struct scsi_device_handler *scsi_dh)
list_add(&scsi_dh->list, &scsi_dh_list);
spin_unlock(&list_lock);
for (i = 0; scsi_dh->devlist[i].vendor; i++) {
for (i = 0; scsi_dh->devlist && scsi_dh->devlist[i].vendor; i++) {
scsi_dev_info_list_add_keyed(0,
scsi_dh->devlist[i].vendor,
scsi_dh->devlist[i].model,
@ -360,7 +403,7 @@ int scsi_unregister_device_handler(struct scsi_device_handler *scsi_dh)
bus_for_each_dev(&scsi_bus_type, NULL, scsi_dh,
scsi_dh_notifier_remove);
for (i = 0; scsi_dh->devlist[i].vendor; i++) {
for (i = 0; scsi_dh->devlist && scsi_dh->devlist[i].vendor; i++) {
scsi_dev_info_list_del_keyed(scsi_dh->devlist[i].vendor,
scsi_dh->devlist[i].model,
SCSI_DEVINFO_DH);
@ -468,7 +511,7 @@ int scsi_dh_handler_exist(const char *name)
EXPORT_SYMBOL_GPL(scsi_dh_handler_exist);
/*
* scsi_dh_handler_attach - Attach device handler
* scsi_dh_attach - Attach device handler
* @sdev - sdev the handler should be attached to
* @name - name of the handler to attach
*/
@ -498,7 +541,7 @@ int scsi_dh_attach(struct request_queue *q, const char *name)
EXPORT_SYMBOL_GPL(scsi_dh_attach);
/*
* scsi_dh_handler_detach - Detach device handler
* scsi_dh_detach - Detach device handler
* @sdev - sdev the handler should be detached from
*
* This function will detach the device handler only

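A minimal sketch of what the new match-based path enables, mirroring the alua_match() callback added later in this series; the handler name here is hypothetical:

	static bool example_match(struct scsi_device *sdev)
	{
		/* claim any device whose INQUIRY data advertises TPGS */
		return scsi_device_tpgs(sdev) != 0;
	}

	static struct scsi_device_handler example_dh = {
		.name	= "example",
		.match	= example_match,  /* consulted before the devlist */
	};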
View File

@ -127,43 +127,6 @@ static struct request *get_alua_req(struct scsi_device *sdev,
return rq;
}
/*
* submit_std_inquiry - Issue a standard INQUIRY command
* @sdev: sdev the command should be send to
*/
static int submit_std_inquiry(struct scsi_device *sdev, struct alua_dh_data *h)
{
struct request *rq;
int err = SCSI_DH_RES_TEMP_UNAVAIL;
rq = get_alua_req(sdev, h->inq, ALUA_INQUIRY_SIZE, READ);
if (!rq)
goto done;
/* Prepare the command. */
rq->cmd[0] = INQUIRY;
rq->cmd[1] = 0;
rq->cmd[2] = 0;
rq->cmd[4] = ALUA_INQUIRY_SIZE;
rq->cmd_len = COMMAND_SIZE(INQUIRY);
rq->sense = h->sense;
memset(rq->sense, 0, SCSI_SENSE_BUFFERSIZE);
rq->sense_len = h->senselen = 0;
err = blk_execute_rq(rq->q, NULL, rq, 1);
if (err == -EIO) {
sdev_printk(KERN_INFO, sdev,
"%s: std inquiry failed with %x\n",
ALUA_DH_NAME, rq->errors);
h->senselen = rq->sense_len;
err = SCSI_DH_IO;
}
blk_put_request(rq);
done:
return err;
}
/*
* submit_vpd_inquiry - Issue an INQUIRY VPD page 0x83 command
* @sdev: sdev the command should be sent to
@ -338,23 +301,17 @@ static unsigned submit_stpg(struct alua_dh_data *h)
}
/*
* alua_std_inquiry - Evaluate standard INQUIRY command
* alua_check_tpgs - Evaluate TPGS setting
* @sdev: device to be checked
*
* Just extract the TPGS setting to find out if ALUA
* Examine the TPGS setting of the sdev to find out if ALUA
* is supported.
*/
static int alua_std_inquiry(struct scsi_device *sdev, struct alua_dh_data *h)
static int alua_check_tpgs(struct scsi_device *sdev, struct alua_dh_data *h)
{
int err;
int err = SCSI_DH_OK;
err = submit_std_inquiry(sdev, h);
if (err != SCSI_DH_OK)
return err;
/* Check TPGS setting */
h->tpgs = (h->inq[5] >> 4) & 0x3;
h->tpgs = scsi_device_tpgs(sdev);
switch (h->tpgs) {
case TPGS_MODE_EXPLICIT|TPGS_MODE_IMPLICIT:
sdev_printk(KERN_INFO, sdev,
@ -508,27 +465,28 @@ static int alua_check_sense(struct scsi_device *sdev,
* Power On, Reset, or Bus Device Reset, just retry.
*/
return ADD_TO_MLQUEUE;
if (sense_hdr->asc == 0x2a && sense_hdr->ascq == 0x06) {
if (sense_hdr->asc == 0x2a && sense_hdr->ascq == 0x06)
/*
* ALUA state changed
*/
return ADD_TO_MLQUEUE;
}
if (sense_hdr->asc == 0x2a && sense_hdr->ascq == 0x07) {
if (sense_hdr->asc == 0x2a && sense_hdr->ascq == 0x07)
/*
* Implicit ALUA state transition failed
*/
return ADD_TO_MLQUEUE;
}
if (sense_hdr->asc == 0x3f && sense_hdr->ascq == 0x0e) {
if (sense_hdr->asc == 0x3f && sense_hdr->ascq == 0x03)
/*
* Inquiry data has changed
*/
return ADD_TO_MLQUEUE;
if (sense_hdr->asc == 0x3f && sense_hdr->ascq == 0x0e)
/*
* REPORTED_LUNS_DATA_HAS_CHANGED is reported
* when switching controllers on targets like
* Intel Multi-Flex. We can just retry.
*/
return ADD_TO_MLQUEUE;
}
break;
}
@ -547,9 +505,9 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_dh_data *h)
{
struct scsi_sense_hdr sense_hdr;
int len, k, off, valid_states = 0;
char *ucp;
unsigned char *ucp;
unsigned err;
unsigned long expiry, interval = 10;
unsigned long expiry, interval = 1;
expiry = round_jiffies_up(jiffies + ALUA_FAILOVER_TIMEOUT);
retry:
@ -610,7 +568,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_dh_data *h)
case TPGS_STATE_TRANSITIONING:
if (time_before(jiffies, expiry)) {
/* State transition, retry */
interval *= 10;
interval *= 2;
msleep(interval);
goto retry;
}
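The retry cadence changes from multiplicative steps of ten to doubling from one millisecond; condensed (illustration, not from the patch):

	/* old: interval starts at 10 and is multiplied by 10 before each
	 *      sleep, so the loop waits 100, 1000, 10000, ... ms
	 * new: interval starts at 1 and doubles, so it waits 2, 4, 8,
	 *      ... ms and notices the end of TPGS_STATE_TRANSITIONING
	 *      much sooner */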
@ -642,7 +600,7 @@ static int alua_initialize(struct scsi_device *sdev, struct alua_dh_data *h)
{
int err;
err = alua_std_inquiry(sdev, h);
err = alua_check_tpgs(sdev, h);
if (err != SCSI_DH_OK)
goto out;
@ -674,11 +632,9 @@ static int alua_activate(struct scsi_device *sdev,
struct alua_dh_data *h = get_alua_data(sdev);
int err = SCSI_DH_OK;
if (h->group_id != -1) {
err = alua_rtpg(sdev, h);
if (err != SCSI_DH_OK)
goto out;
}
err = alua_rtpg(sdev, h);
if (err != SCSI_DH_OK)
goto out;
if (h->tpgs & TPGS_MODE_EXPLICIT &&
h->state != TPGS_STATE_OPTIMIZED &&
@ -720,23 +676,10 @@ static int alua_prep_fn(struct scsi_device *sdev, struct request *req)
}
static const struct scsi_dh_devlist alua_dev_list[] = {
{"HP", "MSA VOLUME" },
{"HP", "HSV101" },
{"HP", "HSV111" },
{"HP", "HSV200" },
{"HP", "HSV210" },
{"HP", "HSV300" },
{"IBM", "2107900" },
{"IBM", "2145" },
{"Pillar", "Axiom" },
{"Intel", "Multi-Flex"},
{"NETAPP", "LUN"},
{"NETAPP", "LUN C-Mode"},
{"AIX", "NVDISK"},
{"Promise", "VTrak"},
{NULL, NULL}
};
static bool alua_match(struct scsi_device *sdev)
{
return (scsi_device_tpgs(sdev) != 0);
}
static int alua_bus_attach(struct scsi_device *sdev);
static void alua_bus_detach(struct scsi_device *sdev);
@ -744,12 +687,12 @@ static void alua_bus_detach(struct scsi_device *sdev);
static struct scsi_device_handler alua_dh = {
.name = ALUA_DH_NAME,
.module = THIS_MODULE,
.devlist = alua_dev_list,
.attach = alua_bus_attach,
.detach = alua_bus_detach,
.prep_fn = alua_prep_fn,
.check_sense = alua_check_sense,
.activate = alua_activate,
.match = alua_match,
};
/*

View File

@ -1,5 +1,5 @@
/*
* Engenio/LSI RDAC SCSI Device Handler
* LSI/Engenio/NetApp E-Series RDAC SCSI Device Handler
*
* Copyright (C) 2005 Mike Christie. All rights reserved.
* Copyright (C) Chandra Seetharaman, IBM Corp. 2007
@ -795,6 +795,7 @@ static const struct scsi_dh_devlist rdac_dev_list[] = {
{"IBM", "3526"},
{"SGI", "TP9400"},
{"SGI", "TP9500"},
{"SGI", "TP9700"},
{"SGI", "IS"},
{"STK", "OPENstorage D280"},
{"SUN", "CSM200_R"},
@ -814,6 +815,7 @@ static const struct scsi_dh_devlist rdac_dev_list[] = {
{"SUN", "CSM100_R_FC"},
{"SUN", "STK6580_6780"},
{"SUN", "SUN_6180"},
{"SUN", "ArrayStorage"},
{NULL, NULL},
};
@ -945,7 +947,7 @@ static void __exit rdac_exit(void)
module_init(rdac_init);
module_exit(rdac_exit);
MODULE_DESCRIPTION("Multipath LSI/Engenio RDAC driver");
MODULE_DESCRIPTION("Multipath LSI/Engenio/NetApp E-Series RDAC driver");
MODULE_AUTHOR("Mike Christie, Chandra Seetharaman");
MODULE_VERSION("01.00.0000.0000");
MODULE_LICENSE("GPL");

View File

@ -51,7 +51,7 @@ MODULE_DESCRIPTION("FCoE");
MODULE_LICENSE("GPL v2");
/* Performance tuning parameters for fcoe */
static unsigned int fcoe_ddp_min;
static unsigned int fcoe_ddp_min = 4096;
module_param_named(ddp_min, fcoe_ddp_min, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(ddp_min, "Minimum I/O size in bytes for " \
"Direct Data Placement (DDP).");
@ -137,7 +137,6 @@ static int fcoe_vport_create(struct fc_vport *, bool disabled);
static int fcoe_vport_disable(struct fc_vport *, bool disable);
static void fcoe_set_vport_symbolic_name(struct fc_vport *);
static void fcoe_set_port_id(struct fc_lport *, u32, struct fc_frame *);
static int fcoe_validate_vport_create(struct fc_vport *);
static struct libfc_function_template fcoe_libfc_fcn_templ = {
.frame_send = fcoe_xmit,
@ -280,6 +279,7 @@ static int fcoe_interface_setup(struct fcoe_interface *fcoe,
* use the first one for SPMA */
real_dev = (netdev->priv_flags & IFF_802_1Q_VLAN) ?
vlan_dev_real_dev(netdev) : netdev;
fcoe->realdev = real_dev;
rcu_read_lock();
for_each_dev_addr(real_dev, ha) {
if ((ha->type == NETDEV_HW_ADDR_T_SAN) &&
@ -579,23 +579,6 @@ static int fcoe_lport_config(struct fc_lport *lport)
return 0;
}
/**
* fcoe_get_wwn() - Get the world wide name from LLD if it supports it
* @netdev: the associated net device
* @wwn: the output WWN
* @type: the type of WWN (WWPN or WWNN)
*
* Returns: 0 for success
*/
static int fcoe_get_wwn(struct net_device *netdev, u64 *wwn, int type)
{
const struct net_device_ops *ops = netdev->netdev_ops;
if (ops->ndo_fcoe_get_wwn)
return ops->ndo_fcoe_get_wwn(netdev, wwn, type);
return -EINVAL;
}
/**
* fcoe_netdev_features_change - Updates the lport's offload flags based
* on the LLD netdev's FCoE feature flags
@ -1134,8 +1117,9 @@ static void fcoe_percpu_thread_create(unsigned int cpu)
p = &per_cpu(fcoe_percpu, cpu);
thread = kthread_create(fcoe_percpu_receive_thread,
(void *)p, "fcoethread/%d", cpu);
thread = kthread_create_on_node(fcoe_percpu_receive_thread,
(void *)p, cpu_to_node(cpu),
"fcoethread/%d", cpu);
if (likely(!IS_ERR(thread))) {
kthread_bind(thread, cpu);
@ -1538,7 +1522,13 @@ int fcoe_xmit(struct fc_lport *lport, struct fc_frame *fp)
skb_reset_network_header(skb);
skb->mac_len = elen;
skb->protocol = htons(ETH_P_FCOE);
skb->dev = fcoe->netdev;
if (fcoe->netdev->priv_flags & IFF_802_1Q_VLAN &&
fcoe->realdev->features & NETIF_F_HW_VLAN_TX) {
skb->vlan_tci = VLAN_TAG_PRESENT |
vlan_dev_vlan_id(fcoe->netdev);
skb->dev = fcoe->realdev;
} else
skb->dev = fcoe->netdev;
/* fill up mac and fcoe headers */
eh = eth_hdr(skb);
@ -2446,7 +2436,7 @@ static int fcoe_vport_create(struct fc_vport *vport, bool disabled)
rc = fcoe_validate_vport_create(vport);
if (rc) {
wwn_to_str(vport->port_name, buf, sizeof(buf));
fcoe_wwn_to_str(vport->port_name, buf, sizeof(buf));
printk(KERN_ERR "fcoe: Failed to create vport, "
"WWPN (0x%s) already exists\n",
buf);
@ -2555,28 +2545,9 @@ static void fcoe_set_vport_symbolic_name(struct fc_vport *vport)
static void fcoe_get_lesb(struct fc_lport *lport,
struct fc_els_lesb *fc_lesb)
{
unsigned int cpu;
u32 lfc, vlfc, mdac;
struct fcoe_dev_stats *devst;
struct fcoe_fc_els_lesb *lesb;
struct rtnl_link_stats64 temp;
struct net_device *netdev = fcoe_netdev(lport);
lfc = 0;
vlfc = 0;
mdac = 0;
lesb = (struct fcoe_fc_els_lesb *)fc_lesb;
memset(lesb, 0, sizeof(*lesb));
for_each_possible_cpu(cpu) {
devst = per_cpu_ptr(lport->dev_stats, cpu);
lfc += devst->LinkFailureCount;
vlfc += devst->VLinkFailureCount;
mdac += devst->MissDiscAdvCount;
}
lesb->lesb_link_fail = htonl(lfc);
lesb->lesb_vlink_fail = htonl(vlfc);
lesb->lesb_miss_fka = htonl(mdac);
lesb->lesb_fcs_error = htonl(dev_get_stats(netdev, &temp)->rx_crc_errors);
__fcoe_get_lesb(lport, fc_lesb, netdev);
}
/**
@ -2600,49 +2571,3 @@ static void fcoe_set_port_id(struct fc_lport *lport,
if (fp && fc_frame_payload_op(fp) == ELS_FLOGI)
fcoe_ctlr_recv_flogi(&fcoe->ctlr, lport, fp);
}
/**
* fcoe_validate_vport_create() - Validate a vport before creating it
* @vport: NPIV port to be created
*
* This routine is meant to add validation for a vport before creating it
* via fcoe_vport_create().
* Current validations are:
* - WWPN supplied is unique for given lport
*
*
*/
static int fcoe_validate_vport_create(struct fc_vport *vport)
{
struct Scsi_Host *shost = vport_to_shost(vport);
struct fc_lport *n_port = shost_priv(shost);
struct fc_lport *vn_port;
int rc = 0;
char buf[32];
mutex_lock(&n_port->lp_mutex);
wwn_to_str(vport->port_name, buf, sizeof(buf));
/* Check that the wwpn is not the same as that of the lport */
if (!memcmp(&n_port->wwpn, &vport->port_name, sizeof(u64))) {
FCOE_DBG("vport WWPN 0x%s is same as that of the "
"base port WWPN\n", buf);
rc = -EINVAL;
goto out;
}
/* Check if there is any existing vport with the same wwpn */
list_for_each_entry(vn_port, &n_port->vports, list) {
if (!memcmp(&vn_port->wwpn, &vport->port_name, sizeof(u64))) {
FCOE_DBG("vport with given WWPN 0x%s already "
"exists\n", buf);
rc = -EINVAL;
break;
}
}
out:
mutex_unlock(&n_port->lp_mutex);
return rc;
}

View File

@ -80,6 +80,7 @@ do { \
struct fcoe_interface {
struct list_head list;
struct net_device *netdev;
struct net_device *realdev;
struct packet_type fcoe_packet_type;
struct packet_type fip_packet_type;
struct fcoe_ctlr ctlr;
@ -99,14 +100,4 @@ static inline struct net_device *fcoe_netdev(const struct fc_lport *lport)
((struct fcoe_port *)lport_priv(lport))->priv)->netdev;
}
static inline void wwn_to_str(u64 wwn, char *buf, int len)
{
u8 wwpn[8];
u64_to_wwn(wwn, wwpn);
snprintf(buf, len, "%02x%02x%02x%02x%02x%02x%02x%02x",
wwpn[0], wwpn[1], wwpn[2], wwpn[3],
wwpn[4], wwpn[5], wwpn[6], wwpn[7]);
}
#endif /* _FCOE_H_ */

View File

@ -83,6 +83,107 @@ static struct notifier_block libfcoe_notifier = {
.notifier_call = libfcoe_device_notification,
};
void __fcoe_get_lesb(struct fc_lport *lport,
struct fc_els_lesb *fc_lesb,
struct net_device *netdev)
{
unsigned int cpu;
u32 lfc, vlfc, mdac;
struct fcoe_dev_stats *devst;
struct fcoe_fc_els_lesb *lesb;
struct rtnl_link_stats64 temp;
lfc = 0;
vlfc = 0;
mdac = 0;
lesb = (struct fcoe_fc_els_lesb *)fc_lesb;
memset(lesb, 0, sizeof(*lesb));
for_each_possible_cpu(cpu) {
devst = per_cpu_ptr(lport->dev_stats, cpu);
lfc += devst->LinkFailureCount;
vlfc += devst->VLinkFailureCount;
mdac += devst->MissDiscAdvCount;
}
lesb->lesb_link_fail = htonl(lfc);
lesb->lesb_vlink_fail = htonl(vlfc);
lesb->lesb_miss_fka = htonl(mdac);
lesb->lesb_fcs_error =
htonl(dev_get_stats(netdev, &temp)->rx_crc_errors);
}
EXPORT_SYMBOL_GPL(__fcoe_get_lesb);
void fcoe_wwn_to_str(u64 wwn, char *buf, int len)
{
u8 wwpn[8];
u64_to_wwn(wwn, wwpn);
snprintf(buf, len, "%02x%02x%02x%02x%02x%02x%02x%02x",
wwpn[0], wwpn[1], wwpn[2], wwpn[3],
wwpn[4], wwpn[5], wwpn[6], wwpn[7]);
}
EXPORT_SYMBOL_GPL(fcoe_wwn_to_str);
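A usage sketch with a made-up WWPN: the helper prints the eight WWN bytes MSB-first as two hex digits each, so callers need a buffer of at least 17 bytes:
	char buf[32];
	fcoe_wwn_to_str(0x50060b0000c26204ULL, buf, sizeof(buf));
	/* buf now contains "50060b0000c26204" */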
/**
* fcoe_validate_vport_create() - Validate a vport before creating it
* @vport: NPIV port to be created
*
* This routine is meant to add validation for a vport before creating it
* via fcoe_vport_create().
* Current validations are:
* - WWPN supplied is unique for given lport
*/
int fcoe_validate_vport_create(struct fc_vport *vport)
{
struct Scsi_Host *shost = vport_to_shost(vport);
struct fc_lport *n_port = shost_priv(shost);
struct fc_lport *vn_port;
int rc = 0;
char buf[32];
mutex_lock(&n_port->lp_mutex);
fcoe_wwn_to_str(vport->port_name, buf, sizeof(buf));
/* Check that the wwpn is not the same as that of the lport */
if (!memcmp(&n_port->wwpn, &vport->port_name, sizeof(u64))) {
LIBFCOE_TRANSPORT_DBG("vport WWPN 0x%s is same as that of the "
"base port WWPN\n", buf);
rc = -EINVAL;
goto out;
}
/* Check if there is any existing vport with the same wwpn */
list_for_each_entry(vn_port, &n_port->vports, list) {
if (!memcmp(&vn_port->wwpn, &vport->port_name, sizeof(u64))) {
LIBFCOE_TRANSPORT_DBG("vport with given WWPN 0x%s "
"already exists\n", buf);
rc = -EINVAL;
break;
}
}
out:
mutex_unlock(&n_port->lp_mutex);
return rc;
}
EXPORT_SYMBOL_GPL(fcoe_validate_vport_create);
/**
* fcoe_get_wwn() - Get the world wide name from LLD if it supports it
* @netdev: the associated net device
* @wwn: the output WWN
* @type: the type of WWN (WWPN or WWNN)
*
* Returns: 0 for success
*/
int fcoe_get_wwn(struct net_device *netdev, u64 *wwn, int type)
{
const struct net_device_ops *ops = netdev->netdev_ops;
if (ops->ndo_fcoe_get_wwn)
return ops->ndo_fcoe_get_wwn(netdev, wwn, type);
return -EINVAL;
}
EXPORT_SYMBOL_GPL(fcoe_get_wwn);
/**
* fcoe_fc_crc() - Calculates the CRC for a given frame
* @fp: The frame to be checksummed

View File

@ -3438,10 +3438,8 @@ static __devinit int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
} else {
use_doorbell = misc_fw_support & MISC_FW_DOORBELL_RESET;
if (use_doorbell) {
dev_warn(&pdev->dev, "Controller claims that "
"'Bit 2 doorbell reset' is "
"supported, but not 'bit 5 doorbell reset'. "
"Firmware update is recommended.\n");
dev_warn(&pdev->dev, "Soft reset not supported. "
"Firmware update is required.\n");
rc = -ENOTSUPP; /* try soft reset */
goto unmap_cfgtable;
}

View File

@ -2901,7 +2901,7 @@ static void ipr_get_ioa_dump(struct ipr_ioa_cfg *ioa_cfg, struct ipr_dump *dump)
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
if (ioa_cfg->sdt_state != GET_DUMP) {
if (ioa_cfg->sdt_state != READ_DUMP) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return;
}
@ -3097,7 +3097,7 @@ static void ipr_worker_thread(struct work_struct *work)
ENTER;
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
if (ioa_cfg->sdt_state == GET_DUMP) {
if (ioa_cfg->sdt_state == READ_DUMP) {
dump = ioa_cfg->dump;
if (!dump) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
@ -3109,7 +3109,7 @@ static void ipr_worker_thread(struct work_struct *work)
kref_put(&dump->kref, ipr_release_dump);
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
if (ioa_cfg->sdt_state == DUMP_OBTAINED)
if (ioa_cfg->sdt_state == DUMP_OBTAINED && !ioa_cfg->dump_timeout)
ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return;
@ -3751,14 +3751,6 @@ static ssize_t ipr_store_update_fw(struct device *dev,
image_hdr = (struct ipr_ucode_image_header *)fw_entry->data;
if (be32_to_cpu(image_hdr->header_length) > fw_entry->size ||
(ioa_cfg->vpd_cbs->page3_data.card_type &&
ioa_cfg->vpd_cbs->page3_data.card_type != image_hdr->card_type)) {
dev_err(&ioa_cfg->pdev->dev, "Invalid microcode buffer\n");
release_firmware(fw_entry);
return -EINVAL;
}
src = (u8 *)image_hdr + be32_to_cpu(image_hdr->header_length);
dnld_size = fw_entry->size - be32_to_cpu(image_hdr->header_length);
sglist = ipr_alloc_ucode_buffer(dnld_size);
@ -3777,6 +3769,8 @@ static ssize_t ipr_store_update_fw(struct device *dev,
goto out;
}
ipr_info("Updating microcode, please be patient. This may take up to 30 minutes.\n");
result = ipr_update_ioa_ucode(ioa_cfg, sglist);
if (!result)
@ -7449,8 +7443,11 @@ static int ipr_reset_wait_for_dump(struct ipr_cmnd *ipr_cmd)
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
if (ioa_cfg->sdt_state == GET_DUMP)
ioa_cfg->sdt_state = WAIT_FOR_DUMP;
else if (ioa_cfg->sdt_state == READ_DUMP)
ioa_cfg->sdt_state = ABORT_DUMP;
ioa_cfg->dump_timeout = 1;
ipr_cmd->job_step = ipr_reset_alert;
return IPR_RC_JOB_CONTINUE;
@ -7614,6 +7611,8 @@ static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
ipr_cmd->job_step = ipr_reset_enable_ioa;
if (GET_DUMP == ioa_cfg->sdt_state) {
ioa_cfg->sdt_state = READ_DUMP;
ioa_cfg->dump_timeout = 0;
if (ioa_cfg->sis64)
ipr_reset_start_timer(ipr_cmd, IPR_SIS64_DUMP_TIMEOUT);
else
@ -8003,8 +8002,12 @@ static void ipr_initiate_ioa_reset(struct ipr_ioa_cfg *ioa_cfg,
if (ioa_cfg->ioa_is_dead)
return;
if (ioa_cfg->in_reset_reload && ioa_cfg->sdt_state == GET_DUMP)
ioa_cfg->sdt_state = ABORT_DUMP;
if (ioa_cfg->in_reset_reload) {
if (ioa_cfg->sdt_state == GET_DUMP)
ioa_cfg->sdt_state = WAIT_FOR_DUMP;
else if (ioa_cfg->sdt_state == READ_DUMP)
ioa_cfg->sdt_state = ABORT_DUMP;
}
if (ioa_cfg->reset_retries++ >= IPR_NUM_RESET_RELOAD_RETRIES) {
dev_err(&ioa_cfg->pdev->dev,
@ -8812,7 +8815,7 @@ static int __devinit ipr_probe_ioa(struct pci_dev *pdev,
uproc = readl(ioa_cfg->regs.sense_uproc_interrupt_reg32);
if ((mask & IPR_PCII_HRRQ_UPDATED) == 0 || (uproc & IPR_UPROCI_RESET_ALERT))
ioa_cfg->needs_hard_reset = 1;
if (interrupts & IPR_PCII_ERROR_INTERRUPTS)
if ((interrupts & IPR_PCII_ERROR_INTERRUPTS) || reset_devices)
ioa_cfg->needs_hard_reset = 1;
if (interrupts & IPR_PCII_IOA_UNIT_CHECKED)
ioa_cfg->ioa_unit_checked = 1;

View File

@ -208,7 +208,7 @@
#define IPR_CANCEL_ALL_TIMEOUT (ipr_fastfail ? 10 * HZ : 30 * HZ)
#define IPR_ABORT_TASK_TIMEOUT (ipr_fastfail ? 10 * HZ : 30 * HZ)
#define IPR_INTERNAL_TIMEOUT (ipr_fastfail ? 10 * HZ : 30 * HZ)
#define IPR_WRITE_BUFFER_TIMEOUT (10 * 60 * HZ)
#define IPR_WRITE_BUFFER_TIMEOUT (30 * 60 * HZ)
#define IPR_SET_SUP_DEVICE_TIMEOUT (2 * 60 * HZ)
#define IPR_REQUEST_SENSE_TIMEOUT (10 * HZ)
#define IPR_OPERATIONAL_TIMEOUT (5 * 60)
@ -1360,6 +1360,7 @@ enum ipr_sdt_state {
INACTIVE,
WAIT_FOR_DUMP,
GET_DUMP,
READ_DUMP,
ABORT_DUMP,
DUMP_OBTAINED
};
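The new READ_DUMP state splits the old GET_DUMP in two: "dump requested" versus "dump actively being read". With that split, ipr_reset_wait_for_dump() in the ipr.c hunk above re-queues a merely requested dump (back to WAIT_FOR_DUMP) and only aborts one already in flight (ABORT_DUMP), while the new dump_timeout flag keeps a timed-out dump from triggering the automatic adapter reset once DUMP_OBTAINED is reached.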
@ -1384,6 +1385,7 @@ struct ipr_ioa_cfg {
u8 needs_warm_reset:1;
u8 msi_received:1;
u8 sis64:1;
u8 dump_timeout:1;
u8 revid;

View File

@ -1263,6 +1263,10 @@ void isci_host_deinit(struct isci_host *ihost)
{
int i;
/* disable output data selects */
for (i = 0; i < isci_gpio_count(ihost); i++)
writel(SGPIO_HW_CONTROL, &ihost->scu_registers->peg0.sgpio.output_data_select[i]);
isci_host_change_state(ihost, isci_stopping);
for (i = 0; i < SCI_MAX_PORTS; i++) {
struct isci_port *iport = &ihost->ports[i];
@ -1281,6 +1285,12 @@ void isci_host_deinit(struct isci_host *ihost)
spin_unlock_irq(&ihost->scic_lock);
wait_for_stop(ihost);
/* disable sgpio: where the above wait should give time for the
* enclosure to sample the gpios going inactive
*/
writel(0, &ihost->scu_registers->peg0.sgpio.interface_control);
sci_controller_reset(ihost);
/* Cancel any/all outstanding port timers */
@ -2365,6 +2375,12 @@ int isci_host_init(struct isci_host *ihost)
for (i = 0; i < SCI_MAX_PHYS; i++)
isci_phy_init(&ihost->phys[i], ihost, i);
/* enable sgpio */
writel(1, &ihost->scu_registers->peg0.sgpio.interface_control);
for (i = 0; i < isci_gpio_count(ihost); i++)
writel(SGPIO_HW_CONTROL, &ihost->scu_registers->peg0.sgpio.output_data_select[i]);
writel(0, &ihost->scu_registers->peg0.sgpio.vendor_specific_code);
for (i = 0; i < SCI_MAX_REMOTE_DEVICES; i++) {
struct isci_remote_device *idev = &ihost->devices[i];
@ -2760,3 +2776,56 @@ enum sci_task_status sci_controller_start_task(struct isci_host *ihost,
return status;
}
static int sci_write_gpio_tx_gp(struct isci_host *ihost, u8 reg_index, u8 reg_count, u8 *write_data)
{
int d;
/* no support for TX_GP_CFG */
if (reg_index == 0)
return -EINVAL;
for (d = 0; d < isci_gpio_count(ihost); d++) {
u32 val = 0x444; /* all ODx.n clear */
int i;
for (i = 0; i < 3; i++) {
int bit = (i << 2) + 2;
bit = try_test_sas_gpio_gp_bit(to_sas_gpio_od(d, i),
write_data, reg_index,
reg_count);
if (bit < 0)
break;
/* if od is set, clear the 'invert' bit */
val &= ~(bit << ((i << 2) + 2));
}
if (i < 3)
break;
writel(val, &ihost->scu_registers->peg0.sgpio.output_data_select[d]);
}
/* unless reg_index is > 1, we should always be able to write at
* least one register
*/
return d > 0;
}
int isci_gpio_write(struct sas_ha_struct *sas_ha, u8 reg_type, u8 reg_index,
u8 reg_count, u8 *write_data)
{
struct isci_host *ihost = sas_ha->lldd_ha;
int written;
switch (reg_type) {
case SAS_GPIO_REG_TX_GP:
written = sci_write_gpio_tx_gp(ihost, reg_index, reg_count, write_data);
break;
default:
written = -EINVAL;
}
return written;
}

View File

@ -440,6 +440,18 @@ static inline bool is_c0(struct pci_dev *pdev)
return false;
}
/* set hw control for 'activity', even though active enclosures seem to drive
* the activity led on their own. Skip setting FSENG control on 'status' due
* to unexpected operation and 'error' due to not being a supported automatic
* FSENG output
*/
#define SGPIO_HW_CONTROL 0x00000443
static inline int isci_gpio_count(struct isci_host *ihost)
{
return ARRAY_SIZE(ihost->scu_registers->peg0.sgpio.output_data_select);
}
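Decoding SGPIO_HW_CONTROL against the SCU_SGPIO_OUPUT_DATA_SELECT_* field layout (deleted from the register header later in this diff), 0x443 appears to select hardware (FSENG) control, value 0x3, for output 0 ('activity'), and to leave outputs 1 and 2 ('status' and 'error') under manual control with only their invert bits (0x040 and 0x400) set, i.e. driven inactive, which is exactly what the comment above describes.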
void sci_controller_post_request(struct isci_host *ihost,
u32 request);
void sci_controller_release_frame(struct isci_host *ihost,
@ -542,4 +554,7 @@ void sci_port_configuration_agent_construct(
enum sci_status sci_port_configuration_agent_initialize(
struct isci_host *ihost,
struct sci_port_configuration_agent *port_agent);
int isci_gpio_write(struct sas_ha_struct *, u8 reg_type, u8 reg_index,
u8 reg_count, u8 *write_data);
#endif

View File

@ -192,6 +192,9 @@ static struct sas_domain_function_template isci_transport_ops = {
/* Phy management */
.lldd_control_phy = isci_phy_control,
/* GPIO support */
.lldd_write_gpio = isci_gpio_write,
};

View File

@ -97,7 +97,7 @@
#define SCU_MAX_COMPLETION_QUEUE_SHIFT (ilog2(SCU_MAX_COMPLETION_QUEUE_ENTRIES))
#define SCU_ABSOLUTE_MAX_UNSOLICITED_FRAMES (4096)
#define SCU_UNSOLICITED_FRAME_BUFFER_SIZE (1024)
#define SCU_UNSOLICITED_FRAME_BUFFER_SIZE (1024U)
#define SCU_INVALID_FRAME_INDEX (0xFFFF)
#define SCU_IO_REQUEST_MAX_SGE_SIZE (0x00FFFFFF)

View File

@ -1313,6 +1313,17 @@ int isci_phy_control(struct asd_sas_phy *sas_phy,
ret = isci_port_perform_hard_reset(ihost, iport, iphy);
break;
case PHY_FUNC_GET_EVENTS: {
struct scu_link_layer_registers __iomem *r;
struct sas_phy *phy = sas_phy->phy;
r = iphy->link_layer_registers;
phy->running_disparity_error_count = readl(&r->running_disparity_error_count);
phy->loss_of_dword_sync_count = readl(&r->loss_of_sync_error_count);
phy->phy_reset_problem_count = readl(&r->phy_reset_problem_count);
phy->invalid_dword_count = readl(&r->invalid_dword_counter);
break;
}
default:
dev_dbg(&ihost->pdev->dev,

View File

@ -294,8 +294,8 @@ static void isci_port_link_down(struct isci_host *isci_host,
__func__, isci_device);
set_bit(IDEV_GONE, &isci_device->flags);
}
isci_port_change_state(isci_port, isci_stopping);
}
isci_port_change_state(isci_port, isci_stopping);
}
/* Notify libsas of the broken link; this will trigger calls to our

View File

@ -678,7 +678,7 @@ static void apc_agent_timeout(unsigned long data)
configure_phy_mask = ~port_agent->phy_configured_mask & port_agent->phy_ready_mask;
if (!configure_phy_mask)
return;
goto done;
for (index = 0; index < SCI_MAX_PHYS; index++) {
if ((configure_phy_mask & (1 << index)) == 0)

View File

@ -875,122 +875,6 @@ struct scu_iit_entry {
#define SCU_PTSxSR_GEN_BIT(name) \
SCU_GEN_BIT(SCU_PTSG_PORT_TASK_SCHEDULER_STATUS_ ## name)
/*
* *****************************************************************************
* * SGPIO Register shift and mask values
* ***************************************************************************** */
#define SCU_SGPIO_CONTROL_SGPIO_ENABLE_SHIFT (0)
#define SCU_SGPIO_CONTROL_SGPIO_ENABLE_MASK (0x00000001)
#define SCU_SGPIO_CONTROL_SGPIO_SERIAL_CLOCK_SELECT_SHIFT (1)
#define SCU_SGPIO_CONTROL_SGPIO_SERIAL_CLOCK_SELECT_MASK (0x00000002)
#define SCU_SGPIO_CONTROL_SGPIO_SERIAL_SHIFT_WIDTH_SELECT_SHIFT (2)
#define SCU_SGPIO_CONTROL_SGPIO_SERIAL_SHIFT_WIDTH_SELECT_MASK (0x00000004)
#define SCU_SGPIO_CONTROL_SGPIO_TEST_BIT_SHIFT (15)
#define SCU_SGPIO_CONTROL_SGPIO_TEST_BIT_MASK (0x00008000)
#define SCU_SGPIO_CONTROL_SGPIO_RESERVED_MASK (0xFFFF7FF8)
#define SCU_SGICRx_GEN_BIT(name) \
SCU_GEN_BIT(SCU_SGPIO_CONTROL_SGPIO_ ## name)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R0_SHIFT (0)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R0_MASK (0x0000000F)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R1_SHIFT (4)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R1_MASK (0x000000F0)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R2_SHIFT (8)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R2_MASK (0x00000F00)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R3_SHIFT (12)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_R3_MASK (0x0000F000)
#define SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_RESERVED_MASK (0xFFFF0000)
#define SCU_SGPBRx_GEN_VAL(name, value) \
SCU_GEN_VALUE(SCU_SGPIO_PROGRAMMABLE_BLINK_REGISTER_ ## name, value)
#define SCU_SGPIO_START_DRIVE_LOWER_R0_SHIFT (0)
#define SCU_SGPIO_START_DRIVE_LOWER_R0_MASK (0x00000003)
#define SCU_SGPIO_START_DRIVE_LOWER_R1_SHIFT (4)
#define SCU_SGPIO_START_DRIVE_LOWER_R1_MASK (0x00000030)
#define SCU_SGPIO_START_DRIVE_LOWER_R2_SHIFT (8)
#define SCU_SGPIO_START_DRIVE_LOWER_R2_MASK (0x00000300)
#define SCU_SGPIO_START_DRIVE_LOWER_R3_SHIFT (12)
#define SCU_SGPIO_START_DRIVE_LOWER_R3_MASK (0x00003000)
#define SCU_SGPIO_START_DRIVE_LOWER_RESERVED_MASK (0xFFFF8888)
#define SCU_SGSDLRx_GEN_VAL(name, value) \
SCU_GEN_VALUE(SCU_SGPIO_START_DRIVE_LOWER_ ## name, value)
#define SCU_SGPIO_START_DRIVE_UPPER_R0_SHIFT (0)
#define SCU_SGPIO_START_DRIVE_UPPER_R0_MASK (0x00000003)
#define SCU_SGPIO_START_DRIVE_UPPER_R1_SHIFT (4)
#define SCU_SGPIO_START_DRIVE_UPPER_R1_MASK (0x00000030)
#define SCU_SGPIO_START_DRIVE_UPPER_R2_SHIFT (8)
#define SCU_SGPIO_START_DRIVE_UPPER_R2_MASK (0x00000300)
#define SCU_SGPIO_START_DRIVE_UPPER_R3_SHIFT (12)
#define SCU_SGPIO_START_DRIVE_UPPER_R3_MASK (0x00003000)
#define SCU_SGPIO_START_DRIVE_UPPER_RESERVED_MASK (0xFFFF8888)
#define SCU_SGSDURx_GEN_VAL(name, value) \
SCU_GEN_VALUE(SCU_SGPIO_START_DRIVE_LOWER_ ## name, value)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D0_SHIFT (0)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D0_MASK (0x00000003)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D1_SHIFT (4)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D1_MASK (0x00000030)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D2_SHIFT (8)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D2_MASK (0x00000300)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D3_SHIFT (12)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_D3_MASK (0x00003000)
#define SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_RESERVED_MASK (0xFFFF8888)
#define SCU_SGSIDLRx_GEN_VAL(name, value) \
SCU_GEN_VALUE(SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_ ## name, value)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D0_SHIFT (0)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D0_MASK (0x00000003)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D1_SHIFT (4)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D1_MASK (0x00000030)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D2_SHIFT (8)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D2_MASK (0x00000300)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D3_SHIFT (12)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_D3_MASK (0x00003000)
#define SCU_SGPIO_SERIAL_INPUT_DATA_UPPER_RESERVED_MASK (0xFFFF8888)
#define SCU_SGSIDURx_GEN_VAL(name, value) \
SCU_GEN_VALUE(SCU_SGPIO_SERIAL_INPUT_DATA_LOWER_ ## name, value)
#define SCU_SGPIO_VENDOR_SPECIFIC_CODE_SHIFT (0)
#define SCU_SGPIO_VENDOR_SPECIFIC_CODE_MASK (0x0000000F)
#define SCU_SGPIO_VENDOR_SPECIFIC_CODE_RESERVED_MASK (0xFFFFFFF0)
#define SCU_SGVSCR_GEN_VAL(value) \
SCU_GEN_VALUE(SCU_SGPIO_VENDOR_SPECIFIC_CODE ## name, value)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INPUT_DATA0_SHIFT (0)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INPUT_DATA0_MASK (0x00000003)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INVERT_INPUT_DATA0_SHIFT (2)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INVERT_INPUT_DATA0_MASK (0x00000004)
#define SCU_SGPIO_OUPUT_DATA_SELECT_JOG_ENABLE_DATA0_SHIFT (3)
#define SCU_SGPIO_OUPUT_DATA_SELECT_JOG_ENABLE_DATA0_MASK (0x00000008)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INPUT_DATA1_SHIFT (4)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INPUT_DATA1_MASK (0x00000030)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INVERT_INPUT_DATA1_SHIFT (6)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INVERT_INPUT_DATA1_MASK (0x00000040)
#define SCU_SGPIO_OUPUT_DATA_SELECT_JOG_ENABLE_DATA1_SHIFT (7)
#define SCU_SGPIO_OUPUT_DATA_SELECT_JOG_ENABLE_DATA1_MASK (0x00000080)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INPUT_DATA2_SHIFT (8)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INPUT_DATA2_MASK (0x00000300)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INVERT_INPUT_DATA2_SHIFT (10)
#define SCU_SGPIO_OUPUT_DATA_SELECT_INVERT_INPUT_DATA2_MASK (0x00000400)
#define SCU_SGPIO_OUPUT_DATA_SELECT_JOG_ENABLE_DATA2_SHIFT (11)
#define SCU_SGPIO_OUPUT_DATA_SELECT_JOG_ENABLE_DATA2_MASK (0x00000800)
#define SCU_SGPIO_OUPUT_DATA_SELECT_RESERVED_MASK (0xFFFFF000)
#define SCU_SGODSR_GEN_VAL(name, value) \
SCU_GEN_VALUE(SCU_SGPIO_OUPUT_DATA_SELECT_ ## name, value)
#define SCU_SGODSR_GEN_BIT(name) \
SCU_GEN_BIT(SCU_SGPIO_OUPUT_DATA_SELECT_ ## name)
/*
* *****************************************************************************
* * SMU Registers
@ -1529,10 +1413,12 @@ struct scu_sgpio_registers {
u32 serial_input_upper;
/* 0x0018 SGPIO_SGVSCR */
u32 vendor_specific_code;
/* 0x001C Reserved */
u32 reserved_001c;
/* 0x0020 SGPIO_SGODSR */
u32 ouput_data_select[8];
u32 output_data_select[8];
/* Remainder of memory space 256 bytes */
u32 reserved_1444_14ff[0x31];
u32 reserved_1444_14ff[0x30];
};

View File

@ -386,6 +386,18 @@ static bool is_remote_device_ready(struct isci_remote_device *idev)
}
}
/*
* called once the remote node context has transitioned to a ready
* state (after suspending RX and/or TX due to early D2H fis)
*/
static void atapi_remote_device_resume_done(void *_dev)
{
struct isci_remote_device *idev = _dev;
struct isci_request *ireq = idev->working_request;
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
}
enum sci_status sci_remote_device_event_handler(struct isci_remote_device *idev,
u32 event_code)
{
@ -432,6 +444,16 @@ enum sci_status sci_remote_device_event_handler(struct isci_remote_device *idev,
if (status != SCI_SUCCESS)
return status;
if (state == SCI_STP_DEV_ATAPI_ERROR) {
/* For ATAPI error state resume the RNC right away. */
if (scu_get_event_type(event_code) == SCU_EVENT_TYPE_RNC_SUSPEND_TX ||
scu_get_event_type(event_code) == SCU_EVENT_TYPE_RNC_SUSPEND_TX_RX) {
return sci_remote_node_context_resume(&idev->rnc,
atapi_remote_device_resume_done,
idev);
}
}
if (state == SCI_STP_DEV_IDLE) {
/* We pick up suspension events to handle specifically to this
@ -625,6 +647,7 @@ enum sci_status sci_remote_device_complete_io(struct isci_host *ihost,
case SCI_STP_DEV_CMD:
case SCI_STP_DEV_NCQ:
case SCI_STP_DEV_NCQ_ERROR:
case SCI_STP_DEV_ATAPI_ERROR:
status = common_complete_io(iport, idev, ireq);
if (status != SCI_SUCCESS)
break;
@ -1020,6 +1043,7 @@ static const struct sci_base_state sci_remote_device_state_table[] = {
[SCI_STP_DEV_NCQ_ERROR] = {
.enter_state = sci_stp_remote_device_ready_ncq_error_substate_enter,
},
[SCI_STP_DEV_ATAPI_ERROR] = { },
[SCI_STP_DEV_AWAIT_RESET] = { },
[SCI_SMP_DEV_IDLE] = {
.enter_state = sci_smp_remote_device_ready_idle_substate_enter,

View File

@ -243,6 +243,15 @@ enum sci_remote_device_states {
*/
SCI_STP_DEV_NCQ_ERROR,
/**
* This is the ATAPI error state for the STP ATAPI remote device.
* This state is entered when the ATAPI device sends an error status FIS
* without data while the device object is in the CMD state.
* A suspension event is expected in this state.
* The device object will resume right away.
*/
SCI_STP_DEV_ATAPI_ERROR,
/**
* This is the READY substate indicates the device is waiting for the RESET task
* coming to be recovered from certain hardware specific error.

View File

@ -481,7 +481,29 @@ static void sci_stp_optimized_request_construct(struct isci_request *ireq,
}
}
static void sci_atapi_construct(struct isci_request *ireq)
{
struct host_to_dev_fis *h2d_fis = &ireq->stp.cmd;
struct sas_task *task;
/* To simplify the implementation we take advantage of the
* silicon's partial acceleration of atapi protocol (dma data
* transfers), so we promote all commands to dma protocol. This
* breaks compatibility with ATA_HORKAGE_ATAPI_MOD16_DMA drives.
*/
h2d_fis->features |= ATAPI_PKT_DMA;
scu_stp_raw_request_construct_task_context(ireq);
task = isci_request_access_task(ireq);
if (task->data_dir == DMA_NONE)
task->total_xfer_len = 0;
/* clear the response so we can detect arrival of an
* unsolicited d2h fis
*/
ireq->stp.rsp.fis_type = 0;
}
static enum sci_status
sci_io_request_construct_sata(struct isci_request *ireq,
@ -491,6 +513,7 @@ sci_io_request_construct_sata(struct isci_request *ireq,
{
enum sci_status status = SCI_SUCCESS;
struct sas_task *task = isci_request_access_task(ireq);
struct domain_device *dev = ireq->target_device->domain_dev;
/* check for management protocols */
if (ireq->ttype == tmf_task) {
@ -519,6 +542,13 @@ sci_io_request_construct_sata(struct isci_request *ireq,
}
/* ATAPI */
if (dev->sata_dev.command_set == ATAPI_COMMAND_SET &&
task->ata_task.fis.command == ATA_CMD_PACKET) {
sci_atapi_construct(ireq);
return SCI_SUCCESS;
}
/* non data */
if (task->data_dir == DMA_NONE) {
scu_stp_raw_request_construct_task_context(ireq);
@ -627,7 +657,7 @@ enum sci_status sci_task_request_construct_sata(struct isci_request *ireq)
/**
* sci_req_tx_bytes - bytes transferred when reply underruns request
* @sci_req: request that was terminated early
* @ireq: request that was terminated early
*/
#define SCU_TASK_CONTEXT_SRAM 0x200000
static u32 sci_req_tx_bytes(struct isci_request *ireq)
@ -729,6 +759,10 @@ sci_io_request_terminate(struct isci_request *ireq)
case SCI_REQ_STP_SOFT_RESET_WAIT_H2D_ASSERTED:
case SCI_REQ_STP_SOFT_RESET_WAIT_H2D_DIAG:
case SCI_REQ_STP_SOFT_RESET_WAIT_D2H:
case SCI_REQ_ATAPI_WAIT_H2D:
case SCI_REQ_ATAPI_WAIT_PIO_SETUP:
case SCI_REQ_ATAPI_WAIT_D2H:
case SCI_REQ_ATAPI_WAIT_TC_COMP:
sci_change_state(&ireq->sm, SCI_REQ_ABORTING);
return SCI_SUCCESS;
case SCI_REQ_TASK_WAIT_TC_RESP:
@ -1194,8 +1228,8 @@ static enum sci_status sci_stp_request_pio_data_out_transmit_data(struct isci_re
{
struct isci_stp_request *stp_req = &ireq->stp.req;
struct scu_sgl_element_pair *sgl_pair;
enum sci_status status = SCI_SUCCESS;
struct scu_sgl_element *sgl;
enum sci_status status;
u32 offset;
u32 len = 0;
@ -1249,7 +1283,7 @@ static enum sci_status sci_stp_request_pio_data_out_transmit_data(struct isci_re
*/
static enum sci_status
sci_stp_request_pio_data_in_copy_data_buffer(struct isci_stp_request *stp_req,
u8 *data_buf, u32 len)
u8 *data_buf, u32 len)
{
struct isci_request *ireq;
u8 *src_addr;
@ -1423,6 +1457,128 @@ static enum sci_status sci_stp_request_udma_general_frame_handler(struct isci_re
return status;
}
static enum sci_status process_unsolicited_fis(struct isci_request *ireq,
u32 frame_index)
{
struct isci_host *ihost = ireq->owning_controller;
enum sci_status status;
struct dev_to_host_fis *frame_header;
u32 *frame_buffer;
status = sci_unsolicited_frame_control_get_header(&ihost->uf_control,
frame_index,
(void **)&frame_header);
if (status != SCI_SUCCESS)
return status;
if (frame_header->fis_type != FIS_REGD2H) {
dev_err(&ireq->isci_host->pdev->dev,
"%s ERROR: invalid fis type 0x%X\n",
__func__, frame_header->fis_type);
return SCI_FAILURE;
}
sci_unsolicited_frame_control_get_buffer(&ihost->uf_control,
frame_index,
(void **)&frame_buffer);
sci_controller_copy_sata_response(&ireq->stp.rsp,
(u32 *)frame_header,
frame_buffer);
/* Frame has been decoded; return it to the controller */
sci_controller_release_frame(ihost, frame_index);
return status;
}
static enum sci_status atapi_d2h_reg_frame_handler(struct isci_request *ireq,
u32 frame_index)
{
struct sas_task *task = isci_request_access_task(ireq);
enum sci_status status;
status = process_unsolicited_fis(ireq, frame_index);
if (status == SCI_SUCCESS) {
if (ireq->stp.rsp.status & ATA_ERR)
status = SCI_IO_FAILURE_RESPONSE_VALID;
} else {
status = SCI_IO_FAILURE_RESPONSE_VALID;
}
if (status != SCI_SUCCESS) {
ireq->scu_status = SCU_TASK_DONE_CHECK_RESPONSE;
ireq->sci_status = status;
} else {
ireq->scu_status = SCU_TASK_DONE_GOOD;
ireq->sci_status = SCI_SUCCESS;
}
/* the d2h uf is the end of non-data commands */
if (task->data_dir == DMA_NONE)
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
return status;
}
static void scu_atapi_reconstruct_raw_frame_task_context(struct isci_request *ireq)
{
struct ata_device *dev = sas_to_ata_dev(ireq->target_device->domain_dev);
void *atapi_cdb = ireq->ttype_ptr.io_task_ptr->ata_task.atapi_packet;
struct scu_task_context *task_context = ireq->tc;
/* fill in the SCU Task Context for a DATA fis containing CDB in Raw Frame
* type. The TC for previous Packet fis was already there, we only need to
* change the H2D fis content.
*/
memset(&ireq->stp.cmd, 0, sizeof(struct host_to_dev_fis));
memcpy(((u8 *)&ireq->stp.cmd + sizeof(u32)), atapi_cdb, ATAPI_CDB_LEN);
memset(&(task_context->type.stp), 0, sizeof(struct stp_task_context));
task_context->type.stp.fis_type = FIS_DATA;
task_context->transfer_length_bytes = dev->cdb_len;
}
static void scu_atapi_construct_task_context(struct isci_request *ireq)
{
struct ata_device *dev = sas_to_ata_dev(ireq->target_device->domain_dev);
struct sas_task *task = isci_request_access_task(ireq);
struct scu_task_context *task_context = ireq->tc;
int cdb_len = dev->cdb_len;
/* reference: SSTL 1.13.4.2
* task_type, sata_direction
*/
if (task->data_dir == DMA_TO_DEVICE) {
task_context->task_type = SCU_TASK_TYPE_PACKET_DMA_OUT;
task_context->sata_direction = 0;
} else {
/* todo: for NO_DATA command, we need to send out raw frame. */
task_context->task_type = SCU_TASK_TYPE_PACKET_DMA_IN;
task_context->sata_direction = 1;
}
memset(&task_context->type.stp, 0, sizeof(task_context->type.stp));
task_context->type.stp.fis_type = FIS_DATA;
memset(&ireq->stp.cmd, 0, sizeof(ireq->stp.cmd));
memcpy(&ireq->stp.cmd.lbal, task->ata_task.atapi_packet, cdb_len);
task_context->ssp_command_iu_length = cdb_len / sizeof(u32);
/* task phase is set to TX_CMD */
task_context->task_phase = 0x1;
/* retry counter */
task_context->stp_retry_count = 0;
/* data transfer size. */
task_context->transfer_length_bytes = task->total_xfer_len;
/* setup sgl */
sci_request_build_sgl(ireq);
}
enum sci_status
sci_io_request_frame_handler(struct isci_request *ireq,
u32 frame_index)
@ -1490,29 +1646,30 @@ sci_io_request_frame_handler(struct isci_request *ireq,
return SCI_SUCCESS;
case SCI_REQ_SMP_WAIT_RESP: {
struct smp_resp *rsp_hdr = &ireq->smp.rsp;
void *frame_header;
struct sas_task *task = isci_request_access_task(ireq);
struct scatterlist *sg = &task->smp_task.smp_resp;
void *frame_header, *kaddr;
u8 *rsp;
sci_unsolicited_frame_control_get_header(&ihost->uf_control,
frame_index,
&frame_header);
frame_index,
&frame_header);
kaddr = kmap_atomic(sg_page(sg), KM_IRQ0);
rsp = kaddr + sg->offset;
sci_swab32_cpy(rsp, frame_header, 1);
/* byte swap the header. */
word_cnt = SMP_RESP_HDR_SZ / sizeof(u32);
sci_swab32_cpy(rsp_hdr, frame_header, word_cnt);
if (rsp_hdr->frame_type == SMP_RESPONSE) {
if (rsp[0] == SMP_RESPONSE) {
void *smp_resp;
sci_unsolicited_frame_control_get_buffer(&ihost->uf_control,
frame_index,
&smp_resp);
frame_index,
&smp_resp);
word_cnt = (sizeof(struct smp_resp) - SMP_RESP_HDR_SZ) /
sizeof(u32);
sci_swab32_cpy(((u8 *) rsp_hdr) + SMP_RESP_HDR_SZ,
smp_resp, word_cnt);
word_cnt = (sg->length/4)-1;
if (word_cnt > 0)
word_cnt = min_t(unsigned int, word_cnt,
SCU_UNSOLICITED_FRAME_BUFFER_SIZE/4);
sci_swab32_cpy(rsp + 4, smp_resp, word_cnt);
ireq->scu_status = SCU_TASK_DONE_GOOD;
ireq->sci_status = SCI_SUCCESS;
@ -1528,12 +1685,13 @@ sci_io_request_frame_handler(struct isci_request *ireq,
__func__,
ireq,
frame_index,
rsp_hdr->frame_type);
rsp[0]);
ireq->scu_status = SCU_TASK_DONE_SMP_FRM_TYPE_ERR;
ireq->sci_status = SCI_FAILURE_CONTROLLER_SPECIFIC_IO_ERR;
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
}
kunmap_atomic(kaddr, KM_IRQ0);
sci_controller_release_frame(ihost, frame_index);
@ -1833,6 +1991,24 @@ sci_io_request_frame_handler(struct isci_request *ireq,
return status;
}
case SCI_REQ_ATAPI_WAIT_PIO_SETUP: {
struct sas_task *task = isci_request_access_task(ireq);
sci_controller_release_frame(ihost, frame_index);
ireq->target_device->working_request = ireq;
if (task->data_dir == DMA_NONE) {
sci_change_state(&ireq->sm, SCI_REQ_ATAPI_WAIT_TC_COMP);
scu_atapi_reconstruct_raw_frame_task_context(ireq);
} else {
sci_change_state(&ireq->sm, SCI_REQ_ATAPI_WAIT_D2H);
scu_atapi_construct_task_context(ireq);
}
sci_controller_continue_io(ireq);
return SCI_SUCCESS;
}
case SCI_REQ_ATAPI_WAIT_D2H:
return atapi_d2h_reg_frame_handler(ireq, frame_index);
case SCI_REQ_ABORTING:
/*
* TODO: Is it even possible to get an unsolicited frame in the
@ -1898,10 +2074,9 @@ static enum sci_status stp_request_udma_await_tc_event(struct isci_request *ireq
case SCU_MAKE_COMPLETION_STATUS(SCU_TASK_DONE_MAX_PLD_ERR):
case SCU_MAKE_COMPLETION_STATUS(SCU_TASK_DONE_LL_R_ERR):
case SCU_MAKE_COMPLETION_STATUS(SCU_TASK_DONE_CMD_LL_R_ERR):
case SCU_MAKE_COMPLETION_STATUS(SCU_TASK_DONE_CRC_ERR):
sci_remote_device_suspend(ireq->target_device,
SCU_EVENT_SPECIFIC(SCU_NORMALIZE_COMPLETION_STATUS(completion_code)));
/* Fall through to the default case */
/* Fall through to the default case */
default:
/* All other completion status cause the IO to be complete. */
ireq->scu_status = SCU_NORMALIZE_COMPLETION_STATUS(completion_code);
@ -1964,6 +2139,112 @@ stp_request_soft_reset_await_h2d_diagnostic_tc_event(struct isci_request *ireq,
return SCI_SUCCESS;
}
static enum sci_status atapi_raw_completion(struct isci_request *ireq, u32 completion_code,
enum sci_base_request_states next)
{
enum sci_status status = SCI_SUCCESS;
switch (SCU_GET_COMPLETION_TL_STATUS(completion_code)) {
case SCU_MAKE_COMPLETION_STATUS(SCU_TASK_DONE_GOOD):
ireq->scu_status = SCU_TASK_DONE_GOOD;
ireq->sci_status = SCI_SUCCESS;
sci_change_state(&ireq->sm, next);
break;
default:
/* All other completion status cause the IO to be complete.
* If a NAK was received, then it is up to the user to retry
* the request.
*/
ireq->scu_status = SCU_NORMALIZE_COMPLETION_STATUS(completion_code);
ireq->sci_status = SCI_FAILURE_CONTROLLER_SPECIFIC_IO_ERR;
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
break;
}
return status;
}
static enum sci_status atapi_data_tc_completion_handler(struct isci_request *ireq,
u32 completion_code)
{
struct isci_remote_device *idev = ireq->target_device;
struct dev_to_host_fis *d2h = &ireq->stp.rsp;
enum sci_status status = SCI_SUCCESS;
switch (SCU_GET_COMPLETION_TL_STATUS(completion_code)) {
case (SCU_TASK_DONE_GOOD << SCU_COMPLETION_TL_STATUS_SHIFT):
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
break;
case (SCU_TASK_DONE_UNEXP_FIS << SCU_COMPLETION_TL_STATUS_SHIFT): {
u16 len = sci_req_tx_bytes(ireq);
/* likely non-error data underrun; work around the missing
* d2h frame from the controller
*/
if (d2h->fis_type != FIS_REGD2H) {
d2h->fis_type = FIS_REGD2H;
d2h->flags = (1 << 6);
d2h->status = 0x50;
d2h->error = 0;
d2h->lbal = 0;
d2h->byte_count_low = len & 0xff;
d2h->byte_count_high = len >> 8;
d2h->device = 0xa0;
d2h->lbal_exp = 0;
d2h->lbam_exp = 0;
d2h->lbah_exp = 0;
d2h->_r_a = 0;
d2h->sector_count = 0x3;
d2h->sector_count_exp = 0;
d2h->_r_b = 0;
d2h->_r_c = 0;
d2h->_r_d = 0;
}
ireq->scu_status = SCU_TASK_DONE_GOOD;
ireq->sci_status = SCI_SUCCESS_IO_DONE_EARLY;
status = ireq->sci_status;
/* the hw will have suspended the rnc, so complete the
* request upon pending resume
*/
sci_change_state(&idev->sm, SCI_STP_DEV_ATAPI_ERROR);
break;
}
case (SCU_TASK_DONE_EXCESS_DATA << SCU_COMPLETION_TL_STATUS_SHIFT):
/* In this case, there is no UF coming after.
* complete the IO now.
*/
ireq->scu_status = SCU_TASK_DONE_GOOD;
ireq->sci_status = SCI_SUCCESS;
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
break;
default:
if (d2h->fis_type == FIS_REGD2H) {
/* UF received change the device state to ATAPI_ERROR */
status = ireq->sci_status;
sci_change_state(&idev->sm, SCI_STP_DEV_ATAPI_ERROR);
} else {
/* If receiving any non-success TC status with no UF
* received yet, then a UF for the status fis
* is coming after (XXX: suspect this is
* actually a protocol error or a bug like the
* DONE_UNEXP_FIS case)
*/
ireq->scu_status = SCU_TASK_DONE_CHECK_RESPONSE;
ireq->sci_status = SCI_FAILURE_IO_RESPONSE_VALID;
sci_change_state(&ireq->sm, SCI_REQ_ATAPI_WAIT_D2H);
}
break;
}
return status;
}
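For reference, the register values synthesized in the SCU_TASK_DONE_UNEXP_FIS branch above amount to a clean ATAPI completion, assuming standard ATA taskfile semantics: flags bit 6 is the D2H FIS Interrupt bit, status 0x50 is ATA_DRDY | ATA_DSC, device 0xa0 is the legacy device byte, and sector_count 0x3 sets the ATAPI interrupt-reason bits I/O and C/D (command complete).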
enum sci_status
sci_io_request_tc_completion(struct isci_request *ireq,
u32 completion_code)
@ -2015,6 +2296,17 @@ sci_io_request_tc_completion(struct isci_request *ireq,
return request_aborting_state_tc_event(ireq,
completion_code);
case SCI_REQ_ATAPI_WAIT_H2D:
return atapi_raw_completion(ireq, completion_code,
SCI_REQ_ATAPI_WAIT_PIO_SETUP);
case SCI_REQ_ATAPI_WAIT_TC_COMP:
return atapi_raw_completion(ireq, completion_code,
SCI_REQ_ATAPI_WAIT_D2H);
case SCI_REQ_ATAPI_WAIT_D2H:
return atapi_data_tc_completion_handler(ireq, completion_code);
default:
dev_warn(&ihost->pdev->dev,
"%s: SCIC IO Request given task completion "
@ -2421,6 +2713,8 @@ static void isci_process_stp_response(struct sas_task *task, struct dev_to_host_
*/
if (fis->status & ATA_DF)
ts->stat = SAS_PROTO_RESPONSE;
else if (fis->status & ATA_ERR)
ts->stat = SAM_STAT_CHECK_CONDITION;
else
ts->stat = SAM_STAT_GOOD;
@ -2603,18 +2897,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
status = SAM_STAT_GOOD;
set_bit(IREQ_COMPLETE_IN_TARGET, &request->flags);
if (task->task_proto == SAS_PROTOCOL_SMP) {
void *rsp = &request->smp.rsp;
dev_dbg(&ihost->pdev->dev,
"%s: SMP protocol completion\n",
__func__);
sg_copy_from_buffer(
&task->smp_task.smp_resp, 1,
rsp, sizeof(struct smp_resp));
} else if (completion_status
== SCI_IO_SUCCESS_IO_DONE_EARLY) {
if (completion_status == SCI_IO_SUCCESS_IO_DONE_EARLY) {
/* This was an SSP / STP / SATA transfer.
* There is a possibility that less data than
@ -2791,6 +3074,7 @@ static void sci_request_started_state_enter(struct sci_base_state_machine *sm)
{
struct isci_request *ireq = container_of(sm, typeof(*ireq), sm);
struct domain_device *dev = ireq->target_device->domain_dev;
enum sci_base_request_states state;
struct sas_task *task;
/* XXX as hch said always creating an internal sas_task for tmf
@ -2802,26 +3086,30 @@ static void sci_request_started_state_enter(struct sci_base_state_machine *sm)
* substates
*/
if (!task && dev->dev_type == SAS_END_DEV) {
sci_change_state(sm, SCI_REQ_TASK_WAIT_TC_COMP);
state = SCI_REQ_TASK_WAIT_TC_COMP;
} else if (!task &&
(isci_request_access_tmf(ireq)->tmf_code == isci_tmf_sata_srst_high ||
isci_request_access_tmf(ireq)->tmf_code == isci_tmf_sata_srst_low)) {
sci_change_state(sm, SCI_REQ_STP_SOFT_RESET_WAIT_H2D_ASSERTED);
state = SCI_REQ_STP_SOFT_RESET_WAIT_H2D_ASSERTED;
} else if (task && task->task_proto == SAS_PROTOCOL_SMP) {
sci_change_state(sm, SCI_REQ_SMP_WAIT_RESP);
state = SCI_REQ_SMP_WAIT_RESP;
} else if (task && sas_protocol_ata(task->task_proto) &&
!task->ata_task.use_ncq) {
u32 state;
if (task->data_dir == DMA_NONE)
if (dev->sata_dev.command_set == ATAPI_COMMAND_SET &&
task->ata_task.fis.command == ATA_CMD_PACKET) {
state = SCI_REQ_ATAPI_WAIT_H2D;
} else if (task->data_dir == DMA_NONE) {
state = SCI_REQ_STP_NON_DATA_WAIT_H2D;
else if (task->ata_task.dma_xfer)
} else if (task->ata_task.dma_xfer) {
state = SCI_REQ_STP_UDMA_WAIT_TC_COMP;
else /* PIO */
} else /* PIO */ {
state = SCI_REQ_STP_PIO_WAIT_H2D;
sci_change_state(sm, state);
}
} else {
/* SSP or NCQ are fully accelerated, no substates */
return;
}
sci_change_state(sm, state);
}
static void sci_request_completed_state_enter(struct sci_base_state_machine *sm)
@ -2913,6 +3201,10 @@ static const struct sci_base_state sci_request_state_table[] = {
[SCI_REQ_TASK_WAIT_TC_RESP] = { },
[SCI_REQ_SMP_WAIT_RESP] = { },
[SCI_REQ_SMP_WAIT_TC_COMP] = { },
[SCI_REQ_ATAPI_WAIT_H2D] = { },
[SCI_REQ_ATAPI_WAIT_PIO_SETUP] = { },
[SCI_REQ_ATAPI_WAIT_D2H] = { },
[SCI_REQ_ATAPI_WAIT_TC_COMP] = { },
[SCI_REQ_COMPLETED] = {
.enter_state = sci_request_completed_state_enter,
},

View File

@ -96,7 +96,6 @@ enum sci_request_protocol {
* to wait for another fis or if the transfer is complete. Upon
* receipt of a d2h fis this will be the status field of that fis.
* @sgl - track pio transfer progress as we iterate through the sgl
* @device_cdb_len - atapi device advertises its transfer constraints at setup
*/
struct isci_stp_request {
u32 pio_len;
@ -107,7 +106,6 @@ struct isci_stp_request {
u8 set;
u32 offset;
} sgl;
u32 device_cdb_len;
};
struct isci_request {
@ -173,9 +171,6 @@ struct isci_request {
u8 rsp_buf[SSP_RESP_IU_MAX_SIZE];
};
} ssp;
struct {
struct smp_resp rsp;
} smp;
struct {
struct isci_stp_request req;
struct host_to_dev_fis cmd;
@ -251,6 +246,32 @@ enum sci_base_request_states {
*/
SCI_REQ_STP_PIO_DATA_OUT,
/*
* While in this state the IO request object is waiting for the TC
* completion notification for the H2D Register FIS
*/
SCI_REQ_ATAPI_WAIT_H2D,
/*
* While in this state the IO request object is waiting for a
* PIO Setup frame.
*/
SCI_REQ_ATAPI_WAIT_PIO_SETUP,
/*
* A non-data IO transitions to this state after receiving the TC
* completion. While in this state the IO request object is waiting for
* the D2H status frame as a UF.
*/
SCI_REQ_ATAPI_WAIT_D2H,
/*
* When transmitting raw frames hardware reports task context completion
* after every frame submission, so in the non-accelerated case we need
* to expect the completion for the "cdb" frame.
*/
SCI_REQ_ATAPI_WAIT_TC_COMP,
/*
* The AWAIT_TC_COMPLETION sub-state indicates that the started raw
* task management request is waiting for the transmission of the

View File

@ -204,8 +204,6 @@ struct smp_req {
u8 req_data[0];
} __packed;
#define SMP_RESP_HDR_SZ 4
/*
* struct sci_sas_address - This structure depicts how a SAS address is
* represented by SCI.

View File

@ -1345,29 +1345,6 @@ static void isci_smp_task_done(struct sas_task *task)
complete(&task->completion);
}
static struct sas_task *isci_alloc_task(void)
{
struct sas_task *task = kzalloc(sizeof(*task), GFP_KERNEL);
if (task) {
INIT_LIST_HEAD(&task->list);
spin_lock_init(&task->task_state_lock);
task->task_state_flags = SAS_TASK_STATE_PENDING;
init_timer(&task->timer);
init_completion(&task->completion);
}
return task;
}
static void isci_free_task(struct isci_host *ihost, struct sas_task *task)
{
if (task) {
BUG_ON(!list_empty(&task->list));
kfree(task);
}
}
static int isci_smp_execute_task(struct isci_host *ihost,
struct domain_device *dev, void *req,
int req_size, void *resp, int resp_size)
@ -1376,7 +1353,7 @@ static int isci_smp_execute_task(struct isci_host *ihost,
struct sas_task *task = NULL;
for (retry = 0; retry < 3; retry++) {
task = isci_alloc_task();
task = sas_alloc_task(GFP_KERNEL);
if (!task)
return -ENOMEM;
@ -1439,13 +1416,13 @@ static int isci_smp_execute_task(struct isci_host *ihost,
SAS_ADDR(dev->sas_addr),
task->task_status.resp,
task->task_status.stat);
isci_free_task(ihost, task);
sas_free_task(task);
task = NULL;
}
}
ex_err:
BUG_ON(retry == 3 && task != NULL);
isci_free_task(ihost, task);
sas_free_task(task);
return res;
}

View File

@ -286,6 +286,25 @@ isci_task_set_completion_status(
task->task_status.resp = response;
task->task_status.stat = status;
switch (task->task_proto) {
case SAS_PROTOCOL_SATA:
case SAS_PROTOCOL_STP:
case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
if (task_notification_selection
== isci_perform_error_io_completion) {
/* SATA/STP I/O has its own means of scheduling device
* error handling on the normal path.
*/
task_notification_selection
= isci_perform_normal_io_completion;
}
break;
default:
break;
}
switch (task_notification_selection) {
case isci_perform_error_io_completion:

View File

@ -872,6 +872,61 @@ static void iscsi_sw_tcp_session_destroy(struct iscsi_cls_session *cls_session)
iscsi_host_free(shost);
}
static mode_t iscsi_sw_tcp_attr_is_visible(int param_type, int param)
{
switch (param_type) {
case ISCSI_HOST_PARAM:
switch (param) {
case ISCSI_HOST_PARAM_NETDEV_NAME:
case ISCSI_HOST_PARAM_HWADDRESS:
case ISCSI_HOST_PARAM_IPADDRESS:
case ISCSI_HOST_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
case ISCSI_PARAM:
switch (param) {
case ISCSI_PARAM_MAX_RECV_DLENGTH:
case ISCSI_PARAM_MAX_XMIT_DLENGTH:
case ISCSI_PARAM_HDRDGST_EN:
case ISCSI_PARAM_DATADGST_EN:
case ISCSI_PARAM_CONN_ADDRESS:
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_EXP_STATSN:
case ISCSI_PARAM_PERSISTENT_ADDRESS:
case ISCSI_PARAM_PERSISTENT_PORT:
case ISCSI_PARAM_PING_TMO:
case ISCSI_PARAM_RECV_TMO:
case ISCSI_PARAM_INITIAL_R2T_EN:
case ISCSI_PARAM_MAX_R2T:
case ISCSI_PARAM_IMM_DATA_EN:
case ISCSI_PARAM_FIRST_BURST:
case ISCSI_PARAM_MAX_BURST:
case ISCSI_PARAM_PDU_INORDER_EN:
case ISCSI_PARAM_DATASEQ_INORDER_EN:
case ISCSI_PARAM_ERL:
case ISCSI_PARAM_TARGET_NAME:
case ISCSI_PARAM_TPGT:
case ISCSI_PARAM_USERNAME:
case ISCSI_PARAM_PASSWORD:
case ISCSI_PARAM_USERNAME_IN:
case ISCSI_PARAM_PASSWORD_IN:
case ISCSI_PARAM_FAST_ABORT:
case ISCSI_PARAM_ABORT_TMO:
case ISCSI_PARAM_LU_RESET_TMO:
case ISCSI_PARAM_TGT_RESET_TMO:
case ISCSI_PARAM_IFACE_NAME:
case ISCSI_PARAM_INITIATOR_NAME:
return S_IRUGO;
default:
return 0;
}
}
return 0;
}
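This callback is what lets the param_mask and host_param_mask tables below be deleted: rather than the iscsi transport class consulting a fixed capability bitmap, it can now ask the LLD about each attribute and use the returned mode directly. A rough sketch of the consumer side (hypothetical, simplified):
	mode_t mode = transport->attr_is_visible(ISCSI_PARAM, param);
	if (mode)	/* expose the sysfs attribute with this mode */
		attr->attr.mode = mode;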
static int iscsi_sw_tcp_slave_alloc(struct scsi_device *sdev)
{
set_bit(QUEUE_FLAG_BIDI, &sdev->request_queue->queue_flags);
@ -910,33 +965,6 @@ static struct iscsi_transport iscsi_sw_tcp_transport = {
.name = "tcp",
.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST
| CAP_DATADGST,
.param_mask = ISCSI_MAX_RECV_DLENGTH |
ISCSI_MAX_XMIT_DLENGTH |
ISCSI_HDRDGST_EN |
ISCSI_DATADGST_EN |
ISCSI_INITIAL_R2T_EN |
ISCSI_MAX_R2T |
ISCSI_IMM_DATA_EN |
ISCSI_FIRST_BURST |
ISCSI_MAX_BURST |
ISCSI_PDU_INORDER_EN |
ISCSI_DATASEQ_INORDER_EN |
ISCSI_ERL |
ISCSI_CONN_PORT |
ISCSI_CONN_ADDRESS |
ISCSI_EXP_STATSN |
ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN |
ISCSI_FAST_ABORT | ISCSI_ABORT_TMO |
ISCSI_LU_RESET_TMO | ISCSI_TGT_RESET_TMO |
ISCSI_PING_TMO | ISCSI_RECV_TMO |
ISCSI_IFACE_NAME | ISCSI_INITIATOR_NAME,
.host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
ISCSI_HOST_INITIATOR_NAME |
ISCSI_HOST_NETDEV_NAME,
/* session management */
.create_session = iscsi_sw_tcp_session_create,
.destroy_session = iscsi_sw_tcp_session_destroy,
@ -944,6 +972,7 @@ static struct iscsi_transport iscsi_sw_tcp_transport = {
.create_conn = iscsi_sw_tcp_conn_create,
.bind_conn = iscsi_sw_tcp_conn_bind,
.destroy_conn = iscsi_sw_tcp_conn_destroy,
.attr_is_visible = iscsi_sw_tcp_attr_is_visible,
.set_param = iscsi_sw_tcp_conn_set_param,
.get_conn_param = iscsi_sw_tcp_conn_get_param,
.get_session_param = iscsi_session_get_param,

View File

@ -65,16 +65,15 @@ static struct workqueue_struct *fc_exch_workqueue;
* assigned range of exchanges to per cpu pool.
*/
struct fc_exch_pool {
spinlock_t lock;
struct list_head ex_list;
u16 next_index;
u16 total_exches;
/* two caches of free slots in the exch array */
u16 left;
u16 right;
spinlock_t lock;
struct list_head ex_list;
};
} ____cacheline_aligned_in_smp;
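Two changes in one here: the fields written on every exchange allocation and free are grouped together, and ____cacheline_aligned_in_smp gives each per-CPU pool its own cacheline so a CPU taking pool[i].lock no longer invalidates the line holding a neighbouring pool's hot fields (false sharing). A generic sketch of the pattern, with hypothetical names:
	struct hot_pool {
		spinlock_t lock;	/* written on every alloc/free */
		u16 count;
	} ____cacheline_aligned_in_smp;
	/* on SMP builds sizeof(struct hot_pool) is padded to a multiple of
	 * the cacheline size, so array elements never share a line */
	static struct hot_pool pools[16];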
/**
* struct fc_exch_mgr - The Exchange Manager (EM).
@ -91,13 +90,13 @@ struct fc_exch_pool {
* It manages the allocation of exchange IDs.
*/
struct fc_exch_mgr {
struct fc_exch_pool *pool;
mempool_t *ep_pool;
enum fc_class class;
struct kref kref;
u16 min_xid;
u16 max_xid;
mempool_t *ep_pool;
u16 pool_max_index;
struct fc_exch_pool *pool;
/*
* currently exchange mgr stats are updated but not used.

View File

@ -759,7 +759,6 @@ static void fc_fcp_recv(struct fc_seq *seq, struct fc_frame *fp, void *arg)
goto out;
if (fc_fcp_lock_pkt(fsp))
goto out;
fsp->last_pkt_time = jiffies;
if (fh->fh_type == FC_TYPE_BLS) {
fc_fcp_abts_resp(fsp, fp);
@ -1148,7 +1147,6 @@ static int fc_fcp_cmd_send(struct fc_lport *lport, struct fc_fcp_pkt *fsp,
rc = -1;
goto unlock;
}
fsp->last_pkt_time = jiffies;
fsp->seq_ptr = seq;
fc_fcp_pkt_hold(fsp); /* hold for fc_fcp_pkt_destroy */

View File

@ -3163,7 +3163,6 @@ int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
{
struct iscsi_conn *conn = cls_conn->dd_data;
struct iscsi_session *session = conn->session;
uint32_t value;
switch(param) {
case ISCSI_PARAM_FAST_ABORT:
@ -3220,14 +3219,6 @@ int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
case ISCSI_PARAM_ERL:
sscanf(buf, "%d", &session->erl);
break;
case ISCSI_PARAM_IFMARKER_EN:
sscanf(buf, "%d", &value);
BUG_ON(value);
break;
case ISCSI_PARAM_OFMARKER_EN:
sscanf(buf, "%d", &value);
BUG_ON(value);
break;
case ISCSI_PARAM_EXP_STATSN:
sscanf(buf, "%u", &conn->exp_statsn);
break;

View File

@ -219,17 +219,20 @@ out_err2:
/* ---------- Device registration and unregistration ---------- */
static inline void sas_unregister_common_dev(struct domain_device *dev)
static void sas_unregister_common_dev(struct asd_sas_port *port, struct domain_device *dev)
{
sas_notify_lldd_dev_gone(dev);
if (!dev->parent)
dev->port->port_dev = NULL;
else
list_del_init(&dev->siblings);
spin_lock_irq(&port->dev_list_lock);
list_del_init(&dev->dev_list_node);
spin_unlock_irq(&port->dev_list_lock);
}
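Threading the port through here lets the teardown path take port->dev_list_lock around the list_del_init(), serializing removal from port->dev_list against readers that walk the list under the same lock.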
void sas_unregister_dev(struct domain_device *dev)
void sas_unregister_dev(struct asd_sas_port *port, struct domain_device *dev)
{
if (dev->rphy) {
sas_remove_children(&dev->rphy->dev);
@ -241,15 +244,15 @@ void sas_unregister_dev(struct domain_device *dev)
kfree(dev->ex_dev.ex_phy);
dev->ex_dev.ex_phy = NULL;
}
sas_unregister_common_dev(dev);
sas_unregister_common_dev(port, dev);
}
void sas_unregister_domain_devices(struct asd_sas_port *port)
{
struct domain_device *dev, *n;
list_for_each_entry_safe_reverse(dev,n,&port->dev_list,dev_list_node)
sas_unregister_dev(dev);
list_for_each_entry_safe_reverse(dev, n, &port->dev_list, dev_list_node)
sas_unregister_dev(port, dev);
port->port->rphy = NULL;

View File

@ -199,6 +199,8 @@ static void sas_set_ex_phy(struct domain_device *dev, int phy_id,
phy->virtual = dr->virtual;
phy->last_da_index = -1;
phy->phy->identify.sas_address = SAS_ADDR(phy->attached_sas_addr);
phy->phy->identify.device_type = phy->attached_dev_type;
phy->phy->identify.initiator_port_protocols = phy->attached_iproto;
phy->phy->identify.target_port_protocols = phy->attached_tproto;
phy->phy->identify.phy_identifier = phy_id;
@ -329,6 +331,7 @@ static void ex_assign_report_general(struct domain_device *dev,
dev->ex_dev.ex_change_count = be16_to_cpu(rg->change_count);
dev->ex_dev.max_route_indexes = be16_to_cpu(rg->route_indexes);
dev->ex_dev.num_phys = min(rg->num_phys, (u8)MAX_EXPANDER_PHYS);
dev->ex_dev.t2t_supp = rg->t2t_supp;
dev->ex_dev.conf_route_table = rg->conf_route_table;
dev->ex_dev.configuring = rg->configuring;
memcpy(dev->ex_dev.enclosure_logical_id, rg->enclosure_logical_id, 8);
@ -751,7 +754,10 @@ static struct domain_device *sas_ex_discover_end_dev(
out_list_del:
sas_rphy_free(child->rphy);
child->rphy = NULL;
spin_lock_irq(&parent->port->dev_list_lock);
list_del(&child->dev_list_node);
spin_unlock_irq(&parent->port->dev_list_lock);
out_free:
sas_port_delete(phy->port);
out_err:
@ -1133,15 +1139,17 @@ static void sas_print_parent_topology_bug(struct domain_device *child,
};
struct domain_device *parent = child->parent;
sas_printk("%s ex %016llx phy 0x%x <--> %s ex %016llx phy 0x%x "
"has %c:%c routing link!\n",
sas_printk("%s ex %016llx (T2T supp:%d) phy 0x%x <--> %s ex %016llx "
"(T2T supp:%d) phy 0x%x has %c:%c routing link!\n",
ex_type[parent->dev_type],
SAS_ADDR(parent->sas_addr),
parent->ex_dev.t2t_supp,
parent_phy->phy_id,
ex_type[child->dev_type],
SAS_ADDR(child->sas_addr),
child->ex_dev.t2t_supp,
child_phy->phy_id,
ra_char[parent_phy->routing_attr],
@ -1238,10 +1246,15 @@ static int sas_check_parent_topology(struct domain_device *child)
sas_print_parent_topology_bug(child, parent_phy, child_phy);
res = -ENODEV;
}
} else if (parent_phy->routing_attr == TABLE_ROUTING &&
child_phy->routing_attr != SUBTRACTIVE_ROUTING) {
sas_print_parent_topology_bug(child, parent_phy, child_phy);
res = -ENODEV;
} else if (parent_phy->routing_attr == TABLE_ROUTING) {
if (child_phy->routing_attr == SUBTRACTIVE_ROUTING ||
(child_phy->routing_attr == TABLE_ROUTING &&
child_ex->t2t_supp && parent_ex->t2t_supp)) {
/* All good */;
} else {
sas_print_parent_topology_bug(child, parent_phy, child_phy);
res = -ENODEV;
}
}
break;
case FANOUT_DEV:
@ -1729,7 +1742,7 @@ out:
return res;
}
static void sas_unregister_ex_tree(struct domain_device *dev)
static void sas_unregister_ex_tree(struct asd_sas_port *port, struct domain_device *dev)
{
struct expander_device *ex = &dev->ex_dev;
struct domain_device *child, *n;
@ -1738,11 +1751,11 @@ static void sas_unregister_ex_tree(struct domain_device *dev)
child->gone = 1;
if (child->dev_type == EDGE_DEV ||
child->dev_type == FANOUT_DEV)
sas_unregister_ex_tree(child);
sas_unregister_ex_tree(port, child);
else
sas_unregister_dev(child);
sas_unregister_dev(port, child);
}
sas_unregister_dev(dev);
sas_unregister_dev(port, dev);
}
static void sas_unregister_devs_sas_addr(struct domain_device *parent,
@ -1759,9 +1772,9 @@ static void sas_unregister_devs_sas_addr(struct domain_device *parent,
child->gone = 1;
if (child->dev_type == EDGE_DEV ||
child->dev_type == FANOUT_DEV)
sas_unregister_ex_tree(child);
sas_unregister_ex_tree(parent->port, child);
else
sas_unregister_dev(child);
sas_unregister_dev(parent->port, child);
break;
}
}

View File

@ -51,6 +51,91 @@ static void sas_host_smp_discover(struct sas_ha_struct *sas_ha, u8 *resp_data,
resp_data[15] = rphy->identify.target_port_protocols;
}
/**
* to_sas_gpio_gp_bit - given the gpio frame data, find the byte/bit position of 'od'
* @od: od bit to find
* @data: incoming bitstream (from frame)
* @index: requested data register index (from frame)
* @count: total number of registers in the bitstream (from frame)
* @bit: bit position of 'od' in the returned byte
*
* returns NULL if 'od' is not in 'data'
*
* From SFF-8485 v0.7:
* "In GPIO_TX[1], bit 0 of byte 3 contains the first bit (i.e., OD0.0)
* and bit 7 of byte 0 contains the 32nd bit (i.e., OD10.1).
*
* In GPIO_TX[2], bit 0 of byte 3 contains the 33rd bit (i.e., OD10.2)
* and bit 7 of byte 0 contains the 64th bit (i.e., OD21.0)."
*
* The general-purpose (raw-bitstream) RX registers have the same layout
* although 'od' is renamed 'id' for 'input data'.
*
* SFF-8489 defines the behavior of the LEDs in response to the 'od' values.
*/
static u8 *to_sas_gpio_gp_bit(unsigned int od, u8 *data, u8 index, u8 count, u8 *bit)
{
unsigned int reg;
u8 byte;
/* gp registers start at index 1 */
if (index == 0)
return NULL;
index--; /* make index 0-based */
if (od < index * 32)
return NULL;
od -= index * 32;
reg = od >> 5;
if (reg >= count)
return NULL;
od &= (1 << 5) - 1;
byte = 3 - (od >> 3);
*bit = od & ((1 << 3) - 1);
return &data[reg * 4 + byte];
}
int try_test_sas_gpio_gp_bit(unsigned int od, u8 *data, u8 index, u8 count)
{
u8 *byte;
u8 bit;
byte = to_sas_gpio_gp_bit(od, data, index, count, &bit);
if (!byte)
return -1;
return (*byte >> bit) & 1;
}
EXPORT_SYMBOL(try_test_sas_gpio_gp_bit);
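As a quick sanity check on the byte/bit math above, here is a worked example (values chosen for illustration, not taken from the patch):
/*
 * Example: od = 34 (OD11.1, the 35th bit), index = 1, count = 2.
 *	index--			=> index = 0, so od is unchanged
 *	reg  = 34 >> 5		=> 1	(second GP register)
 *	od  &= (1 << 5) - 1	=> 2
 *	byte = 3 - (2 >> 3)	=> 3
 *	*bit = 2 & 7		=> 2
 * The bit therefore lives at data[1 * 4 + 3], bit 2 -- consistent with
 * SFF-8485's "bit 0 of byte 3 [of GPIO_TX[2]] contains the 33rd bit
 * (i.e., OD10.2)", two bit positions further on.
 */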
static int sas_host_smp_write_gpio(struct sas_ha_struct *sas_ha, u8 *resp_data,
u8 reg_type, u8 reg_index, u8 reg_count,
u8 *req_data)
{
struct sas_internal *i = to_sas_internal(sas_ha->core.shost->transportt);
int written;
if (i->dft->lldd_write_gpio == NULL) {
resp_data[2] = SMP_RESP_FUNC_UNK;
return 0;
}
written = i->dft->lldd_write_gpio(sas_ha, reg_type, reg_index,
reg_count, req_data);
if (written < 0) {
resp_data[2] = SMP_RESP_FUNC_FAILED;
written = 0;
} else
resp_data[2] = SMP_RESP_FUNC_ACC;
return written;
}
static void sas_report_phy_sata(struct sas_ha_struct *sas_ha, u8 *resp_data,
u8 phy_id)
{
@ -230,9 +315,23 @@ int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
/* Can't implement; hosts have no routes */
break;
case SMP_WRITE_GPIO_REG:
/* FIXME: need GPIO support in the transport class */
case SMP_WRITE_GPIO_REG: {
/* SFF-8485 v0.7 */
const int base_frame_size = 11;
int to_write = req_data[4];
if (blk_rq_bytes(req) < base_frame_size + to_write * 4 ||
req->resid_len < base_frame_size + to_write * 4) {
resp_data[2] = SMP_RESP_INV_FRM_LEN;
break;
}
to_write = sas_host_smp_write_gpio(sas_ha, resp_data, req_data[2],
req_data[3], to_write, &req_data[8]);
req->resid_len -= base_frame_size + to_write * 4;
rsp->resid_len -= 8;
break;
}
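The case above accepts the frame only when both the request size and the residual cover the 11-byte base frame plus 4 bytes per register to be written. A minimal restatement of that rule as a hypothetical helper (not part of the patch):
static inline int smp_write_gpio_frame_len_ok(unsigned int frame_bytes,
					      u8 reg_count)
{
	/* SFF-8485 base frame (11 bytes) plus reg_count 4-byte registers */
	return frame_bytes >= 11 + reg_count * 4;
}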
case SMP_CONF_ROUTE_INFO:
/* Can't implement; hosts have no routes */

View File

@ -37,7 +37,32 @@
#include "../scsi_sas_internal.h"
struct kmem_cache *sas_task_cache;
static struct kmem_cache *sas_task_cache;
struct sas_task *sas_alloc_task(gfp_t flags)
{
struct sas_task *task = kmem_cache_zalloc(sas_task_cache, flags);
if (task) {
INIT_LIST_HEAD(&task->list);
spin_lock_init(&task->task_state_lock);
task->task_state_flags = SAS_TASK_STATE_PENDING;
init_timer(&task->timer);
init_completion(&task->completion);
}
return task;
}
EXPORT_SYMBOL_GPL(sas_alloc_task);
void sas_free_task(struct sas_task *task)
{
if (task) {
BUG_ON(!list_empty(&task->list));
kmem_cache_free(sas_task_cache, task);
}
}
EXPORT_SYMBOL_GPL(sas_free_task);
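A minimal usage sketch for the newly exported pair (the completion callback and the LLDD hand-off are illustrative assumptions, not part of this patch):
struct sas_task *task = sas_alloc_task(GFP_KERNEL);

if (!task)
	return -ENOMEM;
task->dev = dev;			/* bind to a domain_device */
task->task_done = my_task_done;		/* hypothetical completion fn */
/* ... submit via the LLDD's lldd_execute_task() ... */
sas_free_task(task);			/* after the task has completed */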
/*------------ SAS addr hash -----------*/
void sas_hash_addr(u8 *hashed, const u8 *sas_addr)
@ -152,10 +177,15 @@ int sas_unregister_ha(struct sas_ha_struct *sas_ha)
static int sas_get_linkerrors(struct sas_phy *phy)
{
if (scsi_is_sas_phy_local(phy))
/* FIXME: we have no local phy stats
* gathering at this time */
return -EINVAL;
if (scsi_is_sas_phy_local(phy)) {
struct Scsi_Host *shost = dev_to_shost(phy->dev.parent);
struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
struct asd_sas_phy *asd_phy = sas_ha->sas_phy[phy->number];
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
return i->dft->lldd_control_phy(asd_phy, PHY_FUNC_GET_EVENTS, NULL);
}
return sas_smp_get_phy_events(phy);
}
@ -293,8 +323,7 @@ EXPORT_SYMBOL_GPL(sas_domain_release_transport);
static int __init sas_class_init(void)
{
sas_task_cache = kmem_cache_create("sas_task", sizeof(struct sas_task),
0, SLAB_HWCACHE_ALIGN, NULL);
sas_task_cache = KMEM_CACHE(sas_task, SLAB_HWCACHE_ALIGN);
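/* KMEM_CACHE() is shorthand for the kmem_cache_create() call above: it
 * derives the cache name and size from the struct itself, so this is
 * equivalent apart from passing the struct's natural alignment
 * (__alignof__) instead of 0. */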
if (!sas_task_cache)
return -ENOMEM;

View File

@ -182,78 +182,55 @@ int sas_queue_up(struct sas_task *task)
return 0;
}
/**
* sas_queuecommand -- Enqueue a command for processing
* @parameters: See SCSI Core documentation
*
* Note: XXX: Remove the host unlock/lock pair when SCSI Core can
* call us without holding an IRQ spinlock...
*/
static int sas_queuecommand_lck(struct scsi_cmnd *cmd,
void (*scsi_done)(struct scsi_cmnd *))
__releases(host->host_lock)
__acquires(dev->sata_dev.ap->lock)
__releases(dev->sata_dev.ap->lock)
__acquires(host->host_lock)
int sas_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
{
int res = 0;
struct domain_device *dev = cmd_to_domain_dev(cmd);
struct Scsi_Host *host = cmd->device->host;
struct sas_internal *i = to_sas_internal(host->transportt);
struct domain_device *dev = cmd_to_domain_dev(cmd);
struct sas_ha_struct *sas_ha = dev->port->ha;
struct sas_task *task;
int res = 0;
spin_unlock_irq(host->host_lock);
{
struct sas_ha_struct *sas_ha = dev->port->ha;
struct sas_task *task;
/* If the device fell off, no sense in issuing commands */
if (dev->gone) {
cmd->result = DID_BAD_TARGET << 16;
scsi_done(cmd);
goto out;
}
if (dev_is_sata(dev)) {
unsigned long flags;
spin_lock_irqsave(dev->sata_dev.ap->lock, flags);
res = ata_sas_queuecmd(cmd, dev->sata_dev.ap);
spin_unlock_irqrestore(dev->sata_dev.ap->lock, flags);
goto out;
}
res = -ENOMEM;
task = sas_create_task(cmd, dev, GFP_ATOMIC);
if (!task)
goto out;
cmd->scsi_done = scsi_done;
/* Queue up, Direct Mode or Task Collector Mode. */
if (sas_ha->lldd_max_execute_num < 2)
res = i->dft->lldd_execute_task(task, 1, GFP_ATOMIC);
else
res = sas_queue_up(task);
/* Examine */
if (res) {
SAS_DPRINTK("lldd_execute_task returned: %d\n", res);
ASSIGN_SAS_TASK(cmd, NULL);
sas_free_task(task);
if (res == -SAS_QUEUE_FULL) {
cmd->result = DID_SOFT_ERROR << 16; /* retry */
res = 0;
scsi_done(cmd);
}
goto out;
}
/* If the device fell off, no sense in issuing commands */
if (dev->gone) {
cmd->result = DID_BAD_TARGET << 16;
goto out_done;
}
out:
spin_lock_irq(host->host_lock);
return res;
}
DEF_SCSI_QCMD(sas_queuecommand)
if (dev_is_sata(dev)) {
unsigned long flags;
spin_lock_irqsave(dev->sata_dev.ap->lock, flags);
res = ata_sas_queuecmd(cmd, dev->sata_dev.ap);
spin_unlock_irqrestore(dev->sata_dev.ap->lock, flags);
return res;
}
task = sas_create_task(cmd, dev, GFP_ATOMIC);
if (!task)
return SCSI_MLQUEUE_HOST_BUSY;
/* Queue up, Direct Mode or Task Collector Mode. */
if (sas_ha->lldd_max_execute_num < 2)
res = i->dft->lldd_execute_task(task, 1, GFP_ATOMIC);
else
res = sas_queue_up(task);
if (res)
goto out_free_task;
return 0;
out_free_task:
SAS_DPRINTK("lldd_execute_task returned: %d\n", res);
ASSIGN_SAS_TASK(cmd, NULL);
sas_free_task(task);
if (res == -SAS_QUEUE_FULL)
cmd->result = DID_SOFT_ERROR << 16; /* retry */
else
cmd->result = DID_ERROR << 16;
out_done:
cmd->scsi_done(cmd);
return 0;
}
static void sas_eh_finish_cmd(struct scsi_cmnd *cmd)
{
@ -784,8 +761,7 @@ int sas_target_alloc(struct scsi_target *starget)
return 0;
}
#define SAS_DEF_QD 32
#define SAS_MAX_QD 64
#define SAS_DEF_QD 256
int sas_slave_configure(struct scsi_device *scsi_dev)
{
@ -825,34 +801,41 @@ void sas_slave_destroy(struct scsi_device *scsi_dev)
struct domain_device *dev = sdev_to_domain_dev(scsi_dev);
if (dev_is_sata(dev))
dev->sata_dev.ap->link.device[0].class = ATA_DEV_NONE;
sas_to_ata_dev(dev)->class = ATA_DEV_NONE;
}
int sas_change_queue_depth(struct scsi_device *scsi_dev, int new_depth,
int reason)
int sas_change_queue_depth(struct scsi_device *sdev, int depth, int reason)
{
int res = min(new_depth, SAS_MAX_QD);
struct domain_device *dev = sdev_to_domain_dev(sdev);
if (reason != SCSI_QDEPTH_DEFAULT)
if (dev_is_sata(dev))
return __ata_change_queue_depth(dev->sata_dev.ap, sdev, depth,
reason);
switch (reason) {
case SCSI_QDEPTH_DEFAULT:
case SCSI_QDEPTH_RAMP_UP:
if (!sdev->tagged_supported)
depth = 1;
scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev), depth);
break;
case SCSI_QDEPTH_QFULL:
scsi_track_queue_full(sdev, depth);
break;
default:
return -EOPNOTSUPP;
if (scsi_dev->tagged_supported)
scsi_adjust_queue_depth(scsi_dev, scsi_get_tag_type(scsi_dev),
res);
else {
struct domain_device *dev = sdev_to_domain_dev(scsi_dev);
sas_printk("device %llx LUN %x queue depth changed to 1\n",
SAS_ADDR(dev->sas_addr),
scsi_dev->lun);
scsi_adjust_queue_depth(scsi_dev, 0, 1);
res = 1;
}
return res;
return depth;
}
int sas_change_queue_type(struct scsi_device *scsi_dev, int qt)
{
struct domain_device *dev = sdev_to_domain_dev(scsi_dev);
if (dev_is_sata(dev))
return -EINVAL;
if (!scsi_dev->tagged_supported)
return 0;

View File

@ -846,8 +846,24 @@ struct lpfc_hba {
struct dentry *debug_hbqinfo;
struct dentry *debug_dumpHostSlim;
struct dentry *debug_dumpHBASlim;
struct dentry *debug_dumpData; /* BlockGuard BPL*/
struct dentry *debug_dumpDif; /* BlockGuard BPL*/
struct dentry *debug_dumpData; /* BlockGuard BPL */
struct dentry *debug_dumpDif; /* BlockGuard BPL */
struct dentry *debug_InjErrLBA; /* LBA to inject errors at */
struct dentry *debug_writeGuard; /* inject write guard_tag errors */
struct dentry *debug_writeApp; /* inject write app_tag errors */
struct dentry *debug_writeRef; /* inject write ref_tag errors */
struct dentry *debug_readApp; /* inject read app_tag errors */
struct dentry *debug_readRef; /* inject read ref_tag errors */
/* T10 DIF error injection */
uint32_t lpfc_injerr_wgrd_cnt;
uint32_t lpfc_injerr_wapp_cnt;
uint32_t lpfc_injerr_wref_cnt;
uint32_t lpfc_injerr_rapp_cnt;
uint32_t lpfc_injerr_rref_cnt;
sector_t lpfc_injerr_lba;
#define LPFC_INJERR_LBA_OFF (sector_t)0xffffffffffffffff
struct dentry *debug_slow_ring_trc;
struct lpfc_debugfs_trc *slow_ring_trc;
atomic_t slow_ring_trc_cnt;

View File

@ -52,6 +52,13 @@
#define LPFC_MIN_DEVLOSS_TMO 1
#define LPFC_MAX_DEVLOSS_TMO 255
/*
* Write key size should be a multiple of 4. If the write key is changed,
* make sure that the library write key is also changed.
*/
#define LPFC_REG_WRITE_KEY_SIZE 4
#define LPFC_REG_WRITE_KEY "EMLX"
/**
* lpfc_jedec_to_ascii - Hex to ASCII converter according to JEDEC rules
* @incr: integer to convert.
@ -693,7 +700,7 @@ lpfc_selective_reset(struct lpfc_hba *phba)
int rc;
if (!phba->cfg_enable_hba_reset)
return -EIO;
return -EACCES;
status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
@ -742,9 +749,11 @@ lpfc_issue_reset(struct device *dev, struct device_attribute *attr,
struct Scsi_Host *shost = class_to_shost(dev);
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba;
int status = -EINVAL;
if (!phba->cfg_enable_hba_reset)
return -EACCES;
if (strncmp(buf, "selective", sizeof("selective") - 1) == 0)
status = phba->lpfc_selective_reset(phba);
@ -765,16 +774,21 @@ lpfc_issue_reset(struct device *dev, struct device_attribute *attr,
* Returns:
* zero for success
**/
static int
int
lpfc_sli4_pdev_status_reg_wait(struct lpfc_hba *phba)
{
struct lpfc_register portstat_reg;
struct lpfc_register portstat_reg = {0};
int i;
msleep(100);
lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr,
&portstat_reg.word0);
/* verify if privileged for the requested operation */
if (!bf_get(lpfc_sliport_status_rn, &portstat_reg) &&
!bf_get(lpfc_sliport_status_err, &portstat_reg))
return -EPERM;
/* wait for the SLI port firmware to become ready after firmware reset */
for (i = 0; i < LPFC_FW_RESET_MAXIMUM_WAIT_10MS_CNT; i++) {
msleep(10);
@ -816,16 +830,13 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
int rc;
if (!phba->cfg_enable_hba_reset)
return -EIO;
return -EACCES;
if ((phba->sli_rev < LPFC_SLI_REV4) ||
(bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) !=
LPFC_SLI_INTF_IF_TYPE_2))
return -EPERM;
if (!pdev->is_physfn)
return -EPERM;
/* Disable SR-IOV virtual functions if enabled */
if (phba->cfg_sriov_nr_virtfn) {
pci_disable_sriov(pdev);
@ -858,7 +869,7 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
rc = lpfc_sli4_pdev_status_reg_wait(phba);
if (rc)
return -EIO;
return rc;
init_completion(&online_compl);
rc = lpfc_workq_post_event(phba, &status, &online_compl,
@ -984,7 +995,7 @@ lpfc_board_mode_store(struct device *dev, struct device_attribute *attr,
if (!status)
return strlen(buf);
else
return -EIO;
return status;
}
/**
@ -3885,18 +3896,23 @@ sysfs_ctlreg_write(struct file *filp, struct kobject *kobj,
if ((off + count) > FF_REG_AREA_SIZE)
return -ERANGE;
if (count == 0) return 0;
if (count <= LPFC_REG_WRITE_KEY_SIZE)
return 0;
if (off % 4 || count % 4 || (unsigned long)buf % 4)
return -EINVAL;
if (!(vport->fc_flag & FC_OFFLINE_MODE)) {
/* This is to protect HBA registers from accidental writes. */
if (memcmp(buf, LPFC_REG_WRITE_KEY, LPFC_REG_WRITE_KEY_SIZE))
return -EINVAL;
if (!(vport->fc_flag & FC_OFFLINE_MODE))
return -EPERM;
}
spin_lock_irq(&phba->hbalock);
for (buf_off = 0; buf_off < count; buf_off += sizeof(uint32_t))
writel(*((uint32_t *)(buf + buf_off)),
for (buf_off = 0; buf_off < count - LPFC_REG_WRITE_KEY_SIZE;
buf_off += sizeof(uint32_t))
writel(*((uint32_t *)(buf + buf_off + LPFC_REG_WRITE_KEY_SIZE)),
phba->ctrl_regs_memmap_p + off + buf_off);
spin_unlock_irq(&phba->hbalock);
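With this change, a write to the ctlreg sysfs attribute must lead with the 4-byte "EMLX" key; only the bytes after the key reach the registers. A hypothetical userspace sketch (file descriptor setup and error handling elided; not part of the patch):
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static int write_ctlreg(int fd, off_t off, uint32_t val)
{
	char buf[4 + sizeof(uint32_t)];

	memcpy(buf, "EMLX", 4);			/* LPFC_REG_WRITE_KEY */
	memcpy(buf + 4, &val, sizeof(val));	/* register payload */
	return pwrite(fd, buf, sizeof(buf), off) == (ssize_t)sizeof(buf)
		? 0 : -1;
}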
@ -4097,8 +4113,10 @@ sysfs_mbox_read(struct file *filp, struct kobject *kobj,
struct Scsi_Host *shost = class_to_shost(dev);
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba;
int rc;
LPFC_MBOXQ_t *mboxq;
MAILBOX_t *pmb;
uint32_t mbox_tmo;
int rc;
if (off > MAILBOX_CMD_SIZE)
return -ERANGE;
@ -4123,7 +4141,8 @@ sysfs_mbox_read(struct file *filp, struct kobject *kobj,
if (off == 0 &&
phba->sysfs_mbox.state == SMBOX_WRITING &&
phba->sysfs_mbox.offset >= 2 * sizeof(uint32_t)) {
pmb = &phba->sysfs_mbox.mbox->u.mb;
mboxq = (LPFC_MBOXQ_t *)&phba->sysfs_mbox.mbox;
pmb = &mboxq->u.mb;
switch (pmb->mbxCommand) {
/* Offline only */
case MBX_INIT_LINK:
@ -4233,9 +4252,8 @@ sysfs_mbox_read(struct file *filp, struct kobject *kobj,
} else {
spin_unlock_irq(&phba->hbalock);
rc = lpfc_sli_issue_mbox_wait (phba,
phba->sysfs_mbox.mbox,
lpfc_mbox_tmo_val(phba, pmb->mbxCommand) * HZ);
mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
spin_lock_irq(&phba->hbalock);
}
@ -4480,9 +4498,10 @@ lpfc_get_host_fabric_name (struct Scsi_Host *shost)
spin_lock_irq(shost->host_lock);
if ((vport->fc_flag & FC_FABRIC) ||
((phba->fc_topology == LPFC_TOPOLOGY_LOOP) &&
(vport->fc_flag & FC_PUBLIC_LOOP)))
if ((vport->port_state > LPFC_FLOGI) &&
((vport->fc_flag & FC_FABRIC) ||
((phba->fc_topology == LPFC_TOPOLOGY_LOOP) &&
(vport->fc_flag & FC_PUBLIC_LOOP))))
node_name = wwn_to_u64(phba->fc_fabparam.nodeName.u.wwn);
else
/* fabric is local port if there is no F/FL_Port */
@ -4555,9 +4574,17 @@ lpfc_get_stats(struct Scsi_Host *shost)
memset(hs, 0, sizeof (struct fc_host_statistics));
hs->tx_frames = pmb->un.varRdStatus.xmitFrameCnt;
hs->tx_words = (pmb->un.varRdStatus.xmitByteCnt * 256);
/*
 * The MBX_READ_STATUS returns tx_k_bytes which has to be
 * converted to words (1 Kbyte = 256 four-byte words)
 */
hs->tx_words = (uint64_t)
((uint64_t)pmb->un.varRdStatus.xmitByteCnt
* (uint64_t)256);
hs->rx_frames = pmb->un.varRdStatus.rcvFrameCnt;
hs->rx_words = (pmb->un.varRdStatus.rcvByteCnt * 256);
hs->rx_words = (uint64_t)
((uint64_t)pmb->un.varRdStatus.rcvByteCnt
* (uint64_t)256);
memset(pmboxq, 0, sizeof (LPFC_MBOXQ_t));
pmb->mbxCommand = MBX_READ_LNK_STAT;

View File

@ -209,7 +209,7 @@ void __lpfc_mbox_cmpl_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbox_cmpl_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_mbox_cmd_check(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_mbox_dev_check(struct lpfc_hba *);
int lpfc_mbox_tmo_val(struct lpfc_hba *, int);
int lpfc_mbox_tmo_val(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_init_vfi(struct lpfcMboxq *, struct lpfc_vport *);
void lpfc_reg_vfi(struct lpfcMboxq *, struct lpfc_vport *, dma_addr_t);
void lpfc_init_vpi(struct lpfc_hba *, struct lpfcMboxq *, uint16_t);
@ -451,3 +451,5 @@ int lpfc_wr_object(struct lpfc_hba *, struct list_head *, uint32_t, uint32_t *);
/* functions to support SR-IOV */
int lpfc_sli_probe_sriov_nr_virtfn(struct lpfc_hba *, int);
uint16_t lpfc_sli_sriov_nr_virtfn_get(struct lpfc_hba *);
int lpfc_sli4_queue_create(struct lpfc_hba *);
void lpfc_sli4_queue_destroy(struct lpfc_hba *);

View File

@ -1856,6 +1856,9 @@ lpfc_decode_firmware_rev(struct lpfc_hba *phba, char *fwrevision, int flag)
case 2:
c = 'B';
break;
case 3:
c = 'X';
break;
default:
c = 0;
break;

View File

@ -996,6 +996,85 @@ lpfc_debugfs_dumpDataDif_write(struct file *file, const char __user *buf,
return nbytes;
}
static int
lpfc_debugfs_dif_err_open(struct inode *inode, struct file *file)
{
file->private_data = inode->i_private;
return 0;
}
static ssize_t
lpfc_debugfs_dif_err_read(struct file *file, char __user *buf,
size_t nbytes, loff_t *ppos)
{
struct dentry *dent = file->f_dentry;
struct lpfc_hba *phba = file->private_data;
char cbuf[16];
int cnt = 0;
if (dent == phba->debug_writeGuard)
cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_wgrd_cnt);
else if (dent == phba->debug_writeApp)
cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_wapp_cnt);
else if (dent == phba->debug_writeRef)
cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_wref_cnt);
else if (dent == phba->debug_readApp)
cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_rapp_cnt);
else if (dent == phba->debug_readRef)
cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_rref_cnt);
else if (dent == phba->debug_InjErrLBA)
cnt = snprintf(cbuf, 16, "0x%lx\n",
(unsigned long) phba->lpfc_injerr_lba);
else
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0547 Unknown debugfs error injection entry\n");
return simple_read_from_buffer(buf, nbytes, ppos, &cbuf, cnt);
}
static ssize_t
lpfc_debugfs_dif_err_write(struct file *file, const char __user *buf,
size_t nbytes, loff_t *ppos)
{
struct dentry *dent = file->f_dentry;
struct lpfc_hba *phba = file->private_data;
char dstbuf[32];
unsigned long tmp;
int size;
memset(dstbuf, 0, 32);
size = (nbytes < 32) ? nbytes : 32;
if (copy_from_user(dstbuf, buf, size))
return 0;
if (strict_strtoul(dstbuf, 0, &tmp))
return 0;
if (dent == phba->debug_writeGuard)
phba->lpfc_injerr_wgrd_cnt = (uint32_t)tmp;
else if (dent == phba->debug_writeApp)
phba->lpfc_injerr_wapp_cnt = (uint32_t)tmp;
else if (dent == phba->debug_writeRef)
phba->lpfc_injerr_wref_cnt = (uint32_t)tmp;
else if (dent == phba->debug_readApp)
phba->lpfc_injerr_rapp_cnt = (uint32_t)tmp;
else if (dent == phba->debug_readRef)
phba->lpfc_injerr_rref_cnt = (uint32_t)tmp;
else if (dent == phba->debug_InjErrLBA)
phba->lpfc_injerr_lba = (sector_t)tmp;
else
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0548 Unknown debugfs error injection entry\n");
return nbytes;
}
static int
lpfc_debugfs_dif_err_release(struct inode *inode, struct file *file)
{
return 0;
}
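A hypothetical usage sketch for these handlers (the debugfs location is assumed from the usual lpfc layout and is not spelled out in this patch):
/*
 *	echo 0x1234 > <debugfs>/lpfc/<hba>/InjErrLBA
 *	echo 1      > <debugfs>/lpfc/<hba>/writeRefInjErr
 *
 * would arm one write ref_tag error at LBA 0x1234. Reads return the
 * armed LBA or the remaining count, formatted by
 * lpfc_debugfs_dif_err_read() above; writes accept decimal or 0x-
 * prefixed values since strict_strtoul() is called with base 0.
 */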
/**
* lpfc_debugfs_nodelist_open - Open the nodelist debugfs file
* @inode: The inode pointer that contains a vport pointer.
@ -3380,6 +3459,16 @@ static const struct file_operations lpfc_debugfs_op_dumpDif = {
.release = lpfc_debugfs_dumpDataDif_release,
};
#undef lpfc_debugfs_op_dif_err
static const struct file_operations lpfc_debugfs_op_dif_err = {
.owner = THIS_MODULE,
.open = lpfc_debugfs_dif_err_open,
.llseek = lpfc_debugfs_lseek,
.read = lpfc_debugfs_dif_err_read,
.write = lpfc_debugfs_dif_err_write,
.release = lpfc_debugfs_dif_err_release,
};
#undef lpfc_debugfs_op_slow_ring_trc
static const struct file_operations lpfc_debugfs_op_slow_ring_trc = {
.owner = THIS_MODULE,
@ -3788,6 +3877,74 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
goto debug_failed;
}
/* Setup DIF Error Injections */
snprintf(name, sizeof(name), "InjErrLBA");
phba->debug_InjErrLBA =
debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
phba->hba_debugfs_root,
phba, &lpfc_debugfs_op_dif_err);
if (!phba->debug_InjErrLBA) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
"0807 Cannot create debugfs InjErrLBA\n");
goto debug_failed;
}
phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF;
snprintf(name, sizeof(name), "writeGuardInjErr");
phba->debug_writeGuard =
debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
phba->hba_debugfs_root,
phba, &lpfc_debugfs_op_dif_err);
if (!phba->debug_writeGuard) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
"0802 Cannot create debugfs writeGuard\n");
goto debug_failed;
}
snprintf(name, sizeof(name), "writeAppInjErr");
phba->debug_writeApp =
debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
phba->hba_debugfs_root,
phba, &lpfc_debugfs_op_dif_err);
if (!phba->debug_writeApp) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
"0803 Cannot create debugfs writeApp\n");
goto debug_failed;
}
snprintf(name, sizeof(name), "writeRefInjErr");
phba->debug_writeRef =
debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
phba->hba_debugfs_root,
phba, &lpfc_debugfs_op_dif_err);
if (!phba->debug_writeRef) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
"0804 Cannot create debugfs writeRef\n");
goto debug_failed;
}
snprintf(name, sizeof(name), "readAppInjErr");
phba->debug_readApp =
debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
phba->hba_debugfs_root,
phba, &lpfc_debugfs_op_dif_err);
if (!phba->debug_readApp) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
"0805 Cannot create debugfs readApp\n");
goto debug_failed;
}
snprintf(name, sizeof(name), "readRefInjErr");
phba->debug_readRef =
debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
phba->hba_debugfs_root,
phba, &lpfc_debugfs_op_dif_err);
if (!phba->debug_readRef) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
"0806 Cannot create debugfs readApp\n");
goto debug_failed;
}
/* Setup slow ring trace */
if (lpfc_debugfs_max_slow_ring_trc) {
num = lpfc_debugfs_max_slow_ring_trc - 1;
@ -4090,6 +4247,30 @@ lpfc_debugfs_terminate(struct lpfc_vport *vport)
debugfs_remove(phba->debug_dumpDif); /* dumpDif */
phba->debug_dumpDif = NULL;
}
if (phba->debug_InjErrLBA) {
debugfs_remove(phba->debug_InjErrLBA); /* InjErrLBA */
phba->debug_InjErrLBA = NULL;
}
if (phba->debug_writeGuard) {
debugfs_remove(phba->debug_writeGuard); /* writeGuard */
phba->debug_writeGuard = NULL;
}
if (phba->debug_writeApp) {
debugfs_remove(phba->debug_writeApp); /* writeApp */
phba->debug_writeApp = NULL;
}
if (phba->debug_writeRef) {
debugfs_remove(phba->debug_writeRef); /* writeRef */
phba->debug_writeRef = NULL;
}
if (phba->debug_readApp) {
debugfs_remove(phba->debug_readApp); /* readApp */
phba->debug_readApp = NULL;
}
if (phba->debug_readRef) {
debugfs_remove(phba->debug_readRef); /* readRef */
phba->debug_readRef = NULL;
}
if (phba->slow_ring_trc) {
kfree(phba->slow_ring_trc);

View File

@ -3386,7 +3386,14 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
cmdiocb->context1 = NULL;
}
}
/*
* The driver received a LOGO from the rport and has ACK'd it.
* At this point, the driver is done so release the IOCB and
* remove the ndlp reference.
*/
lpfc_els_free_iocb(phba, cmdiocb);
lpfc_nlp_put(ndlp);
return;
}
@ -4082,9 +4089,6 @@ lpfc_els_rsp_rnid_acc(struct lpfc_vport *vport, uint8_t format,
phba->fc_stat.elsXmitACC++;
elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
lpfc_nlp_put(ndlp);
elsiocb->context1 = NULL; /* Don't need ndlp for cmpl,
* it could be freed */
rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0);
if (rc == IOCB_ERROR) {
@ -4166,6 +4170,11 @@ lpfc_els_rsp_echo_acc(struct lpfc_vport *vport, uint8_t *data,
psli = &phba->sli;
cmdsize = oldiocb->iocb.unsli3.rcvsli3.acc_len;
/* The accumulated length can exceed the BPL_SIZE. For
* now, use this as the limit
*/
if (cmdsize > LPFC_BPL_SIZE)
cmdsize = LPFC_BPL_SIZE;
elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry, ndlp,
ndlp->nlp_DID, ELS_CMD_ACC);
if (!elsiocb)
@ -4189,9 +4198,6 @@ lpfc_els_rsp_echo_acc(struct lpfc_vport *vport, uint8_t *data,
phba->fc_stat.elsXmitACC++;
elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
lpfc_nlp_put(ndlp);
elsiocb->context1 = NULL; /* Don't need ndlp for cmpl,
* it could be freed */
rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0);
if (rc == IOCB_ERROR) {
@ -7258,16 +7264,11 @@ lpfc_issue_els_fdisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
icmd->un.elsreq64.myID = 0;
icmd->un.elsreq64.fl = 1;
if ((phba->sli_rev == LPFC_SLI_REV4) &&
(bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) ==
LPFC_SLI_INTF_IF_TYPE_0)) {
/* FDISC needs to be 1 for WQE VPI */
elsiocb->iocb.ulpCt_h = (SLI4_CT_VPI >> 1) & 1;
elsiocb->iocb.ulpCt_l = SLI4_CT_VPI & 1 ;
/* Set the ulpContext to the vpi */
elsiocb->iocb.ulpContext = phba->vpi_ids[vport->vpi];
} else {
/* For FDISC, Let FDISC rsp set the NPortID for this VPI */
/*
* SLI3 ports require a different context type value than SLI4.
* Catch SLI3 ports here and override the prep.
*/
if (phba->sli_rev == LPFC_SLI_REV3) {
icmd->ulpCt_h = 1;
icmd->ulpCt_l = 0;
}

View File

@ -1412,7 +1412,7 @@ lpfc_register_fcf(struct lpfc_hba *phba)
if (phba->pport->port_state != LPFC_FLOGI) {
phba->hba_flag |= FCF_RR_INPROG;
spin_unlock_irq(&phba->hbalock);
lpfc_issue_init_vfi(phba->pport);
lpfc_initial_flogi(phba->pport);
return;
}
spin_unlock_irq(&phba->hbalock);
@ -2646,7 +2646,9 @@ lpfc_init_vfi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
{
struct lpfc_vport *vport = mboxq->vport;
if (mboxq->u.mb.mbxStatus && (mboxq->u.mb.mbxStatus != 0x4002)) {
/* VFI not supported on interface type 0, just do the flogi */
if (mboxq->u.mb.mbxStatus && (bf_get(lpfc_sli_intf_if_type,
&phba->sli4_hba.sli_intf) != LPFC_SLI_INTF_IF_TYPE_0)) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX,
"2891 Init VFI mailbox failed 0x%x\n",
@ -2655,6 +2657,7 @@ lpfc_init_vfi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
return;
}
lpfc_initial_flogi(vport);
mempool_free(mboxq, phba->mbox_mem_pool);
return;

View File

@ -41,6 +41,8 @@
* Or clear that bit field:
* bf_set(example_bit_field, &t1, 0);
*/
#define bf_get_be32(name, ptr) \
((be32_to_cpu((ptr)->name##_WORD) >> name##_SHIFT) & name##_MASK)
#define bf_get_le32(name, ptr) \
((le32_to_cpu((ptr)->name##_WORD) >> name##_SHIFT) & name##_MASK)
#define bf_get(name, ptr) \
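For reference, a sketch of how the new big-endian accessor expands, using a hypothetical field "xmpl" with xmpl_SHIFT 8, xmpl_MASK 0xff, and xmpl_WORD word0:
/*
 *	bf_get_be32(xmpl, ptr)
 *	  => ((be32_to_cpu((ptr)->word0) >> 8) & 0xff)
 *
 * i.e. the word is converted from big-endian before the shift and mask,
 * which the firmware image header checks later in this series rely on.
 */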
@ -678,7 +680,6 @@ struct lpfc_register {
#define lpfc_rq_doorbell_num_posted_SHIFT 16
#define lpfc_rq_doorbell_num_posted_MASK 0x3FFF
#define lpfc_rq_doorbell_num_posted_WORD word0
#define LPFC_RQ_POST_BATCH 8 /* RQEs to post at one time */
#define lpfc_rq_doorbell_id_SHIFT 0
#define lpfc_rq_doorbell_id_MASK 0xFFFF
#define lpfc_rq_doorbell_id_WORD word0
@ -784,6 +785,8 @@ union lpfc_sli4_cfg_shdr {
#define LPFC_Q_CREATE_VERSION_2 2
#define LPFC_Q_CREATE_VERSION_1 1
#define LPFC_Q_CREATE_VERSION_0 0
#define LPFC_OPCODE_VERSION_0 0
#define LPFC_OPCODE_VERSION_1 1
} request;
struct {
uint32_t word6;
@ -825,6 +828,7 @@ struct mbox_header {
#define LPFC_EXTENT_VERSION_DEFAULT 0
/* Subsystem Definitions */
#define LPFC_MBOX_SUBSYSTEM_NA 0x0
#define LPFC_MBOX_SUBSYSTEM_COMMON 0x1
#define LPFC_MBOX_SUBSYSTEM_FCOE 0xC
@ -835,25 +839,34 @@ struct mbox_header {
#define HOST_ENDIAN_HIGH_WORD1 0xFF7856FF
/* Common Opcodes */
#define LPFC_MBOX_OPCODE_CQ_CREATE 0x0C
#define LPFC_MBOX_OPCODE_EQ_CREATE 0x0D
#define LPFC_MBOX_OPCODE_MQ_CREATE 0x15
#define LPFC_MBOX_OPCODE_GET_CNTL_ATTRIBUTES 0x20
#define LPFC_MBOX_OPCODE_NOP 0x21
#define LPFC_MBOX_OPCODE_MQ_DESTROY 0x35
#define LPFC_MBOX_OPCODE_CQ_DESTROY 0x36
#define LPFC_MBOX_OPCODE_EQ_DESTROY 0x37
#define LPFC_MBOX_OPCODE_QUERY_FW_CFG 0x3A
#define LPFC_MBOX_OPCODE_FUNCTION_RESET 0x3D
#define LPFC_MBOX_OPCODE_MQ_CREATE_EXT 0x5A
#define LPFC_MBOX_OPCODE_GET_RSRC_EXTENT_INFO 0x9A
#define LPFC_MBOX_OPCODE_GET_ALLOC_RSRC_EXTENT 0x9B
#define LPFC_MBOX_OPCODE_ALLOC_RSRC_EXTENT 0x9C
#define LPFC_MBOX_OPCODE_DEALLOC_RSRC_EXTENT 0x9D
#define LPFC_MBOX_OPCODE_GET_FUNCTION_CONFIG 0xA0
#define LPFC_MBOX_OPCODE_GET_PROFILE_CONFIG 0xA4
#define LPFC_MBOX_OPCODE_WRITE_OBJECT 0xAC
#define LPFC_MBOX_OPCODE_GET_SLI4_PARAMETERS 0xB5
#define LPFC_MBOX_OPCODE_NA 0x00
#define LPFC_MBOX_OPCODE_CQ_CREATE 0x0C
#define LPFC_MBOX_OPCODE_EQ_CREATE 0x0D
#define LPFC_MBOX_OPCODE_MQ_CREATE 0x15
#define LPFC_MBOX_OPCODE_GET_CNTL_ATTRIBUTES 0x20
#define LPFC_MBOX_OPCODE_NOP 0x21
#define LPFC_MBOX_OPCODE_MQ_DESTROY 0x35
#define LPFC_MBOX_OPCODE_CQ_DESTROY 0x36
#define LPFC_MBOX_OPCODE_EQ_DESTROY 0x37
#define LPFC_MBOX_OPCODE_QUERY_FW_CFG 0x3A
#define LPFC_MBOX_OPCODE_FUNCTION_RESET 0x3D
#define LPFC_MBOX_OPCODE_GET_PORT_NAME 0x4D
#define LPFC_MBOX_OPCODE_MQ_CREATE_EXT 0x5A
#define LPFC_MBOX_OPCODE_GET_RSRC_EXTENT_INFO 0x9A
#define LPFC_MBOX_OPCODE_GET_ALLOC_RSRC_EXTENT 0x9B
#define LPFC_MBOX_OPCODE_ALLOC_RSRC_EXTENT 0x9C
#define LPFC_MBOX_OPCODE_DEALLOC_RSRC_EXTENT 0x9D
#define LPFC_MBOX_OPCODE_GET_FUNCTION_CONFIG 0xA0
#define LPFC_MBOX_OPCODE_GET_PROFILE_CONFIG 0xA4
#define LPFC_MBOX_OPCODE_SET_PROFILE_CONFIG 0xA5
#define LPFC_MBOX_OPCODE_GET_PROFILE_LIST 0xA6
#define LPFC_MBOX_OPCODE_SET_ACT_PROFILE 0xA8
#define LPFC_MBOX_OPCODE_GET_FACTORY_PROFILE_CONFIG 0xA9
#define LPFC_MBOX_OPCODE_READ_OBJECT 0xAB
#define LPFC_MBOX_OPCODE_WRITE_OBJECT 0xAC
#define LPFC_MBOX_OPCODE_READ_OBJECT_LIST 0xAD
#define LPFC_MBOX_OPCODE_DELETE_OBJECT 0xAE
#define LPFC_MBOX_OPCODE_GET_SLI4_PARAMETERS 0xB5
/* FCoE Opcodes */
#define LPFC_MBOX_OPCODE_FCOE_WQ_CREATE 0x01
@ -867,6 +880,7 @@ struct mbox_header {
#define LPFC_MBOX_OPCODE_FCOE_DELETE_FCF 0x0A
#define LPFC_MBOX_OPCODE_FCOE_POST_HDR_TEMPLATE 0x0B
#define LPFC_MBOX_OPCODE_FCOE_REDISCOVER_FCF 0x10
#define LPFC_MBOX_OPCODE_FCOE_SET_FCLINK_SETTINGS 0x21
#define LPFC_MBOX_OPCODE_FCOE_LINK_DIAG_STATE 0x22
#define LPFC_MBOX_OPCODE_FCOE_LINK_DIAG_LOOPBACK 0x23
@ -1470,16 +1484,81 @@ struct sli4_sge { /* SLI-4 */
uint32_t addr_lo;
uint32_t word2;
#define lpfc_sli4_sge_offset_SHIFT 0 /* Offset of buffer - Not used*/
#define lpfc_sli4_sge_offset_MASK 0x1FFFFFFF
#define lpfc_sli4_sge_offset_SHIFT 0
#define lpfc_sli4_sge_offset_MASK 0x07FFFFFF
#define lpfc_sli4_sge_offset_WORD word2
#define lpfc_sli4_sge_last_SHIFT 31 /* Last SEG in the SGL sets
this flag !! */
#define lpfc_sli4_sge_type_SHIFT 27
#define lpfc_sli4_sge_type_MASK 0x0000000F
#define lpfc_sli4_sge_type_WORD word2
#define LPFC_SGE_TYPE_DATA 0x0
#define LPFC_SGE_TYPE_DIF 0x4
#define LPFC_SGE_TYPE_LSP 0x5
#define LPFC_SGE_TYPE_PEDIF 0x6
#define LPFC_SGE_TYPE_PESEED 0x7
#define LPFC_SGE_TYPE_DISEED 0x8
#define LPFC_SGE_TYPE_ENC 0x9
#define LPFC_SGE_TYPE_ATM 0xA
#define LPFC_SGE_TYPE_SKIP 0xC
#define lpfc_sli4_sge_last_SHIFT 31 /* Last SEG in the SGL sets it */
#define lpfc_sli4_sge_last_MASK 0x00000001
#define lpfc_sli4_sge_last_WORD word2
uint32_t sge_len;
};
struct sli4_sge_diseed { /* SLI-4 */
uint32_t ref_tag;
uint32_t ref_tag_tran;
uint32_t word2;
#define lpfc_sli4_sge_dif_apptran_SHIFT 0
#define lpfc_sli4_sge_dif_apptran_MASK 0x0000FFFF
#define lpfc_sli4_sge_dif_apptran_WORD word2
#define lpfc_sli4_sge_dif_af_SHIFT 24
#define lpfc_sli4_sge_dif_af_MASK 0x00000001
#define lpfc_sli4_sge_dif_af_WORD word2
#define lpfc_sli4_sge_dif_na_SHIFT 25
#define lpfc_sli4_sge_dif_na_MASK 0x00000001
#define lpfc_sli4_sge_dif_na_WORD word2
#define lpfc_sli4_sge_dif_hi_SHIFT 26
#define lpfc_sli4_sge_dif_hi_MASK 0x00000001
#define lpfc_sli4_sge_dif_hi_WORD word2
#define lpfc_sli4_sge_dif_type_SHIFT 27
#define lpfc_sli4_sge_dif_type_MASK 0x0000000F
#define lpfc_sli4_sge_dif_type_WORD word2
#define lpfc_sli4_sge_dif_last_SHIFT 31 /* Last SEG in the SGL sets it */
#define lpfc_sli4_sge_dif_last_MASK 0x00000001
#define lpfc_sli4_sge_dif_last_WORD word2
uint32_t word3;
#define lpfc_sli4_sge_dif_apptag_SHIFT 0
#define lpfc_sli4_sge_dif_apptag_MASK 0x0000FFFF
#define lpfc_sli4_sge_dif_apptag_WORD word3
#define lpfc_sli4_sge_dif_bs_SHIFT 16
#define lpfc_sli4_sge_dif_bs_MASK 0x00000007
#define lpfc_sli4_sge_dif_bs_WORD word3
#define lpfc_sli4_sge_dif_ai_SHIFT 19
#define lpfc_sli4_sge_dif_ai_MASK 0x00000001
#define lpfc_sli4_sge_dif_ai_WORD word3
#define lpfc_sli4_sge_dif_me_SHIFT 20
#define lpfc_sli4_sge_dif_me_MASK 0x00000001
#define lpfc_sli4_sge_dif_me_WORD word3
#define lpfc_sli4_sge_dif_re_SHIFT 21
#define lpfc_sli4_sge_dif_re_MASK 0x00000001
#define lpfc_sli4_sge_dif_re_WORD word3
#define lpfc_sli4_sge_dif_ce_SHIFT 22
#define lpfc_sli4_sge_dif_ce_MASK 0x00000001
#define lpfc_sli4_sge_dif_ce_WORD word3
#define lpfc_sli4_sge_dif_nr_SHIFT 23
#define lpfc_sli4_sge_dif_nr_MASK 0x00000001
#define lpfc_sli4_sge_dif_nr_WORD word3
#define lpfc_sli4_sge_dif_oprx_SHIFT 24
#define lpfc_sli4_sge_dif_oprx_MASK 0x0000000F
#define lpfc_sli4_sge_dif_oprx_WORD word3
#define lpfc_sli4_sge_dif_optx_SHIFT 28
#define lpfc_sli4_sge_dif_optx_MASK 0x0000000F
#define lpfc_sli4_sge_dif_optx_WORD word3
/* optx and oprx use BG_OP_IN defines in lpfc_hw.h */
};
struct fcf_record {
uint32_t max_rcv_size;
uint32_t fka_adv_period;
@ -2019,6 +2098,15 @@ struct lpfc_mbx_read_config {
#define lpfc_mbx_rd_conf_extnts_inuse_MASK 0x00000001
#define lpfc_mbx_rd_conf_extnts_inuse_WORD word1
uint32_t word2;
#define lpfc_mbx_rd_conf_lnk_numb_SHIFT 0
#define lpfc_mbx_rd_conf_lnk_numb_MASK 0x0000003F
#define lpfc_mbx_rd_conf_lnk_numb_WORD word2
#define lpfc_mbx_rd_conf_lnk_type_SHIFT 6
#define lpfc_mbx_rd_conf_lnk_type_MASK 0x00000003
#define lpfc_mbx_rd_conf_lnk_type_WORD word2
#define lpfc_mbx_rd_conf_lnk_ldv_SHIFT 8
#define lpfc_mbx_rd_conf_lnk_ldv_MASK 0x00000001
#define lpfc_mbx_rd_conf_lnk_ldv_WORD word2
#define lpfc_mbx_rd_conf_topology_SHIFT 24
#define lpfc_mbx_rd_conf_topology_MASK 0x000000FF
#define lpfc_mbx_rd_conf_topology_WORD word2
@ -2552,8 +2640,152 @@ struct lpfc_mbx_get_prof_cfg {
} u;
};
struct lpfc_controller_attribute {
uint32_t version_string[8];
uint32_t manufacturer_name[8];
uint32_t supported_modes;
uint32_t word17;
#define lpfc_cntl_attr_eprom_ver_lo_SHIFT 0
#define lpfc_cntl_attr_eprom_ver_lo_MASK 0x000000ff
#define lpfc_cntl_attr_eprom_ver_lo_WORD word17
#define lpfc_cntl_attr_eprom_ver_hi_SHIFT 8
#define lpfc_cntl_attr_eprom_ver_hi_MASK 0x000000ff
#define lpfc_cntl_attr_eprom_ver_hi_WORD word17
uint32_t mbx_da_struct_ver;
uint32_t ep_fw_da_struct_ver;
uint32_t ncsi_ver_str[3];
uint32_t dflt_ext_timeout;
uint32_t model_number[8];
uint32_t description[16];
uint32_t serial_number[8];
uint32_t ip_ver_str[8];
uint32_t fw_ver_str[8];
uint32_t bios_ver_str[8];
uint32_t redboot_ver_str[8];
uint32_t driver_ver_str[8];
uint32_t flash_fw_ver_str[8];
uint32_t functionality;
uint32_t word105;
#define lpfc_cntl_attr_max_cbd_len_SHIFT 0
#define lpfc_cntl_attr_max_cbd_len_MASK 0x0000ffff
#define lpfc_cntl_attr_max_cbd_len_WORD word105
#define lpfc_cntl_attr_asic_rev_SHIFT 16
#define lpfc_cntl_attr_asic_rev_MASK 0x000000ff
#define lpfc_cntl_attr_asic_rev_WORD word105
#define lpfc_cntl_attr_gen_guid0_SHIFT 24
#define lpfc_cntl_attr_gen_guid0_MASK 0x000000ff
#define lpfc_cntl_attr_gen_guid0_WORD word105
uint32_t gen_guid1_12[3];
uint32_t word109;
#define lpfc_cntl_attr_gen_guid13_14_SHIFT 0
#define lpfc_cntl_attr_gen_guid13_14_MASK 0x0000ffff
#define lpfc_cntl_attr_gen_guid13_14_WORD word109
#define lpfc_cntl_attr_gen_guid15_SHIFT 16
#define lpfc_cntl_attr_gen_guid15_MASK 0x000000ff
#define lpfc_cntl_attr_gen_guid15_WORD word109
#define lpfc_cntl_attr_hba_port_cnt_SHIFT 24
#define lpfc_cntl_attr_hba_port_cnt_MASK 0x000000ff
#define lpfc_cntl_attr_hba_port_cnt_WORD word109
uint32_t word110;
#define lpfc_cntl_attr_dflt_lnk_tmo_SHIFT 0
#define lpfc_cntl_attr_dflt_lnk_tmo_MASK 0x0000ffff
#define lpfc_cntl_attr_dflt_lnk_tmo_WORD word110
#define lpfc_cntl_attr_multi_func_dev_SHIFT 24
#define lpfc_cntl_attr_multi_func_dev_MASK 0x000000ff
#define lpfc_cntl_attr_multi_func_dev_WORD word110
uint32_t word111;
#define lpfc_cntl_attr_cache_valid_SHIFT 0
#define lpfc_cntl_attr_cache_valid_MASK 0x000000ff
#define lpfc_cntl_attr_cache_valid_WORD word111
#define lpfc_cntl_attr_hba_status_SHIFT 8
#define lpfc_cntl_attr_hba_status_MASK 0x000000ff
#define lpfc_cntl_attr_hba_status_WORD word111
#define lpfc_cntl_attr_max_domain_SHIFT 16
#define lpfc_cntl_attr_max_domain_MASK 0x000000ff
#define lpfc_cntl_attr_max_domain_WORD word111
#define lpfc_cntl_attr_lnk_numb_SHIFT 24
#define lpfc_cntl_attr_lnk_numb_MASK 0x0000003f
#define lpfc_cntl_attr_lnk_numb_WORD word111
#define lpfc_cntl_attr_lnk_type_SHIFT 30
#define lpfc_cntl_attr_lnk_type_MASK 0x00000003
#define lpfc_cntl_attr_lnk_type_WORD word111
uint32_t fw_post_status;
uint32_t hba_mtu[8];
uint32_t word121;
uint32_t reserved1[3];
uint32_t word125;
#define lpfc_cntl_attr_pci_vendor_id_SHIFT 0
#define lpfc_cntl_attr_pci_vendor_id_MASK 0x0000ffff
#define lpfc_cntl_attr_pci_vendor_id_WORD word125
#define lpfc_cntl_attr_pci_device_id_SHIFT 16
#define lpfc_cntl_attr_pci_device_id_MASK 0x0000ffff
#define lpfc_cntl_attr_pci_device_id_WORD word125
uint32_t word126;
#define lpfc_cntl_attr_pci_subvdr_id_SHIFT 0
#define lpfc_cntl_attr_pci_subvdr_id_MASK 0x0000ffff
#define lpfc_cntl_attr_pci_subvdr_id_WORD word126
#define lpfc_cntl_attr_pci_subsys_id_SHIFT 16
#define lpfc_cntl_attr_pci_subsys_id_MASK 0x0000ffff
#define lpfc_cntl_attr_pci_subsys_id_WORD word126
uint32_t word127;
#define lpfc_cntl_attr_pci_bus_num_SHIFT 0
#define lpfc_cntl_attr_pci_bus_num_MASK 0x000000ff
#define lpfc_cntl_attr_pci_bus_num_WORD word127
#define lpfc_cntl_attr_pci_dev_num_SHIFT 8
#define lpfc_cntl_attr_pci_dev_num_MASK 0x000000ff
#define lpfc_cntl_attr_pci_dev_num_WORD word127
#define lpfc_cntl_attr_pci_fnc_num_SHIFT 16
#define lpfc_cntl_attr_pci_fnc_num_MASK 0x000000ff
#define lpfc_cntl_attr_pci_fnc_num_WORD word127
#define lpfc_cntl_attr_inf_type_SHIFT 24
#define lpfc_cntl_attr_inf_type_MASK 0x000000ff
#define lpfc_cntl_attr_inf_type_WORD word127
uint32_t unique_id[2];
uint32_t word130;
#define lpfc_cntl_attr_num_netfil_SHIFT 0
#define lpfc_cntl_attr_num_netfil_MASK 0x000000ff
#define lpfc_cntl_attr_num_netfil_WORD word130
uint32_t reserved2[4];
};
struct lpfc_mbx_get_cntl_attributes {
union lpfc_sli4_cfg_shdr cfg_shdr;
struct lpfc_controller_attribute cntl_attr;
};
struct lpfc_mbx_get_port_name {
struct mbox_header header;
union {
struct {
uint32_t word4;
#define lpfc_mbx_get_port_name_lnk_type_SHIFT 0
#define lpfc_mbx_get_port_name_lnk_type_MASK 0x00000003
#define lpfc_mbx_get_port_name_lnk_type_WORD word4
} request;
struct {
uint32_t word4;
#define lpfc_mbx_get_port_name_name0_SHIFT 0
#define lpfc_mbx_get_port_name_name0_MASK 0x000000FF
#define lpfc_mbx_get_port_name_name0_WORD word4
#define lpfc_mbx_get_port_name_name1_SHIFT 8
#define lpfc_mbx_get_port_name_name1_MASK 0x000000FF
#define lpfc_mbx_get_port_name_name1_WORD word4
#define lpfc_mbx_get_port_name_name2_SHIFT 16
#define lpfc_mbx_get_port_name_name2_MASK 0x000000FF
#define lpfc_mbx_get_port_name_name2_WORD word4
#define lpfc_mbx_get_port_name_name3_SHIFT 24
#define lpfc_mbx_get_port_name_name3_MASK 0x000000FF
#define lpfc_mbx_get_port_name_name3_WORD word4
#define LPFC_LINK_NUMBER_0 0
#define LPFC_LINK_NUMBER_1 1
#define LPFC_LINK_NUMBER_2 2
#define LPFC_LINK_NUMBER_3 3
} response;
} u;
};
/* Mailbox Completion Queue Error Messages */
#define MB_CQE_STATUS_SUCCESS 0x0
#define MB_CQE_STATUS_SUCCESS 0x0
#define MB_CQE_STATUS_INSUFFICIENT_PRIVILEGES 0x1
#define MB_CQE_STATUS_INVALID_PARAMETER 0x2
#define MB_CQE_STATUS_INSUFFICIENT_RESOURCES 0x3
@ -2637,8 +2869,9 @@ struct lpfc_mqe {
struct lpfc_mbx_run_link_diag_test link_diag_test;
struct lpfc_mbx_get_func_cfg get_func_cfg;
struct lpfc_mbx_get_prof_cfg get_prof_cfg;
struct lpfc_mbx_nop nop;
struct lpfc_mbx_wr_object wr_object;
struct lpfc_mbx_get_port_name get_port_name;
struct lpfc_mbx_nop nop;
} un;
};
@ -2855,6 +3088,9 @@ struct wqe_common {
#define wqe_ctxt_tag_MASK 0x0000FFFF
#define wqe_ctxt_tag_WORD word6
uint32_t word7;
#define wqe_dif_SHIFT 0
#define wqe_dif_MASK 0x00000003
#define wqe_dif_WORD word7
#define wqe_ct_SHIFT 2
#define wqe_ct_MASK 0x00000003
#define wqe_ct_WORD word7
@ -2867,12 +3103,21 @@ struct wqe_common {
#define wqe_class_SHIFT 16
#define wqe_class_MASK 0x00000007
#define wqe_class_WORD word7
#define wqe_ar_SHIFT 19
#define wqe_ar_MASK 0x00000001
#define wqe_ar_WORD word7
#define wqe_ag_SHIFT wqe_ar_SHIFT
#define wqe_ag_MASK wqe_ar_MASK
#define wqe_ag_WORD wqe_ar_WORD
#define wqe_pu_SHIFT 20
#define wqe_pu_MASK 0x00000003
#define wqe_pu_WORD word7
#define wqe_erp_SHIFT 22
#define wqe_erp_MASK 0x00000001
#define wqe_erp_WORD word7
#define wqe_conf_SHIFT wqe_erp_SHIFT
#define wqe_conf_MASK wqe_erp_MASK
#define wqe_conf_WORD wqe_erp_WORD
#define wqe_lnk_SHIFT 23
#define wqe_lnk_MASK 0x00000001
#define wqe_lnk_WORD word7
@ -2931,6 +3176,9 @@ struct wqe_common {
#define wqe_xc_SHIFT 21
#define wqe_xc_MASK 0x00000001
#define wqe_xc_WORD word10
#define wqe_sr_SHIFT 22
#define wqe_sr_MASK 0x00000001
#define wqe_sr_WORD word10
#define wqe_ccpe_SHIFT 23
#define wqe_ccpe_MASK 0x00000001
#define wqe_ccpe_WORD word10

View File

@ -58,8 +58,7 @@ spinlock_t _dump_buf_lock;
static void lpfc_get_hba_model_desc(struct lpfc_hba *, uint8_t *, uint8_t *);
static int lpfc_post_rcv_buf(struct lpfc_hba *);
static int lpfc_sli4_queue_create(struct lpfc_hba *);
static void lpfc_sli4_queue_destroy(struct lpfc_hba *);
static int lpfc_sli4_queue_verify(struct lpfc_hba *);
static int lpfc_create_bootstrap_mbox(struct lpfc_hba *);
static int lpfc_setup_endian_order(struct lpfc_hba *);
static int lpfc_sli4_read_config(struct lpfc_hba *);
@ -1438,6 +1437,7 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
struct Scsi_Host *shost;
uint32_t if_type;
struct lpfc_register portstat_reg;
int rc;
/* If the pci channel is offline, ignore possible errors, since
* we cannot communicate with the pci card anyway.
@ -1480,16 +1480,24 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
lpfc_sli4_offline_eratt(phba);
return;
}
if (bf_get(lpfc_sliport_status_rn, &portstat_reg)) {
/*
* TODO: Attempt port recovery via a port reset.
* When fully implemented, the driver should
* attempt to recover the port here and return.
* For now, log an error and take the port offline.
*/
/*
* On an error status condition, the driver needs to wait for the
* port to become ready before performing a reset.
*/
rc = lpfc_sli4_pdev_status_reg_wait(phba);
if (!rc) {
/* need reset: attempt for port recovery */
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2887 Port Error: Attempting "
"Port Recovery\n");
lpfc_offline_prep(phba);
lpfc_offline(phba);
lpfc_sli_brdrestart(phba);
if (lpfc_online(phba) == 0) {
lpfc_unblock_mgmt_io(phba);
return;
}
/* fall through when the port could not be recovered */
}
lpfc_sli4_offline_eratt(phba);
break;
@ -1724,11 +1732,20 @@ lpfc_parse_vpd(struct lpfc_hba *phba, uint8_t *vpd, int len)
j = 0;
Length -= (3+i);
while(i--) {
phba->Port[j++] = vpd[index++];
if (j == 19)
break;
if ((phba->sli_rev == LPFC_SLI_REV4) &&
(phba->sli4_hba.pport_name_sta ==
LPFC_SLI4_PPNAME_GET)) {
j++;
index++;
} else
phba->Port[j++] = vpd[index++];
if (j == 19)
break;
}
phba->Port[j] = 0;
if ((phba->sli_rev != LPFC_SLI_REV4) ||
(phba->sli4_hba.pport_name_sta ==
LPFC_SLI4_PPNAME_NON))
phba->Port[j] = 0;
continue;
}
else {
@ -1958,7 +1975,7 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
case PCI_DEVICE_ID_LANCER_FCOE:
case PCI_DEVICE_ID_LANCER_FCOE_VF:
oneConnect = 1;
m = (typeof(m)){"OCe50100", "PCIe", "FCoE"};
m = (typeof(m)){"OCe15100", "PCIe", "FCoE"};
break;
default:
m = (typeof(m)){"Unknown", "", ""};
@ -2432,17 +2449,19 @@ lpfc_block_mgmt_io(struct lpfc_hba * phba)
uint8_t actcmd = MBX_HEARTBEAT;
unsigned long timeout;
timeout = msecs_to_jiffies(LPFC_MBOX_TMO * 1000) + jiffies;
spin_lock_irqsave(&phba->hbalock, iflag);
phba->sli.sli_flag |= LPFC_BLOCK_MGMT_IO;
if (phba->sli.mbox_active)
if (phba->sli.mbox_active) {
actcmd = phba->sli.mbox_active->u.mb.mbxCommand;
/* Determine how long we might wait for the active mailbox
* command to be gracefully completed by firmware.
*/
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active) * 1000) + jiffies;
}
spin_unlock_irqrestore(&phba->hbalock, iflag);
/* Determine how long we might wait for the active mailbox
* command to be gracefully completed by firmware.
*/
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, actcmd) * 1000) +
jiffies;
/* Wait for the outstanding mailbox command to complete */
while (phba->sli.mbox_active) {
/* Check active mailbox complete status every 2ms */
@ -3949,7 +3968,7 @@ static int
lpfc_enable_pci_dev(struct lpfc_hba *phba)
{
struct pci_dev *pdev;
int bars;
int bars = 0;
/* Obtain PCI device reference */
if (!phba->pcidev)
@ -3978,6 +3997,8 @@ lpfc_enable_pci_dev(struct lpfc_hba *phba)
out_disable_device:
pci_disable_device(pdev);
out_error:
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"1401 Failed to enable pci device, bars:x%x\n", bars);
return -ENODEV;
}
@ -4051,9 +4072,6 @@ lpfc_sli_sriov_nr_virtfn_get(struct lpfc_hba *phba)
uint16_t nr_virtfn;
int pos;
if (!pdev->is_physfn)
return 0;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
if (pos == 0)
return 0;
@ -4474,15 +4492,15 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
}
}
mempool_free(mboxq, phba->mbox_mem_pool);
/* Create all the SLI4 queues */
rc = lpfc_sli4_queue_create(phba);
/* Verify all the SLI4 queues */
rc = lpfc_sli4_queue_verify(phba);
if (rc)
goto out_free_bsmbx;
/* Create driver internal CQE event pool */
rc = lpfc_sli4_cq_event_pool_create(phba);
if (rc)
goto out_destroy_queue;
goto out_free_bsmbx;
/* Initialize and populate the iocb list per host */
rc = lpfc_init_sgl_list(phba);
@ -4516,14 +4534,21 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
goto out_remove_rpi_hdrs;
}
phba->sli4_hba.fcp_eq_hdl = kzalloc((sizeof(struct lpfc_fcp_eq_hdl) *
/*
* The cfg_fcp_eq_count can be zero whenever there is exactly one
* interrupt vector. This is not an error.
*/
if (phba->cfg_fcp_eq_count) {
phba->sli4_hba.fcp_eq_hdl =
kzalloc((sizeof(struct lpfc_fcp_eq_hdl) *
phba->cfg_fcp_eq_count), GFP_KERNEL);
if (!phba->sli4_hba.fcp_eq_hdl) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2572 Failed allocate memory for fast-path "
"per-EQ handle array\n");
rc = -ENOMEM;
goto out_free_fcf_rr_bmask;
if (!phba->sli4_hba.fcp_eq_hdl) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2572 Failed allocate memory for "
"fast-path per-EQ handle array\n");
rc = -ENOMEM;
goto out_free_fcf_rr_bmask;
}
}
phba->sli4_hba.msix_entries = kzalloc((sizeof(struct msix_entry) *
@ -4567,8 +4592,6 @@ out_free_sgl_list:
lpfc_free_sgl_list(phba);
out_destroy_cq_event_pool:
lpfc_sli4_cq_event_pool_destroy(phba);
out_destroy_queue:
lpfc_sli4_queue_destroy(phba);
out_free_bsmbx:
lpfc_destroy_bootstrap_mbox(phba);
out_free_mem:
@ -4608,9 +4631,6 @@ lpfc_sli4_driver_resource_unset(struct lpfc_hba *phba)
/* Free the SCSI sgl management array */
kfree(phba->sli4_hba.lpfc_scsi_psb_array);
/* Free the SLI4 queues */
lpfc_sli4_queue_destroy(phba);
/* Free the completion queue EQ event pool */
lpfc_sli4_cq_event_release_all(phba);
lpfc_sli4_cq_event_pool_destroy(phba);
@ -6139,24 +6159,21 @@ lpfc_setup_endian_order(struct lpfc_hba *phba)
}
/**
* lpfc_sli4_queue_create - Create all the SLI4 queues
* lpfc_sli4_queue_verify - Verify and update EQ and CQ counts
* @phba: pointer to lpfc hba data structure.
*
* This routine is invoked to allocate all the SLI4 queues for the FCoE HBA
* operation. For each SLI4 queue type, the parameters such as queue entry
* count (queue depth) shall be taken from the module parameter. For now,
* we just use some constant number as a placeholder.
* This routine is invoked to check the user-settable queue counts for EQs and
* CQs. After this routine is called, the counts will be set to valid values that
* adhere to the constraints of the system's interrupt vectors and the port's
* queue resources.
*
* Return codes
* 0 - successful
* -ENOMEM - No available memory
* -EIO - The mailbox failed to complete successfully.
**/
static int
lpfc_sli4_queue_create(struct lpfc_hba *phba)
lpfc_sli4_queue_verify(struct lpfc_hba *phba)
{
struct lpfc_queue *qdesc;
int fcp_eqidx, fcp_cqidx, fcp_wqidx;
int cfg_fcp_wq_count;
int cfg_fcp_eq_count;
@ -6229,14 +6246,43 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
/* The overall number of event queues used */
phba->sli4_hba.cfg_eqn = phba->cfg_fcp_eq_count + LPFC_SP_EQN_DEF;
/*
* Create Event Queues (EQs)
*/
/* Get EQ depth from module parameter, fake the default for now */
phba->sli4_hba.eq_esize = LPFC_EQE_SIZE_4B;
phba->sli4_hba.eq_ecount = LPFC_EQE_DEF_COUNT;
/* Get CQ depth from module parameter, fake the default for now */
phba->sli4_hba.cq_esize = LPFC_CQE_SIZE;
phba->sli4_hba.cq_ecount = LPFC_CQE_DEF_COUNT;
return 0;
out_error:
return -ENOMEM;
}
/**
* lpfc_sli4_queue_create - Create all the SLI4 queues
* @phba: pointer to lpfc hba data structure.
*
* This routine is invoked to allocate all the SLI4 queues for the FCoE HBA
* operation. For each SLI4 queue type, the parameters such as queue entry
* count (queue depth) shall be taken from the module parameter. For now,
* we just use some constant number as a placeholder.
*
* Return codes
* 0 - successful
* -ENOMEM - No available memory
* -EIO - The mailbox failed to complete successfully.
**/
int
lpfc_sli4_queue_create(struct lpfc_hba *phba)
{
struct lpfc_queue *qdesc;
int fcp_eqidx, fcp_cqidx, fcp_wqidx;
/*
* Create Event Queues (EQs)
*/
/* Create slow path event queue */
qdesc = lpfc_sli4_queue_alloc(phba, phba->sli4_hba.eq_esize,
phba->sli4_hba.eq_ecount);
@ -6247,14 +6293,20 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
}
phba->sli4_hba.sp_eq = qdesc;
/* Create fast-path FCP Event Queue(s) */
phba->sli4_hba.fp_eq = kzalloc((sizeof(struct lpfc_queue *) *
phba->cfg_fcp_eq_count), GFP_KERNEL);
if (!phba->sli4_hba.fp_eq) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2576 Failed allocate memory for fast-path "
"EQ record array\n");
goto out_free_sp_eq;
/*
* Create fast-path FCP Event Queue(s). The cfg_fcp_eq_count can be
* zero whenever there is exactly one interrupt vector. This is not
* an error.
*/
if (phba->cfg_fcp_eq_count) {
phba->sli4_hba.fp_eq = kzalloc((sizeof(struct lpfc_queue *) *
phba->cfg_fcp_eq_count), GFP_KERNEL);
if (!phba->sli4_hba.fp_eq) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2576 Failed allocate memory for "
"fast-path EQ record array\n");
goto out_free_sp_eq;
}
}
for (fcp_eqidx = 0; fcp_eqidx < phba->cfg_fcp_eq_count; fcp_eqidx++) {
qdesc = lpfc_sli4_queue_alloc(phba, phba->sli4_hba.eq_esize,
@ -6271,10 +6323,6 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
* Create Complete Queues (CQs)
*/
/* Get CQ depth from module parameter, fake the default for now */
phba->sli4_hba.cq_esize = LPFC_CQE_SIZE;
phba->sli4_hba.cq_ecount = LPFC_CQE_DEF_COUNT;
/* Create slow-path Mailbox Command Complete Queue */
qdesc = lpfc_sli4_queue_alloc(phba, phba->sli4_hba.cq_esize,
phba->sli4_hba.cq_ecount);
@ -6296,16 +6344,25 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
phba->sli4_hba.els_cq = qdesc;
/* Create fast-path FCP Completion Queue(s), one-to-one with EQs */
phba->sli4_hba.fcp_cq = kzalloc((sizeof(struct lpfc_queue *) *
phba->cfg_fcp_eq_count), GFP_KERNEL);
/*
* Create fast-path FCP Completion Queue(s), one-to-one with FCP EQs.
* If there are no FCP EQs then create exactly one FCP CQ.
*/
if (phba->cfg_fcp_eq_count)
phba->sli4_hba.fcp_cq = kzalloc((sizeof(struct lpfc_queue *) *
phba->cfg_fcp_eq_count),
GFP_KERNEL);
else
phba->sli4_hba.fcp_cq = kzalloc(sizeof(struct lpfc_queue *),
GFP_KERNEL);
if (!phba->sli4_hba.fcp_cq) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2577 Failed allocate memory for fast-path "
"CQ record array\n");
goto out_free_els_cq;
}
for (fcp_cqidx = 0; fcp_cqidx < phba->cfg_fcp_eq_count; fcp_cqidx++) {
fcp_cqidx = 0;
do {
qdesc = lpfc_sli4_queue_alloc(phba, phba->sli4_hba.cq_esize,
phba->sli4_hba.cq_ecount);
if (!qdesc) {
@ -6315,7 +6372,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
goto out_free_fcp_cq;
}
phba->sli4_hba.fcp_cq[fcp_cqidx] = qdesc;
}
} while (++fcp_cqidx < phba->cfg_fcp_eq_count);
/* Create Mailbox Command Queue */
phba->sli4_hba.mq_esize = LPFC_MQE_SIZE;
@ -6447,7 +6504,7 @@ out_error:
* -ENOMEM - No available memory
* -EIO - The mailbox failed to complete successfully.
**/
static void
void
lpfc_sli4_queue_destroy(struct lpfc_hba *phba)
{
int fcp_qidx;
@ -6723,6 +6780,10 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
"0540 Receive Queue not allocated\n");
goto out_destroy_fcp_wq;
}
lpfc_rq_adjust_repost(phba, phba->sli4_hba.hdr_rq, LPFC_ELS_HBQ);
lpfc_rq_adjust_repost(phba, phba->sli4_hba.dat_rq, LPFC_ELS_HBQ);
rc = lpfc_rq_create(phba, phba->sli4_hba.hdr_rq, phba->sli4_hba.dat_rq,
phba->sli4_hba.els_cq, LPFC_USOL);
if (rc) {
@ -6731,6 +6792,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
"rc = 0x%x\n", rc);
goto out_destroy_fcp_wq;
}
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"2592 USL RQ setup: hdr-rq-id=%d, dat-rq-id=%d "
"parent cq-id=%d\n",
@ -6790,8 +6852,10 @@ lpfc_sli4_queue_unset(struct lpfc_hba *phba)
/* Unset ELS complete queue */
lpfc_cq_destroy(phba, phba->sli4_hba.els_cq);
/* Unset FCP response complete queue */
for (fcp_qidx = 0; fcp_qidx < phba->cfg_fcp_eq_count; fcp_qidx++)
fcp_qidx = 0;
do {
lpfc_cq_destroy(phba, phba->sli4_hba.fcp_cq[fcp_qidx]);
} while (++fcp_qidx < phba->cfg_fcp_eq_count);
/* Unset fast-path event queue */
for (fcp_qidx = 0; fcp_qidx < phba->cfg_fcp_eq_count; fcp_qidx++)
lpfc_eq_destroy(phba, phba->sli4_hba.fp_eq[fcp_qidx]);
@ -7040,10 +7104,11 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
* the loop again.
*/
for (rdy_chk = 0; rdy_chk < 1000; rdy_chk++) {
msleep(10);
if (lpfc_readl(phba->sli4_hba.u.if_type2.
STATUSregaddr, &reg_data.word0)) {
rc = -ENODEV;
break;
goto out;
}
if (bf_get(lpfc_sliport_status_rdy, &reg_data))
break;
@ -7051,7 +7116,6 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
reset_again++;
break;
}
msleep(10);
}
/*
@ -7065,11 +7129,6 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
}
/* Detect any port errors. */
if (lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr,
&reg_data.word0)) {
rc = -ENODEV;
break;
}
if ((bf_get(lpfc_sliport_status_err, &reg_data)) ||
(rdy_chk >= 1000)) {
phba->work_status[0] = readl(
@ -7102,6 +7161,7 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
break;
}
out:
/* Catch the not-ready port failure after a port reset. */
if (num_resets >= MAX_IF_TYPE_2_RESETS)
rc = -ENODEV;
@ -7149,12 +7209,13 @@ lpfc_sli4_send_nop_mbox_cmds(struct lpfc_hba *phba, uint32_t cnt)
lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_COMMON,
LPFC_MBOX_OPCODE_NOP, length, LPFC_SLI4_MBX_EMBED);
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
for (cmdsent = 0; cmdsent < cnt; cmdsent++) {
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
else
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
}
if (rc == MBX_TIMEOUT)
break;
/* Check return status */
@ -7974,6 +8035,7 @@ lpfc_sli4_unset_hba(struct lpfc_hba *phba)
/* Reset SLI4 HBA FCoE function */
lpfc_pci_function_reset(phba);
lpfc_sli4_queue_destroy(phba);
return;
}
@ -8087,6 +8149,7 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
/* Reset SLI4 HBA FCoE function */
lpfc_pci_function_reset(phba);
lpfc_sli4_queue_destroy(phba);
/* Stop the SLI4 device port */
phba->pport->work_port_events = 0;
@ -8120,7 +8183,7 @@ lpfc_pc_sli4_params_get(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_PORT_CAPABILITIES);
mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
}
@ -8182,6 +8245,7 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
int rc;
struct lpfc_mqe *mqe = &mboxq->u.mqe;
struct lpfc_pc_sli4_params *sli4_params;
uint32_t mbox_tmo;
int length;
struct lpfc_sli4_parameters *mbx_sli4_parameters;
@ -8200,9 +8264,10 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
length, LPFC_SLI4_MBX_EMBED);
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
else
rc = lpfc_sli_issue_mbox_wait(phba, mboxq,
lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG));
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
}
if (unlikely(rc))
return rc;
sli4_params = &phba->sli4_hba.pc_sli4_params;
@ -8271,11 +8336,8 @@ lpfc_pci_probe_one_s3(struct pci_dev *pdev, const struct pci_device_id *pid)
/* Perform generic PCI device enabling operation */
error = lpfc_enable_pci_dev(phba);
if (error) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"1401 Failed to enable pci device.\n");
if (error)
goto out_free_phba;
}
/* Set up SLI API function jump table for PCI-device group-0 HBAs */
error = lpfc_api_table_setup(phba, LPFC_PCI_DEV_LP);
@ -8322,6 +8384,9 @@ lpfc_pci_probe_one_s3(struct pci_dev *pdev, const struct pci_device_id *pid)
goto out_free_iocb_list;
}
/* Get the default values for Model Name and Description */
lpfc_get_hba_model_desc(phba, phba->ModelName, phba->ModelDesc);
/* Create SCSI host to the physical port */
error = lpfc_create_shost(phba);
if (error) {
@ -8885,16 +8950,17 @@ lpfc_write_firmware(struct lpfc_hba *phba, const struct firmware *fw)
uint32_t offset = 0, temp_offset = 0;
INIT_LIST_HEAD(&dma_buffer_list);
if ((image->magic_number != LPFC_GROUP_OJECT_MAGIC_NUM) ||
(bf_get(lpfc_grp_hdr_file_type, image) != LPFC_FILE_TYPE_GROUP) ||
(bf_get(lpfc_grp_hdr_id, image) != LPFC_FILE_ID_GROUP) ||
(image->size != fw->size)) {
if ((be32_to_cpu(image->magic_number) != LPFC_GROUP_OJECT_MAGIC_NUM) ||
(bf_get_be32(lpfc_grp_hdr_file_type, image) !=
LPFC_FILE_TYPE_GROUP) ||
(bf_get_be32(lpfc_grp_hdr_id, image) != LPFC_FILE_ID_GROUP) ||
(be32_to_cpu(image->size) != fw->size)) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"3022 Invalid FW image found. "
"Magic:%d Type:%x ID:%x\n",
image->magic_number,
bf_get(lpfc_grp_hdr_file_type, image),
bf_get(lpfc_grp_hdr_id, image));
"Magic:%x Type:%x ID:%x\n",
be32_to_cpu(image->magic_number),
bf_get_be32(lpfc_grp_hdr_file_type, image),
bf_get_be32(lpfc_grp_hdr_id, image));
return -EINVAL;
}
lpfc_decode_firmware_rev(phba, fwrev, 1);
@ -8924,11 +8990,11 @@ lpfc_write_firmware(struct lpfc_hba *phba, const struct firmware *fw)
while (offset < fw->size) {
temp_offset = offset;
list_for_each_entry(dmabuf, &dma_buffer_list, list) {
if (offset + SLI4_PAGE_SIZE > fw->size) {
temp_offset += fw->size - offset;
if (temp_offset + SLI4_PAGE_SIZE > fw->size) {
memcpy(dmabuf->virt,
fw->data + temp_offset,
fw->size - offset);
fw->size - temp_offset);
temp_offset = fw->size;
break;
}
memcpy(dmabuf->virt, fw->data + temp_offset,
@ -8984,7 +9050,6 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
uint32_t cfg_mode, intr_mode;
int mcnt;
int adjusted_fcp_eq_count;
int fcp_qidx;
const struct firmware *fw;
uint8_t file_name[16];
@ -8995,11 +9060,8 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
/* Perform generic PCI device enabling operation */
error = lpfc_enable_pci_dev(phba);
if (error) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"1409 Failed to enable pci device.\n");
if (error)
goto out_free_phba;
}
/* Set up SLI API function jump table for PCI-device group-1 HBAs */
error = lpfc_api_table_setup(phba, LPFC_PCI_DEV_OC);
@ -9054,6 +9116,9 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
goto out_free_iocb_list;
}
/* Get the default values for Model Name and Description */
lpfc_get_hba_model_desc(phba, phba->ModelName, phba->ModelDesc);
/* Create SCSI host to the physical port */
error = lpfc_create_shost(phba);
if (error) {
@ -9093,16 +9158,6 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
adjusted_fcp_eq_count = phba->sli4_hba.msix_vec_nr - 1;
else
adjusted_fcp_eq_count = phba->cfg_fcp_eq_count;
/* Free unused EQs */
for (fcp_qidx = adjusted_fcp_eq_count;
fcp_qidx < phba->cfg_fcp_eq_count;
fcp_qidx++) {
lpfc_sli4_queue_free(phba->sli4_hba.fp_eq[fcp_qidx]);
/* do not delete the first fcp_cq */
if (fcp_qidx)
lpfc_sli4_queue_free(
phba->sli4_hba.fcp_cq[fcp_qidx]);
}
phba->cfg_fcp_eq_count = adjusted_fcp_eq_count;
/* Set up SLI-4 HBA */
if (lpfc_sli4_hba_setup(phba)) {
@ -9285,6 +9340,7 @@ lpfc_pci_suspend_one_s4(struct pci_dev *pdev, pm_message_t msg)
/* Disable interrupt from device */
lpfc_sli4_disable_intr(phba);
lpfc_sli4_queue_destroy(phba);
/* Save device state to PCI config space */
pci_save_state(pdev);
@ -9414,6 +9470,7 @@ lpfc_sli4_prep_dev_for_reset(struct lpfc_hba *phba)
/* Disable interrupt and pci device */
lpfc_sli4_disable_intr(phba);
lpfc_sli4_queue_destroy(phba);
pci_disable_device(phba->pcidev);
/* Flush all driver's outstanding SCSI I/Os as we are to reset */

View File

@ -36,6 +36,7 @@
#define LOG_SECURITY 0x00008000 /* Security events */
#define LOG_EVENT 0x00010000 /* CT,TEMP,DUMP, logging */
#define LOG_FIP 0x00020000 /* FIP events */
#define LOG_FCP_UNDER 0x00040000 /* FCP underruns errors */
#define LOG_ALL_MSG 0xffffffff /* LOG all messages */
#define lpfc_printf_vlog(vport, level, mask, fmt, arg...) \

View File

@ -1598,9 +1598,12 @@ lpfc_mbox_dev_check(struct lpfc_hba *phba)
* Timeout value to be used for the given mailbox command
**/
int
lpfc_mbox_tmo_val(struct lpfc_hba *phba, int cmd)
lpfc_mbox_tmo_val(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
{
switch (cmd) {
MAILBOX_t *mbox = &mboxq->u.mb;
uint8_t subsys, opcode;
switch (mbox->mbxCommand) {
case MBX_WRITE_NV: /* 0x03 */
case MBX_UPDATE_CFG: /* 0x1B */
case MBX_DOWN_LOAD: /* 0x1C */
@ -1610,6 +1613,28 @@ lpfc_mbox_tmo_val(struct lpfc_hba *phba, int cmd)
case MBX_LOAD_EXP_ROM: /* 0x9C */
return LPFC_MBOX_TMO_FLASH_CMD;
case MBX_SLI4_CONFIG: /* 0x9b */
subsys = lpfc_sli_config_mbox_subsys_get(phba, mboxq);
opcode = lpfc_sli_config_mbox_opcode_get(phba, mboxq);
if (subsys == LPFC_MBOX_SUBSYSTEM_COMMON) {
switch (opcode) {
case LPFC_MBOX_OPCODE_READ_OBJECT:
case LPFC_MBOX_OPCODE_WRITE_OBJECT:
case LPFC_MBOX_OPCODE_READ_OBJECT_LIST:
case LPFC_MBOX_OPCODE_DELETE_OBJECT:
case LPFC_MBOX_OPCODE_GET_FUNCTION_CONFIG:
case LPFC_MBOX_OPCODE_GET_PROFILE_LIST:
case LPFC_MBOX_OPCODE_SET_ACT_PROFILE:
case LPFC_MBOX_OPCODE_SET_PROFILE_CONFIG:
case LPFC_MBOX_OPCODE_GET_FACTORY_PROFILE_CONFIG:
return LPFC_MBOX_SLI4_CONFIG_EXTENDED_TMO;
}
}
if (subsys == LPFC_MBOX_SUBSYSTEM_FCOE) {
switch (opcode) {
case LPFC_MBOX_OPCODE_FCOE_SET_FCLINK_SETTINGS:
return LPFC_MBOX_SLI4_CONFIG_EXTENDED_TMO;
}
}
return LPFC_MBOX_SLI4_CONFIG_TMO;
}
return LPFC_MBOX_TMO;
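For context, a minimal sketch of the caller pattern this signature change enables, mirroring the hunks below (mboxq is assumed to be a fully prepared LPFC_MBOXQ_t):

uint32_t mbox_tmo;
int rc;

if (!phba->sli4_hba.intr_enable)
	rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
else {
	/* The timeout is now derived from the command held in mboxq,
	 * so SLI_CONFIG sub-commands can receive the extended value.
	 */
	mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
	rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
}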
@ -1859,7 +1884,7 @@ lpfc_sli4_mbox_rsrc_extent(struct lpfc_hba *phba, struct lpfcMboxq *mbox,
}
/* Complete the initialization for the particular Opcode. */
opcode = lpfc_sli4_mbox_opcode_get(phba, mbox);
opcode = lpfc_sli_config_mbox_opcode_get(phba, mbox);
switch (opcode) {
case LPFC_MBOX_OPCODE_ALLOC_RSRC_EXTENT:
if (emb == LPFC_SLI4_MBX_EMBED)
@ -1886,23 +1911,56 @@ lpfc_sli4_mbox_rsrc_extent(struct lpfc_hba *phba, struct lpfcMboxq *mbox,
}
/**
* lpfc_sli4_mbox_opcode_get - Get the opcode from a sli4 mailbox command
* lpfc_sli_config_mbox_subsys_get - Get subsystem from a sli_config mbox cmd
* @phba: pointer to lpfc hba data structure.
* @mbox: pointer to lpfc mbox command.
* @mbox: pointer to lpfc mbox command queue entry.
*
* This routine gets the opcode from a SLI4 specific mailbox command for
* sending IOCTL command. If the mailbox command is not MBX_SLI4_CONFIG
* (0x9B) or if the IOCTL sub-header is not present, opcode 0x0 shall be
* returned.
* This routine gets the subsystem from a SLI4 specific SLI_CONFIG mailbox
* command. If the mailbox command is not MBX_SLI4_CONFIG (0x9B) or if the
* sub-header is not present, subsystem LPFC_MBOX_SUBSYSTEM_NA (0x0) shall
* be returned.
**/
uint8_t
lpfc_sli4_mbox_opcode_get(struct lpfc_hba *phba, struct lpfcMboxq *mbox)
lpfc_sli_config_mbox_subsys_get(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox)
{
struct lpfc_mbx_sli4_config *sli4_cfg;
union lpfc_sli4_cfg_shdr *cfg_shdr;
if (mbox->u.mb.mbxCommand != MBX_SLI4_CONFIG)
return 0;
return LPFC_MBOX_SUBSYSTEM_NA;
sli4_cfg = &mbox->u.mqe.un.sli4_config;
/* For embedded mbox command, get opcode from embedded sub-header*/
if (bf_get(lpfc_mbox_hdr_emb, &sli4_cfg->header.cfg_mhdr)) {
cfg_shdr = &mbox->u.mqe.un.sli4_config.header.cfg_shdr;
return bf_get(lpfc_mbox_hdr_subsystem, &cfg_shdr->request);
}
/* For non-embedded mbox command, get opcode from first dma page */
if (unlikely(!mbox->sge_array))
return LPFC_MBOX_SUBSYSTEM_NA;
cfg_shdr = (union lpfc_sli4_cfg_shdr *)mbox->sge_array->addr[0];
return bf_get(lpfc_mbox_hdr_subsystem, &cfg_shdr->request);
}
/**
* lpfc_sli_config_mbox_opcode_get - Get opcode from a sli_config mbox cmd
* @phba: pointer to lpfc hba data structure.
* @mbox: pointer to lpfc mbox command queue entry.
*
* This routine gets the opcode from a SLI4 specific SLI_CONFIG mailbox
* command. If the mailbox command is not MBX_SLI4_CONFIG (0x9B) or if
* the sub-header is not present, opcode LPFC_MBOX_OPCODE_NA (0x0) shall
* be returned.
**/
uint8_t
lpfc_sli_config_mbox_opcode_get(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox)
{
struct lpfc_mbx_sli4_config *sli4_cfg;
union lpfc_sli4_cfg_shdr *cfg_shdr;
if (mbox->u.mb.mbxCommand != MBX_SLI4_CONFIG)
return LPFC_MBOX_OPCODE_NA;
sli4_cfg = &mbox->u.mqe.un.sli4_config;
/* For embedded mbox command, get opcode from embedded sub-header*/
@ -1913,7 +1971,7 @@ lpfc_sli4_mbox_opcode_get(struct lpfc_hba *phba, struct lpfcMboxq *mbox)
/* For non-embedded mbox command, get opcode from first dma page */
if (unlikely(!mbox->sge_array))
return 0;
return LPFC_MBOX_OPCODE_NA;
cfg_shdr = (union lpfc_sli4_cfg_shdr *)mbox->sge_array->addr[0];
return bf_get(lpfc_mbox_hdr_opcode, &cfg_shdr->request);
}
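A hedged sketch of how the two accessors pair up in diagnostics; log messages in this series print command, subsystem, and opcode as x%x (x%x/x%x). The message number 0000 below is a placeholder, not a real lpfc message ID:

lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
		"(%d):0000 Mailbox cmd x%x (x%x/x%x)\n",
		mboxq->vport ? mboxq->vport->vpi : 0,
		mboxq->u.mb.mbxCommand,
		lpfc_sli_config_mbox_subsys_get(phba, mboxq),
		lpfc_sli_config_mbox_opcode_get(phba, mboxq));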

View File

@ -58,6 +58,13 @@ static char *dif_op_str[] = {
"SCSI_PROT_READ_PASS",
"SCSI_PROT_WRITE_PASS",
};
struct scsi_dif_tuple {
__be16 guard_tag; /* Checksum */
__be16 app_tag; /* Opaque storage */
__be32 ref_tag; /* Target LBA or indirect LBA */
};
static void
lpfc_release_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_scsi_buf *psb);
static void
@ -1263,6 +1270,174 @@ lpfc_scsi_prep_dma_buf_s3(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
return 0;
}
static inline unsigned
lpfc_cmd_blksize(struct scsi_cmnd *sc)
{
return sc->device->sector_size;
}
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
/*
* Given a scsi cmnd, determine the BlockGuard tags to be used with it
* @sc: The SCSI command to examine
* @reftag: (out) BlockGuard reference tag for transmitted data
* @apptag: (out) BlockGuard application tag for transmitted data
* @new_guard: (in) Value to replace CRC with if needed
*
* Returns (1) if error injection was performed, (0) otherwise
*/
static int
lpfc_bg_err_inject(struct lpfc_hba *phba, struct scsi_cmnd *sc,
uint32_t *reftag, uint16_t *apptag, uint32_t new_guard)
{
struct scatterlist *sgpe; /* s/g prot entry */
struct scatterlist *sgde; /* s/g data entry */
struct scsi_dif_tuple *src;
uint32_t op = scsi_get_prot_op(sc);
uint32_t blksize;
uint32_t numblks;
sector_t lba;
int rc = 0;
if (op == SCSI_PROT_NORMAL)
return 0;
lba = scsi_get_lba(sc);
if (phba->lpfc_injerr_lba != LPFC_INJERR_LBA_OFF) {
blksize = lpfc_cmd_blksize(sc);
numblks = (scsi_bufflen(sc) + blksize - 1) / blksize;
/* Make sure we have the right LBA if one is specified */
if ((phba->lpfc_injerr_lba < lba) ||
(phba->lpfc_injerr_lba >= (lba + numblks)))
return 0;
}
sgpe = scsi_prot_sglist(sc);
sgde = scsi_sglist(sc);
/* Should we change the Reference Tag */
if (reftag) {
/*
* If we are SCSI_PROT_WRITE_STRIP, the protection data is
* being stripped from the wire, thus it doesn't matter.
*/
if ((op == SCSI_PROT_WRITE_PASS) ||
(op == SCSI_PROT_WRITE_INSERT)) {
if (phba->lpfc_injerr_wref_cnt) {
/* DEADBEEF will be the reftag on the wire */
*reftag = 0xDEADBEEF;
phba->lpfc_injerr_wref_cnt--;
phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF;
rc = 1;
lpfc_printf_log(phba, KERN_ERR, LOG_BG,
"9081 BLKGRD: Injecting reftag error: "
"write lba x%lx\n", (unsigned long)lba);
}
} else {
if (phba->lpfc_injerr_rref_cnt) {
*reftag = 0xDEADBEEF;
phba->lpfc_injerr_rref_cnt--;
phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF;
rc = 1;
lpfc_printf_log(phba, KERN_ERR, LOG_BG,
"9076 BLKGRD: Injecting reftag error: "
"read lba x%lx\n", (unsigned long)lba);
}
}
}
/* Should we change the Application Tag */
if (apptag) {
/*
* If we are SCSI_PROT_WRITE_STRIP, the protection data is
* being stripped from the wire, thus it doesn't matter.
*/
if ((op == SCSI_PROT_WRITE_PASS) ||
(op == SCSI_PROT_WRITE_INSERT)) {
if (phba->lpfc_injerr_wapp_cnt) {
/* DEAD will be the apptag on the wire */
*apptag = 0xDEAD;
phba->lpfc_injerr_wapp_cnt--;
phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF;
rc = 1;
lpfc_printf_log(phba, KERN_ERR, LOG_BG,
"9077 BLKGRD: Injecting apptag error: "
"write lba x%lx\n", (unsigned long)lba);
}
} else {
if (phba->lpfc_injerr_rapp_cnt) {
*apptag = 0xDEAD;
phba->lpfc_injerr_rapp_cnt--;
phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF;
rc = 1;
lpfc_printf_log(phba, KERN_ERR, LOG_BG,
"9078 BLKGRD: Injecting apptag error: "
"read lba x%lx\n", (unsigned long)lba);
}
}
}
/* Should we change the Guard Tag */
/*
* If we are SCSI_PROT_WRITE_INSERT, the protection data on the
* wire is being fully generated on the HBA.
* The host cannot change it or force an error.
*/
if (((op == SCSI_PROT_WRITE_STRIP) ||
(op == SCSI_PROT_WRITE_PASS)) &&
phba->lpfc_injerr_wgrd_cnt) {
if (sgpe) {
src = (struct scsi_dif_tuple *)sg_virt(sgpe);
/*
* Just inject an error in the first
* prot block.
*/
lpfc_printf_log(phba, KERN_ERR, LOG_BG,
"9079 BLKGRD: Injecting guard error: "
"write lba x%lx oldGuard x%x refTag x%x\n",
(unsigned long)lba, src->guard_tag,
src->ref_tag);
src->guard_tag = (uint16_t)new_guard;
phba->lpfc_injerr_wgrd_cnt--;
phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF;
rc = 1;
} else {
blksize = lpfc_cmd_blksize(sc);
/*
* Jump past the first data block
* and inject an error in the
* prot data. The prot data is already
* embedded after the regular data.
*/
src = (struct scsi_dif_tuple *)
(sg_virt(sgde) + blksize);
lpfc_printf_log(phba, KERN_ERR, LOG_BG,
"9080 BLKGRD: Injecting guard error: "
"write lba x%lx oldGuard x%x refTag x%x\n",
(unsigned long)lba, src->guard_tag,
src->ref_tag);
src->guard_tag = (uint16_t)new_guard;
phba->lpfc_injerr_wgrd_cnt--;
phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF;
rc = 1;
}
}
return rc;
}
#endif
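For reference, a minimal sketch of the guard-tag corruption performed above, against the struct scsi_dif_tuple layout defined earlier in this file; like lpfc_bg_err_inject(), it stores the raw 16-bit value (the helper name is illustrative only):

/* 'prot' is assumed to point at the first 8-byte DIF tuple of a
 * protection buffer, e.g. sg_virt() of the first prot sg entry.
 */
static void bg_corrupt_guard(struct scsi_dif_tuple *prot, uint16_t new_guard)
{
	/* A deliberate guard mismatch makes the HBA or the target
	 * report a BlockGuard guard-check error for this block.
	 */
	prot->guard_tag = new_guard;
}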
/*
* Given a scsi cmnd, determine the BlockGuard opcodes to be used with it
* @sc: The SCSI command to examine
@ -1341,18 +1516,6 @@ lpfc_sc_to_bg_opcodes(struct lpfc_hba *phba, struct scsi_cmnd *sc,
return ret;
}
struct scsi_dif_tuple {
__be16 guard_tag; /* Checksum */
__be16 app_tag; /* Opaque storage */
__be32 ref_tag; /* Target LBA or indirect LBA */
};
static inline unsigned
lpfc_cmd_blksize(struct scsi_cmnd *sc)
{
return sc->device->sector_size;
}
/*
* This function sets up buffer list for protection groups of
* type LPFC_PG_TYPE_NO_DIF
@ -1401,6 +1564,11 @@ lpfc_bg_setup_bpl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
blksize = lpfc_cmd_blksize(sc);
reftag = scsi_get_lba(sc) & 0xffffffff;
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
/* reftag is the only error we can inject here */
lpfc_bg_err_inject(phba, sc, &reftag, 0, 0);
#endif
/* setup PDE5 with what we have */
pde5 = (struct lpfc_pde5 *) bpl;
memset(pde5, 0, sizeof(struct lpfc_pde5));
@ -1532,6 +1700,11 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
blksize = lpfc_cmd_blksize(sc);
reftag = scsi_get_lba(sc) & 0xffffffff;
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
/* reftag / guard tag are the only errors we can inject here */
lpfc_bg_err_inject(phba, sc, &reftag, 0, 0xDEAD);
#endif
split_offset = 0;
do {
/* setup PDE5 with what we have */
@ -1671,7 +1844,6 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
}
} while (!alldone);
out:
return num_bde;
@ -2075,6 +2247,7 @@ lpfc_scsi_prep_dma_buf_s4(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
else
bf_set(lpfc_sli4_sge_last, sgl, 0);
bf_set(lpfc_sli4_sge_offset, sgl, dma_offset);
bf_set(lpfc_sli4_sge_type, sgl, LPFC_SGE_TYPE_DATA);
sgl->word2 = cpu_to_le32(sgl->word2);
sgl->sge_len = cpu_to_le32(dma_len);
dma_offset += dma_len;
@ -2325,8 +2498,9 @@ lpfc_handle_fcp_err(struct lpfc_vport *vport, struct lpfc_scsi_buf *lpfc_cmd,
}
lp = (uint32_t *)cmnd->sense_buffer;
if (!scsi_status && (resp_info & RESID_UNDER))
logit = LOG_FCP;
if (!scsi_status && (resp_info & RESID_UNDER) &&
vport->cfg_log_verbose & LOG_FCP_UNDER)
logit = LOG_FCP_UNDER;
lpfc_printf_vlog(vport, KERN_WARNING, logit,
"9024 FCP command x%x failed: x%x SNS x%x x%x "
@ -2342,7 +2516,7 @@ lpfc_handle_fcp_err(struct lpfc_vport *vport, struct lpfc_scsi_buf *lpfc_cmd,
if (resp_info & RESID_UNDER) {
scsi_set_resid(cmnd, be32_to_cpu(fcprsp->rspResId));
lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP_UNDER,
"9025 FCP Read Underrun, expected %d, "
"residual %d Data: x%x x%x x%x\n",
be32_to_cpu(fcpcmd->fcpDl),
@ -2449,6 +2623,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
struct lpfc_fast_path_event *fast_path_evt;
struct Scsi_Host *shost;
uint32_t queue_depth, scsi_id;
uint32_t logit = LOG_FCP;
/* Sanity check on return of outstanding command */
if (!(lpfc_cmd->pCmd))
@ -2470,16 +2645,22 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
lpfc_cmd->status = IOSTAT_DRIVER_REJECT;
else if (lpfc_cmd->status >= IOSTAT_CNT)
lpfc_cmd->status = IOSTAT_DEFAULT;
lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
"9030 FCP cmd x%x failed <%d/%d> "
"status: x%x result: x%x Data: x%x x%x\n",
cmd->cmnd[0],
cmd->device ? cmd->device->id : 0xffff,
cmd->device ? cmd->device->lun : 0xffff,
lpfc_cmd->status, lpfc_cmd->result,
pIocbOut->iocb.ulpContext,
lpfc_cmd->cur_iocbq.iocb.ulpIoTag);
if (lpfc_cmd->status == IOSTAT_FCP_RSP_ERROR
&& !lpfc_cmd->fcp_rsp->rspStatus3
&& (lpfc_cmd->fcp_rsp->rspStatus2 & RESID_UNDER)
&& !(phba->cfg_log_verbose & LOG_FCP_UNDER))
logit = 0;
else
logit = LOG_FCP | LOG_FCP_UNDER;
lpfc_printf_vlog(vport, KERN_WARNING, logit,
"9030 FCP cmd x%x failed <%d/%d> "
"status: x%x result: x%x Data: x%x x%x\n",
cmd->cmnd[0],
cmd->device ? cmd->device->id : 0xffff,
cmd->device ? cmd->device->lun : 0xffff,
lpfc_cmd->status, lpfc_cmd->result,
pIocbOut->iocb.ulpContext,
lpfc_cmd->cur_iocbq.iocb.ulpIoTag);
switch (lpfc_cmd->status) {
case IOSTAT_FCP_RSP_ERROR:
@ -3056,8 +3237,9 @@ lpfc_queuecommand_lck(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *))
}
ndlp = rdata->pnode;
if (!(phba->sli3_options & LPFC_SLI3_BG_ENABLED) &&
scsi_get_prot_op(cmnd) != SCSI_PROT_NORMAL) {
if ((scsi_get_prot_op(cmnd) != SCSI_PROT_NORMAL) &&
(!(phba->sli3_options & LPFC_SLI3_BG_ENABLED) ||
(phba->sli_rev == LPFC_SLI_REV4))) {
lpfc_printf_log(phba, KERN_ERR, LOG_BG,
"9058 BLKGRD: ERROR: rcvd protected cmd:%02x"
@ -3691,9 +3873,9 @@ lpfc_bus_reset_handler(struct scsi_cmnd *cmnd)
fc_host_post_vendor_event(shost, fc_get_event_number(),
sizeof(scsi_event), (char *)&scsi_event, LPFC_NL_VENDOR_ID);
ret = fc_block_scsi_eh(cmnd);
if (ret)
return ret;
status = fc_block_scsi_eh(cmnd);
if (status)
return status;
/*
* Since the driver manages a single bus device, reset all

View File

@ -379,10 +379,10 @@ lpfc_sli4_rq_put(struct lpfc_queue *hq, struct lpfc_queue *dq,
dq->host_index = ((dq->host_index + 1) % dq->entry_count);
/* Ring The Header Receive Queue Doorbell */
if (!(hq->host_index % LPFC_RQ_POST_BATCH)) {
if (!(hq->host_index % hq->entry_repost)) {
doorbell.word0 = 0;
bf_set(lpfc_rq_doorbell_num_posted, &doorbell,
LPFC_RQ_POST_BATCH);
hq->entry_repost);
bf_set(lpfc_rq_doorbell_id, &doorbell, hq->queue_id);
writel(doorbell.word0, hq->phba->sli4_hba.RQDBregaddr);
}
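Here hq->entry_repost replaces the fixed LPFC_RQ_POST_BATCH: the value is computed at allocation time as entry_count >> 3 (clamped to LPFC_QUEUE_MIN_REPOST) and, for RQs, recomputed by lpfc_rq_adjust_repost() later in this series. With a 64-entry RQ, for example, the doorbell rings once every 8 postings.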
@ -1864,7 +1864,7 @@ lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *phba, uint32_t qno)
{
if (phba->sli_rev == LPFC_SLI_REV4)
return lpfc_sli_hbqbuf_fill_hbqs(phba, qno,
lpfc_hbq_defs[qno]->entry_count);
lpfc_hbq_defs[qno]->entry_count);
else
return lpfc_sli_hbqbuf_fill_hbqs(phba, qno,
lpfc_hbq_defs[qno]->init_count);
@ -2200,10 +2200,13 @@ lpfc_sli_handle_mb_event(struct lpfc_hba *phba)
/* Unknown mailbox command compl */
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"(%d):0323 Unknown Mailbox command "
"x%x (x%x) Cmpl\n",
"x%x (x%x/x%x) Cmpl\n",
pmb->vport ? pmb->vport->vpi : 0,
pmbox->mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, pmb));
lpfc_sli_config_mbox_subsys_get(phba,
pmb),
lpfc_sli_config_mbox_opcode_get(phba,
pmb));
phba->link_state = LPFC_HBA_ERROR;
phba->work_hs = HS_FFER3;
lpfc_handle_eratt(phba);
@ -2215,17 +2218,19 @@ lpfc_sli_handle_mb_event(struct lpfc_hba *phba)
if (pmbox->mbxStatus == MBXERR_NO_RESOURCES) {
/* Mbox cmd cmpl error - RETRYing */
lpfc_printf_log(phba, KERN_INFO,
LOG_MBOX | LOG_SLI,
"(%d):0305 Mbox cmd cmpl "
"error - RETRYing Data: x%x "
"(x%x) x%x x%x x%x\n",
pmb->vport ? pmb->vport->vpi :0,
pmbox->mbxCommand,
lpfc_sli4_mbox_opcode_get(phba,
pmb),
pmbox->mbxStatus,
pmbox->un.varWords[0],
pmb->vport->port_state);
LOG_MBOX | LOG_SLI,
"(%d):0305 Mbox cmd cmpl "
"error - RETRYing Data: x%x "
"(x%x/x%x) x%x x%x x%x\n",
pmb->vport ? pmb->vport->vpi : 0,
pmbox->mbxCommand,
lpfc_sli_config_mbox_subsys_get(phba,
pmb),
lpfc_sli_config_mbox_opcode_get(phba,
pmb),
pmbox->mbxStatus,
pmbox->un.varWords[0],
pmb->vport->port_state);
pmbox->mbxStatus = 0;
pmbox->mbxOwner = OWN_HOST;
rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
@ -2236,11 +2241,12 @@ lpfc_sli_handle_mb_event(struct lpfc_hba *phba)
/* Mailbox cmd <cmd> Cmpl <cmpl> */
lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
"(%d):0307 Mailbox cmd x%x (x%x) Cmpl x%p "
"(%d):0307 Mailbox cmd x%x (x%x/x%x) Cmpl x%p "
"Data: x%x x%x x%x x%x x%x x%x x%x x%x x%x\n",
pmb->vport ? pmb->vport->vpi : 0,
pmbox->mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, pmb),
lpfc_sli_config_mbox_subsys_get(phba, pmb),
lpfc_sli_config_mbox_opcode_get(phba, pmb),
pmb->mbox_cmpl,
*((uint32_t *) pmbox),
pmbox->un.varWords[0],
@ -4685,6 +4691,175 @@ lpfc_sli4_read_rev(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
return 0;
}
/**
* lpfc_sli4_retrieve_pport_name - Retrieve SLI4 device physical port name
* @phba: pointer to lpfc hba data structure.
*
* This routine retrieves the physical port name of the SLI4 device port
* this PCI function is attached to.
*
* Return codes
* 0 - successful
* otherwise - failed to retrieve physical port name
**/
static int
lpfc_sli4_retrieve_pport_name(struct lpfc_hba *phba)
{
LPFC_MBOXQ_t *mboxq;
struct lpfc_mbx_read_config *rd_config;
struct lpfc_mbx_get_cntl_attributes *mbx_cntl_attr;
struct lpfc_controller_attribute *cntl_attr;
struct lpfc_mbx_get_port_name *get_port_name;
void *virtaddr = NULL;
uint32_t alloclen, reqlen;
uint32_t shdr_status, shdr_add_status;
union lpfc_sli4_cfg_shdr *shdr;
char cport_name = 0;
int rc;
/* We assume nothing at this point */
phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_INVAL;
phba->sli4_hba.pport_name_sta = LPFC_SLI4_PPNAME_NON;
mboxq = (LPFC_MBOXQ_t *)mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!mboxq)
return -ENOMEM;
/* obtain link type and link number via READ_CONFIG */
lpfc_read_config(phba, mboxq);
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
if (rc == MBX_SUCCESS) {
rd_config = &mboxq->u.mqe.un.rd_config;
if (bf_get(lpfc_mbx_rd_conf_lnk_ldv, rd_config)) {
phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_VAL;
phba->sli4_hba.lnk_info.lnk_tp =
bf_get(lpfc_mbx_rd_conf_lnk_type, rd_config);
phba->sli4_hba.lnk_info.lnk_no =
bf_get(lpfc_mbx_rd_conf_lnk_numb, rd_config);
lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"3081 lnk_type:%d, lnk_numb:%d\n",
phba->sli4_hba.lnk_info.lnk_tp,
phba->sli4_hba.lnk_info.lnk_no);
goto retrieve_ppname;
} else
lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
"3082 Mailbox (x%x) returned ldv:x0\n",
bf_get(lpfc_mqe_command,
&mboxq->u.mqe));
} else
lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
"3083 Mailbox (x%x) failed, status:x%x\n",
bf_get(lpfc_mqe_command, &mboxq->u.mqe),
bf_get(lpfc_mqe_status, &mboxq->u.mqe));
/* obtain link type and link number via COMMON_GET_CNTL_ATTRIBUTES */
reqlen = sizeof(struct lpfc_mbx_get_cntl_attributes);
alloclen = lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_COMMON,
LPFC_MBOX_OPCODE_GET_CNTL_ATTRIBUTES, reqlen,
LPFC_SLI4_MBX_NEMBED);
if (alloclen < reqlen) {
lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
"3084 Allocated DMA memory size (%d) is "
"less than the requested DMA memory size "
"(%d)\n", alloclen, reqlen);
rc = -ENOMEM;
goto out_free_mboxq;
}
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
virtaddr = mboxq->sge_array->addr[0];
mbx_cntl_attr = (struct lpfc_mbx_get_cntl_attributes *)virtaddr;
shdr = &mbx_cntl_attr->cfg_shdr;
shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
"3085 Mailbox x%x (x%x/x%x) failed, "
"rc:x%x, status:x%x, add_status:x%x\n",
bf_get(lpfc_mqe_command, &mboxq->u.mqe),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
rc, shdr_status, shdr_add_status);
rc = -ENXIO;
goto out_free_mboxq;
}
cntl_attr = &mbx_cntl_attr->cntl_attr;
phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_VAL;
phba->sli4_hba.lnk_info.lnk_tp =
bf_get(lpfc_cntl_attr_lnk_type, cntl_attr);
phba->sli4_hba.lnk_info.lnk_no =
bf_get(lpfc_cntl_attr_lnk_numb, cntl_attr);
lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"3086 lnk_type:%d, lnk_numb:%d\n",
phba->sli4_hba.lnk_info.lnk_tp,
phba->sli4_hba.lnk_info.lnk_no);
retrieve_ppname:
lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_COMMON,
LPFC_MBOX_OPCODE_GET_PORT_NAME,
sizeof(struct lpfc_mbx_get_port_name) -
sizeof(struct lpfc_sli4_cfg_mhdr),
LPFC_SLI4_MBX_EMBED);
get_port_name = &mboxq->u.mqe.un.get_port_name;
shdr = (union lpfc_sli4_cfg_shdr *)&get_port_name->header.cfg_shdr;
bf_set(lpfc_mbox_hdr_version, &shdr->request, LPFC_OPCODE_VERSION_1);
bf_set(lpfc_mbx_get_port_name_lnk_type, &get_port_name->u.request,
phba->sli4_hba.lnk_info.lnk_tp);
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response);
if (shdr_status || shdr_add_status || rc) {
lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
"3087 Mailbox x%x (x%x/x%x) failed: "
"rc:x%x, status:x%x, add_status:x%x\n",
bf_get(lpfc_mqe_command, &mboxq->u.mqe),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
rc, shdr_status, shdr_add_status);
rc = -ENXIO;
goto out_free_mboxq;
}
switch (phba->sli4_hba.lnk_info.lnk_no) {
case LPFC_LINK_NUMBER_0:
cport_name = bf_get(lpfc_mbx_get_port_name_name0,
&get_port_name->u.response);
phba->sli4_hba.pport_name_sta = LPFC_SLI4_PPNAME_GET;
break;
case LPFC_LINK_NUMBER_1:
cport_name = bf_get(lpfc_mbx_get_port_name_name1,
&get_port_name->u.response);
phba->sli4_hba.pport_name_sta = LPFC_SLI4_PPNAME_GET;
break;
case LPFC_LINK_NUMBER_2:
cport_name = bf_get(lpfc_mbx_get_port_name_name2,
&get_port_name->u.response);
phba->sli4_hba.pport_name_sta = LPFC_SLI4_PPNAME_GET;
break;
case LPFC_LINK_NUMBER_3:
cport_name = bf_get(lpfc_mbx_get_port_name_name3,
&get_port_name->u.response);
phba->sli4_hba.pport_name_sta = LPFC_SLI4_PPNAME_GET;
break;
default:
break;
}
if (phba->sli4_hba.pport_name_sta == LPFC_SLI4_PPNAME_GET) {
phba->Port[0] = cport_name;
phba->Port[1] = '\0';
lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"3091 SLI get port name: %s\n", phba->Port);
}
out_free_mboxq:
if (rc != MBX_TIMEOUT) {
if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
lpfc_sli4_mbox_cmd_free(phba, mboxq);
else
mempool_free(mboxq, phba->mbox_mem_pool);
}
return rc;
}
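In outline, the retrieval above proceeds in three mailbox steps, each a fallback for the previous:

/*
 * 1. READ_CONFIG            - link type/number, when ldv is set
 * 2. GET_CNTL_ATTRIBUTES    - link type/number (fallback path)
 * 3. GET_PORT_NAME (ver. 1) - one name character per link number
 *
 * On success, phba->Port[] holds the one-character port name and
 * pport_name_sta is set to LPFC_SLI4_PPNAME_GET.
 */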
/**
* lpfc_sli4_arm_cqeq_intr - Arm sli-4 device completion and event queues
* @phba: pointer to lpfc hba data structure.
@ -4754,7 +4929,7 @@ lpfc_sli4_get_avail_extnt_rsrc(struct lpfc_hba *phba, uint16_t type,
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
if (unlikely(rc)) {
@ -4911,7 +5086,7 @@ lpfc_sli4_cfg_post_extnts(struct lpfc_hba *phba, uint16_t *extnt_cnt,
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
@ -5194,7 +5369,7 @@ lpfc_sli4_dealloc_extent(struct lpfc_hba *phba, uint16_t type)
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox_tmo);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
if (unlikely(rc)) {
@ -5619,7 +5794,7 @@ lpfc_sli4_get_allocated_extnts(struct lpfc_hba *phba, uint16_t type,
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
@ -5748,6 +5923,17 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
kfree(vpd);
goto out_free_mbox;
}
/*
* Retrieve the sli4 device physical port name; a failure to do so
* is considered non-fatal.
*/
rc = lpfc_sli4_retrieve_pport_name(phba);
if (!rc)
lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
"3080 Successful retrieving SLI4 device "
"physical port name: %s.\n", phba->Port);
/*
* Evaluate the read rev and vpd data. Populate the driver
* state with the results. If this routine fails, the failure
@ -5818,9 +6004,13 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
* then turn off the global config parameters to disable the
* feature in the driver. This is not a fatal error.
*/
if ((phba->cfg_enable_bg) &&
!(bf_get(lpfc_mbx_rq_ftr_rsp_dif, &mqe->un.req_ftrs)))
ftr_rsp++;
phba->sli3_options &= ~LPFC_SLI3_BG_ENABLED;
if (phba->cfg_enable_bg) {
if (bf_get(lpfc_mbx_rq_ftr_rsp_dif, &mqe->un.req_ftrs))
phba->sli3_options |= LPFC_SLI3_BG_ENABLED;
else
ftr_rsp++;
}
if (phba->max_vpi && phba->cfg_enable_npiv &&
!(bf_get(lpfc_mbx_rq_ftr_rsp_npiv, &mqe->un.req_ftrs)))
@ -5937,12 +6127,20 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
goto out_free_mbox;
}
/* Create all the SLI4 queues */
rc = lpfc_sli4_queue_create(phba);
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"3089 Failed to allocate queues\n");
rc = -ENODEV;
goto out_stop_timers;
}
/* Set up all the queues to the device */
rc = lpfc_sli4_queue_setup(phba);
if (unlikely(rc)) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"0381 Error %d during queue setup.\n ", rc);
goto out_stop_timers;
goto out_destroy_queue;
}
/* Arm the CQs and then EQs on device */
@ -6015,15 +6213,20 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
spin_lock_irq(&phba->hbalock);
phba->link_state = LPFC_LINK_DOWN;
spin_unlock_irq(&phba->hbalock);
if (phba->cfg_suppress_link_up == LPFC_INITIALIZE_LINK)
if (phba->cfg_suppress_link_up == LPFC_INITIALIZE_LINK) {
rc = phba->lpfc_hba_init_link(phba, MBX_NOWAIT);
if (rc)
goto out_unset_queue;
}
mempool_free(mboxq, phba->mbox_mem_pool);
return rc;
out_unset_queue:
/* Unset all the queues set up in this routine when error out */
if (rc)
lpfc_sli4_queue_unset(phba);
lpfc_sli4_queue_unset(phba);
out_destroy_queue:
lpfc_sli4_queue_destroy(phba);
out_stop_timers:
if (rc)
lpfc_stop_hba_timers(phba);
lpfc_stop_hba_timers(phba);
out_free_mbox:
mempool_free(mboxq, phba->mbox_mem_pool);
return rc;
@ -6318,7 +6521,7 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox,
}
/* timeout active mbox command */
mod_timer(&psli->mbox_tmo, (jiffies +
(HZ * lpfc_mbox_tmo_val(phba, mb->mbxCommand))));
(HZ * lpfc_mbox_tmo_val(phba, pmbox))));
}
/* Mailbox cmd <cmd> issue */
@ -6442,9 +6645,8 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox,
drvr_flag);
goto out_not_finished;
}
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
mb->mbxCommand) *
1000) + jiffies;
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox) *
1000) + jiffies;
i = 0;
/* Wait for command to complete */
while (((word0 & OWN_CHIP) == OWN_CHIP) ||
@ -6555,21 +6757,21 @@ static int
lpfc_sli4_async_mbox_block(struct lpfc_hba *phba)
{
struct lpfc_sli *psli = &phba->sli;
uint8_t actcmd = MBX_HEARTBEAT;
int rc = 0;
unsigned long timeout;
unsigned long timeout = 0;
/* Mark the asynchronous mailbox command posting as blocked */
spin_lock_irq(&phba->hbalock);
psli->sli_flag |= LPFC_SLI_ASYNC_MBX_BLK;
if (phba->sli.mbox_active)
actcmd = phba->sli.mbox_active->u.mb.mbxCommand;
spin_unlock_irq(&phba->hbalock);
/* Determine how long we might wait for the active mailbox
* command to be gracefully completed by firmware.
*/
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, actcmd) * 1000) +
jiffies;
if (phba->sli.mbox_active)
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active) *
1000) + jiffies;
spin_unlock_irq(&phba->hbalock);
/* Wait for the outstanding mailbox command to complete */
while (phba->sli.mbox_active) {
/* Check active mailbox complete status every 2ms */
@ -6664,11 +6866,12 @@ lpfc_sli4_post_sync_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
if (psli->sli_flag & LPFC_SLI_MBOX_ACTIVE) {
spin_unlock_irqrestore(&phba->hbalock, iflag);
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"(%d):2532 Mailbox command x%x (x%x) "
"(%d):2532 Mailbox command x%x (x%x/x%x) "
"cannot issue Data: x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
psli->sli_flag, MBX_POLL);
return MBXERR_ERROR;
}
@ -6691,7 +6894,7 @@ lpfc_sli4_post_sync_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
dma_address = &phba->sli4_hba.bmbx.dma_address;
writel(dma_address->addr_hi, phba->sli4_hba.BMBXregaddr);
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, mbx_cmnd)
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq)
* 1000) + jiffies;
do {
bmbx_reg.word0 = readl(phba->sli4_hba.BMBXregaddr);
@ -6707,7 +6910,7 @@ lpfc_sli4_post_sync_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
/* Post the low mailbox dma address to the port. */
writel(dma_address->addr_lo, phba->sli4_hba.BMBXregaddr);
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, mbx_cmnd)
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq)
* 1000) + jiffies;
do {
bmbx_reg.word0 = readl(phba->sli4_hba.BMBXregaddr);
@ -6746,11 +6949,12 @@ lpfc_sli4_post_sync_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
lpfc_sli4_swap_str(phba, mboxq);
lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
"(%d):0356 Mailbox cmd x%x (x%x) Status x%x "
"(%d):0356 Mailbox cmd x%x (x%x/x%x) Status x%x "
"Data: x%x x%x x%x x%x x%x x%x x%x x%x x%x x%x x%x"
" x%x x%x CQ: x%x x%x x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0,
mbx_cmnd, lpfc_sli4_mbox_opcode_get(phba, mboxq),
mboxq->vport ? mboxq->vport->vpi : 0, mbx_cmnd,
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
bf_get(lpfc_mqe_status, mb),
mb->un.mb_words[0], mb->un.mb_words[1],
mb->un.mb_words[2], mb->un.mb_words[3],
@ -6796,11 +7000,12 @@ lpfc_sli_issue_mbox_s4(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
rc = lpfc_mbox_dev_check(phba);
if (unlikely(rc)) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"(%d):2544 Mailbox command x%x (x%x) "
"(%d):2544 Mailbox command x%x (x%x/x%x) "
"cannot issue Data: x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
psli->sli_flag, flag);
goto out_not_finished;
}
@ -6814,20 +7019,25 @@ lpfc_sli_issue_mbox_s4(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
if (rc != MBX_SUCCESS)
lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_SLI,
"(%d):2541 Mailbox command x%x "
"(x%x) cannot issue Data: x%x x%x\n",
"(x%x/x%x) cannot issue Data: "
"x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba,
mboxq),
lpfc_sli_config_mbox_opcode_get(phba,
mboxq),
psli->sli_flag, flag);
return rc;
} else if (flag == MBX_POLL) {
lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_SLI,
"(%d):2542 Try to issue mailbox command "
"x%x (x%x) synchronously ahead of async"
"x%x (x%x/x%x) synchronously ahead of async"
"mailbox command queue: x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
psli->sli_flag, flag);
/* Try to block the asynchronous mailbox posting */
rc = lpfc_sli4_async_mbox_block(phba);
@ -6836,16 +7046,18 @@ lpfc_sli_issue_mbox_s4(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
rc = lpfc_sli4_post_sync_mbox(phba, mboxq);
if (rc != MBX_SUCCESS)
lpfc_printf_log(phba, KERN_ERR,
LOG_MBOX | LOG_SLI,
"(%d):2597 Mailbox command "
"x%x (x%x) cannot issue "
"Data: x%x x%x\n",
mboxq->vport ?
mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli4_mbox_opcode_get(phba,
mboxq),
psli->sli_flag, flag);
LOG_MBOX | LOG_SLI,
"(%d):2597 Mailbox command "
"x%x (x%x/x%x) cannot issue "
"Data: x%x x%x\n",
mboxq->vport ?
mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli_config_mbox_subsys_get(phba,
mboxq),
lpfc_sli_config_mbox_opcode_get(phba,
mboxq),
psli->sli_flag, flag);
/* Unblock the async mailbox posting afterward */
lpfc_sli4_async_mbox_unblock(phba);
}
@ -6856,11 +7068,12 @@ lpfc_sli_issue_mbox_s4(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
rc = lpfc_mbox_cmd_check(phba, mboxq);
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"(%d):2543 Mailbox command x%x (x%x) "
"(%d):2543 Mailbox command x%x (x%x/x%x) "
"cannot issue Data: x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
psli->sli_flag, flag);
goto out_not_finished;
}
@ -6872,10 +7085,11 @@ lpfc_sli_issue_mbox_s4(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
spin_unlock_irqrestore(&phba->hbalock, iflags);
lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
"(%d):0354 Mbox cmd issue - Enqueue Data: "
"x%x (x%x) x%x x%x x%x\n",
"x%x (x%x/x%x) x%x x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0xffffff,
bf_get(lpfc_mqe_command, &mboxq->u.mqe),
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
phba->pport->port_state,
psli->sli_flag, MBX_NOWAIT);
/* Wake up worker thread to transport mailbox command from head */
@ -6952,13 +7166,14 @@ lpfc_sli4_post_async_mbox(struct lpfc_hba *phba)
/* Start timer for the mbox_tmo and log some mailbox post messages */
mod_timer(&psli->mbox_tmo, (jiffies +
(HZ * lpfc_mbox_tmo_val(phba, mbx_cmnd))));
(HZ * lpfc_mbox_tmo_val(phba, mboxq))));
lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
"(%d):0355 Mailbox cmd x%x (x%x) issue Data: "
"(%d):0355 Mailbox cmd x%x (x%x/x%x) issue Data: "
"x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0, mbx_cmnd,
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
phba->pport->port_state, psli->sli_flag);
if (mbx_cmnd != MBX_HEARTBEAT) {
@ -6982,11 +7197,12 @@ lpfc_sli4_post_async_mbox(struct lpfc_hba *phba)
rc = lpfc_sli4_mq_put(phba->sli4_hba.mbx_wq, mqe);
if (rc != MBX_SUCCESS) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"(%d):2533 Mailbox command x%x (x%x) "
"(%d):2533 Mailbox command x%x (x%x/x%x) "
"cannot issue Data: x%x x%x\n",
mboxq->vport ? mboxq->vport->vpi : 0,
mboxq->u.mb.mbxCommand,
lpfc_sli4_mbox_opcode_get(phba, mboxq),
lpfc_sli_config_mbox_subsys_get(phba, mboxq),
lpfc_sli_config_mbox_opcode_get(phba, mboxq),
psli->sli_flag, MBX_NOWAIT);
goto out_not_finished;
}
@ -7322,6 +7538,8 @@ lpfc_sli4_bpl2sgl(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
if (inbound == 1)
offset = 0;
bf_set(lpfc_sli4_sge_offset, sgl, offset);
bf_set(lpfc_sli4_sge_type, sgl,
LPFC_SGE_TYPE_DATA);
offset += bde.tus.f.bdeSize;
}
sgl->word2 = cpu_to_le32(sgl->word2);
@ -9359,7 +9577,6 @@ lpfc_sli_issue_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq,
/* now issue the command */
retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT);
if (retval == MBX_BUSY || retval == MBX_SUCCESS) {
wait_event_interruptible_timeout(done_q,
pmboxq->mbox_flag & LPFC_MBX_WAKE,
@ -9403,23 +9620,24 @@ void
lpfc_sli_mbox_sys_shutdown(struct lpfc_hba *phba)
{
struct lpfc_sli *psli = &phba->sli;
uint8_t actcmd = MBX_HEARTBEAT;
unsigned long timeout;
timeout = msecs_to_jiffies(LPFC_MBOX_TMO * 1000) + jiffies;
spin_lock_irq(&phba->hbalock);
psli->sli_flag |= LPFC_SLI_ASYNC_MBX_BLK;
spin_unlock_irq(&phba->hbalock);
if (psli->sli_flag & LPFC_SLI_ACTIVE) {
spin_lock_irq(&phba->hbalock);
if (phba->sli.mbox_active)
actcmd = phba->sli.mbox_active->u.mb.mbxCommand;
spin_unlock_irq(&phba->hbalock);
/* Determine how long we might wait for the active mailbox
* command to be gracefully completed by firmware.
*/
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, actcmd) *
1000) + jiffies;
if (phba->sli.mbox_active)
timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba,
phba->sli.mbox_active) *
1000) + jiffies;
spin_unlock_irq(&phba->hbalock);
while (phba->sli.mbox_active) {
/* Check active mailbox complete status every 2ms */
msleep(2);
@ -10415,12 +10633,17 @@ lpfc_sli4_sp_handle_mbox_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe)
/* Move mbox data to caller's mailbox region, do endian swapping */
if (pmb->mbox_cmpl && mbox)
lpfc_sli_pcimem_bcopy(mbox, mqe, sizeof(struct lpfc_mqe));
/* Set the mailbox status with SLI4 range 0x4000 */
mcqe_status = bf_get(lpfc_mcqe_status, mcqe);
if (mcqe_status != MB_CQE_STATUS_SUCCESS)
bf_set(lpfc_mqe_status, mqe,
(LPFC_MBX_ERROR_RANGE | mcqe_status));
/*
* For mcqe errors, conditionally move a modified error code to
* the mbox so that the error will not be missed.
*/
mcqe_status = bf_get(lpfc_mcqe_status, mcqe);
if (mcqe_status != MB_CQE_STATUS_SUCCESS) {
if (bf_get(lpfc_mqe_status, mqe) == MBX_SUCCESS)
bf_set(lpfc_mqe_status, mqe,
(LPFC_MBX_ERROR_RANGE | mcqe_status));
}
if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) {
pmb->mbox_flag &= ~LPFC_MBX_IMED_UNREG;
lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_MBOX_VPORT,
@ -10796,7 +11019,7 @@ lpfc_sli4_sp_handle_eqe(struct lpfc_hba *phba, struct lpfc_eqe *eqe)
case LPFC_MCQ:
while ((cqe = lpfc_sli4_cq_get(cq))) {
workposted |= lpfc_sli4_sp_handle_mcqe(phba, cqe);
if (!(++ecount % LPFC_GET_QE_REL_INT))
if (!(++ecount % cq->entry_repost))
lpfc_sli4_cq_release(cq, LPFC_QUEUE_NOARM);
}
break;
@ -10808,7 +11031,7 @@ lpfc_sli4_sp_handle_eqe(struct lpfc_hba *phba, struct lpfc_eqe *eqe)
else
workposted |= lpfc_sli4_sp_handle_cqe(phba, cq,
cqe);
if (!(++ecount % LPFC_GET_QE_REL_INT))
if (!(++ecount % cq->entry_repost))
lpfc_sli4_cq_release(cq, LPFC_QUEUE_NOARM);
}
break;
@ -11040,7 +11263,7 @@ lpfc_sli4_fp_handle_eqe(struct lpfc_hba *phba, struct lpfc_eqe *eqe,
/* Process all the entries to the CQ */
while ((cqe = lpfc_sli4_cq_get(cq))) {
workposted |= lpfc_sli4_fp_handle_wcqe(phba, cq, cqe);
if (!(++ecount % LPFC_GET_QE_REL_INT))
if (!(++ecount % cq->entry_repost))
lpfc_sli4_cq_release(cq, LPFC_QUEUE_NOARM);
}
@ -11110,6 +11333,8 @@ lpfc_sli4_sp_intr_handler(int irq, void *dev_id)
/* Get to the EQ struct associated with this vector */
speq = phba->sli4_hba.sp_eq;
if (unlikely(!speq))
return IRQ_NONE;
/* Check device state for handling interrupt */
if (unlikely(lpfc_intr_state_check(phba))) {
@ -11127,7 +11352,7 @@ lpfc_sli4_sp_intr_handler(int irq, void *dev_id)
*/
while ((eqe = lpfc_sli4_eq_get(speq))) {
lpfc_sli4_sp_handle_eqe(phba, eqe);
if (!(++ecount % LPFC_GET_QE_REL_INT))
if (!(++ecount % speq->entry_repost))
lpfc_sli4_eq_release(speq, LPFC_QUEUE_NOARM);
}
@ -11187,6 +11412,8 @@ lpfc_sli4_fp_intr_handler(int irq, void *dev_id)
if (unlikely(!phba))
return IRQ_NONE;
if (unlikely(!phba->sli4_hba.fp_eq))
return IRQ_NONE;
/* Get to the EQ struct associated with this vector */
fpeq = phba->sli4_hba.fp_eq[fcp_eqidx];
@ -11207,7 +11434,7 @@ lpfc_sli4_fp_intr_handler(int irq, void *dev_id)
*/
while ((eqe = lpfc_sli4_eq_get(fpeq))) {
lpfc_sli4_fp_handle_eqe(phba, eqe, fcp_eqidx);
if (!(++ecount % LPFC_GET_QE_REL_INT))
if (!(++ecount % fpeq->entry_repost))
lpfc_sli4_eq_release(fpeq, LPFC_QUEUE_NOARM);
}
@ -11359,6 +11586,15 @@ lpfc_sli4_queue_alloc(struct lpfc_hba *phba, uint32_t entry_size,
}
queue->entry_size = entry_size;
queue->entry_count = entry_count;
/*
* entry_repost is calculated based on the number of entries in the
* queue. This works out except for RQs. If buffers are NOT initially
* posted for every RQE, entry_repost should be adjusted accordingly.
*/
queue->entry_repost = (entry_count >> 3);
if (queue->entry_repost < LPFC_QUEUE_MIN_REPOST)
queue->entry_repost = LPFC_QUEUE_MIN_REPOST;
queue->phba = phba;
return queue;
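As a worked example, a 256-entry queue gets entry_repost = 256 >> 3 = 32, matching the fixed LPFC_GET_QE_REL_INT (32) that this series removes; any queue of fewer than 64 entries is clamped to LPFC_QUEUE_MIN_REPOST (8).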
@ -11923,6 +12159,31 @@ out:
return status;
}
/**
* lpfc_rq_adjust_repost - Adjust entry_repost for an RQ
* @phba: HBA structure that indicates port to create a queue on.
* @rq: The queue structure to use for the receive queue.
* @qno: The associated HBQ number
*
* For SLI4 we need to adjust the RQ repost value based on
* the number of buffers that are initially posted to the RQ.
*/
void
lpfc_rq_adjust_repost(struct lpfc_hba *phba, struct lpfc_queue *rq, int qno)
{
uint32_t cnt;
cnt = lpfc_hbq_defs[qno]->entry_count;
/* Recalc repost for RQs based on buffers initially posted */
cnt = (cnt >> 3);
if (cnt < LPFC_QUEUE_MIN_REPOST)
cnt = LPFC_QUEUE_MIN_REPOST;
rq->entry_repost = cnt;
}
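Usage, as seen in the lpfc_sli4_queue_setup() hunk earlier in this series: the repost value is adjusted before the RQ pair is created.

lpfc_rq_adjust_repost(phba, phba->sli4_hba.hdr_rq, LPFC_ELS_HBQ);
lpfc_rq_adjust_repost(phba, phba->sli4_hba.dat_rq, LPFC_ELS_HBQ);
rc = lpfc_rq_create(phba, phba->sli4_hba.hdr_rq, phba->sli4_hba.dat_rq,
		    phba->sli4_hba.els_cq, LPFC_USOL);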
/**
* lpfc_rq_create - Create a Receive Queue on the HBA
* @phba: HBA structure that indicates port to create a queue on.
@ -12489,7 +12750,7 @@ lpfc_sli4_post_sgl(struct lpfc_hba *phba,
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
/* The IOCTL status is embedded in the mailbox subheader. */
@ -12704,7 +12965,7 @@ lpfc_sli4_post_els_sgl_list(struct lpfc_hba *phba)
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
shdr = (union lpfc_sli4_cfg_shdr *) &sgl->cfg_shdr;
@ -12867,7 +13128,7 @@ lpfc_sli4_post_els_sgl_list_ext(struct lpfc_hba *phba)
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
shdr = (union lpfc_sli4_cfg_shdr *) &sgl->cfg_shdr;
@ -12991,7 +13252,7 @@ lpfc_sli4_post_scsi_sgl_block(struct lpfc_hba *phba, struct list_head *sblist,
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
shdr = (union lpfc_sli4_cfg_shdr *) &sgl->cfg_shdr;
@ -13147,7 +13408,7 @@ lpfc_sli4_post_scsi_sgl_blk_ext(struct lpfc_hba *phba, struct list_head *sblist,
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
shdr = (union lpfc_sli4_cfg_shdr *) &sgl->cfg_shdr;
@ -13296,7 +13557,8 @@ lpfc_fc_frame_to_vport(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr,
uint32_t did = (fc_hdr->fh_d_id[0] << 16 |
fc_hdr->fh_d_id[1] << 8 |
fc_hdr->fh_d_id[2]);
if (did == Fabric_DID)
return phba->pport;
vports = lpfc_create_vport_work_array(phba);
if (vports != NULL)
for (i = 0; i <= phba->max_vpi && vports[i] != NULL; i++) {
@ -14312,7 +14574,7 @@ lpfc_sli4_init_vpi(struct lpfc_vport *vport)
if (!mboxq)
return -ENOMEM;
lpfc_init_vpi(phba, mboxq, vport->vpi);
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_INIT_VPI);
mbox_tmo = lpfc_mbox_tmo_val(phba, mboxq);
rc = lpfc_sli_issue_mbox_wait(phba, mboxq, mbox_tmo);
if (rc != MBX_SUCCESS) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_SLI,
@ -15188,7 +15450,7 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
if (!phba->sli4_hba.intr_enable)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG);
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
/* The IOCTL status is embedded in the mailbox subheader. */

View File

@ -293,13 +293,11 @@ struct lpfc_sli {
struct lpfc_lnk_stat lnk_stat_offsets;
};
#define LPFC_MBOX_TMO 30 /* Sec tmo for outstanding mbox
command */
#define LPFC_MBOX_SLI4_CONFIG_TMO 60 /* Sec tmo for outstanding mbox
command */
#define LPFC_MBOX_TMO_FLASH_CMD 300 /* Sec tmo for outstanding FLASH write
* or erase cmds. This is especially
* long because of the potential of
* multiple flash erases that can be
* spawned.
*/
/* Timeout for normal outstanding mbox command (Seconds) */
#define LPFC_MBOX_TMO 30
/* Timeout for non-flash-based outstanding sli_config mbox command (Seconds) */
#define LPFC_MBOX_SLI4_CONFIG_TMO 60
/* Timeout for flash-based outstanding sli_config mbox command (Seconds) */
#define LPFC_MBOX_SLI4_CONFIG_EXTENDED_TMO 300
/* Timeout for other flash-based outstanding mbox command (Seconds) */
#define LPFC_MBOX_TMO_FLASH_CMD 300
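As wired up in lpfc_mbox_tmo_val() earlier in this series, the 300-second LPFC_MBOX_SLI4_CONFIG_EXTENDED_TMO covers the flash-touching SLI_CONFIG sub-commands (READ/WRITE/DELETE_OBJECT, the object-list and profile-config opcodes, and FCOE_SET_FCLINK_SETTINGS); every other SLI_CONFIG command keeps the 60-second LPFC_MBOX_SLI4_CONFIG_TMO.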

View File

@ -23,7 +23,6 @@
#define LPFC_XRI_EXCH_BUSY_WAIT_T1 10
#define LPFC_XRI_EXCH_BUSY_WAIT_T2 30000
#define LPFC_RELEASE_NOTIFICATION_INTERVAL 32
#define LPFC_GET_QE_REL_INT 32
#define LPFC_RPI_LOW_WATER_MARK 10
#define LPFC_UNREG_FCF 1
@ -126,6 +125,8 @@ struct lpfc_queue {
struct list_head child_list;
uint32_t entry_count; /* Number of entries to support on the queue */
uint32_t entry_size; /* Size of each queue entry. */
uint32_t entry_repost; /* Count of entries before doorbell is rung */
#define LPFC_QUEUE_MIN_REPOST 8
uint32_t queue_id; /* Queue ID assigned by the hardware */
uint32_t assoc_qid; /* Queue ID associated with, for CQ/WQ/MQ */
struct list_head page_list;
@ -388,6 +389,16 @@ struct lpfc_iov {
uint32_t vf_number;
};
struct lpfc_sli4_lnk_info {
uint8_t lnk_dv;
#define LPFC_LNK_DAT_INVAL 0
#define LPFC_LNK_DAT_VAL 1
uint8_t lnk_tp;
#define LPFC_LNK_GE 0x0 /* FCoE */
#define LPFC_LNK_FC 0x1 /* FC */
uint8_t lnk_no;
};
/* SLI4 HBA data structure entries */
struct lpfc_sli4_hba {
void __iomem *conf_regs_memmap_p; /* Kernel memory mapped address for
@ -503,6 +514,10 @@ struct lpfc_sli4_hba {
struct list_head sp_els_xri_aborted_work_queue;
struct list_head sp_unsol_work_queue;
struct lpfc_sli4_link link_state;
struct lpfc_sli4_lnk_info lnk_info;
uint32_t pport_name_sta;
#define LPFC_SLI4_PPNAME_NON 0
#define LPFC_SLI4_PPNAME_GET 1
struct lpfc_iov iov;
spinlock_t abts_scsi_buf_list_lock; /* list of aborted SCSI IOs */
spinlock_t abts_sgl_list_lock; /* list of aborted els IOs */
@ -553,6 +568,7 @@ struct lpfc_rsrc_blks {
* SLI4 specific function prototypes
*/
int lpfc_pci_function_reset(struct lpfc_hba *);
int lpfc_sli4_pdev_status_reg_wait(struct lpfc_hba *);
int lpfc_sli4_hba_setup(struct lpfc_hba *);
int lpfc_sli4_config(struct lpfc_hba *, struct lpfcMboxq *, uint8_t,
uint8_t, uint32_t, bool);
@ -576,6 +592,7 @@ uint32_t lpfc_wq_create(struct lpfc_hba *, struct lpfc_queue *,
struct lpfc_queue *, uint32_t);
uint32_t lpfc_rq_create(struct lpfc_hba *, struct lpfc_queue *,
struct lpfc_queue *, struct lpfc_queue *, uint32_t);
void lpfc_rq_adjust_repost(struct lpfc_hba *, struct lpfc_queue *, int);
uint32_t lpfc_eq_destroy(struct lpfc_hba *, struct lpfc_queue *);
uint32_t lpfc_cq_destroy(struct lpfc_hba *, struct lpfc_queue *);
uint32_t lpfc_mq_destroy(struct lpfc_hba *, struct lpfc_queue *);
@ -632,5 +649,5 @@ void lpfc_mbx_cmpl_fcf_rr_read_fcf_rec(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_read_fcf_rec(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_sli4_unregister_fcf(struct lpfc_hba *);
int lpfc_sli4_post_status_check(struct lpfc_hba *);
uint8_t lpfc_sli4_mbox_opcode_get(struct lpfc_hba *, struct lpfcMboxq *);
uint8_t lpfc_sli_config_mbox_subsys_get(struct lpfc_hba *, LPFC_MBOXQ_t *);
uint8_t lpfc_sli_config_mbox_opcode_get(struct lpfc_hba *, LPFC_MBOXQ_t *);

View File

@ -18,7 +18,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "8.3.25"
#define LPFC_DRIVER_VERSION "8.3.27"
#define LPFC_DRIVER_NAME "lpfc"
#define LPFC_SP_DRIVER_HANDLER_NAME "lpfc:sp"
#define LPFC_FP_DRIVER_HANDLER_NAME "lpfc:fp"

View File

@ -692,13 +692,14 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
/* Indicate free memory when release */
NLP_SET_FREE_REQ(ndlp);
} else {
if (!NLP_CHK_NODE_ACT(ndlp))
if (!NLP_CHK_NODE_ACT(ndlp)) {
ndlp = lpfc_enable_node(vport, ndlp,
NLP_STE_UNUSED_NODE);
if (!ndlp)
goto skip_logo;
}
/* Remove ndlp from vport npld list */
/* Remove ndlp from vport list */
lpfc_dequeue_node(vport, ndlp);
spin_lock_irq(&phba->ndlp_lock);
if (!NLP_CHK_FREE_REQ(ndlp))
@ -711,8 +712,17 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
}
spin_unlock_irq(&phba->ndlp_lock);
}
if (!(vport->vpi_state & LPFC_VPI_REGISTERED))
/*
* If the vpi is not registered, then a valid FDISC doesn't
* exist and there is no need for an ELS LOGO. Just clean up
* the ndlp.
*/
if (!(vport->vpi_state & LPFC_VPI_REGISTERED)) {
lpfc_nlp_put(ndlp);
goto skip_logo;
}
vport->unreg_vpi_cmpl = VPORT_INVAL;
timeout = msecs_to_jiffies(phba->fc_ratov * 2000);
if (!lpfc_issue_els_npiv_logo(vport, ndlp))

View File

@ -230,9 +230,6 @@ static void mac_esp_send_pdma_cmd(struct esp *esp, u32 addr, u32 esp_count,
u32 dma_count, int write, u8 cmd)
{
struct mac_esp_priv *mep = MAC_ESP_GET_PRIV(esp);
unsigned long flags;
local_irq_save(flags);
mep->error = 0;
@ -270,8 +267,6 @@ static void mac_esp_send_pdma_cmd(struct esp *esp, u32 addr, u32 esp_count,
esp_count = n;
}
} while (esp_count);
local_irq_restore(flags);
}
/*
@ -353,8 +348,6 @@ static void mac_esp_send_pio_cmd(struct esp *esp, u32 addr, u32 esp_count,
struct mac_esp_priv *mep = MAC_ESP_GET_PRIV(esp);
u8 *fifo = esp->regs + ESP_FDATA * 16;
disable_irq(esp->host->irq);
cmd &= ~ESP_CMD_DMA;
mep->error = 0;
@ -431,8 +424,6 @@ static void mac_esp_send_pio_cmd(struct esp *esp, u32 addr, u32 esp_count,
scsi_esp_cmd(esp, ESP_CMD_TI);
}
}
enable_irq(esp->host->irq);
}
static int mac_esp_irq_pending(struct esp *esp)

View File

@ -33,9 +33,9 @@
/*
* MegaRAID SAS Driver meta data
*/
#define MEGASAS_VERSION "00.00.05.40-rc1"
#define MEGASAS_RELDATE "Jul. 26, 2011"
#define MEGASAS_EXT_VERSION "Tue. Jul. 26 17:00:00 PDT 2011"
#define MEGASAS_VERSION "00.00.06.12-rc1"
#define MEGASAS_RELDATE "Oct. 5, 2011"
#define MEGASAS_EXT_VERSION "Wed. Oct. 5 17:00:00 PDT 2011"
/*
* Device IDs
@ -48,6 +48,7 @@
#define PCI_DEVICE_ID_LSI_SAS0073SKINNY 0x0073
#define PCI_DEVICE_ID_LSI_SAS0071SKINNY 0x0071
#define PCI_DEVICE_ID_LSI_FUSION 0x005b
#define PCI_DEVICE_ID_LSI_INVADER 0x005d
/*
* =====================================
@ -138,6 +139,7 @@
#define MFI_CMD_ABORT 0x06
#define MFI_CMD_SMP 0x07
#define MFI_CMD_STP 0x08
#define MFI_CMD_INVALID 0xff
#define MR_DCMD_CTRL_GET_INFO 0x01010000
#define MR_DCMD_LD_GET_LIST 0x03010000
@ -221,6 +223,7 @@ enum MFI_STAT {
MFI_STAT_RESERVATION_IN_PROGRESS = 0x36,
MFI_STAT_I2C_ERRORS_DETECTED = 0x37,
MFI_STAT_PCI_ERRORS_DETECTED = 0x38,
MFI_STAT_CONFIG_SEQ_MISMATCH = 0x67,
MFI_STAT_INVALID_STATUS = 0xFF
};
@ -716,7 +719,7 @@ struct megasas_ctrl_info {
#define MEGASAS_DEFAULT_INIT_ID -1
#define MEGASAS_MAX_LUN 8
#define MEGASAS_MAX_LD 64
#define MEGASAS_DEFAULT_CMD_PER_LUN 128
#define MEGASAS_DEFAULT_CMD_PER_LUN 256
#define MEGASAS_MAX_PD (MEGASAS_MAX_PD_CHANNELS * \
MEGASAS_MAX_DEV_PER_CHANNEL)
#define MEGASAS_MAX_LD_IDS (MEGASAS_MAX_LD_CHANNELS * \
@ -755,6 +758,7 @@ struct megasas_ctrl_info {
#define MEGASAS_INT_CMDS 32
#define MEGASAS_SKINNY_INT_CMDS 5
#define MEGASAS_MAX_MSIX_QUEUES 16
/*
* FW can accept both 32 and 64 bit SGLs. We want to allocate 32/64 bit
* SGLs based on the size of dma_addr_t
@ -1276,6 +1280,11 @@ struct megasas_aen_event {
struct megasas_instance *instance;
};
struct megasas_irq_context {
struct megasas_instance *instance;
u32 MSIxIndex;
};
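A hedged sketch (not the driver's exact probe code) of how these per-vector contexts are used: each MSI-X vector gets its own handler argument so the ISR knows which reply queue it services. megasas_isr and the fail_irq label are assumptions here:

int i;

for (i = 0; i < instance->msix_vectors; i++) {
	instance->irq_context[i].instance = instance;
	instance->irq_context[i].MSIxIndex = i;
	if (request_irq(instance->msixentry[i].vector, megasas_isr,
			0, "megasas", &instance->irq_context[i]))
		goto fail_irq;	/* assumed cleanup label */
}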
struct megasas_instance {
u32 *producer;
@ -1349,8 +1358,9 @@ struct megasas_instance {
/* Ptr to hba specific information */
void *ctrl_context;
u8 msi_flag;
struct msix_entry msixentry;
unsigned int msix_vectors;
struct msix_entry msixentry[MEGASAS_MAX_MSIX_QUEUES];
struct megasas_irq_context irq_context[MEGASAS_MAX_MSIX_QUEUES];
u64 map_id;
struct megasas_cmd *map_update_cmd;
unsigned long bar;

View File

@ -18,7 +18,7 @@
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* FILE: megaraid_sas_base.c
* Version : v00.00.05.40-rc1
* Version : v00.00.06.12-rc1
*
* Authors: LSI Corporation
* Sreenivas Bagalkote
@ -84,7 +84,7 @@ MODULE_VERSION(MEGASAS_VERSION);
MODULE_AUTHOR("megaraidlinux@lsi.com");
MODULE_DESCRIPTION("LSI MegaRAID SAS Driver");
int megasas_transition_to_ready(struct megasas_instance *instance);
int megasas_transition_to_ready(struct megasas_instance *instance, int ocr);
static int megasas_get_pd_list(struct megasas_instance *instance);
static int megasas_issue_init_mfi(struct megasas_instance *instance);
static int megasas_register_aen(struct megasas_instance *instance,
@ -114,6 +114,8 @@ static struct pci_device_id megasas_pci_table[] = {
/* xscale IOP */
{PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FUSION)},
/* Fusion */
{PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_INVADER)},
/* Invader */
{}
};
@ -213,6 +215,10 @@ megasas_return_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd)
cmd->scmd = NULL;
cmd->frame_count = 0;
if ((instance->pdev->device != PCI_DEVICE_ID_LSI_FUSION) &&
(instance->pdev->device != PCI_DEVICE_ID_LSI_INVADER) &&
(reset_devices))
cmd->frame->hdr.cmd = MFI_CMD_INVALID;
list_add_tail(&cmd->list, &instance->cmd_pool);
spin_unlock_irqrestore(&instance->cmd_pool_lock, flags);
@ -1583,7 +1589,8 @@ void megaraid_sas_kill_hba(struct megasas_instance *instance)
{
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0073SKINNY) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION)) {
(instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER)) {
writel(MFI_STOP_ADP, &instance->reg_set->doorbell);
} else {
writel(MFI_STOP_ADP, &instance->reg_set->inbound_doorbell);
@ -1907,7 +1914,6 @@ static int megasas_generic_reset(struct scsi_cmnd *scmd)
static enum
blk_eh_timer_return megasas_reset_timer(struct scsi_cmnd *scmd)
{
struct megasas_cmd *cmd = (struct megasas_cmd *)scmd->SCp.ptr;
struct megasas_instance *instance;
unsigned long flags;
@ -1916,7 +1922,7 @@ blk_eh_timer_return megasas_reset_timer(struct scsi_cmnd *scmd)
return BLK_EH_NOT_HANDLED;
}
instance = cmd->instance;
instance = (struct megasas_instance *)scmd->device->host->hostdata;
if (!(instance->flag & MEGASAS_FW_BUSY)) {
/* FW is busy, throttle IO */
spin_lock_irqsave(instance->host->host_lock, flags);
@ -1957,7 +1963,8 @@ static int megasas_reset_bus_host(struct scsi_cmnd *scmd)
/*
* First wait for all commands to complete
*/
if (instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION)
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER))
ret = megasas_reset_fusion(scmd->device->host);
else
ret = megasas_generic_reset(scmd);
@ -2161,7 +2168,16 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
cmd->scmd->SCp.ptr = NULL;
switch (hdr->cmd) {
case MFI_CMD_INVALID:
/* Some older 1068 controller FW may keep a pending
MR_DCMD_CTRL_EVENT_GET_INFO left over from the main kernel
when booting the kdump kernel. Ignore this command to
prevent a kernel panic on shutdown of the kdump kernel. */
printk(KERN_WARNING "megaraid_sas: MFI_CMD_INVALID command "
"completed.\n");
printk(KERN_WARNING "megaraid_sas: If you have a controller "
"other than PERC5, please upgrade your firmware.\n");
break;
case MFI_CMD_PD_SCSI_IO:
case MFI_CMD_LD_SCSI_IO:
@ -2477,7 +2493,7 @@ process_fw_state_change_wq(struct work_struct *work)
msleep(1000);
}
if (megasas_transition_to_ready(instance)) {
if (megasas_transition_to_ready(instance, 1)) {
printk(KERN_NOTICE "megaraid_sas:adapter not ready\n");
megaraid_sas_kill_hba(instance);
@ -2532,7 +2548,7 @@ megasas_deplete_reply_queue(struct megasas_instance *instance,
instance->reg_set)
) == 0) {
/* Hardware may not set outbound_intr_status in MSI-X mode */
if (!instance->msi_flag)
if (!instance->msix_vectors)
return IRQ_NONE;
}
@ -2590,16 +2606,14 @@ megasas_deplete_reply_queue(struct megasas_instance *instance,
*/
static irqreturn_t megasas_isr(int irq, void *devp)
{
struct megasas_instance *instance;
struct megasas_irq_context *irq_context = devp;
struct megasas_instance *instance = irq_context->instance;
unsigned long flags;
irqreturn_t rc;
if (atomic_read(
&(((struct megasas_instance *)devp)->fw_reset_no_pci_access)))
if (atomic_read(&instance->fw_reset_no_pci_access))
return IRQ_HANDLED;
instance = (struct megasas_instance *)devp;
spin_lock_irqsave(&instance->hba_lock, flags);
rc = megasas_deplete_reply_queue(instance, DID_OK);
spin_unlock_irqrestore(&instance->hba_lock, flags);
@ -2617,7 +2631,7 @@ static irqreturn_t megasas_isr(int irq, void *devp)
* has to wait for the ready state.
*/
int
megasas_transition_to_ready(struct megasas_instance* instance)
megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
{
int i;
u8 max_wait;
@ -2639,11 +2653,13 @@ megasas_transition_to_ready(struct megasas_instance* instance)
switch (fw_state) {
case MFI_STATE_FAULT:
printk(KERN_DEBUG "megasas: FW in FAULT state!!\n");
max_wait = MEGASAS_RESET_WAIT_TIME;
cur_state = MFI_STATE_FAULT;
break;
if (ocr) {
max_wait = MEGASAS_RESET_WAIT_TIME;
cur_state = MFI_STATE_FAULT;
break;
} else
return -ENODEV;
case MFI_STATE_WAIT_HANDSHAKE:
/*
@ -2654,7 +2670,9 @@ megasas_transition_to_ready(struct megasas_instance* instance)
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FUSION)) {
PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER)) {
writel(
MFI_INIT_CLEAR_HANDSHAKE|MFI_INIT_HOTPLUG,
&instance->reg_set->doorbell);
@ -2674,7 +2692,9 @@ megasas_transition_to_ready(struct megasas_instance* instance)
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FUSION)) {
PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER)) {
writel(MFI_INIT_HOTPLUG,
&instance->reg_set->doorbell);
} else
@ -2695,11 +2715,15 @@ megasas_transition_to_ready(struct megasas_instance* instance)
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device
== PCI_DEVICE_ID_LSI_FUSION)) {
== PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device
== PCI_DEVICE_ID_LSI_INVADER)) {
writel(MFI_RESET_FLAGS,
&instance->reg_set->doorbell);
if (instance->pdev->device ==
PCI_DEVICE_ID_LSI_FUSION) {
if ((instance->pdev->device ==
PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER)) {
for (i = 0; i < (10 * 1000); i += 20) {
if (readl(
&instance->
@ -2922,6 +2946,10 @@ static int megasas_create_frame_pool(struct megasas_instance *instance)
memset(cmd->frame, 0, total_sz);
cmd->frame->io.context = cmd->index;
cmd->frame->io.pad_0 = 0;
if ((instance->pdev->device != PCI_DEVICE_ID_LSI_FUSION) &&
(instance->pdev->device != PCI_DEVICE_ID_LSI_INVADER) &&
(reset_devices))
cmd->frame->hdr.cmd = MFI_CMD_INVALID;
}
return 0;
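
Taken together with the MFI_CMD_INVALID case added to megasas_complete_cmd() above and the matching hunk in megasas_return_cmd(), the tagging here completes the kdump workaround: every frame owned by the kdump kernel is pre-marked MFI_CMD_INVALID, so a completion left over from the crashed kernel is recognized and dropped instead of dispatched. A minimal standalone miniature of the pattern (values and output are illustrative, not taken from the driver):

    #include <stdio.h>
    #include <stdint.h>

    #define MFI_CMD_INVALID 0xff

    /* Frames owned by the kdump kernel are pre-tagged MFI_CMD_INVALID;
     * a completion still carrying that tag is stale and must be dropped. */
    static void complete_cmd(uint8_t cmd)
    {
            if (cmd == MFI_CMD_INVALID) {
                    printf("ignoring stale completion\n");
                    return;
            }
            printf("dispatching cmd 0x%02x\n", cmd);
    }

    int main(void)
    {
            complete_cmd(MFI_CMD_INVALID);  /* leftover from crashed kernel */
            complete_cmd(0x05);             /* some live command opcode */
            return 0;
    }
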
@ -3474,6 +3502,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
struct megasas_register_set __iomem *reg_set;
struct megasas_ctrl_info *ctrl_info;
unsigned long bar_list;
int i;
/* Find first memory bar */
bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM);
@ -3496,6 +3525,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
switch (instance->pdev->device) {
case PCI_DEVICE_ID_LSI_FUSION:
case PCI_DEVICE_ID_LSI_INVADER:
instance->instancet = &megasas_instance_template_fusion;
break;
case PCI_DEVICE_ID_LSI_SAS1078R:
@ -3520,15 +3550,39 @@ static int megasas_init_fw(struct megasas_instance *instance)
/*
* We expect the FW state to be READY
*/
if (megasas_transition_to_ready(instance))
if (megasas_transition_to_ready(instance, 0))
goto fail_ready_state;
/* Check if MSI-X is supported while in ready state */
msix_enable = (instance->instancet->read_fw_status_reg(reg_set) &
0x4000000) >> 0x1a;
if (msix_enable && !msix_disable &&
!pci_enable_msix(instance->pdev, &instance->msixentry, 1))
instance->msi_flag = 1;
if (msix_enable && !msix_disable) {
/* Check max MSI-X vectors */
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER)) {
instance->msix_vectors = (readl(&instance->reg_set->
outbound_scratch_pad_2
) & 0x1F) + 1;
} else
instance->msix_vectors = 1;
/* Don't bother allocating more MSI-X vectors than cpus */
instance->msix_vectors = min(instance->msix_vectors,
(unsigned int)num_online_cpus());
for (i = 0; i < instance->msix_vectors; i++)
instance->msixentry[i].entry = i;
i = pci_enable_msix(instance->pdev, instance->msixentry,
instance->msix_vectors);
if (i >= 0) {
if (i) {
if (!pci_enable_msix(instance->pdev,
instance->msixentry, i))
instance->msix_vectors = i;
else
instance->msix_vectors = 0;
}
} else
instance->msix_vectors = 0;
}
/* Get operational params, sge flags, send init cmd to controller */
if (instance->instancet->init_adapter(instance))
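
The retry above follows the contract of the old pci_enable_msix() API: a return of 0 means every requested vector was granted, a positive return is the number of vectors the platform can actually supply (the driver then asks again with that count), and a negative return is a hard failure, after which the driver falls back to a single legacy interrupt. A standalone sketch of that negotiation, with pci_enable_msix() replaced by an illustrative stub:

    #include <stdio.h>

    /* Stub with the old pci_enable_msix() contract: 0 = all granted,
     * >0 = vectors actually available (retry with that count),
     * <0 = hard failure. Purely illustrative. */
    static int stub_enable_msix(int requested, int available)
    {
            if (available <= 0)
                    return -1;
            return requested <= available ? 0 : available;
    }

    /* Returns the vector count obtained, or 0 for legacy INTx. */
    static int negotiate_msix(int wanted, int available)
    {
            int ret = stub_enable_msix(wanted, available);

            if (ret == 0)
                    return wanted;
            if (ret > 0 && stub_enable_msix(ret, available) == 0)
                    return ret;
            return 0;
    }

    int main(void)
    {
            printf("asked for 16, got %d\n", negotiate_msix(16, 4));
            return 0;
    }
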
@ -3892,7 +3946,8 @@ static int megasas_io_attach(struct megasas_instance *instance)
host->max_cmd_len = 16;
/* Fusion only supports host reset */
if (instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) {
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER)) {
host->hostt->eh_device_reset_handler = NULL;
host->hostt->eh_bus_reset_handler = NULL;
}
@ -3942,7 +3997,7 @@ fail_set_dma_mask:
static int __devinit
megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
{
int rval, pos;
int rval, pos, i, j;
struct Scsi_Host *host;
struct megasas_instance *instance;
u16 control = 0;
@ -4002,6 +4057,7 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
switch (instance->pdev->device) {
case PCI_DEVICE_ID_LSI_FUSION:
case PCI_DEVICE_ID_LSI_INVADER:
{
struct fusion_context *fusion;
@ -4094,7 +4150,8 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
instance->last_time = 0;
instance->disableOnlineCtrlReset = 1;
if (instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION)
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER))
INIT_WORK(&instance->work_init, megasas_fusion_ocr_wq);
else
INIT_WORK(&instance->work_init, process_fw_state_change_wq);
@ -4108,11 +4165,32 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
/*
* Register IRQ
*/
if (request_irq(instance->msi_flag ? instance->msixentry.vector :
pdev->irq, instance->instancet->service_isr,
IRQF_SHARED, "megasas", instance)) {
printk(KERN_DEBUG "megasas: Failed to register IRQ\n");
goto fail_irq;
if (instance->msix_vectors) {
for (i = 0 ; i < instance->msix_vectors; i++) {
instance->irq_context[i].instance = instance;
instance->irq_context[i].MSIxIndex = i;
if (request_irq(instance->msixentry[i].vector,
instance->instancet->service_isr, 0,
"megasas",
&instance->irq_context[i])) {
printk(KERN_DEBUG "megasas: Failed to "
"register IRQ for vector %d.\n", i);
for (j = 0 ; j < i ; j++)
free_irq(
instance->msixentry[j].vector,
&instance->irq_context[j]);
goto fail_irq;
}
}
} else {
instance->irq_context[0].instance = instance;
instance->irq_context[0].MSIxIndex = 0;
if (request_irq(pdev->irq, instance->instancet->service_isr,
IRQF_SHARED, "megasas",
&instance->irq_context[0])) {
printk(KERN_DEBUG "megasas: Failed to register IRQ\n");
goto fail_irq;
}
}
instance->instancet->enable_intr(instance->reg_set);
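
The registration loop above (mirrored later in megasas_resume()) uses the standard unwind idiom: if request_irq() fails for vector i, every vector 0..i-1 that was already registered is freed before the error path is taken. The cleanup path in a standalone model, with the IRQ calls stubbed so vector 2 fails:

    #include <stdio.h>

    /* Illustrative stubs; vector 2 is made to fail so the unwind runs. */
    static int stub_request_irq(int vec)
    {
            return vec == 2 ? -1 : 0;
    }

    static void stub_free_irq(int vec)
    {
            printf("freed vector %d\n", vec);
    }

    static int register_vectors(int nvec)
    {
            int i, j;

            for (i = 0; i < nvec; i++) {
                    if (stub_request_irq(i)) {
                            /* Unwind everything registered so far. */
                            for (j = 0; j < i; j++)
                                    stub_free_irq(j);
                            return -1;
                    }
            }
            return 0;
    }

    int main(void)
    {
            return register_vectors(4) ? 1 : 0;
    }
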
@ -4156,15 +4234,20 @@ megasas_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
pci_set_drvdata(pdev, NULL);
instance->instancet->disable_intr(instance->reg_set);
free_irq(instance->msi_flag ? instance->msixentry.vector :
instance->pdev->irq, instance);
if (instance->msix_vectors)
for (i = 0 ; i < instance->msix_vectors; i++)
free_irq(instance->msixentry[i].vector,
&instance->irq_context[i]);
else
free_irq(instance->pdev->irq, &instance->irq_context[0]);
fail_irq:
if (instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION)
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER))
megasas_release_fusion(instance);
else
megasas_release_mfi(instance);
fail_init_mfi:
if (instance->msi_flag)
if (instance->msix_vectors)
pci_disable_msix(instance->pdev);
fail_alloc_dma_buf:
if (instance->evt_detail)
@ -4280,6 +4363,7 @@ megasas_suspend(struct pci_dev *pdev, pm_message_t state)
{
struct Scsi_Host *host;
struct megasas_instance *instance;
int i;
instance = pci_get_drvdata(pdev);
host = instance->host;
@ -4303,9 +4387,14 @@ megasas_suspend(struct pci_dev *pdev, pm_message_t state)
pci_set_drvdata(instance->pdev, instance);
instance->instancet->disable_intr(instance->reg_set);
free_irq(instance->msi_flag ? instance->msixentry.vector :
instance->pdev->irq, instance);
if (instance->msi_flag)
if (instance->msix_vectors)
for (i = 0 ; i < instance->msix_vectors; i++)
free_irq(instance->msixentry[i].vector,
&instance->irq_context[i]);
else
free_irq(instance->pdev->irq, &instance->irq_context[0]);
if (instance->msix_vectors)
pci_disable_msix(instance->pdev);
pci_save_state(pdev);
@ -4323,7 +4412,7 @@ megasas_suspend(struct pci_dev *pdev, pm_message_t state)
static int
megasas_resume(struct pci_dev *pdev)
{
int rval;
int rval, i, j;
struct Scsi_Host *host;
struct megasas_instance *instance;
@ -4357,15 +4446,17 @@ megasas_resume(struct pci_dev *pdev)
/*
* We expect the FW state to be READY
*/
if (megasas_transition_to_ready(instance))
if (megasas_transition_to_ready(instance, 0))
goto fail_ready_state;
/* Now re-enable MSI-X */
if (instance->msi_flag)
pci_enable_msix(instance->pdev, &instance->msixentry, 1);
if (instance->msix_vectors)
pci_enable_msix(instance->pdev, instance->msixentry,
instance->msix_vectors);
switch (instance->pdev->device) {
case PCI_DEVICE_ID_LSI_FUSION:
case PCI_DEVICE_ID_LSI_INVADER:
{
megasas_reset_reply_desc(instance);
if (megasas_ioc_init_fusion(instance)) {
@ -4391,11 +4482,32 @@ megasas_resume(struct pci_dev *pdev)
/*
* Register IRQ
*/
if (request_irq(instance->msi_flag ? instance->msixentry.vector :
pdev->irq, instance->instancet->service_isr,
IRQF_SHARED, "megasas", instance)) {
printk(KERN_ERR "megasas: Failed to register IRQ\n");
goto fail_irq;
if (instance->msix_vectors) {
for (i = 0 ; i < instance->msix_vectors; i++) {
instance->irq_context[i].instance = instance;
instance->irq_context[i].MSIxIndex = i;
if (request_irq(instance->msixentry[i].vector,
instance->instancet->service_isr, 0,
"megasas",
&instance->irq_context[i])) {
printk(KERN_DEBUG "megasas: Failed to "
"register IRQ for vector %d.\n", i);
for (j = 0 ; j < i ; j++)
free_irq(
instance->msixentry[j].vector,
&instance->irq_context[j]);
goto fail_irq;
}
}
} else {
instance->irq_context[0].instance = instance;
instance->irq_context[0].MSIxIndex = 0;
if (request_irq(pdev->irq, instance->instancet->service_isr,
IRQF_SHARED, "megasas",
&instance->irq_context[0])) {
printk(KERN_DEBUG "megasas: Failed to register IRQ\n");
goto fail_irq;
}
}
instance->instancet->enable_intr(instance->reg_set);
@ -4492,13 +4604,18 @@ static void __devexit megasas_detach_one(struct pci_dev *pdev)
instance->instancet->disable_intr(instance->reg_set);
free_irq(instance->msi_flag ? instance->msixentry.vector :
instance->pdev->irq, instance);
if (instance->msi_flag)
if (instance->msix_vectors)
for (i = 0 ; i < instance->msix_vectors; i++)
free_irq(instance->msixentry[i].vector,
&instance->irq_context[i]);
else
free_irq(instance->pdev->irq, &instance->irq_context[0]);
if (instance->msix_vectors)
pci_disable_msix(instance->pdev);
switch (instance->pdev->device) {
case PCI_DEVICE_ID_LSI_FUSION:
case PCI_DEVICE_ID_LSI_INVADER:
megasas_release_fusion(instance);
for (i = 0; i < 2 ; i++)
if (fusion->ld_map[i])
@ -4539,14 +4656,20 @@ static void __devexit megasas_detach_one(struct pci_dev *pdev)
*/
static void megasas_shutdown(struct pci_dev *pdev)
{
int i;
struct megasas_instance *instance = pci_get_drvdata(pdev);
instance->unload = 1;
megasas_flush_cache(instance);
megasas_shutdown_controller(instance, MR_DCMD_CTRL_SHUTDOWN);
instance->instancet->disable_intr(instance->reg_set);
free_irq(instance->msi_flag ? instance->msixentry.vector :
instance->pdev->irq, instance);
if (instance->msi_flag)
if (instance->msix_vectors)
for (i = 0 ; i < instance->msix_vectors; i++)
free_irq(instance->msixentry[i].vector,
&instance->irq_context[i]);
else
free_irq(instance->pdev->irq, &instance->irq_context[0]);
if (instance->msix_vectors)
pci_disable_msix(instance->pdev);
}

View File

@ -52,6 +52,7 @@
#include <scsi/scsi_host.h>
#include "megaraid_sas_fusion.h"
#include "megaraid_sas.h"
#include <asm/div64.h>
#define ABS_DIFF(a, b) (((a) > (b)) ? ((a) - (b)) : ((b) - (a)))
@ -226,8 +227,9 @@ u32 MR_GetSpanBlock(u32 ld, u64 row, u64 *span_blk,
* span - Span number
* block - Absolute Block number in the physical disk
*/
u8 MR_GetPhyParams(u32 ld, u64 stripRow, u16 stripRef, u64 *pdBlock,
u16 *pDevHandle, struct RAID_CONTEXT *pRAID_Context,
u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
u16 stripRef, u64 *pdBlock, u16 *pDevHandle,
struct RAID_CONTEXT *pRAID_Context,
struct MR_FW_RAID_MAP_ALL *map)
{
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
@ -279,7 +281,8 @@ u8 MR_GetPhyParams(u32 ld, u64 stripRow, u16 stripRef, u64 *pdBlock,
*pDevHandle = MR_PdDevHandleGet(pd, map);
else {
*pDevHandle = MR_PD_INVALID; /* set dev handle as invalid. */
if (raid->level >= 5)
if ((raid->level >= 5) &&
(instance->pdev->device != PCI_DEVICE_ID_LSI_INVADER))
pRAID_Context->regLockFlags = REGION_TYPE_EXCLUSIVE;
else if (raid->level == 1) {
/* Get alternate Pd. */
@ -306,7 +309,8 @@ u8 MR_GetPhyParams(u32 ld, u64 stripRow, u16 stripRef, u64 *pdBlock,
* This function will return 0 if region lock was acquired OR return num strips
*/
u8
MR_BuildRaidContext(struct IO_REQUEST_INFO *io_info,
MR_BuildRaidContext(struct megasas_instance *instance,
struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT *pRAID_Context,
struct MR_FW_RAID_MAP_ALL *map)
{
@ -394,8 +398,12 @@ MR_BuildRaidContext(struct IO_REQUEST_INFO *io_info,
}
pRAID_Context->timeoutValue = map->raidMap.fpPdIoTimeoutSec;
pRAID_Context->regLockFlags = (isRead) ? REGION_TYPE_SHARED_READ :
raid->regTypeReqOnWrite;
if (instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER)
pRAID_Context->regLockFlags = (isRead) ?
raid->regTypeReqOnRead : raid->regTypeReqOnWrite;
else
pRAID_Context->regLockFlags = (isRead) ?
REGION_TYPE_SHARED_READ : raid->regTypeReqOnWrite;
pRAID_Context->VirtualDiskTgtId = raid->targetId;
pRAID_Context->regLockRowLBA = regStart;
pRAID_Context->regLockLength = regSize;
@ -404,7 +412,8 @@ MR_BuildRaidContext(struct IO_REQUEST_INFO *io_info,
/*Get Phy Params only if FP capable, or else leave it to MR firmware
to do the calculation.*/
if (io_info->fpOkForIo) {
retval = MR_GetPhyParams(ld, start_strip, ref_in_start_stripe,
retval = MR_GetPhyParams(instance, ld, start_strip,
ref_in_start_stripe,
&io_info->pdBlock,
&io_info->devHandle, pRAID_Context,
map);
@ -415,7 +424,8 @@ MR_BuildRaidContext(struct IO_REQUEST_INFO *io_info,
} else if (isRead) {
uint stripIdx;
for (stripIdx = 0; stripIdx < num_strips; stripIdx++) {
if (!MR_GetPhyParams(ld, start_strip + stripIdx,
if (!MR_GetPhyParams(instance, ld,
start_strip + stripIdx,
ref_in_start_stripe,
&io_info->pdBlock,
&io_info->devHandle,

View File

@ -74,7 +74,8 @@ megasas_issue_polled(struct megasas_instance *instance,
struct megasas_cmd *cmd);
u8
MR_BuildRaidContext(struct IO_REQUEST_INFO *io_info,
MR_BuildRaidContext(struct megasas_instance *instance,
struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT *pRAID_Context,
struct MR_FW_RAID_MAP_ALL *map);
u16 MR_TargetIdToLdGet(u32 ldTgtId, struct MR_FW_RAID_MAP_ALL *map);
@ -89,7 +90,7 @@ u8 MR_ValidateMapInfo(struct MR_FW_RAID_MAP_ALL *map,
struct LD_LOAD_BALANCE_INFO *lbInfo);
u16 get_updated_dev_handle(struct LD_LOAD_BALANCE_INFO *lbInfo,
struct IO_REQUEST_INFO *in_info);
int megasas_transition_to_ready(struct megasas_instance *instance);
int megasas_transition_to_ready(struct megasas_instance *instance, int ocr);
void megaraid_sas_kill_hba(struct megasas_instance *instance);
extern u32 megasas_dbg_lvl;
@ -101,6 +102,10 @@ extern u32 megasas_dbg_lvl;
void
megasas_enable_intr_fusion(struct megasas_register_set __iomem *regs)
{
/* For Thunderbolt/Invader also clear intr on enable */
writel(~0, &regs->outbound_intr_status);
readl(&regs->outbound_intr_status);
writel(~MFI_FUSION_ENABLE_INTERRUPT_MASK, &(regs)->outbound_intr_mask);
/* Dummy readl to force pci flush */
@ -139,11 +144,6 @@ megasas_clear_intr_fusion(struct megasas_register_set __iomem *regs)
if (!(status & MFI_FUSION_ENABLE_INTERRUPT_MASK))
return 0;
/*
* dummy read to flush PCI
*/
readl(&regs->outbound_intr_status);
return 1;
}
@ -385,7 +385,7 @@ static int megasas_create_frame_pool_fusion(struct megasas_instance *instance)
int
megasas_alloc_cmds_fusion(struct megasas_instance *instance)
{
int i, j;
int i, j, count;
u32 max_cmd, io_frames_sz;
struct fusion_context *fusion;
struct megasas_cmd_fusion *cmd;
@ -409,9 +409,10 @@ megasas_alloc_cmds_fusion(struct megasas_instance *instance)
goto fail_req_desc;
}
count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
fusion->reply_frames_desc_pool =
pci_pool_create("reply_frames pool", instance->pdev,
fusion->reply_alloc_sz, 16, 0);
fusion->reply_alloc_sz * count, 16, 0);
if (!fusion->reply_frames_desc_pool) {
printk(KERN_ERR "megasas; Could not allocate memory for "
@ -430,7 +431,7 @@ megasas_alloc_cmds_fusion(struct megasas_instance *instance)
}
reply_desc = fusion->reply_frames_desc;
for (i = 0; i < fusion->reply_q_depth; i++, reply_desc++)
for (i = 0; i < fusion->reply_q_depth * count; i++, reply_desc++)
reply_desc->Words = ULLONG_MAX;
io_frames_sz = fusion->io_frames_alloc_sz;
@ -590,7 +591,6 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
struct megasas_init_frame *init_frame;
struct MPI2_IOC_INIT_REQUEST *IOCInitMessage;
dma_addr_t ioc_init_handle;
u32 context;
struct megasas_cmd *cmd;
u8 ret;
struct fusion_context *fusion;
@ -634,14 +634,13 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
fusion->reply_frames_desc_phys;
IOCInitMessage->SystemRequestFrameBaseAddress =
fusion->io_request_frames_phys;
/* Set to 0 for none or 1 MSI-X vectors */
IOCInitMessage->HostMSIxVectors = (instance->msix_vectors > 0 ?
instance->msix_vectors : 0);
init_frame = (struct megasas_init_frame *)cmd->frame;
memset(init_frame, 0, MEGAMFI_FRAME_SIZE);
frame_hdr = &cmd->frame->hdr;
context = init_frame->context;
init_frame->context = context;
frame_hdr->cmd_status = 0xFF;
frame_hdr->flags |= MFI_FRAME_DONT_POST_IN_REPLY_QUEUE;
@ -881,7 +880,7 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
struct megasas_register_set __iomem *reg_set;
struct fusion_context *fusion;
u32 max_cmd;
int i = 0;
int i = 0, count;
fusion = instance->ctrl_context;
@ -933,7 +932,9 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
(MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE -
sizeof(union MPI2_SGE_IO_UNION))/16;
fusion->last_reply_idx = 0;
count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
for (i = 0 ; i < count; i++)
fusion->last_reply_idx[i] = 0;
/*
* Allocate memory for descriptors
@ -1043,7 +1044,9 @@ map_cmd_status(struct megasas_cmd_fusion *cmd, u8 status, u8 ext_status)
case MFI_STAT_DEVICE_NOT_FOUND:
cmd->scmd->result = DID_BAD_TARGET << 16;
break;
case MFI_STAT_CONFIG_SEQ_MISMATCH:
cmd->scmd->result = DID_IMM_RETRY << 16;
break;
default:
printk(KERN_DEBUG "megasas: FW status %#x\n", status);
cmd->scmd->result = DID_ERROR << 16;
@ -1066,14 +1069,17 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
struct MPI25_IEEE_SGE_CHAIN64 *sgl_ptr,
struct megasas_cmd_fusion *cmd)
{
int i, sg_processed;
int sge_count, sge_idx;
int i, sg_processed, sge_count;
struct scatterlist *os_sgl;
struct fusion_context *fusion;
fusion = instance->ctrl_context;
cmd->io_request->ChainOffset = 0;
if (instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) {
struct MPI25_IEEE_SGE_CHAIN64 *sgl_ptr_end = sgl_ptr;
sgl_ptr_end += fusion->max_sge_in_main_msg - 1;
sgl_ptr_end->Flags = 0;
}
sge_count = scsi_dma_map(scp);
@ -1082,16 +1088,14 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
if (sge_count > instance->max_num_sge || !sge_count)
return sge_count;
if (sge_count > fusion->max_sge_in_main_msg) {
/* One element to store the chain info */
sge_idx = fusion->max_sge_in_main_msg - 1;
} else
sge_idx = sge_count;
scsi_for_each_sg(scp, os_sgl, sge_count, i) {
sgl_ptr->Length = sg_dma_len(os_sgl);
sgl_ptr->Address = sg_dma_address(os_sgl);
sgl_ptr->Flags = 0;
if (instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) {
if (i == sge_count - 1)
sgl_ptr->Flags = IEEE_SGE_FLAGS_END_OF_LIST;
}
sgl_ptr++;
sg_processed = i + 1;
@ -1100,13 +1104,30 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
(sge_count > fusion->max_sge_in_main_msg)) {
struct MPI25_IEEE_SGE_CHAIN64 *sg_chain;
cmd->io_request->ChainOffset =
fusion->chain_offset_io_request;
if (instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) {
if ((cmd->io_request->IoFlags &
MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH) !=
MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH)
cmd->io_request->ChainOffset =
fusion->
chain_offset_io_request;
else
cmd->io_request->ChainOffset = 0;
} else
cmd->io_request->ChainOffset =
fusion->chain_offset_io_request;
sg_chain = sgl_ptr;
/* Prepare chain element */
sg_chain->NextChainOffset = 0;
sg_chain->Flags = (IEEE_SGE_FLAGS_CHAIN_ELEMENT |
MPI2_IEEE_SGE_FLAGS_IOCPLBNTA_ADDR);
if (instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER)
sg_chain->Flags = IEEE_SGE_FLAGS_CHAIN_ELEMENT;
else
sg_chain->Flags =
(IEEE_SGE_FLAGS_CHAIN_ELEMENT |
MPI2_IEEE_SGE_FLAGS_IOCPLBNTA_ADDR);
sg_chain->Length = (sizeof(union MPI2_SGE_IO_UNION)
*(sge_count - sg_processed));
sg_chain->Address = cmd->sg_frame_phys_addr;
@ -1399,11 +1420,18 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
io_request->RaidContext.regLockFlags = 0;
fp_possible = 0;
} else {
if (MR_BuildRaidContext(&io_info, &io_request->RaidContext,
if (MR_BuildRaidContext(instance, &io_info,
&io_request->RaidContext,
local_map_ptr))
fp_possible = io_info.fpOkForIo;
}
/* Use smp_processor_id() for now, until cmd->request->cpu holds the CPU
id by default rather than the CPU group id; otherwise the MSI-X
queues won't all be utilized */
cmd->request_desc->SCSIIO.MSIxIndex = instance->msix_vectors ?
smp_processor_id() % instance->msix_vectors : 0;
if (fp_possible) {
megasas_set_pd_lba(io_request, scp->cmd_len, &io_info, scp,
local_map_ptr, start_lba_lo);
@ -1412,6 +1440,20 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
cmd->request_desc->SCSIIO.RequestFlags =
(MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY
<< MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
if (instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) {
if (io_request->RaidContext.regLockFlags ==
REGION_TYPE_UNUSED)
cmd->request_desc->SCSIIO.RequestFlags =
(MEGASAS_REQ_DESCRIPT_FLAGS_NO_LOCK <<
MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
io_request->RaidContext.Type = MPI2_TYPE_CUDA;
io_request->RaidContext.nseg = 0x1;
io_request->IoFlags |=
MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH;
io_request->RaidContext.regLockFlags |=
(MR_RL_FLAGS_GRANT_DESTINATION_CUDA |
MR_RL_FLAGS_SEQ_NUM_ENABLE);
}
if ((fusion->load_balance_info[device_id].loadBalanceFlag) &&
(io_info.isRead)) {
io_info.devHandle =
@ -1426,11 +1468,23 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
} else {
io_request->RaidContext.timeoutValue =
local_map_ptr->raidMap.fpPdIoTimeoutSec;
io_request->Function = MEGASAS_MPI2_FUNCTION_LD_IO_REQUEST;
io_request->DevHandle = device_id;
cmd->request_desc->SCSIIO.RequestFlags =
(MEGASAS_REQ_DESCRIPT_FLAGS_LD_IO
<< MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
if (instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) {
if (io_request->RaidContext.regLockFlags ==
REGION_TYPE_UNUSED)
cmd->request_desc->SCSIIO.RequestFlags =
(MEGASAS_REQ_DESCRIPT_FLAGS_NO_LOCK <<
MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
io_request->RaidContext.Type = MPI2_TYPE_CUDA;
io_request->RaidContext.regLockFlags |=
(MR_RL_FLAGS_GRANT_DESTINATION_CPU0 |
MR_RL_FLAGS_SEQ_NUM_ENABLE);
io_request->RaidContext.nseg = 0x1;
}
io_request->Function = MEGASAS_MPI2_FUNCTION_LD_IO_REQUEST;
io_request->DevHandle = device_id;
} /* Not FP */
}
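
The MSIxIndex assignment earlier in megasas_build_ldio_fusion() spreads submissions across the reply queues by reducing the submitting CPU number modulo the vector count, with queue 0 as the fallback in legacy-interrupt mode. The selection in miniature (standalone, illustrative values):

    #include <stdio.h>

    /* Map a submitting CPU to a reply queue: cpu % nvec with MSI-X,
     * queue 0 with a single legacy interrupt (nvec == 0). */
    static unsigned int pick_reply_queue(unsigned int cpu, unsigned int nvec)
    {
            return nvec ? cpu % nvec : 0;
    }

    int main(void)
    {
            unsigned int cpu;

            for (cpu = 0; cpu < 6; cpu++)
                    printf("cpu %u -> queue %u\n", cpu, pick_reply_queue(cpu, 4));
            return 0;
    }
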
@ -1513,8 +1567,10 @@ megasas_build_io_fusion(struct megasas_instance *instance,
io_request->EEDPFlags = 0;
io_request->Control = 0;
io_request->EEDPBlockSize = 0;
io_request->IoFlags = 0;
io_request->ChainOffset = 0;
io_request->RaidContext.RAIDFlags = 0;
io_request->RaidContext.Type = 0;
io_request->RaidContext.nseg = 0;
memcpy(io_request->CDB.CDB32, scp->cmnd, scp->cmd_len);
/*
@ -1612,7 +1668,6 @@ megasas_build_and_issue_cmd_fusion(struct megasas_instance *instance,
req_desc->Words = 0;
cmd->request_desc = req_desc;
cmd->request_desc->Words = 0;
if (megasas_build_io_fusion(instance, scmd, cmd)) {
megasas_return_cmd_fusion(instance, cmd);
@ -1647,7 +1702,7 @@ megasas_build_and_issue_cmd_fusion(struct megasas_instance *instance,
* Completes all commands that is in reply descriptor queue
*/
int
complete_cmd_fusion(struct megasas_instance *instance)
complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
{
union MPI2_REPLY_DESCRIPTORS_UNION *desc;
struct MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR *reply_desc;
@ -1667,7 +1722,9 @@ complete_cmd_fusion(struct megasas_instance *instance)
return IRQ_HANDLED;
desc = fusion->reply_frames_desc;
desc += fusion->last_reply_idx;
desc += ((MSIxIndex * fusion->reply_alloc_sz)/
sizeof(union MPI2_REPLY_DESCRIPTORS_UNION)) +
fusion->last_reply_idx[MSIxIndex];
reply_desc = (struct MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR *)desc;
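
The lookup above treats the reply area as one flat descriptor array in which each MSI-X vector owns a contiguous slice of reply_alloc_sz bytes, so the entry for queue q at consumer index i sits at base + (q * reply_alloc_sz) / sizeof(desc) + i. The arithmetic in standalone form (sizes are stand-ins; the real ones come from firmware negotiation):

    #include <stdio.h>

    #define DESC_SIZE       8u      /* stand-in for sizeof(union MPI2_REPLY_DESCRIPTORS_UNION) */
    #define REPLY_ALLOC_SZ  2048u   /* stand-in for fusion->reply_alloc_sz */

    /* Flat index of reply descriptor i in the slice owned by queue q. */
    static unsigned int reply_desc_index(unsigned int q, unsigned int i)
    {
            return (q * REPLY_ALLOC_SZ) / DESC_SIZE + i;
    }

    int main(void)
    {
            printf("queue 3, entry 5 -> flat index %u\n",
                   reply_desc_index(3, 5));
            return 0;
    }
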
@ -1740,16 +1797,19 @@ complete_cmd_fusion(struct megasas_instance *instance)
break;
}
fusion->last_reply_idx++;
if (fusion->last_reply_idx >= fusion->reply_q_depth)
fusion->last_reply_idx = 0;
fusion->last_reply_idx[MSIxIndex]++;
if (fusion->last_reply_idx[MSIxIndex] >=
fusion->reply_q_depth)
fusion->last_reply_idx[MSIxIndex] = 0;
desc->Words = ULLONG_MAX;
num_completed++;
/* Get the next reply descriptor */
if (!fusion->last_reply_idx)
desc = fusion->reply_frames_desc;
if (!fusion->last_reply_idx[MSIxIndex])
desc = fusion->reply_frames_desc +
((MSIxIndex * fusion->reply_alloc_sz)/
sizeof(union MPI2_REPLY_DESCRIPTORS_UNION));
else
desc++;
@ -1769,7 +1829,7 @@ complete_cmd_fusion(struct megasas_instance *instance)
return IRQ_NONE;
wmb();
writel(fusion->last_reply_idx,
writel((MSIxIndex << 24) | fusion->last_reply_idx[MSIxIndex],
&instance->reg_set->reply_post_host_index);
megasas_check_and_restore_queue_depth(instance);
return IRQ_HANDLED;
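
The doorbell write above packs two fields into the single reply_post_host_index register: the MSI-X queue number in bits 31:24 and that queue's new consumer index in the low bits, telling the hardware which per-vector queue is being acknowledged. The packing as a pure function:

    #include <stdio.h>
    #include <stdint.h>

    /* reply_post_host_index layout: queue number in bits 31:24,
     * consumer index in the low bits. */
    static uint32_t reply_post_doorbell(uint32_t msix_index, uint32_t reply_idx)
    {
            return (msix_index << 24) | reply_idx;
    }

    int main(void)
    {
            printf("0x%08x\n", reply_post_doorbell(3, 42)); /* 0x0300002a */
            return 0;
    }
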
@ -1787,6 +1847,9 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
struct megasas_instance *instance =
(struct megasas_instance *)instance_addr;
unsigned long flags;
u32 count, MSIxIndex;
count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
/* If we have already declared the adapter dead, do not complete cmds */
spin_lock_irqsave(&instance->hba_lock, flags);
@ -1797,7 +1860,8 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
spin_unlock_irqrestore(&instance->hba_lock, flags);
spin_lock_irqsave(&instance->completion_lock, flags);
complete_cmd_fusion(instance);
for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++)
complete_cmd_fusion(instance, MSIxIndex);
spin_unlock_irqrestore(&instance->completion_lock, flags);
}
@ -1806,20 +1870,24 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
*/
irqreturn_t megasas_isr_fusion(int irq, void *devp)
{
struct megasas_instance *instance = (struct megasas_instance *)devp;
struct megasas_irq_context *irq_context = devp;
struct megasas_instance *instance = irq_context->instance;
u32 mfiStatus, fw_state;
if (!instance->msi_flag) {
if (!instance->msix_vectors) {
mfiStatus = instance->instancet->clear_intr(instance->reg_set);
if (!mfiStatus)
return IRQ_NONE;
}
/* If we are resetting, bail */
if (test_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags))
if (test_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags)) {
instance->instancet->clear_intr(instance->reg_set);
return IRQ_HANDLED;
}
if (!complete_cmd_fusion(instance)) {
if (!complete_cmd_fusion(instance, irq_context->MSIxIndex)) {
instance->instancet->clear_intr(instance->reg_set);
/* If we didn't complete any commands, check for FW fault */
fw_state = instance->instancet->read_fw_status_reg(
instance->reg_set) & MFI_STATE_MASK;
@ -1866,6 +1934,14 @@ build_mpt_mfi_pass_thru(struct megasas_instance *instance,
fusion = instance->ctrl_context;
io_req = cmd->io_request;
if (instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) {
struct MPI25_IEEE_SGE_CHAIN64 *sgl_ptr_end =
(struct MPI25_IEEE_SGE_CHAIN64 *)&io_req->SGL;
sgl_ptr_end += fusion->max_sge_in_main_msg - 1;
sgl_ptr_end->Flags = 0;
}
mpi25_ieee_chain =
(struct MPI25_IEEE_SGE_CHAIN64 *)&io_req->SGL.IeeeChain;
@ -1928,15 +2004,12 @@ megasas_issue_dcmd_fusion(struct megasas_instance *instance,
struct megasas_cmd *cmd)
{
union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc;
union desc_value d_val;
req_desc = build_mpt_cmd(instance, cmd);
if (!req_desc) {
printk(KERN_ERR "Couldn't issue MFI pass thru cmd\n");
return;
}
d_val.word = req_desc->Words;
instance->instancet->fire_cmd(instance, req_desc->u.low,
req_desc->u.high, instance->reg_set);
}
@ -2029,14 +2102,16 @@ out:
void megasas_reset_reply_desc(struct megasas_instance *instance)
{
int i;
int i, count;
struct fusion_context *fusion;
union MPI2_REPLY_DESCRIPTORS_UNION *reply_desc;
fusion = instance->ctrl_context;
fusion->last_reply_idx = 0;
count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
for (i = 0 ; i < count ; i++)
fusion->last_reply_idx[i] = 0;
reply_desc = fusion->reply_frames_desc;
for (i = 0 ; i < fusion->reply_q_depth; i++, reply_desc++)
for (i = 0 ; i < fusion->reply_q_depth * count; i++, reply_desc++)
reply_desc->Words = ULLONG_MAX;
}
@ -2057,8 +2132,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost)
if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR) {
printk(KERN_WARNING "megaraid_sas: Hardware critical error, "
"returning FAILED.\n");
retval = FAILED;
goto out;
return FAILED;
}
mutex_lock(&instance->reset_mutex);
@ -2173,7 +2247,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost)
}
/* Wait for FW to become ready */
if (megasas_transition_to_ready(instance)) {
if (megasas_transition_to_ready(instance, 1)) {
printk(KERN_WARNING "megaraid_sas: Failed to "
"transition controller to ready.\n");
continue;
@ -2186,6 +2260,8 @@ int megasas_reset_fusion(struct Scsi_Host *shost)
continue;
}
clear_bit(MEGASAS_FUSION_IN_RESET,
&instance->reset_flags);
instance->instancet->enable_intr(instance->reg_set);
instance->adprecovery = MEGASAS_HBA_OPERATIONAL;
@ -2247,6 +2323,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost)
megaraid_sas_kill_hba(instance);
retval = FAILED;
} else {
clear_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags);
instance->instancet->enable_intr(instance->reg_set);
instance->adprecovery = MEGASAS_HBA_OPERATIONAL;
}

Some files were not shown because too many files have changed in this diff.