
Merge master.kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6

* master.kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (166 commits)
  [SCSI] ibmvscsi: convert to use the data buffer accessors
  [SCSI] dc395x: convert to use the data buffer accessors
  [SCSI] ncr53c8xx: convert to use the data buffer accessors
  [SCSI] sym53c8xx: convert to use the data buffer accessors
  [SCSI] ppa: coding police and printk levels
  [SCSI] aic7xxx_old: remove redundant GFP_ATOMIC from kmalloc
  [SCSI] i2o: remove redundant GFP_ATOMIC from kmalloc from device.c
  [SCSI] remove the dead CYBERSTORMIII_SCSI option
  [SCSI] don't build scsi_dma_{map,unmap} for !HAS_DMA
  [SCSI] Clean up scsi_add_lun a bit
  [SCSI] 53c700: Remove printk, which triggers because of low scsi clock on SNI RMs
  [SCSI] sni_53c710: Cleanup
  [SCSI] qla4xxx: Fix underrun/overrun conditions
  [SCSI] megaraid_mbox: use mutex instead of semaphore
  [SCSI] aacraid: add 51245, 51645 and 52245 adapters to documentation.
  [SCSI] qla2xxx: update version to 8.02.00-k1.
  [SCSI] qla2xxx: add support for NPIV
  [SCSI] stex: use resid for xfer len information
  [SCSI] Add Brownie 1200U3P to blacklist
  [SCSI] scsi.c: convert to use the data buffer accessors
  ...
Linus Torvalds 2007-07-15 16:51:54 -07:00
commit bc06cffdec
190 changed files with 21735 additions and 26347 deletions

View File

@ -50,6 +50,9 @@ Supported Cards/Chipsets
9005:0285:9005:02be Adaptec 31605 (Marauder160)
9005:0285:9005:02c3 Adaptec 51205 (Voodoo120)
9005:0285:9005:02c4 Adaptec 51605 (Voodoo160)
9005:0285:9005:02ce Adaptec 51245 (Voodoo124)
9005:0285:9005:02cf Adaptec 51645 (Voodoo164)
9005:0285:9005:02d0 Adaptec 52445 (Voodoo244)
1011:0046:9005:0364 Adaptec 5400S (Mustang)
9005:0287:9005:0800 Adaptec Themisto (Jupiter)
9005:0200:9005:0200 Adaptec Themisto (Jupiter)

View File

@ -0,0 +1,450 @@
SCSI FC Transport
=============================================
Date: 4/12/2007
Kernel Revisions for features:
rports : <<TBS>>
vports : 2.6.22 (? TBD)
Introduction
============
This file documents the features and components of the SCSI FC Transport.
It also documents the API between the transport and FC LLDDs.
The FC transport can be found at:
drivers/scsi/scsi_transport_fc.c
include/scsi/scsi_transport_fc.h
include/scsi/scsi_netlink_fc.h
This file is found at Documentation/scsi/scsi_fc_transport.txt
FC Remote Ports (rports)
========================================================================
<< To Be Supplied >>
FC Virtual Ports (vports)
========================================================================
Overview:
-------------------------------
New FC standards have defined mechanisms that allow a single physical
port to appear as multiple communication ports. Using the N_Port Id
Virtualization (NPIV) mechanism, a point-to-point connection to a Fabric
can be assigned more than 1 N_Port_ID. Each N_Port_ID appears as a
separate port to other endpoints on the fabric, even though it shares one
physical link to the switch for communication. Each N_Port_ID can have a
unique view of the fabric based on fabric zoning and array lun-masking
(just like a normal non-NPIV adapter). Using the Virtual Fabric (VF)
mechanism, adding a fabric header to each frame allows the port to
interact with the Fabric Port to join multiple fabrics. The port will
obtain an N_Port_ID on each fabric it joins. Each fabric will have its
own unique view of endpoints and configuration parameters. NPIV may be
used together with VF so that the port can obtain multiple N_Port_IDs
on each virtual fabric.
The FC transport now recognizes a new object - a vport. A vport is
an entity that has a world-wide unique World Wide Port Name (wwpn) and
World Wide Node Name (wwnn). The transport also allows for the FC4's to
be specified for the vport, with FCP_Initiator being the primary role
expected. Once instantiated by one of the mechanisms above, it will have a
distinct N_Port_ID and view of fabric endpoints and storage entities.
The fc_host associated with the physical adapter will export the ability
to create vports. The transport will create the vport object within the
Linux device tree, and instruct the fc_host's driver to instantiate the
virtual port. Typically, the driver will create a new scsi_host instance
on the vport, resulting in a unique <H,C,T,L> namespace for the vport.
Thus, whether a FC port is based on a physical port or on a virtual port,
each will appear as a unique scsi_host with its own target and lun space.
Note: At this time, the transport is written to create only NPIV-based
vports. However, consideration was given to VF-based vports and it
should be a minor change to add support if needed. The remaining
discussion will concentrate on NPIV.
Note: World Wide Name assignment (and uniqueness guarantees) are left
up to an administrative entity controlling the vport. For example,
if vports are to be associated with virtual machines, a XEN mgmt
utility would be responsible for creating wwpn/wwnn's for the vport,
using its own naming authority and OUI. (Note: it already does this
for virtual MAC addresses).
Device Trees and Vport Objects:
-------------------------------
Today, the device tree typically contains the scsi_host object,
with rports and scsi target objects underneath it. Currently the FC
transport creates the vport object and places it under the scsi_host
object corresponding to the physical adapter. The LLDD will allocate
a new scsi_host for the vport and link its object under the vport.
The remainder of the tree under the vport's scsi_host is the same
as the non-NPIV case. The transport is written currently to easily
allow the parent of the vport to be something other than the scsi_host.
This could be used in the future to link the object onto a vm-specific
device tree. If the vport's parent is not the physical port's scsi_host,
a symbolic link to the vport object will be placed in the physical
port's scsi_host.
Here's what to expect in the device tree :
The typical Physical Port's Scsi_Host:
/sys/devices/.../host17/
and it has the typical descendant tree:
/sys/devices/.../host17/rport-17:0-0/target17:0:0/17:0:0:0:
and then the vport is created on the Physical Port:
/sys/devices/.../host17/vport-17:0-0
and the vport's Scsi_Host is then created:
/sys/devices/.../host17/vport-17:0-0/host18
and then the rest of the tree progresses, such as:
/sys/devices/.../host17/vport-17:0-0/host18/rport-18:0-0/target18:0:0/18:0:0:0:
Here's what to expect in the sysfs tree :
scsi_hosts:
/sys/class/scsi_host/host17 physical port's scsi_host
/sys/class/scsi_host/host18 vport's scsi_host
fc_hosts:
/sys/class/fc_host/host17 physical port's fc_host
/sys/class/fc_host/host18 vport's fc_host
fc_vports:
/sys/class/fc_vports/vport-17:0-0 the vport's fc_vport
fc_rports:
/sys/class/fc_remote_ports/rport-17:0-0 rport on the physical port
/sys/class/fc_remote_ports/rport-18:0-0 rport on the vport
Vport Attributes:
-------------------------------
The new fc_vport class object has the following attributes:
node_name: Read_Only
The WWNN of the vport
port_name: Read_Only
The WWPN of the vport
roles: Read_Only
Indicates the FC4 roles enabled on the vport.
symbolic_name: Read_Write
A string, appended to the driver's symbolic port name string, which
is registered with the switch to identify the vport. For example,
a hypervisor could set this string to "Xen Domain 2 VM 5 Vport 2",
and this set of identifiers can be seen on switch management screens
to identify the port.
vport_delete: Write_Only
When written with a "1", will tear down the vport.
vport_disable: Write_Only
When written with a "1", will transition the vport to a disabled.
state. The vport will still be instantiated with the Linux kernel,
but it will not be active on the FC link.
When written with a "0", will enable the vport.
vport_last_state: Read_Only
Indicates the previous state of the vport. See the section below on
"Vport States".
vport_state: Read_Only
Indicates the state of the vport. See the section below on
"Vport States".
vport_type: Read_Only
Reflects the FC mechanism used to create the virtual port.
Only NPIV is supported currently.
For the fc_host class object, the following attributes are added for vports:
max_npiv_vports: Read_Only
Indicates the maximum number of NPIV-based vports that the
driver/adapter can support on the fc_host.
npiv_vports_inuse: Read_Only
Indicates how many NPIV-based vports have been instantiated on the
fc_host.
vport_create: Write_Only
A "simple" create interface to instantiate a vport on an fc_host.
A "<WWPN>:<WWNN>" string is written to the attribute. The transport
then instantiates the vport object and calls the LLDD to create the
vport with the role of FCP_Initiator. Each WWN is specified as 16
hex characters and may *not* contain any prefixes (e.g. 0x, x, etc);
see the usage sketch after this list.
vport_delete: Write_Only
A "simple" delete interface to teardown a vport. A "<WWPN>:<WWNN>"
string is written to the attribute. The transport will locate the
vport on the fc_host with the same WWNs and tear it down. Each WWN
is specified as 16 hex characters and may *not* contain any prefixes
(e.g. 0x, x, etc).
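For illustration, here is a minimal userspace sketch driving these two
attributes. The host number (host17) and the WWN values are made-up
examples; real WWNs would come from the administrative entity discussed
earlier:

  /* Sketch: create and later delete an NPIV vport via the fc_host
   * attributes described above.  host17 and the WWNs are examples. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static int write_attr(const char *path, const char *val)
  {
          int fd = open(path, O_WRONLY);
          ssize_t n;

          if (fd < 0) {
                  perror(path);
                  return -1;
          }
          n = write(fd, val, strlen(val)); /* transport parses "<WWPN>:<WWNN>" */
          close(fd);
          return (n < 0) ? -1 : 0;
  }

  int main(void)
  {
          /* 16 hex characters per WWN, colon-separated, no 0x prefix */
          const char *wwns = "2100001b32a9d5e8:2000001b32a9d5e8";

          if (write_attr("/sys/class/fc_host/host17/vport_create", wwns))
                  return 1;
          /* ... the new vport and its scsi_host now exist ... */
          return write_attr("/sys/class/fc_host/host17/vport_delete", wwns) ? 1 : 0;
  }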
Vport States:
-------------------------------
Vport instantiation consists of two parts:
- Creation with the kernel and LLDD. This means all transport and
driver data structures are built up, and device objects created.
This is equivalent to a driver "attach" on an adapter, which is
independent of the adapter's link state.
- Instantiation of the vport on the FC link via ELS traffic, etc.
This is equivalent to a "link up" and successful link initialization.
Further information can be found in the interfaces section below for
Vport Creation.
Once a vport has been instantiated with the kernel/LLDD, a vport state
can be reported via the sysfs attribute. The following states exist:
FC_VPORT_UNKNOWN - Unknown
A temporary state, typically set only while the vport is being
instantiated with the kernel and LLDD.
FC_VPORT_ACTIVE - Active
The vport has been successfully created on the FC link.
It is fully functional.
FC_VPORT_DISABLED - Disabled
The vport is instantiated, but "disabled". The vport is not instantiated
on the FC link. This is equivalent to a physical port with the
link "down".
FC_VPORT_LINKDOWN - Linkdown
The vport is not operational as the physical link is not operational.
FC_VPORT_INITIALIZING - Initializing
The vport is in the process of instantiating on the FC link.
The LLDD will set this state just prior to starting the ELS traffic
to create the vport. This state will persist until the vport is
successfully created (state becomes FC_VPORT_ACTIVE) or it fails
(state is one of the values below). As this state is transitory,
it will not be preserved in the "vport_last_state".
FC_VPORT_NO_FABRIC_SUPP - No Fabric Support
The vport is not operational. One of the following conditions was
encountered:
- The FC topology is not Point-to-Point
- The FC port is not connected to an F_Port
- The F_Port has indicated that NPIV is not supported.
FC_VPORT_NO_FABRIC_RSCS - No Fabric Resources
The vport is not operational. The Fabric failed FDISC with a status
indicating that it does not have sufficient resources to complete
the operation.
FC_VPORT_FABRIC_LOGOUT - Fabric Logout
The vport is not operational. The Fabric has LOGO'd the N_Port_ID
associated with the vport.
FC_VPORT_FABRIC_REJ_WWN - Fabric Rejected WWN
The vport is not operational. The Fabric failed FDISC with a status
indicating that the WWNs are not valid.
FC_VPORT_FAILED - VPort Failed
The vport is not operational. This is a catchall for all other
error conditions.
The following state table indicates the different state transitions:
State                Event                            New State
--------------------------------------------------------------------
n/a                  Initialization                   Unknown
Unknown:             Link Down                        Linkdown
                     Link Up & Loop                   No Fabric Support
                     Link Up & no Fabric              No Fabric Support
                     Link Up & FLOGI response         No Fabric Support
                       indicates no NPIV support
                     Link Up & FDISC being sent       Initializing
                     Disable request                  Disable
Linkdown:            Link Up                          Unknown
Initializing:        FDISC ACC                        Active
                     FDISC LS_RJT w/ no resources     No Fabric Resources
                     FDISC LS_RJT w/ invalid          Fabric Rejected WWN
                       pname or invalid nport_id
                     FDISC LS_RJT failed for          Vport Failed
                       other reasons
                     Link Down                        Linkdown
                     Disable request                  Disable
Disable:             Enable request                   Unknown
Active:              LOGO received from fabric        Fabric Logout
                     Link Down                        Linkdown
                     Disable request                  Disable
Fabric Logout:       Link still up                    Unknown

The following 4 error states all have the same transitions:
No Fabric Support:
No Fabric Resources:
Fabric Rejected WWN:
Vport Failed:
                     Disable request                  Disable
                     Link goes down                   Linkdown
Transport <-> LLDD Interfaces :
-------------------------------
Vport support by LLDD:
The LLDD indicates support for vports by supplying a vport_create()
function in the transport template. The presence of this function will
cause the creation of the new attributes on the fc_host. As part of
the physical port completing its initialization relative to the
transport, it should set the max_npiv_vports attribute to indicate the
maximum number of vports the driver and/or adapter supports.
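As a sketch of what this looks like in an LLDD: the fc_function_template
structure and its vport_* members come from the transport header, while
the lldd_ entry points are hypothetical names for this example only:

  #include <scsi/scsi_transport_fc.h>

  /* Hypothetical LLDD entry points; see the following sections. */
  static int lldd_vport_create(struct fc_vport *vport, bool disable);
  static int lldd_vport_disable(struct fc_vport *vport, bool disable);
  static int lldd_vport_delete(struct fc_vport *vport);

  /* Template for the physical port's fc_host.  Supplying vport_create()
   * is what triggers creation of the vport attributes on the fc_host. */
  static struct fc_function_template lldd_physical_fc_functions = {
          /* ... the usual host/rport attribute handlers ... */
          .vport_create  = lldd_vport_create,
          .vport_disable = lldd_vport_disable,
          .vport_delete  = lldd_vport_delete,
  };

  /* A second template, without the vport functions, would be used for
   * the vports' fc_hosts (see the LLDD Implementers Notes below). */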
Vport Creation:
The LLDD vport_create() syntax is:
int vport_create(struct fc_vport *vport, bool disable)
where:
vport: The newly allocated vport object
disable: If "true", the vport is to be created in a disabled state.
If "false", the vport is to be enabled upon creation.
When a request is made to create a new vport (via sgio/netlink, or the
vport_create fc_host attribute), the transport will validate that the LLDD
can support another vport (e.g. max_npiv_vports > npiv_vports_inuse).
If not, the create request will be failed. If space remains, the transport
will increment the vport count, create the vport object, and then call the
LLDD's vport_create() function with the newly allocated vport object.
As mentioned above, vport creation is divided into two parts:
- Creation with the kernel and LLDD. This means all transport and
driver data structures are built up, and device objects created.
This is equivalent to a driver "attach" on an adapter, which is
independent of the adapter's link state.
- Instantiation of the vport on the FC link via ELS traffic, etc.
This is equivalent to a "link up" and successful link initialization.
The LLDD's vport_create() function will not synchronously wait for both
parts to be fully completed before returning. It must validate that the
infrastructure exists to support NPIV, and complete the first part of
vport creation (data structure build up) before returning. We do not
hinge vport_create() on the link-side operation mainly because:
- The link may be down. It is not a failure if it is. It simply
means the vport is in an inoperable state until the link comes up.
This is consistent with the link bouncing post vport creation.
- The vport may be created in a disabled state.
- This is consistent with a model where the vport equates to an
FC adapter. The vport_create is synonymous with driver attachment
to the adapter, which is independent of link state.
Note: special error codes have been defined to delineate infrastructure
failure cases for quicker resolution.
The expected behavior for the LLDD's vport_create() function is:
- Validate Infrastructure:
- If the driver or adapter cannot support another vport, whether
due to improper firmware, (a lie about) max_npiv, or a lack of
some other resource - return VPCERR_UNSUPPORTED.
- If the driver validates the WWN's against those already active on
the adapter and detects an overlap - return VPCERR_BAD_WWN.
- If the driver detects the topology is loop, non-fabric, or the
FLOGI did not support NPIV - return VPCERR_NO_FABRIC_SUPP.
- Allocate data structures. If errors are encountered, such as out
of memory conditions, return the respective negative Exxx error code.
- If the role is FCP Initiator, the LLDD is to :
- Call scsi_host_alloc() to allocate a scsi_host for the vport.
- Call scsi_add_host(new_shost, &vport->dev) to start the scsi_host
and bind it as a child of the vport device.
- Initialize the fc_host attribute values.
- Kick off further vport state transitions based on the disable flag and
link state - and return success (zero).
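A condensed sketch of this flow follows. Everything prefixed lldd_, the
private struct, and the scsi_host_template are hypothetical; only
scsi_host_alloc(), scsi_add_host(), fc_vport_set_state(), the
fc_host_*() accessors, and the VPCERR_* codes are the midlayer and
transport pieces named above:

  struct lldd_vport {
          struct fc_vport *fc_vport;
          struct Scsi_Host *shost;
          /* ... */
  };
  static struct scsi_host_template lldd_vport_sht; /* vport-only template */

  static int lldd_vport_create(struct fc_vport *fc_vport, bool disable)
  {
          struct Scsi_Host *shost;
          struct lldd_vport *lv;

          /* Validate infrastructure */
          if (!lldd_fw_supports_npiv(fc_vport->shost))
                  return VPCERR_UNSUPPORTED;
          if (lldd_wwns_overlap(fc_vport->shost, fc_vport->port_name,
                                fc_vport->node_name))
                  return VPCERR_BAD_WWN;
          if (!lldd_fabric_supports_npiv(fc_vport->shost))
                  return VPCERR_NO_FABRIC_SUPP;

          /* Allocate data structures; FCP Initiator role gets a scsi_host */
          shost = scsi_host_alloc(&lldd_vport_sht, sizeof(*lv));
          if (!shost)
                  return -ENOMEM;
          lv = (struct lldd_vport *)shost->hostdata;
          lv->fc_vport = fc_vport;
          lv->shost = shost;
          fc_vport->dd_data = lv;

          /* Start the scsi_host as a child of the vport device */
          if (scsi_add_host(shost, &fc_vport->dev)) {
                  scsi_host_put(shost);
                  return -EIO;
          }

          /* Initialize the fc_host attribute values */
          fc_host_node_name(shost) = fc_vport->node_name;
          fc_host_port_name(shost) = fc_vport->port_name;
          fc_host_port_type(shost) = FC_PORTTYPE_NPIV;

          /* Kick off further state transitions; FDISC runs asynchronously */
          if (disable)
                  fc_vport_set_state(fc_vport, FC_VPORT_DISABLED);
          else
                  lldd_start_vport_fdisc(lv);  /* -> FC_VPORT_INITIALIZING */
          return 0;
  }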
LLDD Implementers Notes:
- It is suggested that there be different fc_function_templates for
the physical port and the virtual port. The physical port's template
would have the vport_create, vport_delete, and vport_disable functions,
while the vports would not.
- It is suggested that there be different scsi_host_templates
for the physical port and virtual port. Likely, there are driver
attributes, embedded into the scsi_host_template, that are applicable
for the physical port only (link speed, topology setting, etc). This
ensures that the attributes are applicable to the respective scsi_host.
Vport Disable/Enable:
The LLDD vport_disable() syntax is:
int vport_disable(struct fc_vport *vport, bool disable)
where:
vport: The vport to be enabled or disabled
disable: If "true", the vport is to be disabled.
If "false", the vport is to be enabled.
When a request is made to change the disabled state on a vport, the
transport will validate the request against the existing vport state.
If the request is to disable and the vport is already disabled, the
request will fail. Similarly, if the request is to enable, and the
vport is not in a disabled state, the request will fail. If the request
is valid for the vport state, the transport will call the LLDD to
change the vport's state.
Within the LLDD, if a vport is disabled, it remains instantiated with
the kernel and LLDD, but it is not active or visible on the FC link in
any way. (see Vport Creation and the 2 part instantiation discussion).
The vport will remain in this state until it is deleted or re-enabled.
When enabling a vport, the LLDD reinstantiates the vport on the FC
link - essentially restarting the LLDD state machine (see Vport States
above).
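A minimal sketch matching these semantics, with the same hypothetical
lldd_ helpers as in the creation sketch above:

  static int lldd_vport_disable(struct fc_vport *fc_vport, bool disable)
  {
          struct lldd_vport *lv = fc_vport->dd_data;

          if (disable) {
                  /* quiesce on the link; kernel/LLDD structures stay intact */
                  lldd_logout_vport(lv);
                  fc_vport_set_state(fc_vport, FC_VPORT_DISABLED);
          } else {
                  /* restart the state machine from Unknown, per the table */
                  fc_vport_set_state(fc_vport, FC_VPORT_UNKNOWN);
                  lldd_start_vport_fdisc(lv);
          }
          return 0;
  }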
Vport Deletion:
The LLDD vport_delete() syntax is:
int vport_delete(struct fc_vport *vport)
where:
vport: The vport to be deleted
When a request is made to delete a vport (via sgio/netlink, or via the
fc_host or fc_vport vport_delete attributes), the transport will call
the LLDD to terminate the vport on the FC link, and tear down all other
data structures and references. If the LLDD completes successfully,
the transport will tear down the vport objects and complete the vport
removal. If the LLDD delete request fails, the vport object will remain,
but will be in an indeterminate state.
Within the LLDD, the normal code paths for a scsi_host teardown should
be followed. E.g. if the vport has an FCP Initiator role, the LLDD
will call fc_remove_host() for the vport's scsi_host, followed by
scsi_remove_host() and scsi_host_put() for the vport's scsi_host.
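Sketched with the same hypothetical lldd_ helpers, that teardown order
looks like:

  static int lldd_vport_delete(struct fc_vport *fc_vport)
  {
          struct lldd_vport *lv = fc_vport->dd_data;
          struct Scsi_Host *shost = lv->shost;

          lldd_logout_vport(lv);  /* terminate the vport on the FC link */

          fc_remove_host(shost);  /* tear down the vport's rports first */
          scsi_remove_host(shost);
          scsi_host_put(shost);   /* drop the scsi_host_alloc() reference */
          return 0;
  }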
Other:
fc_host port_type attribute:
There is a new fc_host port_type value - FC_PORTTYPE_NPIV. This value
must be set on all vport-based fc_hosts. Normally, on a physical port,
the port_type attribute would be set to NPORT, NLPORT, etc based on the
topology type and existence of the fabric. As this is not applicable to
a vport, it makes more sense to report the FC mechanism used to create
the vport.
Driver unload:
FC drivers are required to call fc_remove_host() prior to calling
scsi_remove_host(). This allows the fc_host to tear down all remote
ports prior to the scsi_host being torn down. The fc_remove_host() call
was updated to remove all vports for the fc_host as well.
Credits
=======
The following people have contributed to this document:
James Smart
james.smart@emulex.com

View File

@ -555,7 +555,6 @@ complete_scsi_command( CommandList_struct *cp, int timeout, __u32 tag)
{
struct scsi_cmnd *cmd;
ctlr_info_t *ctlr;
u64bit addr64;
ErrorInfo_struct *ei;
ei = cp->err_info;
@ -569,20 +568,7 @@ complete_scsi_command( CommandList_struct *cp, int timeout, __u32 tag)
cmd = (struct scsi_cmnd *) cp->scsi_cmd;
ctlr = hba[cp->ctlr];
/* undo the DMA mappings */
if (cmd->use_sg) {
pci_unmap_sg(ctlr->pdev,
cmd->request_buffer, cmd->use_sg,
cmd->sc_data_direction);
}
else if (cmd->request_bufflen) {
addr64.val32.lower = cp->SG[0].Addr.lower;
addr64.val32.upper = cp->SG[0].Addr.upper;
pci_unmap_single(ctlr->pdev, (dma_addr_t) addr64.val,
cmd->request_bufflen,
cmd->sc_data_direction);
}
scsi_dma_unmap(cmd);
cmd->result = (DID_OK << 16); /* host byte */
cmd->result |= (COMMAND_COMPLETE << 8); /* msg byte */
@ -597,7 +583,7 @@ complete_scsi_command( CommandList_struct *cp, int timeout, __u32 tag)
ei->SenseLen > SCSI_SENSE_BUFFERSIZE ?
SCSI_SENSE_BUFFERSIZE :
ei->SenseLen);
cmd->resid = ei->ResidualCnt;
scsi_set_resid(cmd, ei->ResidualCnt);
if(ei->CommandStatus != 0)
{ /* an error has occurred */
@ -1204,46 +1190,29 @@ cciss_scatter_gather(struct pci_dev *pdev,
CommandList_struct *cp,
struct scsi_cmnd *cmd)
{
unsigned int use_sg, nsegs=0, len;
struct scatterlist *scatter = (struct scatterlist *) cmd->request_buffer;
unsigned int len;
struct scatterlist *sg;
__u64 addr64;
int use_sg, i;
/* is it just one virtual address? */
if (!cmd->use_sg) {
if (cmd->request_bufflen) { /* anything to xfer? */
BUG_ON(scsi_sg_count(cmd) > MAXSGENTRIES);
addr64 = (__u64) pci_map_single(pdev,
cmd->request_buffer,
cmd->request_bufflen,
cmd->sc_data_direction);
cp->SG[0].Addr.lower =
(__u32) (addr64 & (__u64) 0x00000000FFFFFFFF);
cp->SG[0].Addr.upper =
(__u32) ((addr64 >> 32) & (__u64) 0x00000000FFFFFFFF);
cp->SG[0].Len = cmd->request_bufflen;
nsegs=1;
use_sg = scsi_dma_map(cmd);
if (use_sg) { /* not too many addrs? */
scsi_for_each_sg(cmd, sg, use_sg, i) {
addr64 = (__u64) sg_dma_address(sg);
len = sg_dma_len(sg);
cp->SG[i].Addr.lower =
(__u32) (addr64 & (__u64) 0x00000000FFFFFFFF);
cp->SG[i].Addr.upper =
(__u32) ((addr64 >> 32) & (__u64) 0x00000000FFFFFFFF);
cp->SG[i].Len = len;
cp->SG[i].Ext = 0; // we are not chaining
}
} /* else, must be a list of virtual addresses.... */
else if (cmd->use_sg <= MAXSGENTRIES) { /* not too many addrs? */
}
use_sg = pci_map_sg(pdev, cmd->request_buffer, cmd->use_sg,
cmd->sc_data_direction);
for (nsegs=0; nsegs < use_sg; nsegs++) {
addr64 = (__u64) sg_dma_address(&scatter[nsegs]);
len = sg_dma_len(&scatter[nsegs]);
cp->SG[nsegs].Addr.lower =
(__u32) (addr64 & (__u64) 0x00000000FFFFFFFF);
cp->SG[nsegs].Addr.upper =
(__u32) ((addr64 >> 32) & (__u64) 0x00000000FFFFFFFF);
cp->SG[nsegs].Len = len;
cp->SG[nsegs].Ext = 0; // we are not chaining
}
} else BUG();
cp->Header.SGList = (__u8) nsegs; /* no. SGs contig in this cmd */
cp->Header.SGTotal = (__u16) nsegs; /* total sgs in this cmd list */
cp->Header.SGList = (__u8) use_sg; /* no. SGs contig in this cmd */
cp->Header.SGTotal = (__u16) use_sg; /* total sgs in this cmd list */
return;
}

View File

@ -1509,69 +1509,6 @@ static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb,
}
}
static void sbp2_prep_command_orb_no_sg(struct sbp2_command_orb *orb,
struct sbp2_fwhost_info *hi,
struct sbp2_command_info *cmd,
struct scatterlist *sgpnt,
u32 orb_direction,
unsigned int scsi_request_bufflen,
void *scsi_request_buffer,
enum dma_data_direction dma_dir)
{
cmd->dma_dir = dma_dir;
cmd->dma_size = scsi_request_bufflen;
cmd->dma_type = CMD_DMA_SINGLE;
cmd->cmd_dma = dma_map_single(hi->host->device.parent,
scsi_request_buffer,
cmd->dma_size, cmd->dma_dir);
orb->data_descriptor_hi = ORB_SET_NODE_ID(hi->host->node_id);
orb->misc |= ORB_SET_DIRECTION(orb_direction);
/* handle case where we get a command w/o s/g enabled
* (but check for transfers larger than 64K) */
if (scsi_request_bufflen <= SBP2_MAX_SG_ELEMENT_LENGTH) {
orb->data_descriptor_lo = cmd->cmd_dma;
orb->misc |= ORB_SET_DATA_SIZE(scsi_request_bufflen);
} else {
/* The buffer is too large. Turn this into page tables. */
struct sbp2_unrestricted_page_table *sg_element =
&cmd->scatter_gather_element[0];
u32 sg_count, sg_len;
dma_addr_t sg_addr;
orb->data_descriptor_lo = cmd->sge_dma;
orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1);
/* fill out our SBP-2 page tables; split up the large buffer */
sg_count = 0;
sg_len = scsi_request_bufflen;
sg_addr = cmd->cmd_dma;
while (sg_len) {
sg_element[sg_count].segment_base_lo = sg_addr;
if (sg_len > SBP2_MAX_SG_ELEMENT_LENGTH) {
sg_element[sg_count].length_segment_base_hi =
PAGE_TABLE_SET_SEGMENT_LENGTH(SBP2_MAX_SG_ELEMENT_LENGTH);
sg_addr += SBP2_MAX_SG_ELEMENT_LENGTH;
sg_len -= SBP2_MAX_SG_ELEMENT_LENGTH;
} else {
sg_element[sg_count].length_segment_base_hi =
PAGE_TABLE_SET_SEGMENT_LENGTH(sg_len);
sg_len = 0;
}
sg_count++;
}
orb->misc |= ORB_SET_DATA_SIZE(sg_count);
sbp2util_cpu_to_be32_buffer(sg_element,
(sizeof(struct sbp2_unrestricted_page_table)) *
sg_count);
}
}
static void sbp2_create_command_orb(struct sbp2_lu *lu,
struct sbp2_command_info *cmd,
unchar *scsi_cmd,
@ -1615,13 +1552,9 @@ static void sbp2_create_command_orb(struct sbp2_lu *lu,
orb->data_descriptor_hi = 0x0;
orb->data_descriptor_lo = 0x0;
orb->misc |= ORB_SET_DIRECTION(1);
} else if (scsi_use_sg)
} else
sbp2_prep_command_orb_sg(orb, hi, cmd, scsi_use_sg, sgpnt,
orb_direction, dma_dir);
else
sbp2_prep_command_orb_no_sg(orb, hi, cmd, sgpnt, orb_direction,
scsi_request_bufflen,
scsi_request_buffer, dma_dir);
sbp2util_cpu_to_be32_buffer(orb, sizeof(*orb));
@ -1710,15 +1643,15 @@ static int sbp2_send_command(struct sbp2_lu *lu, struct scsi_cmnd *SCpnt,
void (*done)(struct scsi_cmnd *))
{
unchar *scsi_cmd = (unchar *)SCpnt->cmnd;
unsigned int request_bufflen = SCpnt->request_bufflen;
unsigned int request_bufflen = scsi_bufflen(SCpnt);
struct sbp2_command_info *cmd;
cmd = sbp2util_allocate_command_orb(lu, SCpnt, done);
if (!cmd)
return -EIO;
sbp2_create_command_orb(lu, cmd, scsi_cmd, SCpnt->use_sg,
request_bufflen, SCpnt->request_buffer,
sbp2_create_command_orb(lu, cmd, scsi_cmd, scsi_sg_count(SCpnt),
request_bufflen, scsi_sglist(SCpnt),
SCpnt->sc_data_direction);
sbp2_link_orb_command(lu, cmd);

View File

@ -134,19 +134,9 @@ iscsi_iser_cmd_init(struct iscsi_cmd_task *ctask)
{
struct iscsi_iser_conn *iser_conn = ctask->conn->dd_data;
struct iscsi_iser_cmd_task *iser_ctask = ctask->dd_data;
struct scsi_cmnd *sc = ctask->sc;
iser_ctask->command_sent = 0;
iser_ctask->iser_conn = iser_conn;
if (sc->sc_data_direction == DMA_TO_DEVICE) {
BUG_ON(ctask->total_length == 0);
debug_scsi("cmd [itt %x total %d imm %d unsol_data %d\n",
ctask->itt, ctask->total_length, ctask->imm_count,
ctask->unsol_count);
}
iser_ctask_rdma_init(iser_ctask);
}
@ -219,6 +209,14 @@ iscsi_iser_ctask_xmit(struct iscsi_conn *conn,
struct iscsi_iser_cmd_task *iser_ctask = ctask->dd_data;
int error = 0;
if (ctask->sc->sc_data_direction == DMA_TO_DEVICE) {
BUG_ON(scsi_bufflen(ctask->sc) == 0);
debug_scsi("cmd [itt %x total %d imm %d unsol_data %d\n",
ctask->itt, scsi_bufflen(ctask->sc),
ctask->imm_count, ctask->unsol_count);
}
debug_scsi("ctask deq [cid %d itt 0x%x]\n",
conn->id, ctask->itt);
@ -375,7 +373,8 @@ static struct iscsi_transport iscsi_iser_transport;
static struct iscsi_cls_session *
iscsi_iser_session_create(struct iscsi_transport *iscsit,
struct scsi_transport_template *scsit,
uint32_t initial_cmdsn, uint32_t *hostno)
uint16_t cmds_max, uint16_t qdepth,
uint32_t initial_cmdsn, uint32_t *hostno)
{
struct iscsi_cls_session *cls_session;
struct iscsi_session *session;
@ -386,7 +385,13 @@ iscsi_iser_session_create(struct iscsi_transport *iscsit,
struct iscsi_iser_cmd_task *iser_ctask;
struct iser_desc *desc;
/*
* we do not support setting can_queue cmd_per_lun from userspace yet
* because we preallocate so many resources
*/
cls_session = iscsi_session_setup(iscsit, scsit,
ISCSI_DEF_XMIT_CMDS_MAX,
ISCSI_MAX_CMD_PER_LUN,
sizeof(struct iscsi_iser_cmd_task),
sizeof(struct iser_desc),
initial_cmdsn, &hn);
@ -545,7 +550,7 @@ iscsi_iser_ep_disconnect(__u64 ep_handle)
static struct scsi_host_template iscsi_iser_sht = {
.name = "iSCSI Initiator over iSER, v." DRV_VER,
.queuecommand = iscsi_queuecommand,
.can_queue = ISCSI_XMIT_CMDS_MAX - 1,
.can_queue = ISCSI_DEF_XMIT_CMDS_MAX - 1,
.sg_tablesize = ISCSI_ISER_SG_TABLESIZE,
.max_sectors = 1024,
.cmd_per_lun = ISCSI_MAX_CMD_PER_LUN,
@ -574,8 +579,12 @@ static struct iscsi_transport iscsi_iser_transport = {
ISCSI_EXP_STATSN |
ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME |
ISCSI_TPGT,
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN,
.host_param_mask = ISCSI_HOST_HWADDRESS |
ISCSI_HOST_NETDEV_NAME |
ISCSI_HOST_INITIATOR_NAME,
.host_template = &iscsi_iser_sht,
.conndata_size = sizeof(struct iscsi_conn),
.max_lun = ISCSI_ISER_MAX_LUN,
@ -592,6 +601,9 @@ static struct iscsi_transport iscsi_iser_transport = {
.get_session_param = iscsi_session_get_param,
.start_conn = iscsi_iser_conn_start,
.stop_conn = iscsi_conn_stop,
/* iscsi host params */
.get_host_param = iscsi_host_get_param,
.set_host_param = iscsi_host_set_param,
/* IO */
.send_pdu = iscsi_conn_send_pdu,
.get_stats = iscsi_iser_conn_get_stats,

View File

@ -98,7 +98,7 @@
#define ISER_MAX_TX_MISC_PDUS 6 /* NOOP_OUT(2), TEXT(1), *
* SCSI_TMFUNC(2), LOGOUT(1) */
#define ISER_QP_MAX_RECV_DTOS (ISCSI_XMIT_CMDS_MAX + \
#define ISER_QP_MAX_RECV_DTOS (ISCSI_DEF_XMIT_CMDS_MAX + \
ISER_MAX_RX_MISC_PDUS + \
ISER_MAX_TX_MISC_PDUS)
@ -110,7 +110,7 @@
#define ISER_INFLIGHT_DATAOUTS 8
#define ISER_QP_MAX_REQ_DTOS (ISCSI_XMIT_CMDS_MAX * \
#define ISER_QP_MAX_REQ_DTOS (ISCSI_DEF_XMIT_CMDS_MAX * \
(1 + ISER_INFLIGHT_DATAOUTS) + \
ISER_MAX_TX_MISC_PDUS + \
ISER_MAX_RX_MISC_PDUS)

View File

@ -351,18 +351,12 @@ int iser_send_command(struct iscsi_conn *conn,
else
data_buf = &iser_ctask->data[ISER_DIR_OUT];
if (sc->use_sg) { /* using a scatter list */
data_buf->buf = sc->request_buffer;
data_buf->size = sc->use_sg;
} else if (sc->request_bufflen) {
/* using a single buffer - convert it into one entry SG */
sg_init_one(&data_buf->sg_single,
sc->request_buffer, sc->request_bufflen);
data_buf->buf = &data_buf->sg_single;
data_buf->size = 1;
if (scsi_sg_count(sc)) { /* using a scatter list */
data_buf->buf = scsi_sglist(sc);
data_buf->size = scsi_sg_count(sc);
}
data_buf->data_len = sc->request_bufflen;
data_buf->data_len = scsi_bufflen(sc);
if (hdr->flags & ISCSI_FLAG_CMD_READ) {
err = iser_prepare_read_cmd(ctask, edtl);

View File

@ -155,8 +155,8 @@ static int iser_create_ib_conn_res(struct iser_conn *ib_conn)
params.max_pages_per_fmr = ISCSI_ISER_SG_TABLESIZE + 1;
/* make the pool size twice the max number of SCSI commands *
* the ML is expected to queue, watermark for unmap at 50% */
params.pool_size = ISCSI_XMIT_CMDS_MAX * 2;
params.dirty_watermark = ISCSI_XMIT_CMDS_MAX;
params.pool_size = ISCSI_DEF_XMIT_CMDS_MAX * 2;
params.dirty_watermark = ISCSI_DEF_XMIT_CMDS_MAX;
params.cache = 0;
params.flush_function = NULL;
params.access = (IB_ACCESS_LOCAL_WRITE |

View File

@ -455,10 +455,7 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd,
struct srp_target_port *target,
struct srp_request *req)
{
struct scatterlist *scat;
int nents;
if (!scmnd->request_buffer ||
if (!scsi_sglist(scmnd) ||
(scmnd->sc_data_direction != DMA_TO_DEVICE &&
scmnd->sc_data_direction != DMA_FROM_DEVICE))
return;
@ -468,20 +465,8 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd,
req->fmr = NULL;
}
/*
* This handling of non-SG commands can be killed when the
* SCSI midlayer no longer generates non-SG commands.
*/
if (likely(scmnd->use_sg)) {
nents = scmnd->use_sg;
scat = scmnd->request_buffer;
} else {
nents = 1;
scat = &req->fake_sg;
}
ib_dma_unmap_sg(target->srp_host->dev->dev, scat, nents,
scmnd->sc_data_direction);
ib_dma_unmap_sg(target->srp_host->dev->dev, scsi_sglist(scmnd),
scsi_sg_count(scmnd), scmnd->sc_data_direction);
}
static void srp_remove_req(struct srp_target_port *target, struct srp_request *req)
@ -595,6 +580,7 @@ static int srp_map_fmr(struct srp_target_port *target, struct scatterlist *scat,
int ret;
struct srp_device *dev = target->srp_host->dev;
struct ib_device *ibdev = dev->dev;
struct scatterlist *sg;
if (!dev->fmr_pool)
return -ENODEV;
@ -604,16 +590,16 @@ static int srp_map_fmr(struct srp_target_port *target, struct scatterlist *scat,
return -EINVAL;
len = page_cnt = 0;
for (i = 0; i < sg_cnt; ++i) {
unsigned int dma_len = ib_sg_dma_len(ibdev, &scat[i]);
scsi_for_each_sg(req->scmnd, sg, sg_cnt, i) {
unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
if (ib_sg_dma_address(ibdev, &scat[i]) & ~dev->fmr_page_mask) {
if (ib_sg_dma_address(ibdev, sg) & ~dev->fmr_page_mask) {
if (i > 0)
return -EINVAL;
else
++page_cnt;
}
if ((ib_sg_dma_address(ibdev, &scat[i]) + dma_len) &
if ((ib_sg_dma_address(ibdev, sg) + dma_len) &
~dev->fmr_page_mask) {
if (i < sg_cnt - 1)
return -EINVAL;
@ -633,12 +619,12 @@ static int srp_map_fmr(struct srp_target_port *target, struct scatterlist *scat,
return -ENOMEM;
page_cnt = 0;
for (i = 0; i < sg_cnt; ++i) {
unsigned int dma_len = ib_sg_dma_len(ibdev, &scat[i]);
scsi_for_each_sg(req->scmnd, sg, sg_cnt, i) {
unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
for (j = 0; j < dma_len; j += dev->fmr_page_size)
dma_pages[page_cnt++] =
(ib_sg_dma_address(ibdev, &scat[i]) &
(ib_sg_dma_address(ibdev, sg) &
dev->fmr_page_mask) + j;
}
@ -673,7 +659,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
struct srp_device *dev;
struct ib_device *ibdev;
if (!scmnd->request_buffer || scmnd->sc_data_direction == DMA_NONE)
if (!scsi_sglist(scmnd) || scmnd->sc_data_direction == DMA_NONE)
return sizeof (struct srp_cmd);
if (scmnd->sc_data_direction != DMA_FROM_DEVICE &&
@ -683,18 +669,8 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
return -EINVAL;
}
/*
* This handling of non-SG commands can be killed when the
* SCSI midlayer no longer generates non-SG commands.
*/
if (likely(scmnd->use_sg)) {
nents = scmnd->use_sg;
scat = scmnd->request_buffer;
} else {
nents = 1;
scat = &req->fake_sg;
sg_init_one(scat, scmnd->request_buffer, scmnd->request_bufflen);
}
nents = scsi_sg_count(scmnd);
scat = scsi_sglist(scmnd);
dev = target->srp_host->dev;
ibdev = dev->dev;
@ -724,6 +700,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
* descriptor.
*/
struct srp_indirect_buf *buf = (void *) cmd->add_data;
struct scatterlist *sg;
u32 datalen = 0;
int i;
@ -732,11 +709,11 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target,
sizeof (struct srp_indirect_buf) +
count * sizeof (struct srp_direct_buf);
for (i = 0; i < count; ++i) {
unsigned int dma_len = ib_sg_dma_len(ibdev, &scat[i]);
scsi_for_each_sg(scmnd, sg, count, i) {
unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
buf->desc_list[i].va =
cpu_to_be64(ib_sg_dma_address(ibdev, &scat[i]));
cpu_to_be64(ib_sg_dma_address(ibdev, sg));
buf->desc_list[i].key =
cpu_to_be32(dev->mr->rkey);
buf->desc_list[i].len = cpu_to_be32(dma_len);
@ -802,9 +779,9 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp)
}
if (rsp->flags & (SRP_RSP_FLAG_DOOVER | SRP_RSP_FLAG_DOUNDER))
scmnd->resid = be32_to_cpu(rsp->data_out_res_cnt);
scsi_set_resid(scmnd, be32_to_cpu(rsp->data_out_res_cnt));
else if (rsp->flags & (SRP_RSP_FLAG_DIOVER | SRP_RSP_FLAG_DIUNDER))
scmnd->resid = be32_to_cpu(rsp->data_in_res_cnt);
scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt));
if (!req->tsk_mgmt) {
scmnd->host_scribble = (void *) -1L;

View File

@ -106,11 +106,6 @@ struct srp_request {
struct srp_iu *cmd;
struct srp_iu *tsk_mgmt;
struct ib_pool_fmr *fmr;
/*
* Fake scatterlist used when scmnd->use_sg==0. Can be killed
* when the SCSI midlayer no longer generates non-SG commands.
*/
struct scatterlist fake_sg;
struct completion done;
short index;
u8 cmd_done;

View File

@ -1,9 +0,0 @@
/* drivers/message/fusion/linux_compat.h */
#ifndef FUSION_LINUX_COMPAT_H
#define FUSION_LINUX_COMPAT_H
#include <linux/version.h>
#include <scsi/scsi_device.h>
#endif /* _LINUX_COMPAT_H */

View File

@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2006 LSI Logic Corporation.
* Copyright (c) 2000-2007 LSI Logic Corporation.
*
*
* Name: mpi.h
* Title: MPI Message independent structures and definitions
* Creation Date: July 27, 2000
*
* mpi.h Version: 01.05.12
* mpi.h Version: 01.05.13
*
* Version History
* ---------------
@ -78,6 +78,7 @@
* 08-30-05 01.05.10 Added 2 new IOCStatus codes for Target.
* 03-27-06 01.05.11 Bumped MPI_HEADER_VERSION_UNIT.
* 10-11-06 01.05.12 Bumped MPI_HEADER_VERSION_UNIT.
* 05-24-07 01.05.13 Bumped MPI_HEADER_VERSION_UNIT.
* --------------------------------------------------------------------------
*/
@ -108,7 +109,7 @@
/* Note: The major versions of 0xe0 through 0xff are reserved */
/* versioning for this MPI header set */
#define MPI_HEADER_VERSION_UNIT (0x0E)
#define MPI_HEADER_VERSION_UNIT (0x10)
#define MPI_HEADER_VERSION_DEV (0x00)
#define MPI_HEADER_VERSION_UNIT_MASK (0xFF00)
#define MPI_HEADER_VERSION_UNIT_SHIFT (8)

View File

@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2006 LSI Logic Corporation.
* Copyright (c) 2000-2007 LSI Logic Corporation.
*
*
* Name: mpi_cnfg.h
* Title: MPI Config message, structures, and Pages
* Creation Date: July 27, 2000
*
* mpi_cnfg.h Version: 01.05.13
* mpi_cnfg.h Version: 01.05.15
*
* Version History
* ---------------
@ -293,6 +293,21 @@
* Added more AccessStatus values for SAS Device Page 0.
* Added bit for SATA Asynchronous Notification Support in
* Flags field of SAS Device Page 0.
* 02-28-07 01.05.14 Added ExtFlags field to Manufacturing Page 4.
* Added Disable SMART Polling for CapabilitiesFlags of
* IOC Page 6.
* Added Disable SMART Polling to DeviceSettings of BIOS
* Page 1.
* Added Multi-Port Domain bit for DiscoveryStatus field
* of SAS IO Unit Page.
* Added Multi-Port Domain Illegal flag for SAS IO Unit
* Page 1 AdditionalControlFlags field.
* 05-24-07 01.05.15 Added Hide Physical Disks with Non-Integrated RAID
* Metadata bit to Manufacturing Page 4 ExtFlags field.
* Added Internal Connector to End Device Present bit to
* Expander Page 0 Flags field.
* Fixed define for
* MPI_SAS_EXPANDER1_DISCINFO_BAD_PHY_DISABLED.
* --------------------------------------------------------------------------
*/
@ -639,7 +654,7 @@ typedef struct _CONFIG_PAGE_MANUFACTURING_4
U8 InfoSize1; /* 0Bh */
U8 InquirySize; /* 0Ch */
U8 Flags; /* 0Dh */
U16 Reserved2; /* 0Eh */
U16 ExtFlags; /* 0Eh */
U8 InquiryData[56]; /* 10h */
U32 ISVolumeSettings; /* 48h */
U32 IMEVolumeSettings; /* 4Ch */
@ -658,7 +673,7 @@ typedef struct _CONFIG_PAGE_MANUFACTURING_4
} CONFIG_PAGE_MANUFACTURING_4, MPI_POINTER PTR_CONFIG_PAGE_MANUFACTURING_4,
ManufacturingPage4_t, MPI_POINTER pManufacturingPage4_t;
#define MPI_MANUFACTURING4_PAGEVERSION (0x04)
#define MPI_MANUFACTURING4_PAGEVERSION (0x05)
/* defines for the Flags field */
#define MPI_MANPAGE4_FORCE_BAD_BLOCK_TABLE (0x80)
@ -670,6 +685,12 @@ typedef struct _CONFIG_PAGE_MANUFACTURING_4
#define MPI_MANPAGE4_IM_RESYNC_CACHE_ENABLE (0x02)
#define MPI_MANPAGE4_IR_NO_MIX_SAS_SATA (0x01)
/* defines for the ExtFlags field */
#define MPI_MANPAGE4_EXTFLAGS_HIDE_NON_IR_METADATA (0x0008)
#define MPI_MANPAGE4_EXTFLAGS_SAS_CACHE_DISABLE (0x0004)
#define MPI_MANPAGE4_EXTFLAGS_SATA_CACHE_DISABLE (0x0002)
#define MPI_MANPAGE4_EXTFLAGS_LEGACY_MODE (0x0001)
#ifndef MPI_MANPAGE5_NUM_FORCEWWID
#define MPI_MANPAGE5_NUM_FORCEWWID (1)
@ -781,7 +802,7 @@ typedef struct _CONFIG_PAGE_MANUFACTURING_9
} CONFIG_PAGE_MANUFACTURING_9, MPI_POINTER PTR_CONFIG_PAGE_MANUFACTURING_9,
ManufacturingPage9_t, MPI_POINTER pManufacturingPage9_t;
#define MPI_MANUFACTURING6_PAGEVERSION (0x00)
#define MPI_MANUFACTURING9_PAGEVERSION (0x00)
typedef struct _CONFIG_PAGE_MANUFACTURING_10
@ -1138,6 +1159,8 @@ typedef struct _CONFIG_PAGE_IOC_6
/* IOC Page 6 Capabilities Flags */
#define MPI_IOCPAGE6_CAP_FLAGS_DISABLE_SMART_POLLING (0x00000008)
#define MPI_IOCPAGE6_CAP_FLAGS_MASK_METADATA_SIZE (0x00000006)
#define MPI_IOCPAGE6_CAP_FLAGS_64MB_METADATA_SIZE (0x00000000)
#define MPI_IOCPAGE6_CAP_FLAGS_512MB_METADATA_SIZE (0x00000002)
@ -1208,6 +1231,7 @@ typedef struct _CONFIG_PAGE_BIOS_1
#define MPI_BIOSPAGE1_IOCSET_ALTERNATE_CHS (0x00000008)
/* values for the DeviceSettings field */
#define MPI_BIOSPAGE1_DEVSET_DISABLE_SMART_POLLING (0x00000010)
#define MPI_BIOSPAGE1_DEVSET_DISABLE_SEQ_LUN (0x00000008)
#define MPI_BIOSPAGE1_DEVSET_DISABLE_RM_LUN (0x00000004)
#define MPI_BIOSPAGE1_DEVSET_DISABLE_NON_RM_LUN (0x00000002)
@ -2281,11 +2305,11 @@ typedef struct _CONFIG_PAGE_RAID_VOL_0
typedef struct _CONFIG_PAGE_RAID_VOL_1
{
CONFIG_PAGE_HEADER Header; /* 00h */
U8 VolumeID; /* 01h */
U8 VolumeBus; /* 02h */
U8 VolumeIOC; /* 03h */
U8 Reserved0; /* 04h */
U8 GUID[24]; /* 05h */
U8 VolumeID; /* 04h */
U8 VolumeBus; /* 05h */
U8 VolumeIOC; /* 06h */
U8 Reserved0; /* 07h */
U8 GUID[24]; /* 08h */
U8 Name[32]; /* 20h */
U64 WWID; /* 40h */
U32 Reserved1; /* 48h */
@ -2340,7 +2364,7 @@ typedef struct _RAID_PHYS_DISK0_STATUS
} RAID_PHYS_DISK0_STATUS, MPI_POINTER PTR_RAID_PHYS_DISK0_STATUS,
RaidPhysDiskStatus_t, MPI_POINTER pRaidPhysDiskStatus_t;
/* RAID Volume 2 IM Physical Disk DiskStatus flags */
/* RAID Physical Disk PhysDiskStatus flags */
#define MPI_PHYSDISK0_STATUS_FLAG_OUT_OF_SYNC (0x01)
#define MPI_PHYSDISK0_STATUS_FLAG_QUIESCED (0x02)
@ -2544,6 +2568,7 @@ typedef struct _CONFIG_PAGE_SAS_IO_UNIT_0
#define MPI_SAS_IOUNIT0_DS_TABLE_LINK (0x00000400)
#define MPI_SAS_IOUNIT0_DS_UNSUPPORTED_DEVICE (0x00000800)
#define MPI_SAS_IOUNIT0_DS_MAX_SATA_TARGETS (0x00001000)
#define MPI_SAS_IOUNIT0_DS_MULTI_PORT_DOMAIN (0x00002000)
typedef struct _MPI_SAS_IO_UNIT1_PHY_DATA
@ -2607,6 +2632,7 @@ typedef struct _CONFIG_PAGE_SAS_IO_UNIT_1
#define MPI_SAS_IOUNIT1_CONTROL_CLEAR_AFFILIATION (0x0001)
/* values for SAS IO Unit Page 1 AdditionalControlFlags */
#define MPI_SAS_IOUNIT1_ACONTROL_MULTI_PORT_DOMAIN_ILLEGAL (0x0080)
#define MPI_SAS_IOUNIT1_ACONTROL_SATA_ASYNCHROUNOUS_NOTIFICATION (0x0040)
#define MPI_SAS_IOUNIT1_ACONTROL_HIDE_NONZERO_ATTACHED_PHY_IDENT (0x0020)
#define MPI_SAS_IOUNIT1_ACONTROL_PORT_ENABLE_ONLY_SATA_LINK_RESET (0x0010)
@ -2734,6 +2760,7 @@ typedef struct _CONFIG_PAGE_SAS_EXPANDER_0
#define MPI_SAS_EXPANDER0_DS_UNSUPPORTED_DEVICE (0x00000800)
/* values for SAS Expander Page 0 Flags field */
#define MPI_SAS_EXPANDER0_FLAGS_CONNECTOR_END_DEVICE (0x04)
#define MPI_SAS_EXPANDER0_FLAGS_ROUTE_TABLE_CONFIG (0x02)
#define MPI_SAS_EXPANDER0_FLAGS_CONFIG_IN_PROGRESS (0x01)
@ -2774,7 +2801,7 @@ typedef struct _CONFIG_PAGE_SAS_EXPANDER_1
/* see mpi_sas.h for values for SAS Expander Page 1 AttachedDeviceInfo values */
/* values for SAS Expander Page 1 DiscoveryInfo field */
#define MPI_SAS_EXPANDER1_DISCINFO_BAD_PHY DISABLED (0x04)
#define MPI_SAS_EXPANDER1_DISCINFO_BAD_PHY_DISABLED (0x04)
#define MPI_SAS_EXPANDER1_DISCINFO_LINK_STATUS_CHANGE (0x02)
#define MPI_SAS_EXPANDER1_DISCINFO_NO_ROUTING_ENTRIES (0x01)
@ -2895,11 +2922,11 @@ typedef struct _CONFIG_PAGE_SAS_PHY_0
U8 AttachedPhyIdentifier; /* 16h */
U8 Reserved2; /* 17h */
U32 AttachedDeviceInfo; /* 18h */
U8 ProgrammedLinkRate; /* 20h */
U8 HwLinkRate; /* 21h */
U8 ChangeCount; /* 22h */
U8 Flags; /* 23h */
U32 PhyInfo; /* 24h */
U8 ProgrammedLinkRate; /* 1Ch */
U8 HwLinkRate; /* 1Dh */
U8 ChangeCount; /* 1Eh */
U8 Flags; /* 1Fh */
U32 PhyInfo; /* 20h */
} CONFIG_PAGE_SAS_PHY_0, MPI_POINTER PTR_CONFIG_PAGE_SAS_PHY_0,
SasPhyPage0_t, MPI_POINTER pSasPhyPage0_t;

View File

@ -3,28 +3,28 @@
MPI Header File Change History
==============================
Copyright (c) 2000-2006 LSI Logic Corporation.
Copyright (c) 2000-2007 LSI Logic Corporation.
---------------------------------------
Header Set Release Version: 01.05.14
Header Set Release Date: 10-11-06
Header Set Release Version: 01.05.16
Header Set Release Date: 05-24-07
---------------------------------------
Filename Current version Prior version
---------- --------------- -------------
mpi.h 01.05.12 01.05.11
mpi_ioc.h 01.05.12 01.05.11
mpi_cnfg.h 01.05.13 01.05.12
mpi_init.h 01.05.08 01.05.07
mpi.h 01.05.13 01.05.12
mpi_ioc.h 01.05.14 01.05.13
mpi_cnfg.h 01.05.15 01.05.14
mpi_init.h 01.05.09 01.05.09
mpi_targ.h 01.05.06 01.05.06
mpi_fc.h 01.05.01 01.05.01
mpi_lan.h 01.05.01 01.05.01
mpi_raid.h 01.05.02 01.05.02
mpi_raid.h 01.05.03 01.05.03
mpi_tool.h 01.05.03 01.05.03
mpi_inb.h 01.05.01 01.05.01
mpi_sas.h 01.05.04 01.05.03
mpi_sas.h 01.05.04 01.05.04
mpi_type.h 01.05.02 01.05.02
mpi_history.txt 01.05.14 01.05.13
mpi_history.txt 01.05.14 01.05.14
* Date Version Description
@ -95,6 +95,7 @@ mpi.h
* 08-30-05 01.05.10 Added 2 new IOCStatus codes for Target.
* 03-27-06 01.05.11 Bumped MPI_HEADER_VERSION_UNIT.
* 10-11-06 01.05.12 Bumped MPI_HEADER_VERSION_UNIT.
* 05-24-07 01.05.13 Bumped MPI_HEADER_VERSION_UNIT.
* --------------------------------------------------------------------------
mpi_ioc.h
@ -191,6 +192,13 @@ mpi_ioc.h
* data structure.
* Added new ImageType values for FWDownload and FWUpload
* requests.
* 02-28-07 01.05.13 Added MPI_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT for SAS
* Broadcast Event Data (replacing _RESERVED2).
* For Discovery Error Event Data DiscoveryStatus field,
* replaced _MULTPL_PATHS with _UNSUPPORTED_DEVICE and
* added _MULTI_PORT_DOMAIN.
* 05-24-07 01.05.14 Added Common Boot Block type to FWDownload Request.
* Added Common Boot Block type to FWUpload Request.
* --------------------------------------------------------------------------
mpi_cnfg.h
@ -473,6 +481,21 @@ mpi_cnfg.h
* Added more AccessStatus values for SAS Device Page 0.
* Added bit for SATA Asynchronous Notification Support in
* Flags field of SAS Device Page 0.
* 02-28-07 01.05.14 Added ExtFlags field to Manufacturing Page 4.
* Added Disable SMART Polling for CapabilitiesFlags of
* IOC Page 6.
* Added Disable SMART Polling to DeviceSettings of BIOS
* Page 1.
* Added Multi-Port Domain bit for DiscoveryStatus field
* of SAS IO Unit Page.
* Added Multi-Port Domain Illegal flag for SAS IO Unit
* Page 1 AdditionalControlFlags field.
* 05-24-07 01.05.15 Added Hide Physical Disks with Non-Integrated RAID
* Metadata bit to Manufacturing Page 4 ExtFlags field.
* Added Internal Connector to End Device Present bit to
* Expander Page 0 Flags field.
* Fixed define for
* MPI_SAS_EXPANDER1_DISCINFO_BAD_PHY_DISABLED.
* --------------------------------------------------------------------------
mpi_init.h
@ -517,6 +540,8 @@ mpi_init.h
* unique in the first 32 characters.
* 03-27-06 01.05.07 Added Task Management type of Clear ACA.
* 10-11-06 01.05.08 Shortened define for Task Management type of Clear ACA.
* 02-28-07 01.05.09 Defined two new MsgFlags bits for SCSI Task Management
* Request: Do Not Send Task IU and Soft Reset Option.
* --------------------------------------------------------------------------
mpi_targ.h
@ -571,7 +596,7 @@ mpi_fc.h
* 11-02-00 01.01.01 Original release for post 1.0 work
* 12-04-00 01.01.02 Added messages for Common Transport Send and
* Primitive Send.
* 01-09-01 01.01.03 Modified some of the new flags to have an MPI prefix
* 01-09-01 01.01.03 Modifed some of the new flags to have an MPI prefix
* and modified the FcPrimitiveSend flags.
* 01-25-01 01.01.04 Move InitiatorIndex in LinkServiceRsp reply to a larger
* field.
@ -634,6 +659,8 @@ mpi_raid.h
* 08-19-04 01.05.01 Original release for MPI v1.5.
* 01-15-05 01.05.02 Added defines for the two new RAID Actions for
* _SET_RESYNC_RATE and _SET_DATA_SCRUB_RATE.
* 02-28-07 01.05.03 Added new RAID Action, Device FW Update Mode, and
* associated defines.
* --------------------------------------------------------------------------
mpi_tool.h
@ -682,7 +709,22 @@ mpi_type.h
mpi_history.txt Parts list history
Filename 01.05.13 01.05.13 01.05.12 01.05.11 01.05.10 01.05.09
Filename 01.05.15 01.05.15
---------- -------- --------
mpi.h 01.05.12 01.05.13
mpi_ioc.h 01.05.13 01.05.14
mpi_cnfg.h 01.05.14 01.05.15
mpi_init.h 01.05.09 01.05.09
mpi_targ.h 01.05.06 01.05.06
mpi_fc.h 01.05.01 01.05.01
mpi_lan.h 01.05.01 01.05.01
mpi_raid.h 01.05.03 01.05.03
mpi_tool.h 01.05.03 01.05.03
mpi_inb.h 01.05.01 01.05.01
mpi_sas.h 01.05.04 01.05.04
mpi_type.h 01.05.02 01.05.02
Filename 01.05.14 01.05.13 01.05.12 01.05.11 01.05.10 01.05.09
---------- -------- -------- -------- -------- -------- --------
mpi.h 01.05.12 01.05.11 01.05.10 01.05.09 01.05.08 01.05.07
mpi_ioc.h 01.05.12 01.05.11 01.05.10 01.05.09 01.05.09 01.05.08

View File

@ -1,221 +0,0 @@
/*
* Copyright (c) 2003-2004 LSI Logic Corporation.
*
*
* Name: mpi_inb.h
* Title: MPI Inband structures and definitions
* Creation Date: September 30, 2003
*
* mpi_inb.h Version: 01.05.01
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 05-11-04 01.03.01 Original release.
* 08-19-04 01.05.01 Original release for MPI v1.5.
* --------------------------------------------------------------------------
*/
#ifndef MPI_INB_H
#define MPI_INB_H
/******************************************************************************
*
* I n b a n d M e s s a g e s
*
*******************************************************************************/
/****************************************************************************/
/* Inband Buffer Post Request */
/****************************************************************************/
typedef struct _MSG_INBAND_BUFFER_POST_REQUEST
{
U8 Reserved1; /* 00h */
U8 BufferCount; /* 01h */
U8 ChainOffset; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U32 Reserved4; /* 0Ch */
SGE_TRANS_SIMPLE_UNION SGL; /* 10h */
} MSG_INBAND_BUFFER_POST_REQUEST, MPI_POINTER PTR_MSG_INBAND_BUFFER_POST_REQUEST,
MpiInbandBufferPostRequest_t , MPI_POINTER pMpiInbandBufferPostRequest_t;
typedef struct _WWN_FC_FORMAT
{
U64 NodeName; /* 00h */
U64 PortName; /* 08h */
} WWN_FC_FORMAT, MPI_POINTER PTR_WWN_FC_FORMAT,
WwnFcFormat_t, MPI_POINTER pWwnFcFormat_t;
typedef struct _WWN_SAS_FORMAT
{
U64 WorldWideID; /* 00h */
U32 Reserved1; /* 08h */
U32 Reserved2; /* 0Ch */
} WWN_SAS_FORMAT, MPI_POINTER PTR_WWN_SAS_FORMAT,
WwnSasFormat_t, MPI_POINTER pWwnSasFormat_t;
typedef union _WWN_INBAND_FORMAT
{
WWN_FC_FORMAT Fc;
WWN_SAS_FORMAT Sas;
} WWN_INBAND_FORMAT, MPI_POINTER PTR_WWN_INBAND_FORMAT,
WwnInbandFormat, MPI_POINTER pWwnInbandFormat;
/* Inband Buffer Post reply message */
typedef struct _MSG_INBAND_BUFFER_POST_REPLY
{
U16 Reserved1; /* 00h */
U8 MsgLength; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U16 Reserved4; /* 0Ch */
U16 IOCStatus; /* 0Eh */
U32 IOCLogInfo; /* 10h */
U32 TransferLength; /* 14h */
U32 TransactionContext; /* 18h */
WWN_INBAND_FORMAT Wwn; /* 1Ch */
U32 IOCIdentifier[4]; /* 2Ch */
} MSG_INBAND_BUFFER_POST_REPLY, MPI_POINTER PTR_MSG_INBAND_BUFFER_POST_REPLY,
MpiInbandBufferPostReply_t, MPI_POINTER pMpiInbandBufferPostReply_t;
/****************************************************************************/
/* Inband Send Request */
/****************************************************************************/
typedef struct _MSG_INBAND_SEND_REQUEST
{
U16 Reserved1; /* 00h */
U8 ChainOffset; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U32 Reserved4; /* 0Ch */
WWN_INBAND_FORMAT Wwn; /* 10h */
U32 Reserved5; /* 20h */
SGE_IO_UNION SGL; /* 24h */
} MSG_INBAND_SEND_REQUEST, MPI_POINTER PTR_MSG_INBAND_SEND_REQUEST,
MpiInbandSendRequest_t , MPI_POINTER pMpiInbandSendRequest_t;
/* Inband Send reply message */
typedef struct _MSG_INBAND_SEND_REPLY
{
U16 Reserved1; /* 00h */
U8 MsgLength; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U16 Reserved4; /* 0Ch */
U16 IOCStatus; /* 0Eh */
U32 IOCLogInfo; /* 10h */
U32 ResponseLength; /* 14h */
} MSG_INBAND_SEND_REPLY, MPI_POINTER PTR_MSG_INBAND_SEND_REPLY,
MpiInbandSendReply_t, MPI_POINTER pMpiInbandSendReply_t;
/****************************************************************************/
/* Inband Response Request */
/****************************************************************************/
typedef struct _MSG_INBAND_RSP_REQUEST
{
U16 Reserved1; /* 00h */
U8 ChainOffset; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U32 Reserved4; /* 0Ch */
WWN_INBAND_FORMAT Wwn; /* 10h */
U32 IOCIdentifier[4]; /* 20h */
U32 ResponseLength; /* 30h */
SGE_IO_UNION SGL; /* 34h */
} MSG_INBAND_RSP_REQUEST, MPI_POINTER PTR_MSG_INBAND_RSP_REQUEST,
MpiInbandRspRequest_t , MPI_POINTER pMpiInbandRspRequest_t;
/* Inband Response reply message */
typedef struct _MSG_INBAND_RSP_REPLY
{
U16 Reserved1; /* 00h */
U8 MsgLength; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U16 Reserved4; /* 0Ch */
U16 IOCStatus; /* 0Eh */
U32 IOCLogInfo; /* 10h */
} MSG_INBAND_RSP_REPLY, MPI_POINTER PTR_MSG_INBAND_RSP_REPLY,
MpiInbandRspReply_t, MPI_POINTER pMpiInbandRspReply_t;
/****************************************************************************/
/* Inband Abort Request */
/****************************************************************************/
typedef struct _MSG_INBAND_ABORT_REQUEST
{
U8 Reserved1; /* 00h */
U8 AbortType; /* 01h */
U8 ChainOffset; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U32 Reserved4; /* 0Ch */
U32 ContextToAbort; /* 10h */
} MSG_INBAND_ABORT_REQUEST, MPI_POINTER PTR_MSG_INBAND_ABORT_REQUEST,
MpiInbandAbortRequest_t , MPI_POINTER pMpiInbandAbortRequest_t;
#define MPI_INBAND_ABORT_TYPE_ALL_BUFFERS (0x00)
#define MPI_INBAND_ABORT_TYPE_EXACT_BUFFER (0x01)
#define MPI_INBAND_ABORT_TYPE_SEND_REQUEST (0x02)
#define MPI_INBAND_ABORT_TYPE_RESPONSE_REQUEST (0x03)
/* Inband Abort reply message */
typedef struct _MSG_INBAND_ABORT_REPLY
{
U8 Reserved1; /* 00h */
U8 AbortType; /* 01h */
U8 MsgLength; /* 02h */
U8 Function; /* 03h */
U16 Reserved2; /* 04h */
U8 Reserved3; /* 06h */
U8 MsgFlags; /* 07h */
U32 MsgContext; /* 08h */
U16 Reserved4; /* 0Ch */
U16 IOCStatus; /* 0Eh */
U32 IOCLogInfo; /* 10h */
} MSG_INBAND_ABORT_REPLY, MPI_POINTER PTR_MSG_INBAND_ABORT_REPLY,
MpiInbandAbortReply_t, MPI_POINTER pMpiInbandAbortReply_t;
#endif

View File

@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2006 LSI Logic Corporation.
* Copyright (c) 2000-2007 LSI Logic Corporation.
*
*
* Name: mpi_init.h
* Title: MPI initiator mode messages and structures
* Creation Date: June 8, 2000
*
* mpi_init.h Version: 01.05.08
* mpi_init.h Version: 01.05.09
*
* Version History
* ---------------
@ -54,6 +54,8 @@
* unique in the first 32 characters.
* 03-27-06 01.05.07 Added Task Management type of Clear ACA.
* 10-11-06 01.05.08 Shortened define for Task Management type of Clear ACA.
* 02-28-07 01.05.09 Defined two new MsgFlags bits for SCSI Task Management
* Request: Do Not Send Task IU and Soft Reset Option.
* --------------------------------------------------------------------------
*/
@ -432,10 +434,14 @@ typedef struct _MSG_SCSI_TASK_MGMT
#define MPI_SCSITASKMGMT_TASKTYPE_CLR_ACA (0x08)
/* MsgFlags bits */
#define MPI_SCSITASKMGMT_MSGFLAGS_DO_NOT_SEND_TASK_IU (0x01)
#define MPI_SCSITASKMGMT_MSGFLAGS_TARGET_RESET_OPTION (0x00)
#define MPI_SCSITASKMGMT_MSGFLAGS_LIP_RESET_OPTION (0x02)
#define MPI_SCSITASKMGMT_MSGFLAGS_LIPRESET_RESET_OPTION (0x04)
#define MPI_SCSITASKMGMT_MSGFLAGS_SOFT_RESET_OPTION (0x08)
/* SCSI Task Management Reply */
typedef struct _MSG_SCSI_TASK_MGMT_REPLY
{

View File

@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2006 LSI Logic Corporation.
* Copyright (c) 2000-2007 LSI Logic Corporation.
*
*
* Name: mpi_ioc.h
* Title: MPI IOC, Port, Event, FW Download, and FW Upload messages
* Creation Date: August 11, 2000
*
* mpi_ioc.h Version: 01.05.12
* mpi_ioc.h Version: 01.05.14
*
* Version History
* ---------------
@ -106,6 +106,13 @@
* data structure.
* Added new ImageType values for FWDownload and FWUpload
* requests.
* 02-28-07 01.05.13 Added MPI_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT for SAS
* Broadcast Event Data (replacing _RESERVED2).
* For Discovery Error Event Data DiscoveryStatus field,
* replaced _MULTPL_PATHS with _UNSUPPORTED_DEVICE and
* added _MULTI_PORT_DOMAIN.
* 05-24-07 01.05.14 Added Common Boot Block type to FWDownload Request.
* Added Common Boot Block type to FWUpload Request.
* --------------------------------------------------------------------------
*/
@ -792,7 +799,7 @@ typedef struct _EVENT_DATA_SAS_BROADCAST_PRIMITIVE
#define MPI_EVENT_PRIMITIVE_CHANGE (0x01)
#define MPI_EVENT_PRIMITIVE_EXPANDER (0x03)
#define MPI_EVENT_PRIMITIVE_RESERVED2 (0x04)
#define MPI_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT (0x04)
#define MPI_EVENT_PRIMITIVE_RESERVED3 (0x05)
#define MPI_EVENT_PRIMITIVE_RESERVED4 (0x06)
#define MPI_EVENT_PRIMITIVE_CHANGE0_RESERVED (0x07)
@ -857,8 +864,9 @@ typedef struct _EVENT_DATA_DISCOVERY_ERROR
#define MPI_EVENT_DSCVRY_ERR_DS_SMP_CRC_ERROR (0x00000100)
#define MPI_EVENT_DSCVRY_ERR_DS_MULTPL_SUBTRACTIVE (0x00000200)
#define MPI_EVENT_DSCVRY_ERR_DS_TABLE_TO_TABLE (0x00000400)
#define MPI_EVENT_DSCVRY_ERR_DS_MULTPL_PATHS (0x00000800)
#define MPI_EVENT_DSCVRY_ERR_DS_UNSUPPORTED_DEVICE (0x00000800)
#define MPI_EVENT_DSCVRY_ERR_DS_MAX_SATA_TARGETS (0x00001000)
#define MPI_EVENT_DSCVRY_ERR_DS_MULTI_PORT_DOMAIN (0x00002000)
/* SAS SMP Error Event data */
@ -990,6 +998,7 @@ typedef struct _MSG_FW_DOWNLOAD
#define MPI_FW_DOWNLOAD_ITYPE_CONFIG_1 (0x07)
#define MPI_FW_DOWNLOAD_ITYPE_CONFIG_2 (0x08)
#define MPI_FW_DOWNLOAD_ITYPE_MEGARAID (0x09)
#define MPI_FW_DOWNLOAD_ITYPE_COMMON_BOOT_BLOCK (0x0B)
typedef struct _FWDownloadTCSGE
@ -1038,17 +1047,18 @@ typedef struct _MSG_FW_UPLOAD
} MSG_FW_UPLOAD, MPI_POINTER PTR_MSG_FW_UPLOAD,
FWUpload_t, MPI_POINTER pFWUpload_t;
#define MPI_FW_UPLOAD_ITYPE_FW_IOC_MEM (0x00)
#define MPI_FW_UPLOAD_ITYPE_FW_FLASH (0x01)
#define MPI_FW_UPLOAD_ITYPE_BIOS_FLASH (0x02)
#define MPI_FW_UPLOAD_ITYPE_NVDATA (0x03)
#define MPI_FW_UPLOAD_ITYPE_BOOTLOADER (0x04)
#define MPI_FW_UPLOAD_ITYPE_FW_BACKUP (0x05)
#define MPI_FW_UPLOAD_ITYPE_MANUFACTURING (0x06)
#define MPI_FW_UPLOAD_ITYPE_CONFIG_1 (0x07)
#define MPI_FW_UPLOAD_ITYPE_CONFIG_2 (0x08)
#define MPI_FW_UPLOAD_ITYPE_MEGARAID (0x09)
#define MPI_FW_UPLOAD_ITYPE_COMPLETE (0x0A)
#define MPI_FW_UPLOAD_ITYPE_FW_IOC_MEM (0x00)
#define MPI_FW_UPLOAD_ITYPE_FW_FLASH (0x01)
#define MPI_FW_UPLOAD_ITYPE_BIOS_FLASH (0x02)
#define MPI_FW_UPLOAD_ITYPE_NVDATA (0x03)
#define MPI_FW_UPLOAD_ITYPE_BOOTLOADER (0x04)
#define MPI_FW_UPLOAD_ITYPE_FW_BACKUP (0x05)
#define MPI_FW_UPLOAD_ITYPE_MANUFACTURING (0x06)
#define MPI_FW_UPLOAD_ITYPE_CONFIG_1 (0x07)
#define MPI_FW_UPLOAD_ITYPE_CONFIG_2 (0x08)
#define MPI_FW_UPLOAD_ITYPE_MEGARAID (0x09)
#define MPI_FW_UPLOAD_ITYPE_COMPLETE (0x0A)
#define MPI_FW_UPLOAD_ITYPE_COMMON_BOOT_BLOCK (0x0B)
typedef struct _FWUploadTCSGE
{


@ -1,12 +1,12 @@
/*
* Copyright (c) 2001-2005 LSI Logic Corporation.
* Copyright (c) 2001-2007 LSI Logic Corporation.
*
*
* Name: mpi_raid.h
* Title: MPI RAID message and structures
* Creation Date: February 27, 2001
*
* mpi_raid.h Version: 01.05.02
* mpi_raid.h Version: 01.05.03
*
* Version History
* ---------------
@ -32,6 +32,8 @@
* 08-19-04 01.05.01 Original release for MPI v1.5.
* 01-15-05 01.05.02 Added defines for the two new RAID Actions for
* _SET_RESYNC_RATE and _SET_DATA_SCRUB_RATE.
* 02-28-07 01.05.03 Added new RAID Action, Device FW Update Mode, and
* associated defines.
* --------------------------------------------------------------------------
*/
@ -90,6 +92,7 @@ typedef struct _MSG_RAID_ACTION
#define MPI_RAID_ACTION_INACTIVATE_VOLUME (0x12)
#define MPI_RAID_ACTION_SET_RESYNC_RATE (0x13)
#define MPI_RAID_ACTION_SET_DATA_SCRUB_RATE (0x14)
#define MPI_RAID_ACTION_DEVICE_FW_UPDATE_MODE (0x15)
/* ActionDataWord defines for use with MPI_RAID_ACTION_CREATE_VOLUME action */
#define MPI_RAID_ACTION_ADATA_DO_NOT_SYNC (0x00000001)
@ -111,6 +114,10 @@ typedef struct _MSG_RAID_ACTION
/* ActionDataWord defines for use with MPI_RAID_ACTION_SET_DATA_SCRUB_RATE action */
#define MPI_RAID_ACTION_ADATA_DATA_SCRUB_RATE_MASK (0x000000FF)
/* ActionDataWord defines for use with MPI_RAID_ACTION_DEVICE_FW_UPDATE_MODE action */
#define MPI_RAID_ACTION_ADATA_ENABLE_FW_UPDATE (0x00000001)
#define MPI_RAID_ACTION_ADATA_MASK_FW_UPDATE_TIMEOUT (0x0000FF00)
#define MPI_RAID_ACTION_ADATA_SHIFT_FW_UPDATE_TIMEOUT (8)
/* RAID Action reply message */


@ -6,7 +6,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
@ -64,6 +64,7 @@
#endif
#include "mptbase.h"
#include "lsi/mpi_log_fc.h"
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
#define my_NAME "Fusion MPT base driver"
@ -6349,14 +6350,37 @@ ProcessEventNotification(MPT_ADAPTER *ioc, EventNotificationReply_t *pEventReply
static void
mpt_fc_log_info(MPT_ADAPTER *ioc, u32 log_info)
{
static char *subcl_str[8] = {
"FCP Initiator", "FCP Target", "LAN", "MPI Message Layer",
"FC Link", "Context Manager", "Invalid Field Offset", "State Change Info"
};
u8 subcl = (log_info >> 24) & 0x7;
char *desc = "unknown";
printk(MYIOC_s_INFO_FMT "LogInfo(0x%08x): SubCl={%s}\n",
ioc->name, log_info, subcl_str[subcl]);
switch (log_info & 0xFF000000) {
case MPI_IOCLOGINFO_FC_INIT_BASE:
desc = "FCP Initiator";
break;
case MPI_IOCLOGINFO_FC_TARGET_BASE:
desc = "FCP Target";
break;
case MPI_IOCLOGINFO_FC_LAN_BASE:
desc = "LAN";
break;
case MPI_IOCLOGINFO_FC_MSG_BASE:
desc = "MPI Message Layer";
break;
case MPI_IOCLOGINFO_FC_LINK_BASE:
desc = "FC Link";
break;
case MPI_IOCLOGINFO_FC_CTX_BASE:
desc = "Context Manager";
break;
case MPI_IOCLOGINFO_FC_INVALID_FIELD_BYTE_OFFSET:
desc = "Invalid Field Offset";
break;
case MPI_IOCLOGINFO_FC_STATE_CHANGE:
desc = "State Change Info";
break;
}
printk(MYIOC_s_INFO_FMT "LogInfo(0x%08x): SubClass={%s}, Value=(0x%06x)\n",
ioc->name, log_info, desc, (log_info & 0xFFFFFF));
}
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/


@ -6,7 +6,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
@ -75,8 +75,8 @@
#define COPYRIGHT "Copyright (c) 1999-2007 " MODULEAUTHOR
#endif
#define MPT_LINUX_VERSION_COMMON "3.04.04"
#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.04.04"
#define MPT_LINUX_VERSION_COMMON "3.04.05"
#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.04.05"
#define WHAT_MAGIC_STRING "@" "(" "#" ")"
#define show_mptmod_ver(s,ver) \


@ -5,7 +5,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/


@ -6,7 +6,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/


@ -4,7 +4,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
@ -43,7 +43,6 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
#include "linux_compat.h" /* linux-2.6 tweaks */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>


@ -5,7 +5,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 2000-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/


@ -5,7 +5,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 2000-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/


@ -4,7 +4,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
* Copyright (c) 2005-2007 Dell
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/


@ -4,7 +4,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
@ -44,7 +44,6 @@
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
#include "linux_compat.h" /* linux-2.6 tweaks */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
@ -260,30 +259,13 @@ mptscsih_AddSGE(MPT_ADAPTER *ioc, struct scsi_cmnd *SCpnt,
/* Map the data portion, if any.
* sges_left = 0 if no data transfer.
*/
if ( (sges_left = SCpnt->use_sg) ) {
sges_left = pci_map_sg(ioc->pcidev,
(struct scatterlist *) SCpnt->request_buffer,
SCpnt->use_sg,
SCpnt->sc_data_direction);
if (sges_left == 0)
return FAILED;
} else if (SCpnt->request_bufflen) {
SCpnt->SCp.dma_handle = pci_map_single(ioc->pcidev,
SCpnt->request_buffer,
SCpnt->request_bufflen,
SCpnt->sc_data_direction);
dsgprintk((MYIOC_s_INFO_FMT "SG: non-SG for %p, len=%d\n",
ioc->name, SCpnt, SCpnt->request_bufflen));
mptscsih_add_sge((char *) &pReq->SGL,
0xD1000000|MPT_SGE_FLAGS_ADDRESSING|sgdir|SCpnt->request_bufflen,
SCpnt->SCp.dma_handle);
return SUCCESS;
}
sges_left = scsi_dma_map(SCpnt);
if (sges_left < 0)
return FAILED;
/* Handle the SG case.
*/
sg = (struct scatterlist *) SCpnt->request_buffer;
sg = scsi_sglist(SCpnt);
sg_done = 0;
sgeOffset = sizeof(SCSIIORequest_t) - sizeof(SGE_IO_UNION);
chainSge = NULL;
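
The hunk above is the conversion pattern repeated throughout this series: the open-coded pci_map_sg()/pci_map_single() branches collapse into the midlayer's data buffer accessors. A minimal sketch of the resulting idiom, assuming a hypothetical example_add_sge() helper in place of the driver's real SGE builder:

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>

static int example_build_sglist(struct scsi_cmnd *SCpnt)
{
	struct scatterlist *sg;
	int i, sges_left;

	/*
	 * scsi_dma_map(): <0 on mapping failure, 0 when the command
	 * moves no data, else the number of mapped s/g entries.
	 */
	sges_left = scsi_dma_map(SCpnt);
	if (sges_left < 0)
		return FAILED;

	/* one loop replaces the old use_sg/request_buffer special cases */
	scsi_for_each_sg(SCpnt, sg, sges_left, i)
		example_add_sge(sg_dma_address(sg), sg_dma_len(sg));

	return SUCCESS;
}

The driver-side single-buffer case can disappear because these conversions rely on the midlayer presenting all command data as a scatterlist.
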
@ -465,7 +447,12 @@ mptscsih_issue_sep_command(MPT_ADAPTER *ioc, VirtTarget *vtarget,
MPT_FRAME_HDR *mf;
SEPRequest_t *SEPMsg;
if (ioc->bus_type == FC)
if (ioc->bus_type != SAS)
return;
/* Not supported for hidden raid components
*/
if (vtarget->tflags & MPT_TARGET_FLAGS_RAID_COMPONENT)
return;
if ((mf = mpt_get_msg_frame(ioc->InternalCtx, ioc)) == NULL) {
@ -662,7 +649,7 @@ mptscsih_io_done(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *mr)
scsi_state = pScsiReply->SCSIState;
scsi_status = pScsiReply->SCSIStatus;
xfer_cnt = le32_to_cpu(pScsiReply->TransferCount);
sc->resid = sc->request_bufflen - xfer_cnt;
scsi_set_resid(sc, scsi_bufflen(sc) - xfer_cnt);
log_info = le32_to_cpu(pScsiReply->IOCLogInfo);
/*
@ -767,7 +754,7 @@ mptscsih_io_done(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *mr)
break;
case MPI_IOCSTATUS_SCSI_RESIDUAL_MISMATCH: /* 0x0049 */
sc->resid = sc->request_bufflen - xfer_cnt;
scsi_set_resid(sc, scsi_bufflen(sc) - xfer_cnt);
if((xfer_cnt==0)||(sc->underflow > xfer_cnt))
sc->result=DID_SOFT_ERROR << 16;
else /* Sufficient data transfer occurred */
@ -816,7 +803,7 @@ mptscsih_io_done(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *mr)
break;
case MPI_IOCSTATUS_SCSI_DATA_OVERRUN: /* 0x0044 */
sc->resid=0;
scsi_set_resid(sc, 0);
case MPI_IOCSTATUS_SCSI_RECOVERED_ERROR: /* 0x0040 */
case MPI_IOCSTATUS_SUCCESS: /* 0x0000 */
sc->result = (DID_OK << 16) | scsi_status;
@ -899,23 +886,18 @@ mptscsih_io_done(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *mr)
scsi_state, scsi_status, log_info));
dreplyprintk(("%s: [%d:%d:%d:%d] resid=%d "
"bufflen=%d xfer_cnt=%d\n", __FUNCTION__,
sc->device->host->host_no, sc->device->channel, sc->device->id,
sc->device->lun, sc->resid, sc->request_bufflen,
xfer_cnt));
"bufflen=%d xfer_cnt=%d\n", __FUNCTION__,
sc->device->host->host_no,
sc->device->channel, sc->device->id,
sc->device->lun, scsi_get_resid(sc),
scsi_bufflen(sc), xfer_cnt));
}
#endif
} /* end of address reply case */
/* Unmap the DMA buffers, if any. */
if (sc->use_sg) {
pci_unmap_sg(ioc->pcidev, (struct scatterlist *) sc->request_buffer,
sc->use_sg, sc->sc_data_direction);
} else if (sc->request_bufflen) {
pci_unmap_single(ioc->pcidev, sc->SCp.dma_handle,
sc->request_bufflen, sc->sc_data_direction);
}
scsi_dma_unmap(sc);
sc->scsi_done(sc); /* Issue the command callback */
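
The completion-path hunks follow the same scheme: direct sc->resid and use_sg manipulation becomes scsi_set_resid()/scsi_bufflen() plus a single scsi_dma_unmap(). A sketch of the idiom, assuming xfer_cnt carries the byte count the adapter reported:

#include <linux/types.h>
#include <scsi/scsi_cmnd.h>

static void example_io_done(struct scsi_cmnd *sc, u32 xfer_cnt, int result)
{
	/* residual = bytes the midlayer requested minus bytes actually moved */
	scsi_set_resid(sc, scsi_bufflen(sc) - xfer_cnt);

	scsi_dma_unmap(sc);	/* undoes scsi_dma_map(); harmless if nothing was mapped */
	sc->result = result;
	sc->scsi_done(sc);	/* hand the command back to the midlayer */
}
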
@ -970,17 +952,8 @@ mptscsih_flush_running_cmds(MPT_SCSI_HOST *hd)
/* Set status, free OS resources (SG DMA buffers)
* Do OS callback
*/
if (SCpnt->use_sg) {
pci_unmap_sg(ioc->pcidev,
(struct scatterlist *) SCpnt->request_buffer,
SCpnt->use_sg,
SCpnt->sc_data_direction);
} else if (SCpnt->request_bufflen) {
pci_unmap_single(ioc->pcidev,
SCpnt->SCp.dma_handle,
SCpnt->request_bufflen,
SCpnt->sc_data_direction);
}
scsi_dma_unmap(SCpnt);
SCpnt->result = DID_RESET << 16;
SCpnt->host_scribble = NULL;
@ -1023,14 +996,19 @@ mptscsih_search_running_cmds(MPT_SCSI_HOST *hd, VirtDevice *vdevice)
mf = (SCSIIORequest_t *)MPT_INDEX_2_MFPTR(hd->ioc, ii);
if (mf == NULL)
continue;
/* If the device is a hidden raid component, then its
* expected that the mf->function will be RAID_SCSI_IO
*/
if (vdevice->vtarget->tflags &
MPT_TARGET_FLAGS_RAID_COMPONENT && mf->Function !=
MPI_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)
continue;
int_to_scsilun(vdevice->lun, &lun);
if ((mf->Bus != vdevice->vtarget->channel) ||
(mf->TargetID != vdevice->vtarget->id) ||
memcmp(lun.scsi_lun, mf->LUN, 8))
continue;
dsprintk(( "search_running: found (sc=%p, mf = %p) "
"channel %d id %d, lun %d \n", hd->ScsiLookup[ii],
mf, mf->Bus, mf->TargetID, vdevice->lun));
/* Cleanup
*/
@ -1039,19 +1017,12 @@ mptscsih_search_running_cmds(MPT_SCSI_HOST *hd, VirtDevice *vdevice)
mpt_free_msg_frame(hd->ioc, (MPT_FRAME_HDR *)mf);
if ((unsigned char *)mf != sc->host_scribble)
continue;
if (sc->use_sg) {
pci_unmap_sg(hd->ioc->pcidev,
(struct scatterlist *) sc->request_buffer,
sc->use_sg,
sc->sc_data_direction);
} else if (sc->request_bufflen) {
pci_unmap_single(hd->ioc->pcidev,
sc->SCp.dma_handle,
sc->request_bufflen,
sc->sc_data_direction);
}
scsi_dma_unmap(sc);
sc->host_scribble = NULL;
sc->result = DID_NO_CONNECT << 16;
dsprintk(( "search_running: found (sc=%p, mf = %p) "
"channel %d id %d, lun %d \n", sc, mf,
vdevice->vtarget->channel, vdevice->vtarget->id, vdevice->lun));
sc->scsi_done(sc);
}
}
@ -1380,10 +1351,10 @@ mptscsih_qcmd(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
* will be no data transfer! GRRRRR...
*/
if (SCpnt->sc_data_direction == DMA_FROM_DEVICE) {
datalen = SCpnt->request_bufflen;
datalen = scsi_bufflen(SCpnt);
scsidir = MPI_SCSIIO_CONTROL_READ; /* DATA IN (host<--ioc<--dev) */
} else if (SCpnt->sc_data_direction == DMA_TO_DEVICE) {
datalen = SCpnt->request_bufflen;
datalen = scsi_bufflen(SCpnt);
scsidir = MPI_SCSIIO_CONTROL_WRITE; /* DATA OUT (host-->ioc-->dev) */
} else {
datalen = 0;
@ -1768,20 +1739,45 @@ mptscsih_abort(struct scsi_cmnd * SCpnt)
u32 ctx2abort;
int scpnt_idx;
int retval;
VirtDevice *vdev;
VirtDevice *vdevice;
ulong sn = SCpnt->serial_number;
MPT_ADAPTER *ioc;
/* If we can't locate our host adapter structure, return FAILED status.
*/
if ((hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata) == NULL) {
SCpnt->result = DID_RESET << 16;
SCpnt->scsi_done(SCpnt);
dfailprintk((KERN_INFO MYNAM ": mptscsih_abort: "
"Can't locate host! (sc=%p)\n",
SCpnt));
dfailprintk((KERN_INFO MYNAM ": mptscsih_abort: Can't locate "
"host! (sc=%p)\n", SCpnt));
return FAILED;
}
ioc = hd->ioc;
printk(MYIOC_s_INFO_FMT "attempting task abort! (sc=%p)\n",
ioc->name, SCpnt);
scsi_print_command(SCpnt);
vdevice = SCpnt->device->hostdata;
if (!vdevice || !vdevice->vtarget) {
dtmprintk((MYIOC_s_DEBUG_FMT "task abort: device has been "
"deleted (sc=%p)\n", ioc->name, SCpnt));
SCpnt->result = DID_NO_CONNECT << 16;
SCpnt->scsi_done(SCpnt);
retval = 0;
goto out;
}
/* Task aborts are not supported for hidden raid components.
*/
if (vdevice->vtarget->tflags & MPT_TARGET_FLAGS_RAID_COMPONENT) {
dtmprintk((MYIOC_s_DEBUG_FMT "task abort: hidden raid "
"component (sc=%p)\n", ioc->name, SCpnt));
SCpnt->result = DID_RESET << 16;
retval = FAILED;
goto out;
}
/* Find this command
*/
if ((scpnt_idx = SCPNT_TO_LOOKUP_IDX(SCpnt)) < 0) {
@ -1790,21 +1786,20 @@ mptscsih_abort(struct scsi_cmnd * SCpnt)
*/
SCpnt->result = DID_RESET << 16;
dtmprintk((KERN_INFO MYNAM ": %s: mptscsih_abort: "
"Command not in the active list! (sc=%p)\n",
hd->ioc->name, SCpnt));
return SUCCESS;
"Command not in the active list! (sc=%p)\n", ioc->name,
SCpnt));
retval = 0;
goto out;
}
if (hd->resetPending)
return FAILED;
if (hd->resetPending) {
retval = FAILED;
goto out;
}
if (hd->timeouts < -1)
hd->timeouts++;
printk(KERN_WARNING MYNAM ": %s: attempting task abort! (sc=%p)\n",
hd->ioc->name, SCpnt);
scsi_print_command(SCpnt);
/* Most important! Set TaskMsgContext to SCpnt's MsgContext!
* (the IO to be ABORT'd)
*
@ -1817,18 +1812,17 @@ mptscsih_abort(struct scsi_cmnd * SCpnt)
hd->abortSCpnt = SCpnt;
vdev = SCpnt->device->hostdata;
retval = mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_ABORT_TASK,
vdev->vtarget->channel, vdev->vtarget->id, vdev->lun,
ctx2abort, mptscsih_get_tm_timeout(hd->ioc));
vdevice->vtarget->channel, vdevice->vtarget->id, vdevice->lun,
ctx2abort, mptscsih_get_tm_timeout(ioc));
if (SCPNT_TO_LOOKUP_IDX(SCpnt) == scpnt_idx &&
SCpnt->serial_number == sn)
retval = FAILED;
printk (KERN_WARNING MYNAM ": %s: task abort: %s (sc=%p)\n",
hd->ioc->name,
((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
out:
printk(MYIOC_s_INFO_FMT "task abort: %s (sc=%p)\n",
ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
if (retval == 0)
return SUCCESS;
@ -1850,32 +1844,47 @@ mptscsih_dev_reset(struct scsi_cmnd * SCpnt)
{
MPT_SCSI_HOST *hd;
int retval;
VirtDevice *vdev;
VirtDevice *vdevice;
MPT_ADAPTER *ioc;
/* If we can't locate our host adapter structure, return FAILED status.
*/
if ((hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata) == NULL){
dtmprintk((KERN_INFO MYNAM ": mptscsih_dev_reset: "
"Can't locate host! (sc=%p)\n",
SCpnt));
dtmprintk((KERN_INFO MYNAM ": mptscsih_dev_reset: Can't "
"locate host! (sc=%p)\n", SCpnt));
return FAILED;
}
if (hd->resetPending)
return FAILED;
printk(KERN_WARNING MYNAM ": %s: attempting target reset! (sc=%p)\n",
hd->ioc->name, SCpnt);
ioc = hd->ioc;
printk(MYIOC_s_INFO_FMT "attempting target reset! (sc=%p)\n",
ioc->name, SCpnt);
scsi_print_command(SCpnt);
vdev = SCpnt->device->hostdata;
retval = mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
vdev->vtarget->channel, vdev->vtarget->id,
0, 0, mptscsih_get_tm_timeout(hd->ioc));
if (hd->resetPending) {
retval = FAILED;
goto out;
}
printk (KERN_WARNING MYNAM ": %s: target reset: %s (sc=%p)\n",
hd->ioc->name,
((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
vdevice = SCpnt->device->hostdata;
if (!vdevice || !vdevice->vtarget) {
retval = 0;
goto out;
}
/* Target reset to hidden raid component is not supported
*/
if (vdevice->vtarget->tflags & MPT_TARGET_FLAGS_RAID_COMPONENT) {
retval = FAILED;
goto out;
}
retval = mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
vdevice->vtarget->channel, vdevice->vtarget->id, 0, 0,
mptscsih_get_tm_timeout(ioc));
out:
printk (MYIOC_s_INFO_FMT "target reset: %s (sc=%p)\n",
ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
if (retval == 0)
return SUCCESS;
@ -1899,18 +1908,19 @@ mptscsih_bus_reset(struct scsi_cmnd * SCpnt)
MPT_SCSI_HOST *hd;
int retval;
VirtDevice *vdev;
MPT_ADAPTER *ioc;
/* If we can't locate our host adapter structure, return FAILED status.
*/
if ((hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata) == NULL){
dtmprintk((KERN_INFO MYNAM ": mptscsih_bus_reset: "
"Can't locate host! (sc=%p)\n",
SCpnt ) );
dtmprintk((KERN_INFO MYNAM ": mptscsih_bus_reset: Can't "
"locate host! (sc=%p)\n", SCpnt ));
return FAILED;
}
printk(KERN_WARNING MYNAM ": %s: attempting bus reset! (sc=%p)\n",
hd->ioc->name, SCpnt);
ioc = hd->ioc;
printk(MYIOC_s_INFO_FMT "attempting bus reset! (sc=%p)\n",
ioc->name, SCpnt);
scsi_print_command(SCpnt);
if (hd->timeouts < -1)
@ -1918,11 +1928,10 @@ mptscsih_bus_reset(struct scsi_cmnd * SCpnt)
vdev = SCpnt->device->hostdata;
retval = mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_RESET_BUS,
vdev->vtarget->channel, 0, 0, 0, mptscsih_get_tm_timeout(hd->ioc));
vdev->vtarget->channel, 0, 0, 0, mptscsih_get_tm_timeout(ioc));
printk (KERN_WARNING MYNAM ": %s: bus reset: %s (sc=%p)\n",
hd->ioc->name,
((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
printk(MYIOC_s_INFO_FMT "bus reset: %s (sc=%p)\n",
ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
if (retval == 0)
return SUCCESS;
@ -1943,37 +1952,38 @@ int
mptscsih_host_reset(struct scsi_cmnd *SCpnt)
{
MPT_SCSI_HOST * hd;
int status = SUCCESS;
int retval;
MPT_ADAPTER *ioc;
/* If we can't locate the host to reset, then we failed. */
if ((hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata) == NULL){
dtmprintk( ( KERN_INFO MYNAM ": mptscsih_host_reset: "
"Can't locate host! (sc=%p)\n",
SCpnt ) );
dtmprintk( ( KERN_INFO MYNAM ": mptscsih_host_reset: Can't "
"locate host! (sc=%p)\n", SCpnt));
return FAILED;
}
printk(KERN_WARNING MYNAM ": %s: Attempting host reset! (sc=%p)\n",
hd->ioc->name, SCpnt);
ioc = hd->ioc;
printk(MYIOC_s_INFO_FMT "attempting host reset! (sc=%p)\n",
ioc->name, SCpnt);
/* If our attempts to reset the host failed, then return a failed
* status. The host will be taken off line by the SCSI mid-layer.
*/
if (mpt_HardResetHandler(hd->ioc, CAN_SLEEP) < 0){
status = FAILED;
if (mpt_HardResetHandler(hd->ioc, CAN_SLEEP) < 0) {
retval = FAILED;
} else {
/* Make sure TM pending is cleared and TM state is set to
* NONE.
*/
retval = 0;
hd->tmPending = 0;
hd->tmState = TM_STATE_NONE;
}
dtmprintk( ( KERN_INFO MYNAM ": mptscsih_host_reset: "
"Status = %s\n",
(status == SUCCESS) ? "SUCCESS" : "FAILED" ) );
printk(MYIOC_s_INFO_FMT "host reset: %s (sc=%p)\n",
ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
return status;
return retval;
}
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
@ -3150,6 +3160,16 @@ mptscsih_synchronize_cache(MPT_SCSI_HOST *hd, VirtDevice *vdevice)
{
INTERNAL_CMD iocmd;
/* Ignore hidden raid components, this is handled when the command
* is sent to the volume
*/
if (vdevice->vtarget->tflags & MPT_TARGET_FLAGS_RAID_COMPONENT)
return;
if (vdevice->vtarget->type != TYPE_DISK || vdevice->vtarget->deleted ||
!vdevice->configured_lun)
return;
/* Following parameters will not change
* in this routine.
*/
@ -3164,9 +3184,7 @@ mptscsih_synchronize_cache(MPT_SCSI_HOST *hd, VirtDevice *vdevice)
iocmd.id = vdevice->vtarget->id;
iocmd.lun = vdevice->lun;
if ((vdevice->vtarget->type == TYPE_DISK) &&
(vdevice->configured_lun))
mptscsih_do_cmd(hd, &iocmd);
mptscsih_do_cmd(hd, &iocmd);
}
EXPORT_SYMBOL(mptscsih_remove);


@ -6,7 +6,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/


@ -4,7 +4,7 @@
* running LSI Logic Fusion MPT (Message Passing Technology) firmware.
*
* Copyright (c) 1999-2007 LSI Logic Corporation
* (mailto:mpt_linux_developer@lsi.com)
* (mailto:DL-MPTFusionLinux@lsi.com)
*
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
@ -44,7 +44,6 @@
*/
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
#include "linux_compat.h" /* linux-2.6 tweaks */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>


@ -485,7 +485,7 @@ int i2o_parm_field_get(struct i2o_device *i2o_dev, int group, int field,
u8 *resblk; /* 8 bytes for header */
int rc;
resblk = kmalloc(buflen + 8, GFP_KERNEL | GFP_ATOMIC);
resblk = kmalloc(buflen + 8, GFP_KERNEL);
if (!resblk)
return -ENOMEM;
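
An aside on the allocation fix above: GFP_KERNEL already permits sleeping, so OR-ing in GFP_ATOMIC does not make the call atomic; it only grants needless access to the emergency reserves. The two flags express mutually exclusive calling contexts:

	/* process context, sleeping allowed: what this slow path wants */
	resblk = kmalloc(buflen + 8, GFP_KERNEL);

	/* interrupt/atomic context, must not sleep */
	resblk = kmalloc(buflen + 8, GFP_ATOMIC);
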


@ -377,12 +377,8 @@ static int i2o_scsi_reply(struct i2o_controller *c, u32 m,
osm_err("SCSI error %08x\n", error);
dev = &c->pdev->dev;
if (cmd->use_sg)
dma_unmap_sg(dev, cmd->request_buffer, cmd->use_sg,
cmd->sc_data_direction);
else if (cmd->SCp.dma_handle)
dma_unmap_single(dev, cmd->SCp.dma_handle, cmd->request_bufflen,
cmd->sc_data_direction);
scsi_dma_unmap(cmd);
cmd->scsi_done(cmd);
@ -664,21 +660,15 @@ static int i2o_scsi_queuecommand(struct scsi_cmnd *SCpnt,
if (sgl_offset != SGL_OFFSET_0) {
/* write size of data addressed by SGL */
*mptr++ = cpu_to_le32(SCpnt->request_bufflen);
*mptr++ = cpu_to_le32(scsi_bufflen(SCpnt));
/* Now fill in the SGList and command */
if (SCpnt->use_sg) {
if (!i2o_dma_map_sg(c, SCpnt->request_buffer,
SCpnt->use_sg,
if (scsi_sg_count(SCpnt)) {
if (!i2o_dma_map_sg(c, scsi_sglist(SCpnt),
scsi_sg_count(SCpnt),
SCpnt->sc_data_direction, &mptr))
goto nomem;
} else {
SCpnt->SCp.dma_handle =
i2o_dma_map_single(c, SCpnt->request_buffer,
SCpnt->request_bufflen,
SCpnt->sc_data_direction, &mptr);
if (dma_mapping_error(SCpnt->SCp.dma_handle))
goto nomem;
}
}


@ -815,9 +815,7 @@ zfcp_get_adapter_by_busid(char *bus_id)
struct zfcp_unit *
zfcp_unit_enqueue(struct zfcp_port *port, fcp_lun_t fcp_lun)
{
struct zfcp_unit *unit, *tmp_unit;
unsigned int scsi_lun;
int found;
struct zfcp_unit *unit;
/*
* check that there is no unit with this FCP_LUN already in list
@ -863,22 +861,10 @@ zfcp_unit_enqueue(struct zfcp_port *port, fcp_lun_t fcp_lun)
}
zfcp_unit_get(unit);
unit->scsi_lun = scsilun_to_int((struct scsi_lun *)&unit->fcp_lun);
scsi_lun = 0;
found = 0;
write_lock_irq(&zfcp_data.config_lock);
list_for_each_entry(tmp_unit, &port->unit_list_head, list) {
if (tmp_unit->scsi_lun != scsi_lun) {
found = 1;
break;
}
scsi_lun++;
}
unit->scsi_lun = scsi_lun;
if (found)
list_add_tail(&unit->list, &tmp_unit->list);
else
list_add_tail(&unit->list, &port->unit_list_head);
list_add_tail(&unit->list, &port->unit_list_head);
atomic_clear_mask(ZFCP_STATUS_COMMON_REMOVE, &unit->status);
atomic_set_mask(ZFCP_STATUS_COMMON_RUNNING, &unit->status);
write_unlock_irq(&zfcp_data.config_lock);


@ -1986,6 +1986,10 @@ zfcp_erp_adapter_strategy_generic(struct zfcp_erp_action *erp_action, int close)
failed_openfcp:
zfcp_close_fsf(erp_action->adapter);
failed_qdio:
atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED |
ZFCP_STATUS_ADAPTER_XPORT_OK,
&erp_action->adapter->status);
out:
return retval;
}
@ -2167,6 +2171,9 @@ zfcp_erp_adapter_strategy_open_fsf_xconfig(struct zfcp_erp_action *erp_action)
sleep *= 2;
}
atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
&adapter->status);
if (!atomic_test_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
&adapter->status)) {
ZFCP_LOG_INFO("error: exchange of configuration data for "


@ -1306,22 +1306,26 @@ static irqreturn_t twa_interrupt(int irq, void *dev_instance)
wake_up(&tw_dev->ioctl_wqueue);
}
} else {
struct scsi_cmnd *cmd;
cmd = tw_dev->srb[request_id];
twa_scsiop_execute_scsi_complete(tw_dev, request_id);
/* If no error command was a success */
if (error == 0) {
tw_dev->srb[request_id]->result = (DID_OK << 16);
cmd->result = (DID_OK << 16);
}
/* If error, command failed */
if (error == 1) {
/* Ask for a host reset */
tw_dev->srb[request_id]->result = (DID_OK << 16) | (CHECK_CONDITION << 1);
cmd->result = (DID_OK << 16) | (CHECK_CONDITION << 1);
}
/* Report residual bytes for single sgl */
if ((tw_dev->srb[request_id]->use_sg <= 1) && (full_command_packet->command.newcommand.status == 0)) {
if (full_command_packet->command.newcommand.sg_list[0].length < tw_dev->srb[request_id]->request_bufflen)
tw_dev->srb[request_id]->resid = tw_dev->srb[request_id]->request_bufflen - full_command_packet->command.newcommand.sg_list[0].length;
if ((scsi_sg_count(cmd) <= 1) && (full_command_packet->command.newcommand.status == 0)) {
if (full_command_packet->command.newcommand.sg_list[0].length < scsi_bufflen(tw_dev->srb[request_id]))
scsi_set_resid(cmd, scsi_bufflen(cmd) - full_command_packet->command.newcommand.sg_list[0].length);
}
/* Now complete the io */
@ -1384,53 +1388,21 @@ static int twa_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id)
{
int use_sg;
struct scsi_cmnd *cmd = tw_dev->srb[request_id];
struct pci_dev *pdev = tw_dev->tw_pci_dev;
int retval = 0;
if (cmd->use_sg == 0)
goto out;
use_sg = pci_map_sg(pdev, cmd->request_buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
if (use_sg == 0) {
use_sg = scsi_dma_map(cmd);
if (!use_sg)
return 0;
else if (use_sg < 0) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to map scatter gather list");
goto out;
return 0;
}
cmd->SCp.phase = TW_PHASE_SGLIST;
cmd->SCp.have_data_in = use_sg;
retval = use_sg;
out:
return retval;
return use_sg;
} /* End twa_map_scsi_sg_data() */
/* This function will perform a pci-dma map for a single buffer */
static dma_addr_t twa_map_scsi_single_data(TW_Device_Extension *tw_dev, int request_id)
{
dma_addr_t mapping;
struct scsi_cmnd *cmd = tw_dev->srb[request_id];
struct pci_dev *pdev = tw_dev->tw_pci_dev;
dma_addr_t retval = 0;
if (cmd->request_bufflen == 0) {
retval = 0;
goto out;
}
mapping = pci_map_single(pdev, cmd->request_buffer, cmd->request_bufflen, DMA_BIDIRECTIONAL);
if (mapping == 0) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1d, "Failed to map page");
goto out;
}
cmd->SCp.phase = TW_PHASE_SINGLE;
cmd->SCp.have_data_in = mapping;
retval = mapping;
out:
return retval;
} /* End twa_map_scsi_single_data() */
/* This function will poll for a response interrupt of a request */
static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds)
{
@ -1815,15 +1787,13 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
u32 num_sectors = 0x0;
int i, sg_count;
struct scsi_cmnd *srb = NULL;
struct scatterlist *sglist = NULL;
dma_addr_t buffaddr = 0x0;
struct scatterlist *sglist = NULL, *sg;
int retval = 1;
if (tw_dev->srb[request_id]) {
if (tw_dev->srb[request_id]->request_buffer) {
sglist = (struct scatterlist *)tw_dev->srb[request_id]->request_buffer;
}
srb = tw_dev->srb[request_id];
if (scsi_sglist(srb))
sglist = scsi_sglist(srb);
}
/* Initialize command packet */
@ -1856,32 +1826,12 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
if (!sglistarg) {
/* Map sglist from scsi layer to cmd packet */
if (tw_dev->srb[request_id]->use_sg == 0) {
if (tw_dev->srb[request_id]->request_bufflen < TW_MIN_SGL_LENGTH) {
command_packet->sg_list[0].address = TW_CPU_TO_SGL(tw_dev->generic_buffer_phys[request_id]);
command_packet->sg_list[0].length = cpu_to_le32(TW_MIN_SGL_LENGTH);
if (tw_dev->srb[request_id]->sc_data_direction == DMA_TO_DEVICE || tw_dev->srb[request_id]->sc_data_direction == DMA_BIDIRECTIONAL)
memcpy(tw_dev->generic_buffer_virt[request_id], tw_dev->srb[request_id]->request_buffer, tw_dev->srb[request_id]->request_bufflen);
} else {
buffaddr = twa_map_scsi_single_data(tw_dev, request_id);
if (buffaddr == 0)
goto out;
command_packet->sg_list[0].address = TW_CPU_TO_SGL(buffaddr);
command_packet->sg_list[0].length = cpu_to_le32(tw_dev->srb[request_id]->request_bufflen);
}
command_packet->sgl_entries__lunh = cpu_to_le16(TW_REQ_LUN_IN((srb->device->lun >> 4), 1));
if (command_packet->sg_list[0].address & TW_CPU_TO_SGL(TW_ALIGNMENT_9000_SGL)) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2d, "Found unaligned address during execute scsi");
goto out;
}
}
if (tw_dev->srb[request_id]->use_sg > 0) {
if ((tw_dev->srb[request_id]->use_sg == 1) && (tw_dev->srb[request_id]->request_bufflen < TW_MIN_SGL_LENGTH)) {
if (tw_dev->srb[request_id]->sc_data_direction == DMA_TO_DEVICE || tw_dev->srb[request_id]->sc_data_direction == DMA_BIDIRECTIONAL) {
struct scatterlist *sg = (struct scatterlist *)tw_dev->srb[request_id]->request_buffer;
if (scsi_sg_count(srb)) {
if ((scsi_sg_count(srb) == 1) &&
(scsi_bufflen(srb) < TW_MIN_SGL_LENGTH)) {
if (srb->sc_data_direction == DMA_TO_DEVICE || srb->sc_data_direction == DMA_BIDIRECTIONAL) {
struct scatterlist *sg = scsi_sglist(srb);
char *buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
memcpy(tw_dev->generic_buffer_virt[request_id], buf, sg->length);
kunmap_atomic(buf - sg->offset, KM_IRQ0);
@ -1893,16 +1843,16 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
if (sg_count == 0)
goto out;
for (i = 0; i < sg_count; i++) {
command_packet->sg_list[i].address = TW_CPU_TO_SGL(sg_dma_address(&sglist[i]));
command_packet->sg_list[i].length = cpu_to_le32(sg_dma_len(&sglist[i]));
scsi_for_each_sg(srb, sg, sg_count, i) {
command_packet->sg_list[i].address = TW_CPU_TO_SGL(sg_dma_address(sg));
command_packet->sg_list[i].length = cpu_to_le32(sg_dma_len(sg));
if (command_packet->sg_list[i].address & TW_CPU_TO_SGL(TW_ALIGNMENT_9000_SGL)) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2e, "Found unaligned sgl address during execute scsi");
goto out;
}
}
}
command_packet->sgl_entries__lunh = cpu_to_le16(TW_REQ_LUN_IN((srb->device->lun >> 4), tw_dev->srb[request_id]->use_sg));
command_packet->sgl_entries__lunh = cpu_to_le16(TW_REQ_LUN_IN((srb->device->lun >> 4), scsi_sg_count(tw_dev->srb[request_id])));
}
} else {
/* Internal cdb post */
@ -1932,7 +1882,7 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
/* Update SG statistics */
if (srb) {
tw_dev->sgl_entries = tw_dev->srb[request_id]->use_sg;
tw_dev->sgl_entries = scsi_sg_count(tw_dev->srb[request_id]);
if (tw_dev->sgl_entries > tw_dev->max_sgl_entries)
tw_dev->max_sgl_entries = tw_dev->sgl_entries;
}
@ -1951,16 +1901,13 @@ out:
/* This function completes an execute scsi operation */
static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id)
{
if (tw_dev->srb[request_id]->request_bufflen < TW_MIN_SGL_LENGTH &&
(tw_dev->srb[request_id]->sc_data_direction == DMA_FROM_DEVICE ||
tw_dev->srb[request_id]->sc_data_direction == DMA_BIDIRECTIONAL)) {
if (tw_dev->srb[request_id]->use_sg == 0) {
memcpy(tw_dev->srb[request_id]->request_buffer,
tw_dev->generic_buffer_virt[request_id],
tw_dev->srb[request_id]->request_bufflen);
}
if (tw_dev->srb[request_id]->use_sg == 1) {
struct scatterlist *sg = (struct scatterlist *)tw_dev->srb[request_id]->request_buffer;
struct scsi_cmnd *cmd = tw_dev->srb[request_id];
if (scsi_bufflen(cmd) < TW_MIN_SGL_LENGTH &&
(cmd->sc_data_direction == DMA_FROM_DEVICE ||
cmd->sc_data_direction == DMA_BIDIRECTIONAL)) {
if (scsi_sg_count(cmd) == 1) {
struct scatterlist *sg = scsi_sglist(tw_dev->srb[request_id]);
char *buf;
unsigned long flags = 0;
local_irq_save(flags);
@ -2017,16 +1964,8 @@ static char *twa_string_lookup(twa_message_type *table, unsigned int code)
static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id)
{
struct scsi_cmnd *cmd = tw_dev->srb[request_id];
struct pci_dev *pdev = tw_dev->tw_pci_dev;
switch(cmd->SCp.phase) {
case TW_PHASE_SINGLE:
pci_unmap_single(pdev, cmd->SCp.have_data_in, cmd->request_bufflen, DMA_BIDIRECTIONAL);
break;
case TW_PHASE_SGLIST:
pci_unmap_sg(pdev, cmd->request_buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
break;
}
scsi_dma_unmap(cmd);
} /* End twa_unmap_scsi_data() */
/* scsi_host_template initializer */


@ -1273,57 +1273,24 @@ static int tw_map_scsi_sg_data(struct pci_dev *pdev, struct scsi_cmnd *cmd)
int use_sg;
dprintk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data()\n");
if (cmd->use_sg == 0)
return 0;
use_sg = pci_map_sg(pdev, cmd->request_buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
if (use_sg == 0) {
use_sg = scsi_dma_map(cmd);
if (use_sg < 0) {
printk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data(): pci_map_sg() failed.\n");
return 0;
}
cmd->SCp.phase = TW_PHASE_SGLIST;
cmd->SCp.have_data_in = use_sg;
return use_sg;
} /* End tw_map_scsi_sg_data() */
static u32 tw_map_scsi_single_data(struct pci_dev *pdev, struct scsi_cmnd *cmd)
{
dma_addr_t mapping;
dprintk(KERN_WARNING "3w-xxxx: tw_map_scsi_single_data()\n");
if (cmd->request_bufflen == 0)
return 0;
mapping = pci_map_page(pdev, virt_to_page(cmd->request_buffer), offset_in_page(cmd->request_buffer), cmd->request_bufflen, DMA_BIDIRECTIONAL);
if (mapping == 0) {
printk(KERN_WARNING "3w-xxxx: tw_map_scsi_single_data(): pci_map_page() failed.\n");
return 0;
}
cmd->SCp.phase = TW_PHASE_SINGLE;
cmd->SCp.have_data_in = mapping;
return mapping;
} /* End tw_map_scsi_single_data() */
static void tw_unmap_scsi_data(struct pci_dev *pdev, struct scsi_cmnd *cmd)
{
dprintk(KERN_WARNING "3w-xxxx: tw_unmap_scsi_data()\n");
switch(cmd->SCp.phase) {
case TW_PHASE_SINGLE:
pci_unmap_page(pdev, cmd->SCp.have_data_in, cmd->request_bufflen, DMA_BIDIRECTIONAL);
break;
case TW_PHASE_SGLIST:
pci_unmap_sg(pdev, cmd->request_buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
break;
}
scsi_dma_unmap(cmd);
} /* End tw_unmap_scsi_data() */
/* This function will reset a device extension */
@ -1499,27 +1466,16 @@ static void tw_transfer_internal(TW_Device_Extension *tw_dev, int request_id,
void *buf;
unsigned int transfer_len;
unsigned long flags = 0;
struct scatterlist *sg = scsi_sglist(cmd);
if (cmd->use_sg) {
struct scatterlist *sg =
(struct scatterlist *)cmd->request_buffer;
local_irq_save(flags);
buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
transfer_len = min(sg->length, len);
} else {
buf = cmd->request_buffer;
transfer_len = min(cmd->request_bufflen, len);
}
local_irq_save(flags);
buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
transfer_len = min(sg->length, len);
memcpy(buf, data, transfer_len);
if (cmd->use_sg) {
struct scatterlist *sg;
sg = (struct scatterlist *)cmd->request_buffer;
kunmap_atomic(buf - sg->offset, KM_IRQ0);
local_irq_restore(flags);
}
kunmap_atomic(buf - sg->offset, KM_IRQ0);
local_irq_restore(flags);
}
/* This function is called by the isr to complete an inquiry command */
@ -1764,19 +1720,20 @@ static int tw_scsiop_read_write(TW_Device_Extension *tw_dev, int request_id)
{
TW_Command *command_packet;
unsigned long command_que_value;
u32 lba = 0x0, num_sectors = 0x0, buffaddr = 0x0;
u32 lba = 0x0, num_sectors = 0x0;
int i, use_sg;
struct scsi_cmnd *srb;
struct scatterlist *sglist;
struct scatterlist *sglist, *sg;
dprintk(KERN_NOTICE "3w-xxxx: tw_scsiop_read_write()\n");
if (tw_dev->srb[request_id]->request_buffer == NULL) {
srb = tw_dev->srb[request_id];
sglist = scsi_sglist(srb);
if (!sglist) {
printk(KERN_WARNING "3w-xxxx: tw_scsiop_read_write(): Request buffer NULL.\n");
return 1;
}
sglist = (struct scatterlist *)tw_dev->srb[request_id]->request_buffer;
srb = tw_dev->srb[request_id];
/* Initialize command packet */
command_packet = (TW_Command *)tw_dev->command_packet_virtual_address[request_id];
@ -1819,33 +1776,18 @@ static int tw_scsiop_read_write(TW_Device_Extension *tw_dev, int request_id)
command_packet->byte8.io.lba = lba;
command_packet->byte6.block_count = num_sectors;
/* Do this if there are no sg list entries */
if (tw_dev->srb[request_id]->use_sg == 0) {
dprintk(KERN_NOTICE "3w-xxxx: tw_scsiop_read_write(): SG = 0\n");
buffaddr = tw_map_scsi_single_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]);
if (buffaddr == 0)
return 1;
use_sg = tw_map_scsi_sg_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]);
if (!use_sg)
return 1;
command_packet->byte8.io.sgl[0].address = buffaddr;
command_packet->byte8.io.sgl[0].length = tw_dev->srb[request_id]->request_bufflen;
scsi_for_each_sg(tw_dev->srb[request_id], sg, use_sg, i) {
command_packet->byte8.io.sgl[i].address = sg_dma_address(sg);
command_packet->byte8.io.sgl[i].length = sg_dma_len(sg);
command_packet->size+=2;
}
/* Do this if we have multiple sg list entries */
if (tw_dev->srb[request_id]->use_sg > 0) {
use_sg = tw_map_scsi_sg_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]);
if (use_sg == 0)
return 1;
for (i=0;i<use_sg; i++) {
command_packet->byte8.io.sgl[i].address = sg_dma_address(&sglist[i]);
command_packet->byte8.io.sgl[i].length = sg_dma_len(&sglist[i]);
command_packet->size+=2;
}
}
/* Update SG statistics */
tw_dev->sgl_entries = tw_dev->srb[request_id]->use_sg;
tw_dev->sgl_entries = scsi_sg_count(tw_dev->srb[request_id]);
if (tw_dev->sgl_entries > tw_dev->max_sgl_entries)
tw_dev->max_sgl_entries = tw_dev->sgl_entries;


@ -267,8 +267,6 @@ NCR_700_offset_period_to_sxfer(struct NCR_700_Host_Parameters *hostdata,
offset = max_offset;
}
if(XFERP < min_xferp) {
printk(KERN_WARNING "53c700: XFERP %d is less than minium, setting to %d\n",
XFERP, min_xferp);
XFERP = min_xferp;
}
return (offset & 0x0f) | (XFERP & 0x07)<<4;
@ -585,16 +583,8 @@ NCR_700_unmap(struct NCR_700_Host_Parameters *hostdata, struct scsi_cmnd *SCp,
struct NCR_700_command_slot *slot)
{
if(SCp->sc_data_direction != DMA_NONE &&
SCp->sc_data_direction != DMA_BIDIRECTIONAL) {
if(SCp->use_sg) {
dma_unmap_sg(hostdata->dev, SCp->request_buffer,
SCp->use_sg, SCp->sc_data_direction);
} else {
dma_unmap_single(hostdata->dev, slot->dma_handle,
SCp->request_bufflen,
SCp->sc_data_direction);
}
}
SCp->sc_data_direction != DMA_BIDIRECTIONAL)
scsi_dma_unmap(SCp);
}
STATIC inline void
@ -661,7 +651,6 @@ NCR_700_chip_setup(struct Scsi_Host *host)
{
struct NCR_700_Host_Parameters *hostdata =
(struct NCR_700_Host_Parameters *)host->hostdata[0];
__u32 dcntl_extra = 0;
__u8 min_period;
__u8 min_xferp = (hostdata->chip710 ? NCR_710_MIN_XFERP : NCR_700_MIN_XFERP);
@ -686,13 +675,14 @@ NCR_700_chip_setup(struct Scsi_Host *host)
burst_disable = BURST_DISABLE;
break;
}
dcntl_extra = COMPAT_700_MODE;
hostdata->dcntl_extra |= COMPAT_700_MODE;
NCR_700_writeb(dcntl_extra, host, DCNTL_REG);
NCR_700_writeb(hostdata->dcntl_extra, host, DCNTL_REG);
NCR_700_writeb(burst_length | hostdata->dmode_extra,
host, DMODE_710_REG);
NCR_700_writeb(burst_disable | (hostdata->differential ?
DIFF : 0), host, CTEST7_REG);
NCR_700_writeb(burst_disable | hostdata->ctest7_extra |
(hostdata->differential ? DIFF : 0),
host, CTEST7_REG);
NCR_700_writeb(BTB_TIMER_DISABLE, host, CTEST0_REG);
NCR_700_writeb(FULL_ARBITRATION | ENABLE_PARITY | PARITY
| AUTO_ATN, host, SCNTL0_REG);
@ -727,13 +717,13 @@ NCR_700_chip_setup(struct Scsi_Host *host)
* of spec: sync divider 2, async divider 3 */
DEBUG(("53c700: sync 2 async 3\n"));
NCR_700_writeb(SYNC_DIV_2_0, host, SBCL_REG);
NCR_700_writeb(ASYNC_DIV_3_0 | dcntl_extra, host, DCNTL_REG);
NCR_700_writeb(ASYNC_DIV_3_0 | hostdata->dcntl_extra, host, DCNTL_REG);
hostdata->sync_clock = hostdata->clock/2;
} else if(hostdata->clock > 50 && hostdata->clock <= 75) {
/* sync divider 1.5, async divider 3 */
DEBUG(("53c700: sync 1.5 async 3\n"));
NCR_700_writeb(SYNC_DIV_1_5, host, SBCL_REG);
NCR_700_writeb(ASYNC_DIV_3_0 | dcntl_extra, host, DCNTL_REG);
NCR_700_writeb(ASYNC_DIV_3_0 | hostdata->dcntl_extra, host, DCNTL_REG);
hostdata->sync_clock = hostdata->clock*2;
hostdata->sync_clock /= 3;
@ -741,18 +731,18 @@ NCR_700_chip_setup(struct Scsi_Host *host)
/* sync divider 1, async divider 2 */
DEBUG(("53c700: sync 1 async 2\n"));
NCR_700_writeb(SYNC_DIV_1_0, host, SBCL_REG);
NCR_700_writeb(ASYNC_DIV_2_0 | dcntl_extra, host, DCNTL_REG);
NCR_700_writeb(ASYNC_DIV_2_0 | hostdata->dcntl_extra, host, DCNTL_REG);
hostdata->sync_clock = hostdata->clock;
} else if(hostdata->clock > 25 && hostdata->clock <=37) {
/* sync divider 1, async divider 1.5 */
DEBUG(("53c700: sync 1 async 1.5\n"));
NCR_700_writeb(SYNC_DIV_1_0, host, SBCL_REG);
NCR_700_writeb(ASYNC_DIV_1_5 | dcntl_extra, host, DCNTL_REG);
NCR_700_writeb(ASYNC_DIV_1_5 | hostdata->dcntl_extra, host, DCNTL_REG);
hostdata->sync_clock = hostdata->clock;
} else {
DEBUG(("53c700: sync 1 async 1\n"));
NCR_700_writeb(SYNC_DIV_1_0, host, SBCL_REG);
NCR_700_writeb(ASYNC_DIV_1_0 | dcntl_extra, host, DCNTL_REG);
NCR_700_writeb(ASYNC_DIV_1_0 | hostdata->dcntl_extra, host, DCNTL_REG);
/* sync divider 1, async divider 1 */
hostdata->sync_clock = hostdata->clock;
}
@ -1263,14 +1253,13 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct scsi_cmnd *SCp,
host->host_no, pun, lun, NCR_700_condition[i],
NCR_700_phase[j], dsp - hostdata->pScript);
if(SCp != NULL) {
scsi_print_command(SCp);
struct scatterlist *sg;
if(SCp->use_sg) {
for(i = 0; i < SCp->use_sg + 1; i++) {
printk(KERN_INFO " SG[%d].length = %d, move_insn=%08x, addr %08x\n", i, ((struct scatterlist *)SCp->request_buffer)[i].length, ((struct NCR_700_command_slot *)SCp->host_scribble)->SG[i].ins, ((struct NCR_700_command_slot *)SCp->host_scribble)->SG[i].pAddr);
}
scsi_print_command(SCp);
scsi_for_each_sg(SCp, sg, scsi_sg_count(SCp) + 1, i) {
printk(KERN_INFO " SG[%d].length = %d, move_insn=%08x, addr %08x\n", i, sg->length, ((struct NCR_700_command_slot *)SCp->host_scribble)->SG[i].ins, ((struct NCR_700_command_slot *)SCp->host_scribble)->SG[i].pAddr);
}
}
}
NCR_700_internal_bus_reset(host);
} else if((dsps & 0xfffff000) == A_DEBUG_INTERRUPT) {
printk(KERN_NOTICE "scsi%d (%d:%d) DEBUG INTERRUPT %d AT %08x[%04x], continuing\n",
@ -1844,8 +1833,8 @@ NCR_700_queuecommand(struct scsi_cmnd *SCp, void (*done)(struct scsi_cmnd *))
}
/* sanity check: some of the commands generated by the mid-layer
* have an eccentric idea of their sc_data_direction */
if(!SCp->use_sg && !SCp->request_bufflen
&& SCp->sc_data_direction != DMA_NONE) {
if(!scsi_sg_count(SCp) && !scsi_bufflen(SCp) &&
SCp->sc_data_direction != DMA_NONE) {
#ifdef NCR_700_DEBUG
printk("53c700: Command");
scsi_print_command(SCp);
@ -1887,31 +1876,15 @@ NCR_700_queuecommand(struct scsi_cmnd *SCp, void (*done)(struct scsi_cmnd *))
int i;
int sg_count;
dma_addr_t vPtr = 0;
struct scatterlist *sg;
__u32 count = 0;
if(SCp->use_sg) {
sg_count = dma_map_sg(hostdata->dev,
SCp->request_buffer, SCp->use_sg,
direction);
} else {
vPtr = dma_map_single(hostdata->dev,
SCp->request_buffer,
SCp->request_bufflen,
direction);
count = SCp->request_bufflen;
slot->dma_handle = vPtr;
sg_count = 1;
}
sg_count = scsi_dma_map(SCp);
BUG_ON(sg_count < 0);
for(i = 0; i < sg_count; i++) {
if(SCp->use_sg) {
struct scatterlist *sg = SCp->request_buffer;
vPtr = sg_dma_address(&sg[i]);
count = sg_dma_len(&sg[i]);
}
scsi_for_each_sg(SCp, sg, sg_count, i) {
vPtr = sg_dma_address(sg);
count = sg_dma_len(sg);
slot->SG[i].ins = bS_to_host(move_ins | count);
DEBUG((" scatter block %d: move %d[%08x] from 0x%lx\n",


@ -177,6 +177,7 @@ struct NCR_700_command_slot {
__u8 state;
#define NCR_700_FLAG_AUTOSENSE 0x01
__u8 flags;
__u8 pad1[2]; /* Needed for m68k where min alignment is 2 bytes */
int tag;
__u32 resume_offset;
struct scsi_cmnd *cmnd;
@ -196,6 +197,8 @@ struct NCR_700_Host_Parameters {
void __iomem *base; /* the base for the port (copied to host) */
struct device *dev;
__u32 dmode_extra; /* adjustable bus settings */
__u32 dcntl_extra; /* adjustable bus settings */
__u32 ctest7_extra; /* adjustable bus settings */
__u32 differential:1; /* if we are differential */
#ifdef CONFIG_53C700_LE_ON_BE
/* This option is for HP only. Set it if your chip is wired for
@ -352,6 +355,7 @@ struct NCR_700_Host_Parameters {
#define SEL_TIMEOUT_DISABLE 0x10 /* 710 only */
#define DFP 0x08
#define EVP 0x04
#define CTEST7_TT1 0x02
#define DIFF 0x01
#define CTEST6_REG 0x1A
#define TEMP_REG 0x1C
@ -385,6 +389,7 @@ struct NCR_700_Host_Parameters {
#define SOFTWARE_RESET 0x01
#define COMPAT_700_MODE 0x01
#define SCRPTS_16BITS 0x20
#define EA_710 0x20
#define ASYNC_DIV_2_0 0x00
#define ASYNC_DIV_1_5 0x40
#define ASYNC_DIV_1_0 0x80

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,102 +0,0 @@
#undef A_NCR53c7xx_msg_abort
#undef A_NCR53c7xx_msg_reject
#undef A_NCR53c7xx_sink
#undef A_NCR53c7xx_zero
#undef A_NOP_insn
#undef A_addr_dsa
#undef A_addr_reconnect_dsa_head
#undef A_addr_scratch
#undef A_addr_temp
#undef A_dmode_memory_to_memory
#undef A_dmode_memory_to_ncr
#undef A_dmode_ncr_to_memory
#undef A_dsa_check_reselect
#undef A_dsa_cmdout
#undef A_dsa_cmnd
#undef A_dsa_datain
#undef A_dsa_dataout
#undef A_dsa_end
#undef A_dsa_fields_start
#undef A_dsa_msgin
#undef A_dsa_msgout
#undef A_dsa_msgout_other
#undef A_dsa_next
#undef A_dsa_restore_pointers
#undef A_dsa_save_data_pointer
#undef A_dsa_select
#undef A_dsa_sscf_710
#undef A_dsa_status
#undef A_dsa_temp_addr_array_value
#undef A_dsa_temp_addr_dsa_value
#undef A_dsa_temp_addr_new_value
#undef A_dsa_temp_addr_next
#undef A_dsa_temp_addr_residual
#undef A_dsa_temp_addr_saved_pointer
#undef A_dsa_temp_addr_saved_residual
#undef A_dsa_temp_lun
#undef A_dsa_temp_next
#undef A_dsa_temp_sync
#undef A_dsa_temp_target
#undef A_emulfly
#undef A_int_debug_break
#undef A_int_debug_panic
#undef A_int_err_check_condition
#undef A_int_err_no_phase
#undef A_int_err_selected
#undef A_int_err_unexpected_phase
#undef A_int_err_unexpected_reselect
#undef A_int_msg_1
#undef A_int_msg_sdtr
#undef A_int_msg_wdtr
#undef A_int_norm_aborted
#undef A_int_norm_command_complete
#undef A_int_norm_disconnected
#undef A_int_norm_emulateintfly
#undef A_int_norm_reselect_complete
#undef A_int_norm_reset
#undef A_int_norm_select_complete
#undef A_int_test_1
#undef A_int_test_2
#undef A_int_test_3
#undef A_msg_buf
#undef A_reconnect_dsa_head
#undef A_reselected_identify
#undef A_reselected_tag
#undef A_saved_dsa
#undef A_schedule
#undef A_test_dest
#undef A_test_src
#undef Ent_accept_message
#undef Ent_cmdout_cmdout
#undef Ent_command_complete
#undef Ent_command_complete_msgin
#undef Ent_data_transfer
#undef Ent_datain_to_jump
#undef Ent_debug_break
#undef Ent_dsa_code_begin
#undef Ent_dsa_code_check_reselect
#undef Ent_dsa_code_fix_jump
#undef Ent_dsa_code_restore_pointers
#undef Ent_dsa_code_save_data_pointer
#undef Ent_dsa_code_template
#undef Ent_dsa_code_template_end
#undef Ent_dsa_schedule
#undef Ent_dsa_zero
#undef Ent_end_data_transfer
#undef Ent_initiator_abort
#undef Ent_msg_in
#undef Ent_msg_in_restart
#undef Ent_other_in
#undef Ent_other_out
#undef Ent_other_transfer
#undef Ent_reject_message
#undef Ent_reselected_check_next
#undef Ent_reselected_ok
#undef Ent_respond_message
#undef Ent_select
#undef Ent_select_msgout
#undef Ent_target_abort
#undef Ent_test_1
#undef Ent_test_2
#undef Ent_test_2_msgout
#undef Ent_wait_reselect


@ -304,18 +304,10 @@ static struct BusLogic_CCB *BusLogic_AllocateCCB(struct BusLogic_HostAdapter
static void BusLogic_DeallocateCCB(struct BusLogic_CCB *CCB)
{
struct BusLogic_HostAdapter *HostAdapter = CCB->HostAdapter;
struct scsi_cmnd *cmd = CCB->Command;
if (cmd->use_sg != 0) {
pci_unmap_sg(HostAdapter->PCI_Device,
(struct scatterlist *)cmd->request_buffer,
cmd->use_sg, cmd->sc_data_direction);
} else if (cmd->request_bufflen != 0) {
pci_unmap_single(HostAdapter->PCI_Device, CCB->DataPointer,
CCB->DataLength, cmd->sc_data_direction);
}
scsi_dma_unmap(CCB->Command);
pci_unmap_single(HostAdapter->PCI_Device, CCB->SenseDataPointer,
CCB->SenseDataLength, PCI_DMA_FROMDEVICE);
CCB->SenseDataLength, PCI_DMA_FROMDEVICE);
CCB->Command = NULL;
CCB->Status = BusLogic_CCB_Free;
@ -2648,7 +2640,8 @@ static void BusLogic_ProcessCompletedCCBs(struct BusLogic_HostAdapter *HostAdapt
*/
if (CCB->CDB[0] == INQUIRY && CCB->CDB[1] == 0 && CCB->HostAdapterStatus == BusLogic_CommandCompletedNormally) {
struct BusLogic_TargetFlags *TargetFlags = &HostAdapter->TargetFlags[CCB->TargetID];
struct SCSI_Inquiry *InquiryResult = (struct SCSI_Inquiry *) Command->request_buffer;
struct SCSI_Inquiry *InquiryResult =
(struct SCSI_Inquiry *) scsi_sglist(Command);
TargetFlags->TargetExists = true;
TargetFlags->TaggedQueuingSupported = InquiryResult->CmdQue;
TargetFlags->WideTransfersSupported = InquiryResult->WBus16;
@ -2819,9 +2812,8 @@ static int BusLogic_QueueCommand(struct scsi_cmnd *Command, void (*CompletionRou
int CDB_Length = Command->cmd_len;
int TargetID = Command->device->id;
int LogicalUnit = Command->device->lun;
void *BufferPointer = Command->request_buffer;
int BufferLength = Command->request_bufflen;
int SegmentCount = Command->use_sg;
int BufferLength = scsi_bufflen(Command);
int Count;
struct BusLogic_CCB *CCB;
/*
SCSI REQUEST_SENSE commands will be executed automatically by the Host
@ -2851,36 +2843,35 @@ static int BusLogic_QueueCommand(struct scsi_cmnd *Command, void (*CompletionRou
return 0;
}
}
/*
Initialize the fields in the BusLogic Command Control Block (CCB).
*/
if (SegmentCount == 0 && BufferLength != 0) {
CCB->Opcode = BusLogic_InitiatorCCB;
CCB->DataLength = BufferLength;
CCB->DataPointer = pci_map_single(HostAdapter->PCI_Device,
BufferPointer, BufferLength,
Command->sc_data_direction);
} else if (SegmentCount != 0) {
struct scatterlist *ScatterList = (struct scatterlist *) BufferPointer;
int Segment, Count;
Count = scsi_dma_map(Command);
BUG_ON(Count < 0);
if (Count) {
struct scatterlist *sg;
int i;
Count = pci_map_sg(HostAdapter->PCI_Device, ScatterList, SegmentCount,
Command->sc_data_direction);
CCB->Opcode = BusLogic_InitiatorCCB_ScatterGather;
CCB->DataLength = Count * sizeof(struct BusLogic_ScatterGatherSegment);
if (BusLogic_MultiMasterHostAdapterP(HostAdapter))
CCB->DataPointer = (unsigned int) CCB->DMA_Handle + ((unsigned long) &CCB->ScatterGatherList - (unsigned long) CCB);
else
CCB->DataPointer = Virtual_to_32Bit_Virtual(CCB->ScatterGatherList);
for (Segment = 0; Segment < Count; Segment++) {
CCB->ScatterGatherList[Segment].SegmentByteCount = sg_dma_len(ScatterList + Segment);
CCB->ScatterGatherList[Segment].SegmentDataPointer = sg_dma_address(ScatterList + Segment);
scsi_for_each_sg(Command, sg, Count, i) {
CCB->ScatterGatherList[i].SegmentByteCount =
sg_dma_len(sg);
CCB->ScatterGatherList[i].SegmentDataPointer =
sg_dma_address(sg);
}
} else {
} else if (!Count) {
CCB->Opcode = BusLogic_InitiatorCCB;
CCB->DataLength = BufferLength;
CCB->DataPointer = 0;
}
switch (CDB[0]) {
case READ_6:
case READ_10:


@ -10,6 +10,7 @@ config RAID_ATTRS
config SCSI
tristate "SCSI device support"
depends on BLOCK
select SCSI_DMA if HAS_DMA
---help---
If you want to use a SCSI hard disk, SCSI tape drive, SCSI CD-ROM or
any other SCSI device under Linux, say Y and make sure that you know
@ -29,6 +30,10 @@ config SCSI
However, do not compile this as a module if your root file system
(the one containing the directory /) is located on a SCSI device.
config SCSI_DMA
bool
default n
config SCSI_TGT
tristate "SCSI target support"
depends on SCSI && EXPERIMENTAL
@ -739,7 +744,7 @@ config SCSI_GENERIC_NCR53C400
config SCSI_IBMMCA
tristate "IBMMCA SCSI support"
depends on MCA_LEGACY && SCSI
depends on MCA && SCSI
---help---
This is support for the IBM SCSI adapter found in many of the PS/2
series computers. These machines have an MCA bus, so you need to
@ -1007,6 +1012,11 @@ config SCSI_STEX
To compile this driver as a module, choose M here: the
module will be called stex.
config 53C700_BE_BUS
bool
depends on SCSI_A4000T || SCSI_ZORRO7XX || MVME16x_SCSI || BVME6000_SCSI
default y
config SCSI_SYM53C8XX_2
tristate "SYM53C8XX Version 2 SCSI support"
depends on PCI && SCSI
@ -1611,13 +1621,25 @@ config FASTLANE_SCSI
If you have the Phase5 Fastlane Z3 SCSI controller, or plan to use
one in the near future, say Y to this question. Otherwise, say N.
config SCSI_AMIGA7XX
bool "Amiga NCR53c710 SCSI support (EXPERIMENTAL)"
depends on AMIGA && SCSI && EXPERIMENTAL && BROKEN
config SCSI_A4000T
tristate "A4000T NCR53c710 SCSI support (EXPERIMENTAL)"
depends on AMIGA && SCSI && EXPERIMENTAL
select SCSI_SPI_ATTRS
help
Support for various NCR53c710-based SCSI controllers on the Amiga.
If you have an Amiga 4000T and have SCSI devices connected to the
built-in SCSI controller, say Y. Otherwise, say N.
To compile this driver as a module, choose M here: the
module will be called a4000t.
config SCSI_ZORRO7XX
tristate "Zorro NCR53c710 SCSI support (EXPERIMENTAL)"
depends on ZORRO && SCSI && EXPERIMENTAL
select SCSI_SPI_ATTRS
help
Support for various NCR53c710-based SCSI controllers on Zorro
expansion boards for the Amiga.
This includes:
- the builtin SCSI controller on the Amiga 4000T,
- the Amiga 4091 Zorro III SCSI-2 controller,
- the MacroSystem Development's WarpEngine Amiga SCSI-2 controller
(info at
@ -1625,10 +1647,6 @@ config SCSI_AMIGA7XX
- the SCSI controller on the Phase5 Blizzard PowerUP 603e+
accelerator card for the Amiga 1200,
- the SCSI controller on the GVP Turbo 040/060 accelerator.
Note that all of the above SCSI controllers, except for the builtin
SCSI controller on the Amiga 4000T, reside on the Zorro expansion
bus, so you also have to enable Zorro bus support if you want to use
them.
config OKTAGON_SCSI
tristate "BSC Oktagon SCSI support (EXPERIMENTAL)"
@ -1712,8 +1730,8 @@ config MVME147_SCSI
single-board computer.
config MVME16x_SCSI
bool "NCR53C710 SCSI driver for MVME16x"
depends on MVME16x && SCSI && BROKEN
tristate "NCR53C710 SCSI driver for MVME16x"
depends on MVME16x && SCSI
select SCSI_SPI_ATTRS
help
The Motorola MVME162, 166, 167, 172 and 177 boards use the NCR53C710
@ -1721,22 +1739,14 @@ config MVME16x_SCSI
will want to say Y to this question.
config BVME6000_SCSI
bool "NCR53C710 SCSI driver for BVME6000"
depends on BVME6000 && SCSI && BROKEN
tristate "NCR53C710 SCSI driver for BVME6000"
depends on BVME6000 && SCSI
select SCSI_SPI_ATTRS
help
The BVME4000 and BVME6000 boards from BVM Ltd use the NCR53C710
SCSI controller chip. Almost everyone using one of these boards
will want to say Y to this question.
config SCSI_NCR53C7xx_FAST
bool "allow FAST-SCSI [10MHz]"
depends on SCSI_AMIGA7XX || MVME16x_SCSI || BVME6000_SCSI
help
This will enable 10MHz FAST-SCSI transfers with your host
adapter. Some systems have problems with that speed, so it's safest
to say N here.
config SUN3_SCSI
tristate "Sun3 NCR5380 SCSI"
depends on SUN3 && SCSI
@ -1766,8 +1776,6 @@ config SCSI_SUNESP
To compile this driver as a module, choose M here: the
module will be called esp.
# bool 'Cyberstorm Mk III SCSI support (EXPERIMENTAL)' CONFIG_CYBERSTORMIII_SCSI
config ZFCP
tristate "FCP host bus adapter driver for IBM eServer zSeries"
depends on S390 && QDIO && SCSI

View File

@ -37,7 +37,8 @@ obj-$(CONFIG_SCSI_SAS_LIBSAS) += libsas/
obj-$(CONFIG_ISCSI_TCP) += libiscsi.o iscsi_tcp.o
obj-$(CONFIG_INFINIBAND_ISER) += libiscsi.o
obj-$(CONFIG_SCSI_AMIGA7XX) += amiga7xx.o 53c7xx.o
obj-$(CONFIG_SCSI_A4000T) += 53c700.o a4000t.o
obj-$(CONFIG_SCSI_ZORRO7XX) += 53c700.o zorro7xx.o
obj-$(CONFIG_A3000_SCSI) += a3000.o wd33c93.o
obj-$(CONFIG_A2091_SCSI) += a2091.o wd33c93.o
obj-$(CONFIG_GVP11_SCSI) += gvp11.o wd33c93.o
@ -53,8 +54,8 @@ obj-$(CONFIG_ATARI_SCSI) += atari_scsi.o
obj-$(CONFIG_MAC_SCSI) += mac_scsi.o
obj-$(CONFIG_SCSI_MAC_ESP) += mac_esp.o NCR53C9x.o
obj-$(CONFIG_SUN3_SCSI) += sun3_scsi.o sun3_scsi_vme.o
obj-$(CONFIG_MVME16x_SCSI) += mvme16x.o 53c7xx.o
obj-$(CONFIG_BVME6000_SCSI) += bvme6000.o 53c7xx.o
obj-$(CONFIG_MVME16x_SCSI) += 53c700.o mvme16x_scsi.o
obj-$(CONFIG_BVME6000_SCSI) += 53c700.o bvme6000_scsi.o
obj-$(CONFIG_SCSI_SIM710) += 53c700.o sim710.o
obj-$(CONFIG_SCSI_ADVANSYS) += advansys.o
obj-$(CONFIG_SCSI_PSI240I) += psi240i.o
@ -89,7 +90,6 @@ obj-$(CONFIG_SCSI_QLA_ISCSI) += qla4xxx/
obj-$(CONFIG_SCSI_LPFC) += lpfc/
obj-$(CONFIG_SCSI_PAS16) += pas16.o
obj-$(CONFIG_SCSI_SEAGATE) += seagate.o
obj-$(CONFIG_SCSI_FD_8xx) += seagate.o
obj-$(CONFIG_SCSI_T128) += t128.o
obj-$(CONFIG_SCSI_DMX3191D) += dmx3191d.o
obj-$(CONFIG_SCSI_DTC3280) += dtc.o
@ -148,9 +148,9 @@ obj-$(CONFIG_SCSI_DEBUG) += scsi_debug.o
obj-$(CONFIG_SCSI_WAIT_SCAN) += scsi_wait_scan.o
scsi_mod-y += scsi.o hosts.o scsi_ioctl.o constants.o \
scsicam.o scsi_error.o scsi_lib.o \
scsi_scan.o scsi_sysfs.o \
scsi_devinfo.o
scsicam.o scsi_error.o scsi_lib.o
scsi_mod-$(CONFIG_SCSI_DMA) += scsi_lib_dma.o
scsi_mod-y += scsi_scan.o scsi_sysfs.o scsi_devinfo.o
scsi_mod-$(CONFIG_SCSI_NETLINK) += scsi_netlink.o
scsi_mod-$(CONFIG_SYSCTL) += scsi_sysctl.o
scsi_mod-$(CONFIG_SCSI_PROC_FS) += scsi_proc.o
@ -168,10 +168,8 @@ NCR_Q720_mod-objs := NCR_Q720.o ncr53c8xx.o
oktagon_esp_mod-objs := oktagon_esp.o oktagon_io.o
# Files generated that shall be removed upon make clean
clean-files := 53c7xx_d.h 53c700_d.h \
53c7xx_u.h 53c700_u.h
clean-files := 53c700_d.h 53c700_u.h
$(obj)/53c7xx.o: $(obj)/53c7xx_d.h $(obj)/53c7xx_u.h
$(obj)/53c700.o $(MODVERDIR)/$(obj)/53c700.ver: $(obj)/53c700_d.h
# If you want to play with the firmware, uncomment
@ -179,11 +177,6 @@ $(obj)/53c700.o $(MODVERDIR)/$(obj)/53c700.ver: $(obj)/53c700_d.h
ifdef GENERATE_FIRMWARE
$(obj)/53c7xx_d.h: $(src)/53c7xx.scr $(src)/script_asm.pl
$(CPP) -traditional -DCHIP=710 - < $< | grep -v '^#' | $(PERL) -s $(src)/script_asm.pl -ncr7x0_family $@ $(@:_d.h=_u.h)
$(obj)/53c7xx_u.h: $(obj)/53c7xx_d.h
$(obj)/53c700_d.h: $(src)/53c700.scr $(src)/script_asm.pl
$(PERL) -s $(src)/script_asm.pl -ncr7x0_family $@ $(@:_d.h=_u.h) < $<

View File

@ -347,7 +347,7 @@ static int NCR5380_poll_politely(struct Scsi_Host *instance, int reg, int bit, i
if((r & bit) == val)
return 0;
if(!in_interrupt())
yield();
cond_resched();
else
cpu_relax();
}
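The switch from yield() to cond_resched() keeps the shape of the polite-polling idiom intact; a minimal sketch of the pattern (the register access and timeout handling here are placeholders, not the driver's real NCR5380_read()):
static int poll_politely(unsigned long reg, int bit, int val, int timeout)
{
	unsigned long deadline = jiffies + timeout;
	while (time_before(jiffies, deadline)) {
		if ((inb(reg) & bit) == val)
			return 0;
		if (!in_interrupt())
			cond_resched();	/* process context: reschedule politely */
		else
			cpu_relax();	/* atomic context: just spin */
	}
	return -ETIMEDOUT;
}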
@ -357,7 +357,7 @@ static int NCR5380_poll_politely(struct Scsi_Host *instance, int reg, int bit, i
static struct {
unsigned char value;
const char *name;
} phases[] = {
} phases[] __maybe_unused = {
{PHASE_DATAOUT, "DATAOUT"},
{PHASE_DATAIN, "DATAIN"},
{PHASE_CMDOUT, "CMDOUT"},
@ -575,7 +575,8 @@ static irqreturn_t __init probe_intr(int irq, void *dev_id)
* Locks: none, irqs must be enabled on entry
*/
static int __init NCR5380_probe_irq(struct Scsi_Host *instance, int possible)
static int __init __maybe_unused NCR5380_probe_irq(struct Scsi_Host *instance,
int possible)
{
NCR5380_local_declare();
struct NCR5380_hostdata *hostdata = (struct NCR5380_hostdata *) instance->hostdata;
@ -629,7 +630,8 @@ static int __init NCR5380_probe_irq(struct Scsi_Host *instance, int possible)
* Locks: none
*/
static void __init NCR5380_print_options(struct Scsi_Host *instance)
static void __init __maybe_unused
NCR5380_print_options(struct Scsi_Host *instance)
{
printk(" generic options"
#ifdef AUTOPROBE_IRQ
@ -703,8 +705,8 @@ char *lprint_command(unsigned char *cmd, char *pos, char *buffer, int len);
static
char *lprint_opcode(int opcode, char *pos, char *buffer, int length);
static
int NCR5380_proc_info(struct Scsi_Host *instance, char *buffer, char **start, off_t offset, int length, int inout)
static int __maybe_unused NCR5380_proc_info(struct Scsi_Host *instance,
char *buffer, char **start, off_t offset, int length, int inout)
{
char *pos = buffer;
struct NCR5380_hostdata *hostdata;

View File

@ -299,7 +299,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance);
static irqreturn_t NCR5380_intr(int irq, void *dev_id);
#endif
static void NCR5380_main(struct work_struct *work);
static void NCR5380_print_options(struct Scsi_Host *instance);
static void __maybe_unused NCR5380_print_options(struct Scsi_Host *instance);
#ifdef NDEBUG
static void NCR5380_print_phase(struct Scsi_Host *instance);
static void NCR5380_print(struct Scsi_Host *instance);
@ -307,8 +307,8 @@ static void NCR5380_print(struct Scsi_Host *instance);
static int NCR5380_abort(Scsi_Cmnd * cmd);
static int NCR5380_bus_reset(Scsi_Cmnd * cmd);
static int NCR5380_queue_command(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *));
static int NCR5380_proc_info(struct Scsi_Host *instance, char *buffer, char **start,
off_t offset, int length, int inout);
static int __maybe_unused NCR5380_proc_info(struct Scsi_Host *instance,
char *buffer, char **start, off_t offset, int length, int inout);
static void NCR5380_reselect(struct Scsi_Host *instance);
static int NCR5380_select(struct Scsi_Host *instance, Scsi_Cmnd * cmd, int tag);

View File

@ -698,7 +698,7 @@ static int NCR53c406a_queue(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
int i;
VDEB(printk("NCR53c406a_queue called\n"));
DEB(printk("cmd=%02x, cmd_len=%02x, target=%02x, lun=%02x, bufflen=%d\n", SCpnt->cmnd[0], SCpnt->cmd_len, SCpnt->target, SCpnt->lun, SCpnt->request_bufflen));
DEB(printk("cmd=%02x, cmd_len=%02x, target=%02x, lun=%02x, bufflen=%d\n", SCpnt->cmnd[0], SCpnt->cmd_len, SCpnt->target, SCpnt->lun, scsi_bufflen(SCpnt)));
#if 0
VDEB(for (i = 0; i < SCpnt->cmd_len; i++)
@ -785,8 +785,8 @@ static void NCR53c406a_intr(void *dev_id)
unsigned char status, int_reg;
#if USE_PIO
unsigned char pio_status;
struct scatterlist *sglist;
unsigned int sgcount;
struct scatterlist *sg;
int i;
#endif
VDEB(printk("NCR53c406a_intr called\n"));
@ -866,22 +866,18 @@ static void NCR53c406a_intr(void *dev_id)
current_SC->SCp.phase = data_out;
VDEB(printk("NCR53c406a: Data-Out phase\n"));
outb(FLUSH_FIFO, CMD_REG);
LOAD_DMA_COUNT(current_SC->request_bufflen); /* Max transfer size */
LOAD_DMA_COUNT(scsi_bufflen(current_SC)); /* Max transfer size */
#if USE_DMA /* No s/g support for DMA */
NCR53c406a_dma_write(current_SC->request_buffer, current_SC->request_bufflen);
NCR53c406a_dma_write(scsi_sglist(current_SC),
scsi_bufflen(current_SC));
#endif /* USE_DMA */
outb(TRANSFER_INFO | DMA_OP, CMD_REG);
#if USE_PIO
if (!current_SC->use_sg) /* Don't use scatter-gather */
NCR53c406a_pio_write(current_SC->request_buffer, current_SC->request_bufflen);
else { /* use scatter-gather */
sgcount = current_SC->use_sg;
sglist = current_SC->request_buffer;
while (sgcount--) {
NCR53c406a_pio_write(page_address(sglist->page) + sglist->offset, sglist->length);
sglist++;
}
}
scsi_for_each_sg(current_SC, sg, scsi_sg_count(current_SC), i) {
NCR53c406a_pio_write(page_address(sg->page) + sg->offset,
sg->length);
}
REG0;
#endif /* USE_PIO */
}
@ -893,22 +889,17 @@ static void NCR53c406a_intr(void *dev_id)
current_SC->SCp.phase = data_in;
VDEB(printk("NCR53c406a: Data-In phase\n"));
outb(FLUSH_FIFO, CMD_REG);
LOAD_DMA_COUNT(current_SC->request_bufflen); /* Max transfer size */
LOAD_DMA_COUNT(scsi_bufflen(current_SC)); /* Max transfer size */
#if USE_DMA /* No s/g support for DMA */
NCR53c406a_dma_read(current_SC->request_buffer, current_SC->request_bufflen);
NCR53c406a_dma_read(scsi_sglist(current_SC),
scsi_bufflen(current_SC));
#endif /* USE_DMA */
outb(TRANSFER_INFO | DMA_OP, CMD_REG);
#if USE_PIO
if (!current_SC->use_sg) /* Don't use scatter-gather */
NCR53c406a_pio_read(current_SC->request_buffer, current_SC->request_bufflen);
else { /* Use scatter-gather */
sgcount = current_SC->use_sg;
sglist = current_SC->request_buffer;
while (sgcount--) {
NCR53c406a_pio_read(page_address(sglist->page) + sglist->offset, sglist->length);
sglist++;
}
}
scsi_for_each_sg(current_SC, sg, scsi_sg_count(current_SC), i) {
NCR53c406a_pio_read(page_address(sg->page) + sg->offset,
sg->length);
}
REG0;
#endif /* USE_PIO */
}
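Both PIO branches now collapse into the same accessor-based loop. A minimal sketch of the pattern these conversions move to (pio_xfer() is a hypothetical stand-in for the driver's transfer routine):
	struct scatterlist *sg;
	int i;
	/* scsi_sg_count()/scsi_sglist() replace cmd->use_sg and
	 * cmd->request_buffer; every request now arrives as an SG
	 * list, so the old single-buffer branch simply disappears. */
	scsi_for_each_sg(cmd, sg, scsi_sg_count(cmd), i)
		pio_xfer(page_address(sg->page) + sg->offset, sg->length);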

File diff suppressed because it is too large

View File

@ -18,27 +18,6 @@
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
*
* --------------------------------------------------------------------------
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification, immediately at the beginning of the file.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* Where this Software is combined with software released under the terms of
* the GNU General Public License ("GPL") and the terms of the GPL would require the
* combined work to also be released under the terms of the GPL, the terms
* and conditions of this License will apply in addition to those of the
* GPL with the exception of any terms or conditions of this License that
* conflict with, or are expressly prohibited by, the GPL.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
@ -50,30 +29,19 @@
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
*
* Revision History:
* 06/18/98 HL, Initial production Version 1.02
* 12/19/98 bv, Use spinlocks for 2.1.95 and up
* 06/25/02 Doug Ledford <dledford@redhat.com>
* - This and the i60uscsi.h file are almost identical,
* merged them into a single header used by both .c files.
* 14/06/07 Alan Cox <alan@redhat.com>
* - Grand cleanup and Linuxisation
*/
#define inia100_REVID "Initio INI-A100U2W SCSI device driver; Revision: 1.02d"
#define ULONG unsigned long
#define USHORT unsigned short
#define UCHAR unsigned char
#define BYTE unsigned char
#define WORD unsigned short
#define DWORD unsigned long
#define UBYTE unsigned char
#define UWORD unsigned short
#define UDWORD unsigned long
#define U32 u32
#if 1
#define ORC_MAXQUEUE 245
#define ORC_MAXTAGS 64
@ -90,10 +58,10 @@
/************************************************************************/
/* Scatter-Gather Element Structure */
/************************************************************************/
typedef struct ORC_SG_Struc {
U32 SG_Ptr; /* Data Pointer */
U32 SG_Len; /* Data Length */
} ORC_SG;
struct orc_sgent {
u32 base; /* Data Pointer */
u32 length; /* Data Length */
};
/* SCSI related definition */
#define DISC_NOT_ALLOW 0x80 /* Disconnect is not allowed */
@ -165,42 +133,45 @@ typedef struct ORC_SG_Struc {
#define ORC_PRGMCTR1 0xE3 /* RISC program counter */
#define ORC_RISCRAM 0xEC /* RISC RAM data port 4 bytes */
typedef struct orc_extended_scb { /* Extended SCB */
ORC_SG ESCB_SGList[TOTAL_SG_ENTRY]; /*0 Start of SG list */
struct scsi_cmnd *SCB_Srb; /*50 SRB Pointer */
} ESCB;
struct orc_extended_scb { /* Extended SCB */
struct orc_sgent sglist[TOTAL_SG_ENTRY]; /*0 Start of SG list */
struct scsi_cmnd *srb; /*50 SRB Pointer */
};
/***********************************************************************
SCSI Control Block
************************************************************************/
typedef struct orc_scb { /* Scsi_Ctrl_Blk */
UBYTE SCB_Opcode; /*00 SCB command code&residual */
UBYTE SCB_Flags; /*01 SCB Flags */
UBYTE SCB_Target; /*02 Target Id */
UBYTE SCB_Lun; /*03 Lun */
U32 SCB_Reserved0; /*04 Reserved for ORCHID must 0 */
U32 SCB_XferLen; /*08 Data Transfer Length */
U32 SCB_Reserved1; /*0C Reserved for ORCHID must 0 */
U32 SCB_SGLen; /*10 SG list # * 8 */
U32 SCB_SGPAddr; /*14 SG List Buf physical Addr */
U32 SCB_SGPAddrHigh; /*18 SG Buffer high physical Addr */
UBYTE SCB_HaStat; /*1C Host Status */
UBYTE SCB_TaStat; /*1D Target Status */
UBYTE SCB_Status; /*1E SCB status */
UBYTE SCB_Link; /*1F Link pointer, default 0xFF */
UBYTE SCB_SenseLen; /*20 Sense Allocation Length */
UBYTE SCB_CDBLen; /*21 CDB Length */
UBYTE SCB_Ident; /*22 Identify */
UBYTE SCB_TagMsg; /*23 Tag Message */
UBYTE SCB_CDB[IMAX_CDB]; /*24 SCSI CDBs */
UBYTE SCB_ScbIdx; /*3C Index for this ORCSCB */
U32 SCB_SensePAddr; /*34 Sense Buffer physical Addr */
ESCB *SCB_EScb; /*38 Extended SCB Pointer */
#ifndef ALPHA
UBYTE SCB_Reserved2[4]; /*3E Reserved for Driver use */
0x40 bytes long, the last 8 are user bytes
************************************************************************/
struct orc_scb { /* Scsi_Ctrl_Blk */
u8 opcode; /*00 SCB command code&residual */
u8 flags; /*01 SCB Flags */
u8 target; /*02 Target Id */
u8 lun; /*03 Lun */
u32 reserved0; /*04 Reserved for ORCHID must 0 */
u32 xferlen; /*08 Data Transfer Length */
u32 reserved1; /*0C Reserved for ORCHID must 0 */
u32 sg_len; /*10 SG list # * 8 */
u32 sg_addr; /*14 SG List Buf physical Addr */
u32 sg_addrhigh; /*18 SG Buffer high physical Addr */
u8 hastat; /*1C Host Status */
u8 tastat; /*1D Target Status */
u8 status; /*1E SCB status */
u8 link; /*1F Link pointer, default 0xFF */
u8 sense_len; /*20 Sense Allocation Length */
u8 cdb_len; /*21 CDB Length */
u8 ident; /*22 Identify */
u8 tag_msg; /*23 Tag Message */
u8 cdb[IMAX_CDB]; /*24 SCSI CDBs */
u8 scbidx; /*3C Index for this ORCSCB */
u32 sense_addr; /*34 Sense Buffer physical Addr */
struct orc_extended_scb *escb; /*38 Extended SCB Pointer */
/* 64bit pointer or 32bit pointer + reserved ? */
#ifndef CONFIG_64BIT
u8 reserved2[4]; /*3E Reserved for Driver use */
#endif
} ORC_SCB;
};
/* Opcodes of ORCSCB_Opcode */
#define ORC_EXECSCSI 0x00 /* SCSI initiator command with residual */
@ -239,13 +210,13 @@ typedef struct orc_scb { /* Scsi_Ctrl_Blk */
Target Device Control Structure
**********************************************************************/
typedef struct ORC_Tar_Ctrl_Struc {
UBYTE TCS_DrvDASD; /* 6 */
UBYTE TCS_DrvSCSI; /* 7 */
UBYTE TCS_DrvHead; /* 8 */
UWORD TCS_DrvFlags; /* 4 */
UBYTE TCS_DrvSector; /* 7 */
} ORC_TCS;
struct orc_target {
u8 TCS_DrvDASD; /* 6 */
u8 TCS_DrvSCSI; /* 7 */
u8 TCS_DrvHead; /* 8 */
u16 TCS_DrvFlags; /* 4 */
u8 TCS_DrvSector; /* 7 */
};
/* Bit Definition for TCF_DrvFlags */
#define TCS_DF_NODASD_SUPT 0x20 /* Suppress OS/2 DASD Mgr support */
@ -255,32 +226,23 @@ typedef struct ORC_Tar_Ctrl_Struc {
/***********************************************************************
Host Adapter Control Structure
************************************************************************/
typedef struct ORC_Ha_Ctrl_Struc {
USHORT HCS_Base; /* 00 */
UBYTE HCS_Index; /* 02 */
UBYTE HCS_Intr; /* 04 */
UBYTE HCS_SCSI_ID; /* 06 H/A SCSI ID */
UBYTE HCS_BIOS; /* 07 BIOS configuration */
UBYTE HCS_Flags; /* 0B */
UBYTE HCS_HAConfig1; /* 1B SCSI0MAXTags */
UBYTE HCS_MaxTar; /* 1B SCSI0MAXTags */
USHORT HCS_Units; /* Number of units this adapter */
USHORT HCS_AFlags; /* Adapter info. defined flags */
ULONG HCS_Timeout; /* Adapter timeout value */
ORC_SCB *HCS_virScbArray; /* 28 Virtual Pointer to SCB array */
dma_addr_t HCS_physScbArray; /* Scb Physical address */
ESCB *HCS_virEscbArray; /* Virtual pointer to ESCB Scatter list */
dma_addr_t HCS_physEscbArray; /* scatter list Physical address */
UBYTE TargetFlag[16]; /* 30 target configuration, TCF_EN_TAG */
UBYTE MaximumTags[16]; /* 40 ORC_MAX_SCBS */
UBYTE ActiveTags[16][16]; /* 50 */
ORC_TCS HCS_Tcs[16]; /* 28 */
U32 BitAllocFlag[MAX_CHANNELS][8]; /* Max STB is 256, So 256/32 */
spinlock_t BitAllocFlagLock;
struct orc_host {
unsigned long base; /* Base address */
u8 index; /* Index (Channel)*/
u8 scsi_id; /* H/A SCSI ID */
u8 BIOScfg; /*BIOS configuration */
u8 flags;
u8 max_targets; /* SCSI0MAXTags */
struct orc_scb *scb_virt; /* Virtual Pointer to SCB array */
dma_addr_t scb_phys; /* Scb Physical address */
struct orc_extended_scb *escb_virt; /* Virtual pointer to ESCB Scatter list */
dma_addr_t escb_phys; /* scatter list Physical address */
u8 target_flag[16]; /* target configuration, TCF_EN_TAG */
u8 max_tags[16]; /* ORC_MAX_SCBS */
u32 allocation_map[MAX_CHANNELS][8]; /* Max STB is 256, So 256/32 */
spinlock_t allocation_lock;
struct pci_dev *pdev;
} ORC_HCS;
};
/* Bit Definition for HCS_Flags */
@ -301,79 +263,79 @@ typedef struct ORC_Ha_Ctrl_Struc {
#define HCS_AF_DISABLE_RESET 0x10 /* Adapter disable reset */
#define HCS_AF_DISABLE_ADPT 0x80 /* Adapter disable */
typedef struct _NVRAM {
struct orc_nvram {
/*----------header ---------------*/
UCHAR SubVendorID0; /* 00 - Sub Vendor ID */
UCHAR SubVendorID1; /* 00 - Sub Vendor ID */
UCHAR SubSysID0; /* 02 - Sub System ID */
UCHAR SubSysID1; /* 02 - Sub System ID */
UCHAR SubClass; /* 04 - Sub Class */
UCHAR VendorID0; /* 05 - Vendor ID */
UCHAR VendorID1; /* 05 - Vendor ID */
UCHAR DeviceID0; /* 07 - Device ID */
UCHAR DeviceID1; /* 07 - Device ID */
UCHAR Reserved0[2]; /* 09 - Reserved */
UCHAR Revision; /* 0B - Revision of data structure */
u8 SubVendorID0; /* 00 - Sub Vendor ID */
u8 SubVendorID1; /* 00 - Sub Vendor ID */
u8 SubSysID0; /* 02 - Sub System ID */
u8 SubSysID1; /* 02 - Sub System ID */
u8 SubClass; /* 04 - Sub Class */
u8 VendorID0; /* 05 - Vendor ID */
u8 VendorID1; /* 05 - Vendor ID */
u8 DeviceID0; /* 07 - Device ID */
u8 DeviceID1; /* 07 - Device ID */
u8 Reserved0[2]; /* 09 - Reserved */
u8 revision; /* 0B - revision of data structure */
/* ----Host Adapter Structure ---- */
UCHAR NumOfCh; /* 0C - Number of SCSI channel */
UCHAR BIOSConfig1; /* 0D - BIOS configuration 1 */
UCHAR BIOSConfig2; /* 0E - BIOS boot channel&target ID */
UCHAR BIOSConfig3; /* 0F - BIOS configuration 3 */
u8 NumOfCh; /* 0C - Number of SCSI channel */
u8 BIOSConfig1; /* 0D - BIOS configuration 1 */
u8 BIOSConfig2; /* 0E - BIOS boot channel&target ID */
u8 BIOSConfig3; /* 0F - BIOS configuration 3 */
/* ----SCSI channel Structure ---- */
/* from "CTRL-I SCSI Host Adapter SetUp menu " */
UCHAR SCSI0Id; /* 10 - Channel 0 SCSI ID */
UCHAR SCSI0Config; /* 11 - Channel 0 SCSI configuration */
UCHAR SCSI0MaxTags; /* 12 - Channel 0 Maximum tags */
UCHAR SCSI0ResetTime; /* 13 - Channel 0 Reset recovering time */
UCHAR ReservedforChannel0[2]; /* 14 - Reserved */
u8 scsi_id; /* 10 - Channel 0 SCSI ID */
u8 SCSI0Config; /* 11 - Channel 0 SCSI configuration */
u8 SCSI0MaxTags; /* 12 - Channel 0 Maximum tags */
u8 SCSI0ResetTime; /* 13 - Channel 0 Reset recovering time */
u8 ReservedforChannel0[2]; /* 14 - Reserved */
/* ----SCSI target Structure ---- */
/* from "CTRL-I SCSI device SetUp menu " */
UCHAR Target00Config; /* 16 - Channel 0 Target 0 config */
UCHAR Target01Config; /* 17 - Channel 0 Target 1 config */
UCHAR Target02Config; /* 18 - Channel 0 Target 2 config */
UCHAR Target03Config; /* 19 - Channel 0 Target 3 config */
UCHAR Target04Config; /* 1A - Channel 0 Target 4 config */
UCHAR Target05Config; /* 1B - Channel 0 Target 5 config */
UCHAR Target06Config; /* 1C - Channel 0 Target 6 config */
UCHAR Target07Config; /* 1D - Channel 0 Target 7 config */
UCHAR Target08Config; /* 1E - Channel 0 Target 8 config */
UCHAR Target09Config; /* 1F - Channel 0 Target 9 config */
UCHAR Target0AConfig; /* 20 - Channel 0 Target A config */
UCHAR Target0BConfig; /* 21 - Channel 0 Target B config */
UCHAR Target0CConfig; /* 22 - Channel 0 Target C config */
UCHAR Target0DConfig; /* 23 - Channel 0 Target D config */
UCHAR Target0EConfig; /* 24 - Channel 0 Target E config */
UCHAR Target0FConfig; /* 25 - Channel 0 Target F config */
u8 Target00Config; /* 16 - Channel 0 Target 0 config */
u8 Target01Config; /* 17 - Channel 0 Target 1 config */
u8 Target02Config; /* 18 - Channel 0 Target 2 config */
u8 Target03Config; /* 19 - Channel 0 Target 3 config */
u8 Target04Config; /* 1A - Channel 0 Target 4 config */
u8 Target05Config; /* 1B - Channel 0 Target 5 config */
u8 Target06Config; /* 1C - Channel 0 Target 6 config */
u8 Target07Config; /* 1D - Channel 0 Target 7 config */
u8 Target08Config; /* 1E - Channel 0 Target 8 config */
u8 Target09Config; /* 1F - Channel 0 Target 9 config */
u8 Target0AConfig; /* 20 - Channel 0 Target A config */
u8 Target0BConfig; /* 21 - Channel 0 Target B config */
u8 Target0CConfig; /* 22 - Channel 0 Target C config */
u8 Target0DConfig; /* 23 - Channel 0 Target D config */
u8 Target0EConfig; /* 24 - Channel 0 Target E config */
u8 Target0FConfig; /* 25 - Channel 0 Target F config */
UCHAR SCSI1Id; /* 26 - Channel 1 SCSI ID */
UCHAR SCSI1Config; /* 27 - Channel 1 SCSI configuration */
UCHAR SCSI1MaxTags; /* 28 - Channel 1 Maximum tags */
UCHAR SCSI1ResetTime; /* 29 - Channel 1 Reset recovering time */
UCHAR ReservedforChannel1[2]; /* 2A - Reserved */
u8 SCSI1Id; /* 26 - Channel 1 SCSI ID */
u8 SCSI1Config; /* 27 - Channel 1 SCSI configuration */
u8 SCSI1MaxTags; /* 28 - Channel 1 Maximum tags */
u8 SCSI1ResetTime; /* 29 - Channel 1 Reset recovering time */
u8 ReservedforChannel1[2]; /* 2A - Reserved */
/* ----SCSI target Structure ---- */
/* from "CTRL-I SCSI device SetUp menu " */
UCHAR Target10Config; /* 2C - Channel 1 Target 0 config */
UCHAR Target11Config; /* 2D - Channel 1 Target 1 config */
UCHAR Target12Config; /* 2E - Channel 1 Target 2 config */
UCHAR Target13Config; /* 2F - Channel 1 Target 3 config */
UCHAR Target14Config; /* 30 - Channel 1 Target 4 config */
UCHAR Target15Config; /* 31 - Channel 1 Target 5 config */
UCHAR Target16Config; /* 32 - Channel 1 Target 6 config */
UCHAR Target17Config; /* 33 - Channel 1 Target 7 config */
UCHAR Target18Config; /* 34 - Channel 1 Target 8 config */
UCHAR Target19Config; /* 35 - Channel 1 Target 9 config */
UCHAR Target1AConfig; /* 36 - Channel 1 Target A config */
UCHAR Target1BConfig; /* 37 - Channel 1 Target B config */
UCHAR Target1CConfig; /* 38 - Channel 1 Target C config */
UCHAR Target1DConfig; /* 39 - Channel 1 Target D config */
UCHAR Target1EConfig; /* 3A - Channel 1 Target E config */
UCHAR Target1FConfig; /* 3B - Channel 1 Target F config */
UCHAR reserved[3]; /* 3C - Reserved */
u8 Target10Config; /* 2C - Channel 1 Target 0 config */
u8 Target11Config; /* 2D - Channel 1 Target 1 config */
u8 Target12Config; /* 2E - Channel 1 Target 2 config */
u8 Target13Config; /* 2F - Channel 1 Target 3 config */
u8 Target14Config; /* 30 - Channel 1 Target 4 config */
u8 Target15Config; /* 31 - Channel 1 Target 5 config */
u8 Target16Config; /* 32 - Channel 1 Target 6 config */
u8 Target17Config; /* 33 - Channel 1 Target 7 config */
u8 Target18Config; /* 34 - Channel 1 Target 8 config */
u8 Target19Config; /* 35 - Channel 1 Target 9 config */
u8 Target1AConfig; /* 36 - Channel 1 Target A config */
u8 Target1BConfig; /* 37 - Channel 1 Target B config */
u8 Target1CConfig; /* 38 - Channel 1 Target C config */
u8 Target1DConfig; /* 39 - Channel 1 Target D config */
u8 Target1EConfig; /* 3A - Channel 1 Target E config */
u8 Target1FConfig; /* 3B - Channel 1 Target F config */
u8 reserved[3]; /* 3C - Reserved */
/* ---------- CheckSum ---------- */
UCHAR CheckSum; /* 3F - Checksum of NVRam */
} NVRAM, *PNVRAM;
u8 CheckSum; /* 3F - Checksum of NVRam */
};
/* Bios Configuration for nvram->BIOSConfig1 */
#define NBC_BIOSENABLE 0x01 /* BIOS enable */
@ -407,10 +369,3 @@ typedef struct _NVRAM {
#define NCC_RESET_TIME 0x0A /* SCSI RESET recovering time */
#define NTC_DEFAULT (NTC_1GIGA | NTC_NO_WIDESYNC | NTC_DISC_ENABLE)
#define ORC_RD(x,y) (UCHAR)(inb( (int)((ULONG)((ULONG)x+(UCHAR)y)) ))
#define ORC_RDWORD(x,y) (short)(inl((int)((ULONG)((ULONG)x+(UCHAR)y)) ))
#define ORC_RDLONG(x,y) (long)(inl((int)((ULONG)((ULONG)x+(UCHAR)y)) ))
#define ORC_WR( adr,data) outb( (UCHAR)(data), (int)(adr))
#define ORC_WRSHORT(adr,data) outw( (UWORD)(data), (int)(adr))
#define ORC_WRLONG( adr,data) outl( (ULONG)(data), (int)(adr))

View File

@ -0,0 +1,143 @@
/*
* Detection routine for the NCR53c710 based Amiga SCSI Controllers for Linux.
* Amiga Technologies A4000T SCSI controller.
*
* Written 1997 by Alan Hourihane <alanh@fairlite.demon.co.uk>
* plus modifications of the 53c7xx.c driver to support the Amiga.
*
* Rewritten to use 53c700.c by Kars de Jong <jongk@linux-m68k.org>
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <asm/amigahw.h>
#include <asm/amigaints.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_spi.h>
#include "53c700.h"
MODULE_AUTHOR("Alan Hourihane <alanh@fairlite.demon.co.uk> / Kars de Jong <jongk@linux-m68k.org>");
MODULE_DESCRIPTION("Amiga A4000T NCR53C710 driver");
MODULE_LICENSE("GPL");
static struct scsi_host_template a4000t_scsi_driver_template = {
.name = "A4000T builtin SCSI",
.proc_name = "A4000t",
.this_id = 7,
.module = THIS_MODULE,
};
static struct platform_device *a4000t_scsi_device;
#define A4000T_SCSI_ADDR 0xdd0040
static int __devinit a4000t_probe(struct device *dev)
{
struct Scsi_Host * host = NULL;
struct NCR_700_Host_Parameters *hostdata;
if (!(MACH_IS_AMIGA && AMIGAHW_PRESENT(A4000_SCSI)))
goto out;
if (!request_mem_region(A4000T_SCSI_ADDR, 0x1000,
"A4000T builtin SCSI"))
goto out;
hostdata = kmalloc(sizeof(struct NCR_700_Host_Parameters), GFP_KERNEL);
if (hostdata == NULL) {
printk(KERN_ERR "a4000t-scsi: Failed to allocate host data\n");
goto out_release;
}
memset(hostdata, 0, sizeof(struct NCR_700_Host_Parameters));
/* Fill in the required pieces of hostdata */
hostdata->base = (void __iomem *)ZTWO_VADDR(A4000T_SCSI_ADDR);
hostdata->clock = 50;
hostdata->chip710 = 1;
hostdata->dmode_extra = DMODE_FC2;
hostdata->dcntl_extra = EA_710;
/* and register the chip */
host = NCR_700_detect(&a4000t_scsi_driver_template, hostdata, dev);
if (!host) {
printk(KERN_ERR "a4000t-scsi: No host detected; "
"board configuration problem?\n");
goto out_free;
}
host->this_id = 7;
host->base = A4000T_SCSI_ADDR;
host->irq = IRQ_AMIGA_PORTS;
if (request_irq(host->irq, NCR_700_intr, IRQF_SHARED, "a4000t-scsi",
host)) {
printk(KERN_ERR "a4000t-scsi: request_irq failed\n");
goto out_put_host;
}
scsi_scan_host(host);
return 0;
out_put_host:
scsi_host_put(host);
out_free:
kfree(hostdata);
out_release:
release_mem_region(A4000T_SCSI_ADDR, 0x1000);
out:
return -ENODEV;
}
static __devexit int a4000t_device_remove(struct device *dev)
{
struct Scsi_Host *host = dev_to_shost(dev);
struct NCR_700_Host_Parameters *hostdata = shost_priv(host);
scsi_remove_host(host);
NCR_700_release(host);
kfree(hostdata);
free_irq(host->irq, host);
release_mem_region(A4000T_SCSI_ADDR, 0x1000);
return 0;
}
static struct device_driver a4000t_scsi_driver = {
.name = "a4000t-scsi",
.bus = &platform_bus_type,
.probe = a4000t_probe,
.remove = __devexit_p(a4000t_device_remove),
};
static int __init a4000t_scsi_init(void)
{
int err;
err = driver_register(&a4000t_scsi_driver);
if (err)
return err;
a4000t_scsi_device = platform_device_register_simple("a4000t-scsi",
-1, NULL, 0);
if (IS_ERR(a4000t_scsi_device)) {
driver_unregister(&a4000t_scsi_driver);
return PTR_ERR(a4000t_scsi_device);
}
return err;
}
static void __exit a4000t_scsi_exit(void)
{
platform_device_unregister(a4000t_scsi_device);
driver_unregister(&a4000t_scsi_driver);
}
module_init(a4000t_scsi_init);
module_exit(a4000t_scsi_exit);

View File

@ -169,6 +169,18 @@ int acbsize = -1;
module_param(acbsize, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(acbsize, "Request a specific adapter control block (FIB) size. Valid values are 512, 2048, 4096 and 8192. Default is to use suggestion from Firmware.");
int update_interval = 30 * 60;
module_param(update_interval, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(update_interval, "Interval in seconds between time sync updates issued to adapter.");
int check_interval = 24 * 60 * 60;
module_param(check_interval, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(check_interval, "Interval in seconds between adapter health checks.");
int check_reset = 1;
module_param(check_reset, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(check_reset, "If adapter fails health check, reset the adapter.");
int expose_physicals = -1;
module_param(expose_physicals, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(expose_physicals, "Expose physical components of the arrays. -1=protect 0=off, 1=on");
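All of these parameters are registered S_IRUGO|S_IWUSR, so besides being set at load time (e.g. modprobe aacraid check_interval=3600 check_reset=0) they can be read and adjusted at runtime via /sys/module/aacraid/parameters/.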
@ -312,11 +324,10 @@ int aac_get_containers(struct aac_dev *dev)
if (maximum_num_containers < MAXIMUM_NUM_CONTAINERS)
maximum_num_containers = MAXIMUM_NUM_CONTAINERS;
fsa_dev_ptr = kmalloc(sizeof(*fsa_dev_ptr) * maximum_num_containers,
fsa_dev_ptr = kzalloc(sizeof(*fsa_dev_ptr) * maximum_num_containers,
GFP_KERNEL);
if (!fsa_dev_ptr)
return -ENOMEM;
memset(fsa_dev_ptr, 0, sizeof(*fsa_dev_ptr) * maximum_num_containers);
dev->fsa_dev = fsa_dev_ptr;
dev->maximum_num_containers = maximum_num_containers;
@ -344,21 +355,16 @@ static void aac_internal_transfer(struct scsi_cmnd *scsicmd, void *data, unsigne
{
void *buf;
int transfer_len;
struct scatterlist *sg = scsicmd->request_buffer;
struct scatterlist *sg = scsi_sglist(scsicmd);
buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
transfer_len = min(sg->length, len + offset);
if (scsicmd->use_sg) {
buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
transfer_len = min(sg->length, len + offset);
} else {
buf = scsicmd->request_buffer;
transfer_len = min(scsicmd->request_bufflen, len + offset);
}
transfer_len -= offset;
if (buf && transfer_len > 0)
memcpy(buf + offset, data, transfer_len);
if (scsicmd->use_sg)
kunmap_atomic(buf - sg->offset, KM_IRQ0);
kunmap_atomic(buf - sg->offset, KM_IRQ0);
}
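With the single-buffer case gone, the copy always goes through the first scatter-gather element. Note the kunmap_atomic() argument: buf still carries the +sg->offset added at mapping time, so it is subtracted again to hand back exactly what kmap_atomic() returned. In outline (a sketch, not the verbatim function):
	struct scatterlist *sg = scsi_sglist(scsicmd);
	void *buf = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
	int n = min_t(unsigned int, sg->length, len + offset) - offset;
	if (buf && n > 0)
		memcpy(buf + offset, data, n);	/* bounded by segment 0 */
	kunmap_atomic(buf - sg->offset, KM_IRQ0);	/* undo the +sg->offset */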
@ -451,7 +457,7 @@ static int aac_probe_container_callback2(struct scsi_cmnd * scsicmd)
{
struct fsa_dev_info *fsa_dev_ptr = ((struct aac_dev *)(scsicmd->device->host->hostdata))->fsa_dev;
if (fsa_dev_ptr[scmd_id(scsicmd)].valid)
if ((fsa_dev_ptr[scmd_id(scsicmd)].valid & 1))
return aac_scsi_cmd(scsicmd);
scsicmd->result = DID_NO_CONNECT << 16;
@ -459,18 +465,18 @@ static int aac_probe_container_callback2(struct scsi_cmnd * scsicmd)
return 0;
}
static int _aac_probe_container2(void * context, struct fib * fibptr)
static void _aac_probe_container2(void * context, struct fib * fibptr)
{
struct fsa_dev_info *fsa_dev_ptr;
int (*callback)(struct scsi_cmnd *);
struct scsi_cmnd * scsicmd = (struct scsi_cmnd *)context;
if (!aac_valid_context(scsicmd, fibptr))
return 0;
fsa_dev_ptr = ((struct aac_dev *)(scsicmd->device->host->hostdata))->fsa_dev;
if (!aac_valid_context(scsicmd, fibptr))
return;
scsicmd->SCp.Status = 0;
fsa_dev_ptr = fibptr->dev->fsa_dev;
if (fsa_dev_ptr) {
struct aac_mount * dresp = (struct aac_mount *) fib_data(fibptr);
fsa_dev_ptr += scmd_id(scsicmd);
@ -493,10 +499,11 @@ static int _aac_probe_container2(void * context, struct fib * fibptr)
aac_fib_free(fibptr);
callback = (int (*)(struct scsi_cmnd *))(scsicmd->SCp.ptr);
scsicmd->SCp.ptr = NULL;
return (*callback)(scsicmd);
(*callback)(scsicmd);
return;
}
static int _aac_probe_container1(void * context, struct fib * fibptr)
static void _aac_probe_container1(void * context, struct fib * fibptr)
{
struct scsi_cmnd * scsicmd;
struct aac_mount * dresp;
@ -506,13 +513,14 @@ static int _aac_probe_container1(void * context, struct fib * fibptr)
dresp = (struct aac_mount *) fib_data(fibptr);
dresp->mnt[0].capacityhigh = 0;
if ((le32_to_cpu(dresp->status) != ST_OK) ||
(le32_to_cpu(dresp->mnt[0].vol) != CT_NONE))
return _aac_probe_container2(context, fibptr);
(le32_to_cpu(dresp->mnt[0].vol) != CT_NONE)) {
_aac_probe_container2(context, fibptr);
return;
}
scsicmd = (struct scsi_cmnd *) context;
scsicmd->SCp.phase = AAC_OWNER_MIDLEVEL;
if (!aac_valid_context(scsicmd, fibptr))
return 0;
return;
aac_fib_init(fibptr);
@ -527,21 +535,18 @@ static int _aac_probe_container1(void * context, struct fib * fibptr)
sizeof(struct aac_query_mount),
FsaNormal,
0, 1,
(fib_callback) _aac_probe_container2,
_aac_probe_container2,
(void *) scsicmd);
/*
* Check that the command queued to the controller
*/
if (status == -EINPROGRESS) {
if (status == -EINPROGRESS)
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
return 0;
}
if (status < 0) {
else if (status < 0) {
/* Inherit results from VM_NameServe, if any */
dresp->status = cpu_to_le32(ST_OK);
return _aac_probe_container2(context, fibptr);
_aac_probe_container2(context, fibptr);
}
return 0;
}
static int _aac_probe_container(struct scsi_cmnd * scsicmd, int (*callback)(struct scsi_cmnd *))
@ -566,7 +571,7 @@ static int _aac_probe_container(struct scsi_cmnd * scsicmd, int (*callback)(stru
sizeof(struct aac_query_mount),
FsaNormal,
0, 1,
(fib_callback) _aac_probe_container1,
_aac_probe_container1,
(void *) scsicmd);
/*
* Check that the command queued to the controller
@ -620,7 +625,7 @@ int aac_probe_container(struct aac_dev *dev, int cid)
return -ENOMEM;
}
scsicmd->list.next = NULL;
scsicmd->scsi_done = (void (*)(struct scsi_cmnd*))_aac_probe_container1;
scsicmd->scsi_done = (void (*)(struct scsi_cmnd*))aac_probe_container_callback1;
scsicmd->device = scsidev;
scsidev->sdev_state = 0;
@ -825,7 +830,7 @@ static int aac_read_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
readcmd->block[1] = cpu_to_le32((u32)((lba&0xffffffff00000000LL)>>32));
readcmd->count = cpu_to_le32(count<<9);
readcmd->cid = cpu_to_le16(scmd_id(cmd));
readcmd->flags = cpu_to_le16(1);
readcmd->flags = cpu_to_le16(IO_TYPE_READ);
readcmd->bpTotal = 0;
readcmd->bpComplete = 0;
@ -904,7 +909,7 @@ static int aac_read_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32
(void *) cmd);
}
static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count)
static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count, int fua)
{
u16 fibsize;
struct aac_raw_io *writecmd;
@ -914,7 +919,9 @@ static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
writecmd->block[1] = cpu_to_le32((u32)((lba&0xffffffff00000000LL)>>32));
writecmd->count = cpu_to_le32(count<<9);
writecmd->cid = cpu_to_le16(scmd_id(cmd));
writecmd->flags = 0;
writecmd->flags = fua ?
cpu_to_le16(IO_TYPE_WRITE|IO_SUREWRITE) :
cpu_to_le16(IO_TYPE_WRITE);
writecmd->bpTotal = 0;
writecmd->bpComplete = 0;
@ -933,7 +940,7 @@ static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
(void *) cmd);
}
static int aac_write_block64(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count)
static int aac_write_block64(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count, int fua)
{
u16 fibsize;
struct aac_write64 *writecmd;
@ -964,7 +971,7 @@ static int aac_write_block64(struct fib * fib, struct scsi_cmnd * cmd, u64 lba,
(void *) cmd);
}
static int aac_write_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count)
static int aac_write_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count, int fua)
{
u16 fibsize;
struct aac_write *writecmd;
@ -1041,7 +1048,7 @@ static int aac_scsi_64(struct fib * fib, struct scsi_cmnd * cmd)
struct aac_srb * srbcmd = aac_scsi_common(fib, cmd);
aac_build_sg64(cmd, (struct sgmap64*) &srbcmd->sg);
srbcmd->count = cpu_to_le32(cmd->request_bufflen);
srbcmd->count = cpu_to_le32(scsi_bufflen(cmd));
memset(srbcmd->cdb, 0, sizeof(srbcmd->cdb));
memcpy(srbcmd->cdb, cmd->cmnd, cmd->cmd_len);
@ -1069,7 +1076,7 @@ static int aac_scsi_32(struct fib * fib, struct scsi_cmnd * cmd)
struct aac_srb * srbcmd = aac_scsi_common(fib, cmd);
aac_build_sg(cmd, (struct sgmap*)&srbcmd->sg);
srbcmd->count = cpu_to_le32(cmd->request_bufflen);
srbcmd->count = cpu_to_le32(scsi_bufflen(cmd));
memset(srbcmd->cdb, 0, sizeof(srbcmd->cdb));
memcpy(srbcmd->cdb, cmd->cmnd, cmd->cmd_len);
@ -1172,6 +1179,7 @@ int aac_get_adapter_info(struct aac_dev* dev)
}
if (!dev->in_reset) {
char buffer[16];
tmp = le32_to_cpu(dev->adapter_info.kernelrev);
printk(KERN_INFO "%s%d: kernel %d.%d-%d[%d] %.*s\n",
dev->name,
@ -1192,16 +1200,23 @@ int aac_get_adapter_info(struct aac_dev* dev)
dev->name, dev->id,
tmp>>24,(tmp>>16)&0xff,tmp&0xff,
le32_to_cpu(dev->adapter_info.biosbuild));
if (le32_to_cpu(dev->adapter_info.serial[0]) != 0xBAD0)
printk(KERN_INFO "%s%d: serial %x\n",
dev->name, dev->id,
le32_to_cpu(dev->adapter_info.serial[0]));
buffer[0] = '\0';
if (aac_show_serial_number(
shost_to_class(dev->scsi_host_ptr), buffer))
printk(KERN_INFO "%s%d: serial %s",
dev->name, dev->id, buffer);
if (dev->supplement_adapter_info.VpdInfo.Tsid[0]) {
printk(KERN_INFO "%s%d: TSID %.*s\n",
dev->name, dev->id,
(int)sizeof(dev->supplement_adapter_info.VpdInfo.Tsid),
dev->supplement_adapter_info.VpdInfo.Tsid);
}
if (!check_reset ||
(dev->supplement_adapter_info.SupportedOptions2 &
le32_to_cpu(AAC_OPTION_IGNORE_RESET))) {
printk(KERN_INFO "%s%d: Reset Adapter Ignored\n",
dev->name, dev->id);
}
}
dev->nondasd_support = 0;
@ -1332,7 +1347,7 @@ static void io_callback(void *context, struct fib * fibptr)
if (!aac_valid_context(scsicmd, fibptr))
return;
dev = (struct aac_dev *)scsicmd->device->host->hostdata;
dev = fibptr->dev;
cid = scmd_id(scsicmd);
if (nblank(dprintk(x))) {
@ -1371,16 +1386,9 @@ static void io_callback(void *context, struct fib * fibptr)
}
BUG_ON(fibptr == NULL);
if(scsicmd->use_sg)
pci_unmap_sg(dev->pdev,
(struct scatterlist *)scsicmd->request_buffer,
scsicmd->use_sg,
scsicmd->sc_data_direction);
else if(scsicmd->request_bufflen)
pci_unmap_single(dev->pdev, scsicmd->SCp.dma_handle,
scsicmd->request_bufflen,
scsicmd->sc_data_direction);
scsi_dma_unmap(scsicmd);
readreply = (struct aac_read_reply *)fib_data(fibptr);
if (le32_to_cpu(readreply->status) == ST_OK)
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
@ -1498,6 +1506,7 @@ static int aac_write(struct scsi_cmnd * scsicmd)
{
u64 lba;
u32 count;
int fua;
int status;
struct aac_dev *dev;
struct fib * cmd_fibcontext;
@ -1512,6 +1521,7 @@ static int aac_write(struct scsi_cmnd * scsicmd)
count = scsicmd->cmnd[4];
if (count == 0)
count = 256;
fua = 0;
} else if (scsicmd->cmnd[0] == WRITE_16) { /* 16 byte command */
dprintk((KERN_DEBUG "aachba: received a write(16) command on id %d.\n", scmd_id(scsicmd)));
@ -1524,6 +1534,7 @@ static int aac_write(struct scsi_cmnd * scsicmd)
(scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
count = (scsicmd->cmnd[10] << 24) | (scsicmd->cmnd[11] << 16) |
(scsicmd->cmnd[12] << 8) | scsicmd->cmnd[13];
fua = scsicmd->cmnd[1] & 0x8;
} else if (scsicmd->cmnd[0] == WRITE_12) { /* 12 byte command */
dprintk((KERN_DEBUG "aachba: received a write(12) command on id %d.\n", scmd_id(scsicmd)));
@ -1531,10 +1542,12 @@ static int aac_write(struct scsi_cmnd * scsicmd)
| (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
count = (scsicmd->cmnd[6] << 24) | (scsicmd->cmnd[7] << 16)
| (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
fua = scsicmd->cmnd[1] & 0x8;
} else {
dprintk((KERN_DEBUG "aachba: received a write(10) command on id %d.\n", scmd_id(scsicmd)));
lba = ((u64)scsicmd->cmnd[2] << 24) | (scsicmd->cmnd[3] << 16) | (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
count = (scsicmd->cmnd[7] << 8) | scsicmd->cmnd[8];
fua = scsicmd->cmnd[1] & 0x8;
}
dprintk((KERN_DEBUG "aac_write[cpu %d]: lba = %llu, t = %ld.\n",
smp_processor_id(), (unsigned long long)lba, jiffies));
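The new fua plumbing reads straight out of the CDB: for WRITE(10), WRITE(12) and WRITE(16) the FUA flag is byte 1, bit 3, while the 6-byte WRITE predates the bit entirely. A sketch of the decode (hypothetical helper, same masks as the code above):
static int cdb_fua(const struct scsi_cmnd *cmd)
{
	switch (cmd->cmnd[0]) {
	case WRITE_10:
	case WRITE_12:
	case WRITE_16:
		return cmd->cmnd[1] & 0x08;	/* FUA: bypass the write cache */
	default:
		return 0;			/* WRITE(6) has no FUA bit */
	}
}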
@ -1549,7 +1562,7 @@ static int aac_write(struct scsi_cmnd * scsicmd)
return 0;
}
status = aac_adapter_write(cmd_fibcontext, scsicmd, lba, count);
status = aac_adapter_write(cmd_fibcontext, scsicmd, lba, count, fua);
/*
* Check that the command queued to the controller
@ -1592,7 +1605,7 @@ static void synchronize_callback(void *context, struct fib *fibptr)
COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
else {
struct scsi_device *sdev = cmd->device;
struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
struct aac_dev *dev = fibptr->dev;
u32 cid = sdev_id(sdev);
printk(KERN_WARNING
"synchronize_callback: synchronize failed, status = %d\n",
@ -1699,7 +1712,7 @@ static int aac_synchronize(struct scsi_cmnd *scsicmd)
int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
{
u32 cid = 0;
u32 cid;
struct Scsi_Host *host = scsicmd->device->host;
struct aac_dev *dev = (struct aac_dev *)host->hostdata;
struct fsa_dev_info *fsa_dev_ptr = dev->fsa_dev;
@ -1711,15 +1724,15 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
* Test does not apply to ID 16, the pseudo id for the controller
* itself.
*/
if (scmd_id(scsicmd) != host->this_id) {
if ((scmd_channel(scsicmd) == CONTAINER_CHANNEL)) {
if((scmd_id(scsicmd) >= dev->maximum_num_containers) ||
cid = scmd_id(scsicmd);
if (cid != host->this_id) {
if (scmd_channel(scsicmd) == CONTAINER_CHANNEL) {
if((cid >= dev->maximum_num_containers) ||
(scsicmd->device->lun != 0)) {
scsicmd->result = DID_NO_CONNECT << 16;
scsicmd->scsi_done(scsicmd);
return 0;
}
cid = scmd_id(scsicmd);
/*
* If the target container doesn't exist, it may have
@ -1782,7 +1795,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
{
struct inquiry_data inq_data;
dprintk((KERN_DEBUG "INQUIRY command, ID: %d.\n", scmd_id(scsicmd)));
dprintk((KERN_DEBUG "INQUIRY command, ID: %d.\n", cid));
memset(&inq_data, 0, sizeof (struct inquiry_data));
inq_data.inqd_ver = 2; /* claim compliance to SCSI-2 */
@ -1794,7 +1807,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
* Set the Vendor, Product, and Revision Level
* see: <vendor>.c i.e. aac.c
*/
if (scmd_id(scsicmd) == host->this_id) {
if (cid == host->this_id) {
setinqstr(dev, (void *) (inq_data.inqd_vid), ARRAY_SIZE(container_types));
inq_data.inqd_pdt = INQD_PDT_PROC; /* Processor device */
aac_internal_transfer(scsicmd, &inq_data, 0, sizeof(inq_data));
@ -1886,15 +1899,29 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
case MODE_SENSE:
{
char mode_buf[4];
char mode_buf[7];
int mode_buf_length = 4;
dprintk((KERN_DEBUG "MODE SENSE command.\n"));
mode_buf[0] = 3; /* Mode data length */
mode_buf[1] = 0; /* Medium type - default */
mode_buf[2] = 0; /* Device-specific param, bit 8: 0/1 = write enabled/protected */
mode_buf[2] = 0; /* Device-specific param,
bit 8: 0/1 = write enabled/protected
bit 4: 0/1 = FUA enabled */
if (dev->raw_io_interface)
mode_buf[2] = 0x10;
mode_buf[3] = 0; /* Block descriptor length */
aac_internal_transfer(scsicmd, mode_buf, 0, sizeof(mode_buf));
if (((scsicmd->cmnd[2] & 0x3f) == 8) ||
((scsicmd->cmnd[2] & 0x3f) == 0x3f)) {
mode_buf[0] = 6;
mode_buf[4] = 8;
mode_buf[5] = 1;
mode_buf[6] = 0x04; /* WCE */
mode_buf_length = 7;
if (mode_buf_length > scsicmd->cmnd[4])
mode_buf_length = scsicmd->cmnd[4];
}
aac_internal_transfer(scsicmd, mode_buf, 0, mode_buf_length);
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
scsicmd->scsi_done(scsicmd);
@ -1902,18 +1929,33 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
}
case MODE_SENSE_10:
{
char mode_buf[8];
char mode_buf[11];
int mode_buf_length = 8;
dprintk((KERN_DEBUG "MODE SENSE 10 byte command.\n"));
mode_buf[0] = 0; /* Mode data length (MSB) */
mode_buf[1] = 6; /* Mode data length (LSB) */
mode_buf[2] = 0; /* Medium type - default */
mode_buf[3] = 0; /* Device-specific param, bit 8: 0/1 = write enabled/protected */
mode_buf[3] = 0; /* Device-specific param,
bit 8: 0/1 = write enabled/protected
bit 4: 0/1 = FUA enabled */
if (dev->raw_io_interface)
mode_buf[3] = 0x10;
mode_buf[4] = 0; /* reserved */
mode_buf[5] = 0; /* reserved */
mode_buf[6] = 0; /* Block descriptor length (MSB) */
mode_buf[7] = 0; /* Block descriptor length (LSB) */
aac_internal_transfer(scsicmd, mode_buf, 0, sizeof(mode_buf));
if (((scsicmd->cmnd[2] & 0x3f) == 8) ||
((scsicmd->cmnd[2] & 0x3f) == 0x3f)) {
mode_buf[1] = 9;
mode_buf[8] = 8;
mode_buf[9] = 1;
mode_buf[10] = 0x04; /* WCE */
mode_buf_length = 11;
if (mode_buf_length > scsicmd->cmnd[8])
mode_buf_length = scsicmd->cmnd[8];
}
aac_internal_transfer(scsicmd, mode_buf, 0, mode_buf_length);
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
scsicmd->scsi_done(scsicmd);
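Both MODE SENSE flavours now return the same information: a mode parameter header whose device-specific byte advertises DPOFUA (bit 4) when the raw I/O interface is active, optionally followed by a minimal caching mode page with WCE set. For the 6-byte case the layout works out as below (a sketch with hypothetical names, mirroring the code above):
	u8 buf[7];
	int len = 4;
	buf[0] = 3;			/* mode data length, excluding this byte */
	buf[1] = 0;			/* medium type: default */
	buf[2] = raw_io ? 0x10 : 0;	/* device-specific: DPOFUA */
	buf[3] = 0;			/* block descriptor length */
	if (page == 8 || page == 0x3f) {	/* caching page wanted? */
		buf[0] = 6;
		buf[4] = 8;		/* page code: caching */
		buf[5] = 1;		/* page length */
		buf[6] = 0x04;		/* WCE: write cache enabled */
		len = min(7, allocation_length);	/* clamp to CDB byte 4 */
	}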
@ -2136,28 +2178,21 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
if (!aac_valid_context(scsicmd, fibptr))
return;
dev = (struct aac_dev *)scsicmd->device->host->hostdata;
BUG_ON(fibptr == NULL);
dev = fibptr->dev;
srbreply = (struct aac_srb_reply *) fib_data(fibptr);
scsicmd->sense_buffer[0] = '\0'; /* Initialize sense valid flag to false */
/*
* Calculate resid for sg
*/
scsicmd->resid = scsicmd->request_bufflen -
le32_to_cpu(srbreply->data_xfer_length);
if(scsicmd->use_sg)
pci_unmap_sg(dev->pdev,
(struct scatterlist *)scsicmd->request_buffer,
scsicmd->use_sg,
scsicmd->sc_data_direction);
else if(scsicmd->request_bufflen)
pci_unmap_single(dev->pdev, scsicmd->SCp.dma_handle, scsicmd->request_bufflen,
scsicmd->sc_data_direction);
scsi_set_resid(scsicmd, scsi_bufflen(scsicmd)
- le32_to_cpu(srbreply->data_xfer_length));
scsi_dma_unmap(scsicmd);
/*
* First check the fib status
@ -2233,7 +2268,7 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
break;
case SRB_STATUS_BUSY:
scsicmd->result = DID_NO_CONNECT << 16 | COMMAND_COMPLETE << 8;
scsicmd->result = DID_BUS_BUSY << 16 | COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_BUS_RESET:
@ -2343,34 +2378,33 @@ static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* psg)
{
struct aac_dev *dev;
unsigned long byte_count = 0;
int nseg;
dev = (struct aac_dev *)scsicmd->device->host->hostdata;
// Get rid of old data
psg->count = 0;
psg->sg[0].addr = 0;
psg->sg[0].count = 0;
if (scsicmd->use_sg) {
psg->sg[0].count = 0;
nseg = scsi_dma_map(scsicmd);
BUG_ON(nseg < 0);
if (nseg) {
struct scatterlist *sg;
int i;
int sg_count;
sg = (struct scatterlist *) scsicmd->request_buffer;
sg_count = pci_map_sg(dev->pdev, sg, scsicmd->use_sg,
scsicmd->sc_data_direction);
psg->count = cpu_to_le32(sg_count);
psg->count = cpu_to_le32(nseg);
for (i = 0; i < sg_count; i++) {
scsi_for_each_sg(scsicmd, sg, nseg, i) {
psg->sg[i].addr = cpu_to_le32(sg_dma_address(sg));
psg->sg[i].count = cpu_to_le32(sg_dma_len(sg));
byte_count += sg_dma_len(sg);
sg++;
}
/* hba wants the size to be exact */
if(byte_count > scsicmd->request_bufflen){
u32 temp = le32_to_cpu(psg->sg[i-1].count) -
(byte_count - scsicmd->request_bufflen);
if (byte_count > scsi_bufflen(scsicmd)) {
u32 temp = le32_to_cpu(psg->sg[i-1].count) -
(byte_count - scsi_bufflen(scsicmd));
psg->sg[i-1].count = cpu_to_le32(temp);
byte_count = scsicmd->request_bufflen;
byte_count = scsi_bufflen(scsicmd);
}
/* Check for command underflow */
if(scsicmd->underflow && (byte_count < scsicmd->underflow)){
@ -2378,18 +2412,6 @@ static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* psg)
byte_count, scsicmd->underflow);
}
}
else if(scsicmd->request_bufflen) {
u32 addr;
scsicmd->SCp.dma_handle = pci_map_single(dev->pdev,
scsicmd->request_buffer,
scsicmd->request_bufflen,
scsicmd->sc_data_direction);
addr = scsicmd->SCp.dma_handle;
psg->count = cpu_to_le32(1);
psg->sg[0].addr = cpu_to_le32(addr);
psg->sg[0].count = cpu_to_le32(scsicmd->request_bufflen);
byte_count = scsicmd->request_bufflen;
}
return byte_count;
}
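All three SG builders in this driver now follow one template: scsi_dma_map() replaces the open-coded pci_map_sg()/pci_map_single() split (a negative return would be a programming error at this point, hence the BUG_ON), and the matching scsi_dma_unmap() sits in the completion callbacks above. The skeleton, with hw_add_sge() as a hypothetical stand-in for filling the adapter's SG table:
	struct scatterlist *sg;
	int i, nseg;
	nseg = scsi_dma_map(cmd);	/* 0 = no data, <0 = mapping failure */
	BUG_ON(nseg < 0);
	scsi_for_each_sg(cmd, sg, nseg, i) {
		hw_add_sge(sg_dma_address(sg), sg_dma_len(sg));
		byte_count += sg_dma_len(sg);
	}
	/* then trim the last entry so the total equals scsi_bufflen(cmd) */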
@ -2399,6 +2421,7 @@ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* p
struct aac_dev *dev;
unsigned long byte_count = 0;
u64 addr;
int nseg;
dev = (struct aac_dev *)scsicmd->device->host->hostdata;
// Get rid of old data
@ -2406,31 +2429,28 @@ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* p
psg->sg[0].addr[0] = 0;
psg->sg[0].addr[1] = 0;
psg->sg[0].count = 0;
if (scsicmd->use_sg) {
nseg = scsi_dma_map(scsicmd);
BUG_ON(nseg < 0);
if (nseg) {
struct scatterlist *sg;
int i;
int sg_count;
sg = (struct scatterlist *) scsicmd->request_buffer;
sg_count = pci_map_sg(dev->pdev, sg, scsicmd->use_sg,
scsicmd->sc_data_direction);
for (i = 0; i < sg_count; i++) {
scsi_for_each_sg(scsicmd, sg, nseg, i) {
int count = sg_dma_len(sg);
addr = sg_dma_address(sg);
psg->sg[i].addr[0] = cpu_to_le32(addr & 0xffffffff);
psg->sg[i].addr[1] = cpu_to_le32(addr>>32);
psg->sg[i].count = cpu_to_le32(count);
byte_count += count;
sg++;
}
psg->count = cpu_to_le32(sg_count);
psg->count = cpu_to_le32(nseg);
/* hba wants the size to be exact */
if(byte_count > scsicmd->request_bufflen){
u32 temp = le32_to_cpu(psg->sg[i-1].count) -
(byte_count - scsicmd->request_bufflen);
if (byte_count > scsi_bufflen(scsicmd)) {
u32 temp = le32_to_cpu(psg->sg[i-1].count) -
(byte_count - scsi_bufflen(scsicmd));
psg->sg[i-1].count = cpu_to_le32(temp);
byte_count = scsicmd->request_bufflen;
byte_count = scsi_bufflen(scsicmd);
}
/* Check for command underflow */
if(scsicmd->underflow && (byte_count < scsicmd->underflow)){
@ -2438,26 +2458,13 @@ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* p
byte_count, scsicmd->underflow);
}
}
else if(scsicmd->request_bufflen) {
scsicmd->SCp.dma_handle = pci_map_single(dev->pdev,
scsicmd->request_buffer,
scsicmd->request_bufflen,
scsicmd->sc_data_direction);
addr = scsicmd->SCp.dma_handle;
psg->count = cpu_to_le32(1);
psg->sg[0].addr[0] = cpu_to_le32(addr & 0xffffffff);
psg->sg[0].addr[1] = cpu_to_le32(addr >> 32);
psg->sg[0].count = cpu_to_le32(scsicmd->request_bufflen);
byte_count = scsicmd->request_bufflen;
}
return byte_count;
}
static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* psg)
{
struct Scsi_Host *host = scsicmd->device->host;
struct aac_dev *dev = (struct aac_dev *)host->hostdata;
unsigned long byte_count = 0;
int nseg;
// Get rid of old data
psg->count = 0;
@ -2467,16 +2474,14 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw*
psg->sg[0].addr[1] = 0;
psg->sg[0].count = 0;
psg->sg[0].flags = 0;
if (scsicmd->use_sg) {
nseg = scsi_dma_map(scsicmd);
BUG_ON(nseg < 0);
if (nseg) {
struct scatterlist *sg;
int i;
int sg_count;
sg = (struct scatterlist *) scsicmd->request_buffer;
sg_count = pci_map_sg(dev->pdev, sg, scsicmd->use_sg,
scsicmd->sc_data_direction);
for (i = 0; i < sg_count; i++) {
scsi_for_each_sg(scsicmd, sg, nseg, i) {
int count = sg_dma_len(sg);
u64 addr = sg_dma_address(sg);
psg->sg[i].next = 0;
@ -2486,15 +2491,14 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw*
psg->sg[i].count = cpu_to_le32(count);
psg->sg[i].flags = 0;
byte_count += count;
sg++;
}
psg->count = cpu_to_le32(sg_count);
psg->count = cpu_to_le32(nseg);
/* hba wants the size to be exact */
if(byte_count > scsicmd->request_bufflen){
u32 temp = le32_to_cpu(psg->sg[i-1].count) -
(byte_count - scsicmd->request_bufflen);
if (byte_count > scsi_bufflen(scsicmd)) {
u32 temp = le32_to_cpu(psg->sg[i-1].count) -
(byte_count - scsi_bufflen(scsicmd));
psg->sg[i-1].count = cpu_to_le32(temp);
byte_count = scsicmd->request_bufflen;
byte_count = scsi_bufflen(scsicmd);
}
/* Check for command underflow */
if(scsicmd->underflow && (byte_count < scsicmd->underflow)){
@ -2502,24 +2506,6 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw*
byte_count, scsicmd->underflow);
}
}
else if(scsicmd->request_bufflen) {
int count;
u64 addr;
scsicmd->SCp.dma_handle = pci_map_single(dev->pdev,
scsicmd->request_buffer,
scsicmd->request_bufflen,
scsicmd->sc_data_direction);
addr = scsicmd->SCp.dma_handle;
count = scsicmd->request_bufflen;
psg->count = cpu_to_le32(1);
psg->sg[0].next = 0;
psg->sg[0].prev = 0;
psg->sg[0].addr[1] = cpu_to_le32((u32)(addr>>32));
psg->sg[0].addr[0] = cpu_to_le32((u32)(addr & 0xffffffff));
psg->sg[0].count = cpu_to_le32(count);
psg->sg[0].flags = 0;
byte_count = scsicmd->request_bufflen;
}
return byte_count;
}

View File

@ -12,8 +12,8 @@
*----------------------------------------------------------------------------*/
#ifndef AAC_DRIVER_BUILD
# define AAC_DRIVER_BUILD 2437
# define AAC_DRIVER_BRANCH "-mh4"
# define AAC_DRIVER_BUILD 2447
# define AAC_DRIVER_BRANCH "-ms"
#endif
#define MAXIMUM_NUM_CONTAINERS 32
@ -464,12 +464,12 @@ struct adapter_ops
int (*adapter_restart)(struct aac_dev *dev, int bled);
/* Transport operations */
int (*adapter_ioremap)(struct aac_dev * dev, u32 size);
irqreturn_t (*adapter_intr)(int irq, void *dev_id);
irq_handler_t adapter_intr;
/* Packet operations */
int (*adapter_deliver)(struct fib * fib);
int (*adapter_bounds)(struct aac_dev * dev, struct scsi_cmnd * cmd, u64 lba);
int (*adapter_read)(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count);
int (*adapter_write)(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count);
int (*adapter_write)(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32 count, int fua);
int (*adapter_scsi)(struct fib * fib, struct scsi_cmnd * cmd);
/* Administrative operations */
int (*adapter_comm)(struct aac_dev * dev, int comm);
@ -860,10 +860,12 @@ struct aac_supplement_adapter_info
__le32 FlashFirmwareBootBuild;
u8 MfgPcbaSerialNo[12];
u8 MfgWWNName[8];
__le32 MoreFeatureBits;
__le32 SupportedOptions2;
__le32 ReservedGrowth[1];
};
#define AAC_FEATURE_FALCON 0x00000010
#define AAC_OPTION_MU_RESET 0x00000001
#define AAC_OPTION_IGNORE_RESET 0x00000002
#define AAC_SIS_VERSION_V3 3
#define AAC_SIS_SLOT_UNKNOWN 0xFF
@ -1054,8 +1056,8 @@ struct aac_dev
#define aac_adapter_read(fib,cmd,lba,count) \
((fib)->dev)->a_ops.adapter_read(fib,cmd,lba,count)
#define aac_adapter_write(fib,cmd,lba,count) \
((fib)->dev)->a_ops.adapter_write(fib,cmd,lba,count)
#define aac_adapter_write(fib,cmd,lba,count,fua) \
((fib)->dev)->a_ops.adapter_write(fib,cmd,lba,count,fua)
#define aac_adapter_scsi(fib,cmd) \
((fib)->dev)->a_ops.adapter_scsi(fib,cmd)
@ -1213,6 +1215,9 @@ struct aac_write64
__le32 block;
__le16 pad;
__le16 flags;
#define IO_TYPE_WRITE 0x00000000
#define IO_TYPE_READ 0x00000001
#define IO_SUREWRITE 0x00000008
struct sgmap64 sg; // Must be last in struct because it is variable
};
struct aac_write_reply
@ -1257,6 +1262,19 @@ struct aac_synchronize_reply {
u8 data[16];
};
#define CT_PAUSE_IO 65
#define CT_RELEASE_IO 66
struct aac_pause {
__le32 command; /* VM_ContainerConfig */
__le32 type; /* CT_PAUSE_IO */
__le32 timeout; /* 10ms ticks */
__le32 min;
__le32 noRescan;
__le32 parm3;
__le32 parm4;
__le32 count; /* sizeof(((struct aac_pause_reply *)NULL)->data) */
};
struct aac_srb
{
__le32 function;
@ -1804,6 +1822,10 @@ int aac_get_config_status(struct aac_dev *dev, int commit_flag);
int aac_get_containers(struct aac_dev *dev);
int aac_scsi_cmd(struct scsi_cmnd *cmd);
int aac_dev_ioctl(struct aac_dev *dev, int cmd, void __user *arg);
#ifndef shost_to_class
#define shost_to_class(shost) &shost->shost_classdev
#endif
ssize_t aac_show_serial_number(struct class_device *class_dev, char *buf);
int aac_do_ioctl(struct aac_dev * dev, int cmd, void __user *arg);
int aac_rx_init(struct aac_dev *dev);
int aac_rkt_init(struct aac_dev *dev);
@ -1813,6 +1835,7 @@ int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw
unsigned int aac_response_normal(struct aac_queue * q);
unsigned int aac_command_normal(struct aac_queue * q);
unsigned int aac_intr_normal(struct aac_dev * dev, u32 Index);
int aac_reset_adapter(struct aac_dev * dev, int forced);
int aac_check_health(struct aac_dev * dev);
int aac_command_thread(void *data);
int aac_close_fib_context(struct aac_dev * dev, struct aac_fib_context *fibctx);
@ -1832,3 +1855,6 @@ extern int aif_timeout;
extern int expose_physicals;
extern int aac_reset_devices;
extern int aac_commit;
extern int update_interval;
extern int check_interval;
extern int check_reset;

View File

@ -1021,7 +1021,7 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
}
static int _aac_reset_adapter(struct aac_dev *aac)
static int _aac_reset_adapter(struct aac_dev *aac, int forced)
{
int index, quirks;
int retval;
@ -1029,25 +1029,32 @@ static int _aac_reset_adapter(struct aac_dev *aac)
struct scsi_device *dev;
struct scsi_cmnd *command;
struct scsi_cmnd *command_list;
int jafo = 0;
/*
* Assumptions:
* - host is locked.
* - host is locked, unless called by the aacraid thread.
* (a matter of convenience, due to legacy issues surrounding
* eh_host_adapter_reset).
* - in_reset is asserted, so no new i/o is getting to the
* card.
* - The card is dead.
* - The card is dead, or will be very shortly ;-/ so no new
* commands are completing in the interrupt service.
*/
host = aac->scsi_host_ptr;
scsi_block_requests(host);
aac_adapter_disable_int(aac);
spin_unlock_irq(host->host_lock);
kthread_stop(aac->thread);
if (aac->thread->pid != current->pid) {
spin_unlock_irq(host->host_lock);
kthread_stop(aac->thread);
jafo = 1;
}
/*
* A positive health value means the adapter is in a known DEAD PANIC
* state and could be reset to `try again'.
*/
retval = aac_adapter_restart(aac, aac_adapter_check_health(aac));
retval = aac_adapter_restart(aac, forced ? 0 : aac_adapter_check_health(aac));
if (retval)
goto out;
@ -1104,10 +1111,12 @@ static int _aac_reset_adapter(struct aac_dev *aac)
if (aac_get_driver_ident(index)->quirks & AAC_QUIRK_31BIT)
if ((retval = pci_set_dma_mask(aac->pdev, DMA_32BIT_MASK)))
goto out;
aac->thread = kthread_run(aac_command_thread, aac, aac->name);
if (IS_ERR(aac->thread)) {
retval = PTR_ERR(aac->thread);
goto out;
if (jafo) {
aac->thread = kthread_run(aac_command_thread, aac, aac->name);
if (IS_ERR(aac->thread)) {
retval = PTR_ERR(aac->thread);
goto out;
}
}
(void)aac_get_adapter_info(aac);
quirks = aac_get_driver_ident(index)->quirks;
@ -1150,7 +1159,98 @@ static int _aac_reset_adapter(struct aac_dev *aac)
out:
aac->in_reset = 0;
scsi_unblock_requests(host);
spin_lock_irq(host->host_lock);
if (jafo) {
spin_lock_irq(host->host_lock);
}
return retval;
}
int aac_reset_adapter(struct aac_dev * aac, int forced)
{
unsigned long flagv = 0;
int retval;
struct Scsi_Host * host;
if (spin_trylock_irqsave(&aac->fib_lock, flagv) == 0)
return -EBUSY;
if (aac->in_reset) {
spin_unlock_irqrestore(&aac->fib_lock, flagv);
return -EBUSY;
}
aac->in_reset = 1;
spin_unlock_irqrestore(&aac->fib_lock, flagv);
/*
* Wait for all commands to complete to this specific
* adapter (block a maximum of 60 seconds). Although not necessary,
* it does make us a good storage citizen.
*/
host = aac->scsi_host_ptr;
scsi_block_requests(host);
if (forced < 2) for (retval = 60; retval; --retval) {
struct scsi_device * dev;
struct scsi_cmnd * command;
int active = 0;
__shost_for_each_device(dev, host) {
spin_lock_irqsave(&dev->list_lock, flagv);
list_for_each_entry(command, &dev->cmd_list, list) {
if (command->SCp.phase == AAC_OWNER_FIRMWARE) {
active++;
break;
}
}
spin_unlock_irqrestore(&dev->list_lock, flagv);
if (active)
break;
}
/*
* We can exit if all the commands are complete
*/
if (active == 0)
break;
ssleep(1);
}
/* Quiesce build, flush cache, write through mode */
aac_send_shutdown(aac);
spin_lock_irqsave(host->host_lock, flagv);
retval = _aac_reset_adapter(aac, forced);
spin_unlock_irqrestore(host->host_lock, flagv);
if (retval == -ENODEV) {
/* Unwind aac_send_shutdown() IOP_RESET unsupported/disabled */
struct fib * fibctx = aac_fib_alloc(aac);
if (fibctx) {
struct aac_pause *cmd;
int status;
aac_fib_init(fibctx);
cmd = (struct aac_pause *) fib_data(fibctx);
cmd->command = cpu_to_le32(VM_ContainerConfig);
cmd->type = cpu_to_le32(CT_PAUSE_IO);
cmd->timeout = cpu_to_le32(1);
cmd->min = cpu_to_le32(1);
cmd->noRescan = cpu_to_le32(1);
cmd->count = cpu_to_le32(0);
status = aac_fib_send(ContainerCommand,
fibctx,
sizeof(struct aac_pause),
FsaNormal,
-2 /* Timeout silently */, 1,
NULL, NULL);
if (status >= 0)
aac_fib_complete(fibctx);
aac_fib_free(fibctx);
}
}
return retval;
}
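Note how the forced argument is graded: 0 takes the full courtesy path (drain outstanding firmware-owned commands for up to 60 seconds, then shut down and reset), 1 skips the health query inside _aac_reset_adapter() (the forced ? 0 : ... above), and 2, used by the error handler later in this series, additionally bypasses the quiesce loop via the forced < 2 guard.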
@ -1270,10 +1370,15 @@ int aac_check_health(struct aac_dev * aac)
printk(KERN_ERR "%s: Host adapter BLINK LED 0x%x\n", aac->name, BlinkLED);
if (!check_reset || (aac->supplement_adapter_info.SupportedOptions2 &
le32_to_cpu(AAC_OPTION_IGNORE_RESET)))
goto out;
host = aac->scsi_host_ptr;
spin_lock_irqsave(host->host_lock, flagv);
BlinkLED = _aac_reset_adapter(aac);
spin_unlock_irqrestore(host->host_lock, flagv);
if (aac->thread->pid != current->pid)
spin_lock_irqsave(host->host_lock, flagv);
BlinkLED = _aac_reset_adapter(aac, 0);
if (aac->thread->pid != current->pid)
spin_unlock_irqrestore(host->host_lock, flagv);
return BlinkLED;
out:
@ -1300,6 +1405,9 @@ int aac_command_thread(void *data)
struct aac_fib_context *fibctx;
unsigned long flags;
DECLARE_WAITQUEUE(wait, current);
unsigned long next_jiffies = jiffies + HZ;
unsigned long next_check_jiffies = next_jiffies;
long difference = HZ;
/*
* We can only have one thread per adapter for AIF's.
@ -1368,7 +1476,7 @@ int aac_command_thread(void *data)
cpu_to_le32(AifCmdJobProgress))) {
aac_handle_aif(dev, fib);
}
time_now = jiffies/HZ;
/*
@ -1507,11 +1615,79 @@ int aac_command_thread(void *data)
* There are no more AIF's
*/
spin_unlock_irqrestore(dev->queues->queue[HostNormCmdQueue].lock, flags);
schedule();
/*
* Background activity
*/
if ((time_before(next_check_jiffies,next_jiffies))
&& ((difference = next_check_jiffies - jiffies) <= 0)) {
next_check_jiffies = next_jiffies;
if (aac_check_health(dev) == 0) {
difference = ((long)(unsigned)check_interval)
* HZ;
next_check_jiffies = jiffies + difference;
} else if (!dev->queues)
break;
}
if (!time_before(next_check_jiffies,next_jiffies)
&& ((difference = next_jiffies - jiffies) <= 0)) {
struct timeval now;
int ret;
/* Don't even try to talk to adapter if its sick */
ret = aac_check_health(dev);
if (!ret && !dev->queues)
break;
next_check_jiffies = jiffies
+ ((long)(unsigned)check_interval)
* HZ;
do_gettimeofday(&now);
/* Synchronize our watches */
if (((1000000 - (1000000 / HZ)) > now.tv_usec)
&& (now.tv_usec > (1000000 / HZ)))
difference = (((1000000 - now.tv_usec) * HZ)
+ 500000) / 1000000;
else if (ret == 0) {
struct fib *fibptr;
if ((fibptr = aac_fib_alloc(dev))) {
u32 * info;
aac_fib_init(fibptr);
info = (u32 *) fib_data(fibptr);
if (now.tv_usec > 500000)
++now.tv_sec;
*info = cpu_to_le32(now.tv_sec);
(void)aac_fib_send(SendHostTime,
fibptr,
sizeof(*info),
FsaNormal,
1, 1,
NULL,
NULL);
aac_fib_complete(fibptr);
aac_fib_free(fibptr);
}
difference = (long)(unsigned)update_interval*HZ;
} else {
/* retry shortly */
difference = 10 * HZ;
}
next_jiffies = jiffies + difference;
if (time_before(next_check_jiffies,next_jiffies))
difference = next_check_jiffies - jiffies;
}
if (difference <= 0)
difference = 1;
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(difference);
if (kthread_should_stop())
break;
set_current_state(TASK_INTERRUPTIBLE);
}
if (dev->queues)
remove_wait_queue(&dev->queues->queue[HostNormCmdQueue].cmdready, &wait);
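The thread now multiplexes two periodic jobs over a single schedule_timeout(): a health check every check_interval seconds and a host-time synchronization every update_interval seconds. The watch-synchronizing arithmetic rounds the next wakeup to the next wall-clock second; in isolation it looks like this (a sketch using the same expressions as the "Synchronize our watches" branch above):

	struct timeval now;
	long ticks_to_next_second;

	do_gettimeofday(&now);
	/* Convert the microseconds left in this second into jiffies, rounded. */
	ticks_to_next_second = (((1000000 - now.tv_usec) * HZ) + 500000) / 1000000;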


@ -39,10 +39,8 @@
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/dma-mapping.h>
#include <linux/syscalls.h>
#include <linux/delay.h>
#include <linux/smp_lock.h>
#include <linux/kthread.h>
#include <asm/semaphore.h>
@ -223,12 +221,12 @@ static struct aac_driver_ident aac_drivers[] = {
{ aac_rx_init, "percraid", "DELL ", "PERC 320/DC ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Perc 320/DC*/
{ aac_sa_init, "aacraid", "ADAPTEC ", "Adaptec 5400S ", 4, AAC_QUIRK_34SG }, /* Adaptec 5400S (Mustang)*/
{ aac_sa_init, "aacraid", "ADAPTEC ", "AAC-364 ", 4, AAC_QUIRK_34SG }, /* Adaptec 5400S (Mustang)*/
{ aac_sa_init, "percraid", "DELL ", "PERCRAID ", 4, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Dell PERC2/QC */
{ aac_sa_init, "percraid", "DELL ", "PERCRAID ", 4, AAC_QUIRK_34SG }, /* Dell PERC2/QC */
{ aac_sa_init, "hpnraid", "HP ", "NetRAID ", 4, AAC_QUIRK_34SG }, /* HP NetRAID-4M */
{ aac_rx_init, "aacraid", "DELL ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Dell Catchall */
{ aac_rx_init, "aacraid", "Legend ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Legend Catchall */
{ aac_rx_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_31BIT | AAC_QUIRK_34SG }, /* Adaptec Catch All */
{ aac_rx_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Catch All */
{ aac_rkt_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Rocket Catch All */
{ aac_nark_init, "aacraid", "ADAPTEC ", "RAID ", 2 } /* Adaptec NEMER/ARK Catch All */
};
@ -403,10 +401,6 @@ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
static int aac_slave_configure(struct scsi_device *sdev)
{
if (sdev_channel(sdev) == CONTAINER_CHANNEL) {
sdev->skip_ms_page_8 = 1;
sdev->skip_ms_page_3f = 1;
}
if ((sdev->type == TYPE_DISK) &&
(sdev_channel(sdev) != CONTAINER_CHANNEL)) {
if (expose_physicals == 0)
@ -450,6 +444,43 @@ static int aac_slave_configure(struct scsi_device *sdev)
return 0;
}
/**
* aac_change_queue_depth - alter queue depths
* @sdev: SCSI device we are considering
* @depth: desired queue depth
*
* Alters queue depths for target device based on the host adapter's
* total capacity and the queue depth supported by the target device.
*/
static int aac_change_queue_depth(struct scsi_device *sdev, int depth)
{
if (sdev->tagged_supported && (sdev->type == TYPE_DISK) &&
(sdev_channel(sdev) == CONTAINER_CHANNEL)) {
struct scsi_device * dev;
struct Scsi_Host *host = sdev->host;
unsigned num = 0;
__shost_for_each_device(dev, host) {
if (dev->tagged_supported && (dev->type == TYPE_DISK) &&
(sdev_channel(dev) == CONTAINER_CHANNEL))
++num;
++num;
}
if (num >= host->can_queue)
num = host->can_queue - 1;
if (depth > (host->can_queue - num))
depth = host->can_queue - num;
if (depth > 256)
depth = 256;
else if (depth < 2)
depth = 2;
scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, depth);
} else
scsi_adjust_queue_depth(sdev, 0, 1);
return sdev->queue_depth;
}
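A worked example of the clamp above, with illustrative numbers: if host->can_queue is 512 and the device walk counts num = 500, a requested depth of 64 is first cut to can_queue - num = 12, and the result is always forced into the range [2, 256] before scsi_adjust_queue_depth() applies it; non-container devices simply fall back to an untagged depth of 1.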
static int aac_ioctl(struct scsi_device *sdev, int cmd, void __user * arg)
{
struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
@ -548,6 +579,14 @@ static int aac_eh_reset(struct scsi_cmnd* cmd)
ssleep(1);
}
printk(KERN_ERR "%s: SCSI bus appears hung\n", AAC_DRIVERNAME);
/*
* This adapter needs a blind reset; only do so for adapters that
* support a register-based, instead of a commanded, reset.
*/
if ((aac->supplement_adapter_info.SupportedOptions2 &
le32_to_cpu(AAC_OPTION_MU_RESET|AAC_OPTION_IGNORE_RESET)) ==
le32_to_cpu(AAC_OPTION_MU_RESET))
aac_reset_adapter(aac, 2); /* Bypass wait for command quiesce */
return SUCCESS; /* Cause an immediate retry of the command with a ten second delay after successful tur */
}
@ -731,15 +770,21 @@ static ssize_t aac_show_bios_version(struct class_device *class_dev,
return len;
}
static ssize_t aac_show_serial_number(struct class_device *class_dev,
char *buf)
ssize_t aac_show_serial_number(struct class_device *class_dev, char *buf)
{
struct aac_dev *dev = (struct aac_dev*)class_to_shost(class_dev)->hostdata;
int len = 0;
if (le32_to_cpu(dev->adapter_info.serial[0]) != 0xBAD0)
len = snprintf(buf, PAGE_SIZE, "%x\n",
len = snprintf(buf, PAGE_SIZE, "%06X\n",
le32_to_cpu(dev->adapter_info.serial[0]));
if (len &&
!memcmp(&dev->supplement_adapter_info.MfgPcbaSerialNo[
sizeof(dev->supplement_adapter_info.MfgPcbaSerialNo)+2-len],
buf, len))
len = snprintf(buf, PAGE_SIZE, "%.*s\n",
(int)sizeof(dev->supplement_adapter_info.MfgPcbaSerialNo),
dev->supplement_adapter_info.MfgPcbaSerialNo);
return len;
}
@ -755,6 +800,31 @@ static ssize_t aac_show_max_id(struct class_device *class_dev, char *buf)
class_to_shost(class_dev)->max_id);
}
static ssize_t aac_store_reset_adapter(struct class_device *class_dev,
const char *buf, size_t count)
{
int retval = -EACCES;
if (!capable(CAP_SYS_ADMIN))
return retval;
retval = aac_reset_adapter((struct aac_dev*)class_to_shost(class_dev)->hostdata, buf[0] == '!');
if (retval >= 0)
retval = count;
return retval;
}
static ssize_t aac_show_reset_adapter(struct class_device *class_dev,
char *buf)
{
struct aac_dev *dev = (struct aac_dev*)class_to_shost(class_dev)->hostdata;
int len, tmp;
tmp = aac_adapter_check_health(dev);
if ((tmp == 0) && dev->in_reset)
tmp = -EBUSY;
len = snprintf(buf, PAGE_SIZE, "0x%x", tmp);
return len;
}
static struct class_device_attribute aac_model = {
.attr = {
@ -812,6 +882,14 @@ static struct class_device_attribute aac_max_id = {
},
.show = aac_show_max_id,
};
static struct class_device_attribute aac_reset = {
.attr = {
.name = "reset_host",
.mode = S_IWUSR|S_IRUGO,
},
.store = aac_store_reset_adapter,
.show = aac_show_reset_adapter,
};
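From userspace the new attribute is both a health probe and a reset trigger: reading it returns the aac_adapter_check_health() value (or -EBUSY while a reset is in flight), and writing to it calls aac_reset_adapter(). A write whose first character is '!' (for example, echo '!' > /sys/class/scsi_host/hostN/reset_host, where hostN is the adapter's SCSI host and the path is the conventional shost class-attribute location, stated here as an assumption) passes forced = 1; any other write requests an ordinary, quiesced reset.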
static struct class_device_attribute *aac_attrs[] = {
&aac_model,
@ -822,6 +900,7 @@ static struct class_device_attribute *aac_attrs[] = {
&aac_serial_number,
&aac_max_channel,
&aac_max_id,
&aac_reset,
NULL
};
@ -848,6 +927,7 @@ static struct scsi_host_template aac_driver_template = {
.bios_param = aac_biosparm,
.shost_attrs = aac_attrs,
.slave_configure = aac_slave_configure,
.change_queue_depth = aac_change_queue_depth,
.eh_abort_handler = aac_eh_abort,
.eh_host_reset_handler = aac_eh_reset,
.can_queue = AAC_NUM_IO_FIB,
@ -1086,7 +1166,7 @@ static int __init aac_init(void)
{
int error;
printk(KERN_INFO "Adaptec %s driver (%s)\n",
printk(KERN_INFO "Adaptec %s driver %s\n",
AAC_DRIVERNAME, aac_driver_version);
error = pci_register_driver(&aac_pci_driver);


@ -464,21 +464,24 @@ static int aac_rx_restart_adapter(struct aac_dev *dev, int bled)
{
u32 var;
if (bled)
printk(KERN_ERR "%s%d: adapter kernel panic'd %x.\n",
dev->name, dev->id, bled);
else {
bled = aac_adapter_sync_cmd(dev, IOP_RESET_ALWAYS,
0, 0, 0, 0, 0, 0, &var, NULL, NULL, NULL, NULL);
if (!bled && (var != 0x00000001))
bled = -EINVAL;
}
if (bled && (bled != -ETIMEDOUT))
bled = aac_adapter_sync_cmd(dev, IOP_RESET,
0, 0, 0, 0, 0, 0, &var, NULL, NULL, NULL, NULL);
if (!(dev->supplement_adapter_info.SupportedOptions2 &
le32_to_cpu(AAC_OPTION_MU_RESET)) || (bled >= 0) || (bled == -2)) {
if (bled)
printk(KERN_ERR "%s%d: adapter kernel panic'd %x.\n",
dev->name, dev->id, bled);
else {
bled = aac_adapter_sync_cmd(dev, IOP_RESET_ALWAYS,
0, 0, 0, 0, 0, 0, &var, NULL, NULL, NULL, NULL);
if (!bled && (var != 0x00000001))
bled = -EINVAL;
}
if (bled && (bled != -ETIMEDOUT))
bled = aac_adapter_sync_cmd(dev, IOP_RESET,
0, 0, 0, 0, 0, 0, &var, NULL, NULL, NULL, NULL);
if (bled && (bled != -ETIMEDOUT))
return -EINVAL;
if (bled && (bled != -ETIMEDOUT))
return -EINVAL;
}
if (bled || (var == 0x3803000F)) { /* USE_OTHER_METHOD */
rx_writel(dev, MUnit.reserved2, 3);
msleep(5000); /* Delay 5 seconds */
@ -596,7 +599,7 @@ int _aac_rx_init(struct aac_dev *dev)
}
msleep(1);
}
if (restart)
if (restart && aac_commit)
aac_commit = 1;
/*
* Fill in the common function dispatch table.


@ -798,7 +798,6 @@
#include <scsi/scsi_tcq.h>
#include <scsi/scsi.h>
#include <scsi/scsi_host.h>
#include "advansys.h"
#ifdef CONFIG_PCI
#include <linux/pci.h>
#endif /* CONFIG_PCI */
@ -2014,7 +2013,7 @@ STATIC int AscSgListToQueue(int);
STATIC void AscEnableIsaDma(uchar);
#endif /* CONFIG_ISA */
STATIC ASC_DCNT AscGetMaxDmaCount(ushort);
static const char *advansys_info(struct Scsi_Host *shp);
/*
* --- Adv Library Constants and Macros
@ -3970,10 +3969,6 @@ STATIC ushort asc_bus[ASC_NUM_BUS] __initdata = {
ASC_IS_PCI,
};
/*
* Used with the LILO 'advansys' option to eliminate or
* limit I/O port probing at boot time, cf. advansys_setup().
*/
STATIC int asc_iopflag = ASC_FALSE;
STATIC int asc_ioport[ASC_NUM_IOPORT_PROBE] = { 0, 0, 0, 0 };
@ -4055,10 +4050,6 @@ STATIC void asc_prt_hex(char *f, uchar *, int);
#endif /* ADVANSYS_DEBUG */
/*
* --- Linux 'struct scsi_host_template' and advansys_setup() Functions
*/
#ifdef CONFIG_PROC_FS
/*
* advansys_proc_info() - /proc/scsi/advansys/[0-(ASC_NUM_BOARD_SUPPORTED-1)]
@ -4080,7 +4071,7 @@ STATIC void asc_prt_hex(char *f, uchar *, int);
* if 'prtbuf' is too small it will not be overwritten. Instead the
* user just won't get all the available statistics.
*/
int
static int
advansys_proc_info(struct Scsi_Host *shost, char *buffer, char **start,
off_t offset, int length, int inout)
{
@ -4296,7 +4287,7 @@ advansys_proc_info(struct Scsi_Host *shost, char *buffer, char **start,
* it must not call SCSI mid-level functions including scsi_malloc()
* and scsi_free().
*/
int __init
static int __init
advansys_detect(struct scsi_host_template *tpnt)
{
static int detect_called = ASC_FALSE;
@ -5428,7 +5419,7 @@ advansys_detect(struct scsi_host_template *tpnt)
*
* Release resources allocated for a single AdvanSys adapter.
*/
int
static int
advansys_release(struct Scsi_Host *shp)
{
asc_board_t *boardp;
@ -5475,7 +5466,7 @@ advansys_release(struct Scsi_Host *shp)
* Note: The information line should not exceed ASC_INFO_SIZE bytes,
* otherwise the static 'info' array will be overrun.
*/
const char *
static const char *
advansys_info(struct Scsi_Host *shp)
{
static char info[ASC_INFO_SIZE];
@ -5568,7 +5559,7 @@ advansys_info(struct Scsi_Host *shp)
* This function always returns 0. Command return status is saved
* in the 'scp' result field.
*/
int
static int
advansys_queuecommand(struct scsi_cmnd *scp, void (*done)(struct scsi_cmnd *))
{
struct Scsi_Host *shp;
@ -5656,7 +5647,7 @@ advansys_queuecommand(struct scsi_cmnd *scp, void (*done)(struct scsi_cmnd *))
* sleeping is allowed and no locking other than for host structures is
* required. Returns SUCCESS or FAILED.
*/
int
static int
advansys_reset(struct scsi_cmnd *scp)
{
struct Scsi_Host *shp;
@ -5841,7 +5832,7 @@ advansys_reset(struct scsi_cmnd *scp)
* ip[1]: sectors
* ip[2]: cylinders
*/
int
static int
advansys_biosparam(struct scsi_device *sdev, struct block_device *bdev,
sector_t capacity, int ip[])
{
@ -5874,82 +5865,6 @@ advansys_biosparam(struct scsi_device *sdev, struct block_device *bdev,
return 0;
}
/*
* advansys_setup()
*
* This function is called from init/main.c at boot time.
* It is passed LILO parameters that can be set from the
* LILO command line or in /etc/lilo.conf.
*
* It is used by the AdvanSys driver to either disable I/O
* port scanning or to limit scanning to 1 - 4 I/O ports.
* Regardless of the option setting, EISA and PCI boards
* will still be searched for and detected. This option
* only affects searching for ISA and VL boards.
*
* If ADVANSYS_DEBUG is defined the driver debug level may
* be set using the 5th (ASC_NUM_IOPORT_PROBE + 1) I/O Port.
*
* Examples:
* 1. Eliminate I/O port scanning:
* boot: linux advansys=
* or
* boot: linux advansys=0x0
* 2. Limit I/O port scanning to one I/O port:
* boot: linux advansys=0x110
* 3. Limit I/O port scanning to four I/O ports:
* boot: linux advansys=0x110,0x210,0x230,0x330
* 4. If ADVANSYS_DEBUG, limit I/O port scanning to four I/O ports and
* set the driver debug level to 2.
* boot: linux advansys=0x110,0x210,0x230,0x330,0xdeb2
*
* ints[0] - number of arguments
* ints[1] - first argument
* ints[2] - second argument
* ...
*/
void __init
advansys_setup(char *str, int *ints)
{
int i;
if (asc_iopflag == ASC_TRUE) {
printk("AdvanSys SCSI: 'advansys' LILO option may appear only once\n");
return;
}
asc_iopflag = ASC_TRUE;
if (ints[0] > ASC_NUM_IOPORT_PROBE) {
#ifdef ADVANSYS_DEBUG
if ((ints[0] == ASC_NUM_IOPORT_PROBE + 1) &&
(ints[ASC_NUM_IOPORT_PROBE + 1] >> 4 == 0xdeb)) {
asc_dbglvl = ints[ASC_NUM_IOPORT_PROBE + 1] & 0xf;
} else {
#endif /* ADVANSYS_DEBUG */
printk("AdvanSys SCSI: only %d I/O ports accepted\n",
ASC_NUM_IOPORT_PROBE);
#ifdef ADVANSYS_DEBUG
}
#endif /* ADVANSYS_DEBUG */
}
#ifdef ADVANSYS_DEBUG
ASC_DBG1(1, "advansys_setup: ints[0] %d\n", ints[0]);
for (i = 1; i < ints[0]; i++) {
ASC_DBG2(1, " ints[%d] 0x%x", i, ints[i]);
}
ASC_DBG(1, "\n");
#endif /* ADVANSYS_DEBUG */
for (i = 1; i <= ints[0] && i <= ASC_NUM_IOPORT_PROBE; i++) {
asc_ioport[i-1] = ints[i];
ASC_DBG2(1, "advansys_setup: asc_ioport[%d] 0x%x\n",
i - 1, asc_ioport[i-1]);
}
}
/*
* --- Loadable Driver Support
*/


@ -1,36 +0,0 @@
/*
* advansys.h - Linux Host Driver for AdvanSys SCSI Adapters
*
* Copyright (c) 1995-2000 Advanced System Products, Inc.
* Copyright (c) 2000-2001 ConnectCom Solutions, Inc.
* All Rights Reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that redistributions of source
* code retain the above copyright notice and this comment without
* modification.
*
* As of March 8, 2000 Advanced System Products, Inc. (AdvanSys)
* changed its name to ConnectCom Solutions, Inc.
*
*/
#ifndef _ADVANSYS_H
#define _ADVANSYS_H
/*
* struct scsi_host_template function prototypes.
*/
int advansys_detect(struct scsi_host_template *);
int advansys_release(struct Scsi_Host *);
const char *advansys_info(struct Scsi_Host *);
int advansys_queuecommand(struct scsi_cmnd *, void (* done)(struct scsi_cmnd *));
int advansys_reset(struct scsi_cmnd *);
int advansys_biosparam(struct scsi_device *, struct block_device *,
sector_t, int[]);
static int advansys_slave_configure(struct scsi_device *);
/* init/main.c setup function */
void advansys_setup(char *, int *);
#endif /* _ADVANSYS_H */


@ -240,6 +240,7 @@
#include <linux/io.h>
#include <linux/blkdev.h>
#include <asm/system.h>
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/wait.h>
@ -253,7 +254,6 @@
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/list.h>
#include <asm/semaphore.h>
#include <scsi/scsicam.h>
#include "scsi.h"
@ -551,7 +551,7 @@ struct aha152x_hostdata {
*/
struct aha152x_scdata {
Scsi_Cmnd *next; /* next sc in queue */
struct semaphore *sem; /* semaphore to block on */
struct completion *done;/* semaphore to block on */
unsigned char cmd_len;
unsigned char cmnd[MAX_COMMAND_SIZE];
unsigned short use_sg;
@ -608,7 +608,7 @@ struct aha152x_scdata {
#define SCDATA(SCpnt) ((struct aha152x_scdata *) (SCpnt)->host_scribble)
#define SCNEXT(SCpnt) SCDATA(SCpnt)->next
#define SCSEM(SCpnt) SCDATA(SCpnt)->sem
#define SCSEM(SCpnt) SCDATA(SCpnt)->done
#define SG_ADDRESS(buffer) ((char *) (page_address((buffer)->page)+(buffer)->offset))
@ -969,7 +969,8 @@ static int setup_expected_interrupts(struct Scsi_Host *shpnt)
/*
* Queue a command and setup interrupts for a free bus.
*/
static int aha152x_internal_queue(Scsi_Cmnd *SCpnt, struct semaphore *sem, int phase, void (*done)(Scsi_Cmnd *))
static int aha152x_internal_queue(Scsi_Cmnd *SCpnt, struct completion *complete,
int phase, void (*done)(Scsi_Cmnd *))
{
struct Scsi_Host *shpnt = SCpnt->device->host;
unsigned long flags;
@ -1013,7 +1014,7 @@ static int aha152x_internal_queue(Scsi_Cmnd *SCpnt, struct semaphore *sem, int p
}
SCNEXT(SCpnt) = NULL;
SCSEM(SCpnt) = sem;
SCSEM(SCpnt) = complete;
/* setup scratch area
SCp.ptr : buffer pointer
@ -1084,9 +1085,9 @@ static void reset_done(Scsi_Cmnd *SCpnt)
DPRINTK(debug_eh, INFO_LEAD "reset_done called\n", CMDINFO(SCpnt));
#endif
if(SCSEM(SCpnt)) {
up(SCSEM(SCpnt));
complete(SCSEM(SCpnt));
} else {
printk(KERN_ERR "aha152x: reset_done w/o semaphore\n");
printk(KERN_ERR "aha152x: reset_done w/o completion\n");
}
}
@ -1139,21 +1140,6 @@ static int aha152x_abort(Scsi_Cmnd *SCpnt)
return FAILED;
}
static void timer_expired(unsigned long p)
{
Scsi_Cmnd *SCp = (Scsi_Cmnd *)p;
struct semaphore *sem = SCSEM(SCp);
struct Scsi_Host *shpnt = SCp->device->host;
unsigned long flags;
/* remove command from issue queue */
DO_LOCK(flags);
remove_SC(&ISSUE_SC, SCp);
DO_UNLOCK(flags);
up(sem);
}
/*
* Reset a device
*
@ -1161,14 +1147,14 @@ static void timer_expired(unsigned long p)
static int aha152x_device_reset(Scsi_Cmnd * SCpnt)
{
struct Scsi_Host *shpnt = SCpnt->device->host;
DECLARE_MUTEX_LOCKED(sem);
struct timer_list timer;
DECLARE_COMPLETION(done);
int ret, issued, disconnected;
unsigned char old_cmd_len = SCpnt->cmd_len;
unsigned short old_use_sg = SCpnt->use_sg;
void *old_buffer = SCpnt->request_buffer;
unsigned old_bufflen = SCpnt->request_bufflen;
unsigned long flags;
unsigned long timeleft;
#if defined(AHA152X_DEBUG)
if(HOSTDATA(shpnt)->debug & debug_eh) {
@ -1192,15 +1178,15 @@ static int aha152x_device_reset(Scsi_Cmnd * SCpnt)
SCpnt->request_buffer = NULL;
SCpnt->request_bufflen = 0;
init_timer(&timer);
timer.data = (unsigned long) SCpnt;
timer.expires = jiffies + 100*HZ; /* 10s */
timer.function = (void (*)(unsigned long)) timer_expired;
aha152x_internal_queue(SCpnt, &done, resetting, reset_done);
aha152x_internal_queue(SCpnt, &sem, resetting, reset_done);
add_timer(&timer);
down(&sem);
del_timer(&timer);
timeleft = wait_for_completion_timeout(&done, 100*HZ);
if (!timeleft) {
/* remove command from issue queue */
DO_LOCK(flags);
remove_SC(&ISSUE_SC, SCpnt);
DO_UNLOCK(flags);
}
SCpnt->cmd_len = old_cmd_len;
SCpnt->use_sg = old_use_sg;
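The conversion drops the private timer-plus-semaphore dance in favor of the completion API, which has timeout handling built in. The general pattern, sketched outside the driver (the helper names are illustrative):

	DECLARE_COMPLETION(done);
	unsigned long timeleft;

	queue_async_work(&done);	/* finisher eventually calls complete(&done) */
	timeleft = wait_for_completion_timeout(&done, 100 * HZ);
	if (!timeleft)
		unwind_queued_work();	/* timed out: take the request back by hand */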


@ -271,20 +271,8 @@ static irqreturn_t aha1740_intr_handle(int irq, void *dev_id)
continue;
}
sgptr = (struct aha1740_sg *) SCtmp->host_scribble;
if (SCtmp->use_sg) {
/* We used scatter-gather.
Do the unmapping dance. */
dma_unmap_sg (&edev->dev,
(struct scatterlist *) SCtmp->request_buffer,
SCtmp->use_sg,
SCtmp->sc_data_direction);
} else {
dma_unmap_single (&edev->dev,
sgptr->buf_dma_addr,
SCtmp->request_bufflen,
DMA_BIDIRECTIONAL);
}
scsi_dma_unmap(SCtmp);
/* Free the sg block */
dma_free_coherent (&edev->dev,
sizeof (struct aha1740_sg),
@ -349,11 +337,9 @@ static int aha1740_queuecommand(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *))
unchar target = scmd_id(SCpnt);
struct aha1740_hostdata *host = HOSTDATA(SCpnt->device->host);
unsigned long flags;
void *buff = SCpnt->request_buffer;
int bufflen = SCpnt->request_bufflen;
dma_addr_t sg_dma;
struct aha1740_sg *sgptr;
int ecbno;
int ecbno, nseg;
DEB(int i);
if(*cmd == REQUEST_SENSE) {
@ -423,24 +409,23 @@ static int aha1740_queuecommand(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *))
}
sgptr = (struct aha1740_sg *) SCpnt->host_scribble;
sgptr->sg_dma_addr = sg_dma;
if (SCpnt->use_sg) {
struct scatterlist * sgpnt;
nseg = scsi_dma_map(SCpnt);
BUG_ON(nseg < 0);
if (nseg) {
struct scatterlist *sg;
struct aha1740_chain * cptr;
int i, count;
int i;
DEB(unsigned char * ptr);
host->ecb[ecbno].sg = 1; /* SCSI Initiator Command
* w/scatter-gather*/
sgpnt = (struct scatterlist *) SCpnt->request_buffer;
cptr = sgptr->sg_chain;
count = dma_map_sg (&host->edev->dev, sgpnt, SCpnt->use_sg,
SCpnt->sc_data_direction);
for(i=0; i < count; i++) {
cptr[i].datalen = sg_dma_len (sgpnt + i);
cptr[i].dataptr = sg_dma_address (sgpnt + i);
scsi_for_each_sg(SCpnt, sg, nseg, i) {
cptr[i].datalen = sg_dma_len (sg);
cptr[i].dataptr = sg_dma_address (sg);
}
host->ecb[ecbno].datalen = count*sizeof(struct aha1740_chain);
host->ecb[ecbno].datalen = nseg * sizeof(struct aha1740_chain);
host->ecb[ecbno].dataptr = sg_dma;
#ifdef DEBUG
printk("cptr %x: ",cptr);
@ -448,11 +433,8 @@ static int aha1740_queuecommand(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *))
for(i=0;i<24;i++) printk("%02x ", ptr[i]);
#endif
} else {
host->ecb[ecbno].datalen = bufflen;
sgptr->buf_dma_addr = dma_map_single (&host->edev->dev,
buff, bufflen,
DMA_BIDIRECTIONAL);
host->ecb[ecbno].dataptr = sgptr->buf_dma_addr;
host->ecb[ecbno].datalen = 0;
host->ecb[ecbno].dataptr = 0;
}
host->ecb[ecbno].lun = SCpnt->device->lun;
host->ecb[ecbno].ses = 1; /* Suppress underrun errors */
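This hunk shows the data-buffer-accessor shape that the whole series converges on: scsi_dma_map() replaces the open-coded use_sg/request_buffer branches, scsi_for_each_sg() walks the mapped list, and the completion path calls scsi_dma_unmap(). Reduced to a skeleton (the hardware-programming helper is illustrative):

	struct scatterlist *sg;
	int i, nseg;

	nseg = scsi_dma_map(cmd);	/* 0 means no data phase, < 0 means failure */
	BUG_ON(nseg < 0);
	scsi_for_each_sg(cmd, sg, nseg, i)
		program_segment(i, sg_dma_address(sg), sg_dma_len(sg));
	/* ...and scsi_dma_unmap(cmd) on the completion/error path. */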


@ -376,21 +376,10 @@ static __inline void
ahd_linux_unmap_scb(struct ahd_softc *ahd, struct scb *scb)
{
struct scsi_cmnd *cmd;
int direction;
cmd = scb->io_ctx;
direction = cmd->sc_data_direction;
ahd_sync_sglist(ahd, scb, BUS_DMASYNC_POSTWRITE);
if (cmd->use_sg != 0) {
struct scatterlist *sg;
sg = (struct scatterlist *)cmd->request_buffer;
pci_unmap_sg(ahd->dev_softc, sg, cmd->use_sg, direction);
} else if (cmd->request_bufflen != 0) {
pci_unmap_single(ahd->dev_softc,
scb->platform_data->buf_busaddr,
cmd->request_bufflen, direction);
}
scsi_dma_unmap(cmd);
}
/******************************** Macros **************************************/
@ -1422,6 +1411,7 @@ ahd_linux_run_command(struct ahd_softc *ahd, struct ahd_linux_device *dev,
u_int col_idx;
uint16_t mask;
unsigned long flags;
int nseg;
ahd_lock(ahd, &flags);
@ -1494,18 +1484,17 @@ ahd_linux_run_command(struct ahd_softc *ahd, struct ahd_linux_device *dev,
ahd_set_residual(scb, 0);
ahd_set_sense_residual(scb, 0);
scb->sg_count = 0;
if (cmd->use_sg != 0) {
void *sg;
struct scatterlist *cur_seg;
u_int nseg;
int dir;
cur_seg = (struct scatterlist *)cmd->request_buffer;
dir = cmd->sc_data_direction;
nseg = pci_map_sg(ahd->dev_softc, cur_seg,
cmd->use_sg, dir);
nseg = scsi_dma_map(cmd);
BUG_ON(nseg < 0);
if (nseg > 0) {
void *sg = scb->sg_list;
struct scatterlist *cur_seg;
int i;
scb->platform_data->xfer_len = 0;
for (sg = scb->sg_list; nseg > 0; nseg--, cur_seg++) {
scsi_for_each_sg(cmd, cur_seg, nseg, i) {
dma_addr_t addr;
bus_size_t len;
@ -1513,22 +1502,8 @@ ahd_linux_run_command(struct ahd_softc *ahd, struct ahd_linux_device *dev,
len = sg_dma_len(cur_seg);
scb->platform_data->xfer_len += len;
sg = ahd_sg_setup(ahd, scb, sg, addr, len,
/*last*/nseg == 1);
i == (nseg - 1));
}
} else if (cmd->request_bufflen != 0) {
void *sg;
dma_addr_t addr;
int dir;
sg = scb->sg_list;
dir = cmd->sc_data_direction;
addr = pci_map_single(ahd->dev_softc,
cmd->request_buffer,
cmd->request_bufflen, dir);
scb->platform_data->xfer_len = cmd->request_bufflen;
scb->platform_data->buf_busaddr = addr;
sg = ahd_sg_setup(ahd, scb, sg, addr,
cmd->request_bufflen, /*last*/TRUE);
}
LIST_INSERT_HEAD(&ahd->pending_scbs, scb, pending_links);


@ -781,7 +781,7 @@ int ahd_get_transfer_dir(struct scb *scb)
static __inline
void ahd_set_residual(struct scb *scb, u_long resid)
{
scb->io_ctx->resid = resid;
scsi_set_resid(scb->io_ctx, resid);
}
static __inline
@ -793,7 +793,7 @@ void ahd_set_sense_residual(struct scb *scb, u_long resid)
static __inline
u_long ahd_get_residual(struct scb *scb)
{
return (scb->io_ctx->resid);
return scsi_get_resid(scb->io_ctx);
}
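scsi_set_resid()/scsi_get_resid() belong to the same accessor family: they hide the residual-count field of struct scsi_cmnd so drivers stop touching ->resid directly. Typical use on the completion path, sketched (actual stands for the byte count the hardware reported):

	/* Report an underrun: 'actual' bytes moved out of the expected length. */
	scsi_set_resid(cmd, scsi_bufflen(cmd) - actual);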
static __inline


@ -402,18 +402,8 @@ ahc_linux_unmap_scb(struct ahc_softc *ahc, struct scb *scb)
cmd = scb->io_ctx;
ahc_sync_sglist(ahc, scb, BUS_DMASYNC_POSTWRITE);
if (cmd->use_sg != 0) {
struct scatterlist *sg;
sg = (struct scatterlist *)cmd->request_buffer;
pci_unmap_sg(ahc->dev_softc, sg, cmd->use_sg,
cmd->sc_data_direction);
} else if (cmd->request_bufflen != 0) {
pci_unmap_single(ahc->dev_softc,
scb->platform_data->buf_busaddr,
cmd->request_bufflen,
cmd->sc_data_direction);
}
scsi_dma_unmap(cmd);
}
static __inline int
@ -1381,6 +1371,7 @@ ahc_linux_run_command(struct ahc_softc *ahc, struct ahc_linux_device *dev,
struct ahc_tmode_tstate *tstate;
uint16_t mask;
struct scb_tailq *untagged_q = NULL;
int nseg;
/*
* Schedule us to run later. The only reason we are not
@ -1472,23 +1463,21 @@ ahc_linux_run_command(struct ahc_softc *ahc, struct ahc_linux_device *dev,
ahc_set_residual(scb, 0);
ahc_set_sense_residual(scb, 0);
scb->sg_count = 0;
if (cmd->use_sg != 0) {
nseg = scsi_dma_map(cmd);
BUG_ON(nseg < 0);
if (nseg > 0) {
struct ahc_dma_seg *sg;
struct scatterlist *cur_seg;
struct scatterlist *end_seg;
int nseg;
int i;
cur_seg = (struct scatterlist *)cmd->request_buffer;
nseg = pci_map_sg(ahc->dev_softc, cur_seg, cmd->use_sg,
cmd->sc_data_direction);
end_seg = cur_seg + nseg;
/* Copy the segments into the SG list. */
sg = scb->sg_list;
/*
* The sg_count may be larger than nseg if
* a transfer crosses a 32bit page.
*/
while (cur_seg < end_seg) {
*/
scsi_for_each_sg(cmd, cur_seg, nseg, i) {
dma_addr_t addr;
bus_size_t len;
int consumed;
@ -1499,7 +1488,6 @@ ahc_linux_run_command(struct ahc_softc *ahc, struct ahc_linux_device *dev,
sg, addr, len);
sg += consumed;
scb->sg_count += consumed;
cur_seg++;
}
sg--;
sg->len |= ahc_htole32(AHC_DMA_LAST_SEG);
@ -1516,33 +1504,6 @@ ahc_linux_run_command(struct ahc_softc *ahc, struct ahc_linux_device *dev,
*/
scb->hscb->dataptr = scb->sg_list->addr;
scb->hscb->datacnt = scb->sg_list->len;
} else if (cmd->request_bufflen != 0) {
struct ahc_dma_seg *sg;
dma_addr_t addr;
sg = scb->sg_list;
addr = pci_map_single(ahc->dev_softc,
cmd->request_buffer,
cmd->request_bufflen,
cmd->sc_data_direction);
scb->platform_data->buf_busaddr = addr;
scb->sg_count = ahc_linux_map_seg(ahc, scb,
sg, addr,
cmd->request_bufflen);
sg->len |= ahc_htole32(AHC_DMA_LAST_SEG);
/*
* Reset the sg list pointer.
*/
scb->hscb->sgptr =
ahc_htole32(scb->sg_list_phys | SG_FULL_RESID);
/*
* Copy the first SG into the "current"
* data pointer area.
*/
scb->hscb->dataptr = sg->addr;
scb->hscb->datacnt = sg->len;
} else {
scb->hscb->sgptr = ahc_htole32(SG_LIST_NULL);
scb->hscb->dataptr = 0;

View File

@ -751,7 +751,7 @@ int ahc_get_transfer_dir(struct scb *scb)
static __inline
void ahc_set_residual(struct scb *scb, u_long resid)
{
scb->io_ctx->resid = resid;
scsi_set_resid(scb->io_ctx, resid);
}
static __inline
@ -763,7 +763,7 @@ void ahc_set_sense_residual(struct scb *scb, u_long resid)
static __inline
u_long ahc_get_residual(struct scb *scb)
{
return (scb->io_ctx->resid);
return scsi_get_resid(scb->io_ctx);
}
static __inline


@ -2690,17 +2690,8 @@ aic7xxx_done(struct aic7xxx_host *p, struct aic7xxx_scb *scb)
struct aic7xxx_scb *scbp;
unsigned char queue_depth;
if (cmd->use_sg > 1)
{
struct scatterlist *sg;
scsi_dma_unmap(cmd);
sg = (struct scatterlist *)cmd->request_buffer;
pci_unmap_sg(p->pdev, sg, cmd->use_sg, cmd->sc_data_direction);
}
else if (cmd->request_bufflen)
pci_unmap_single(p->pdev, aic7xxx_mapping(cmd),
cmd->request_bufflen,
cmd->sc_data_direction);
if (scb->flags & SCB_SENSE)
{
pci_unmap_single(p->pdev,
@ -3869,7 +3860,7 @@ aic7xxx_calculate_residual (struct aic7xxx_host *p, struct aic7xxx_scb *scb)
* the mid layer didn't check residual data counts to see if the
* command needs retried.
*/
cmd->resid = scb->sg_length - actual;
scsi_set_resid(cmd, scb->sg_length - actual);
aic7xxx_status(cmd) = hscb->target_status;
}
}
@ -6581,7 +6572,7 @@ aic7xxx_slave_alloc(struct scsi_device *SDptr)
struct aic7xxx_host *p = (struct aic7xxx_host *)SDptr->host->hostdata;
struct aic_dev_data *aic_dev;
aic_dev = kmalloc(sizeof(struct aic_dev_data), GFP_ATOMIC | GFP_KERNEL);
aic_dev = kmalloc(sizeof(struct aic_dev_data), GFP_KERNEL);
if(!aic_dev)
return 1;
/*
@ -10137,6 +10128,7 @@ static void aic7xxx_buildscb(struct aic7xxx_host *p, struct scsi_cmnd *cmd,
struct scsi_device *sdptr = cmd->device;
unsigned char tindex = TARGET_INDEX(cmd);
struct request *req = cmd->request;
int use_sg;
mask = (0x01 << tindex);
hscb = scb->hscb;
@ -10209,8 +10201,10 @@ static void aic7xxx_buildscb(struct aic7xxx_host *p, struct scsi_cmnd *cmd,
memcpy(scb->cmnd, cmd->cmnd, cmd->cmd_len);
hscb->SCSI_cmd_pointer = cpu_to_le32(SCB_DMA_ADDR(scb, scb->cmnd));
if (cmd->use_sg)
{
use_sg = scsi_dma_map(cmd);
BUG_ON(use_sg < 0);
if (use_sg) {
struct scatterlist *sg; /* Must be mid-level SCSI code scatterlist */
/*
@ -10219,11 +10213,11 @@ static void aic7xxx_buildscb(struct aic7xxx_host *p, struct scsi_cmnd *cmd,
* differences and the kernel SG list uses virtual addresses where
* we need physical addresses.
*/
int i, use_sg;
int i;
sg = (struct scatterlist *)cmd->request_buffer;
scb->sg_length = 0;
use_sg = pci_map_sg(p->pdev, sg, cmd->use_sg, cmd->sc_data_direction);
/*
* Copy the segments into the SG array. NOTE!!! - We used to
* have the first entry both in the data_pointer area and the first
@ -10231,10 +10225,9 @@ static void aic7xxx_buildscb(struct aic7xxx_host *p, struct scsi_cmnd *cmd,
* entry in both places, but now we download the address of
* scb->sg_list[1] instead of 0 to the sg pointer in the hscb.
*/
for (i = 0; i < use_sg; i++)
{
unsigned int len = sg_dma_len(sg+i);
scb->sg_list[i].address = cpu_to_le32(sg_dma_address(sg+i));
scsi_for_each_sg(cmd, sg, use_sg, i) {
unsigned int len = sg_dma_len(sg);
scb->sg_list[i].address = cpu_to_le32(sg_dma_address(sg));
scb->sg_list[i].length = cpu_to_le32(len);
scb->sg_length += len;
}
@ -10244,33 +10237,13 @@ static void aic7xxx_buildscb(struct aic7xxx_host *p, struct scsi_cmnd *cmd,
scb->sg_count = i;
hscb->SG_segment_count = i;
hscb->SG_list_pointer = cpu_to_le32(SCB_DMA_ADDR(scb, &scb->sg_list[1]));
}
else
{
if (cmd->request_bufflen)
{
unsigned int address = pci_map_single(p->pdev, cmd->request_buffer,
cmd->request_bufflen,
cmd->sc_data_direction);
aic7xxx_mapping(cmd) = address;
scb->sg_list[0].address = cpu_to_le32(address);
scb->sg_list[0].length = cpu_to_le32(cmd->request_bufflen);
scb->sg_count = 1;
scb->sg_length = cmd->request_bufflen;
hscb->SG_segment_count = 1;
hscb->SG_list_pointer = cpu_to_le32(SCB_DMA_ADDR(scb, &scb->sg_list[0]));
hscb->data_count = scb->sg_list[0].length;
hscb->data_pointer = scb->sg_list[0].address;
}
else
{
} else {
scb->sg_count = 0;
scb->sg_length = 0;
hscb->SG_segment_count = 0;
hscb->SG_list_pointer = 0;
hscb->data_count = 0;
hscb->data_pointer = 0;
}
}
}


@ -1,138 +0,0 @@
/*
* Detection routine for the NCR53c710 based Amiga SCSI Controllers for Linux.
* Amiga MacroSystemUS WarpEngine SCSI controller.
* Amiga Technologies A4000T SCSI controller.
* Amiga Technologies/DKB A4091 SCSI controller.
*
* Written 1997 by Alan Hourihane <alanh@fairlite.demon.co.uk>
* plus modifications of the 53c7xx.c driver to support the Amiga.
*/
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/blkdev.h>
#include <linux/zorro.h>
#include <linux/stat.h>
#include <asm/setup.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/amigaints.h>
#include <asm/amigahw.h>
#include <asm/dma.h>
#include <asm/irq.h>
#include "scsi.h"
#include <scsi/scsi_host.h>
#include "53c7xx.h"
#include "amiga7xx.h"
static int amiga7xx_register_one(struct scsi_host_template *tpnt,
unsigned long address)
{
long long options;
int clock;
if (!request_mem_region(address, 0x1000, "ncr53c710"))
return 0;
address = (unsigned long)z_ioremap(address, 0x1000);
options = OPTION_MEMORY_MAPPED | OPTION_DEBUG_TEST1 | OPTION_INTFLY |
OPTION_SYNCHRONOUS | OPTION_ALWAYS_SYNCHRONOUS |
OPTION_DISCONNECT;
clock = 50000000; /* 50 MHz SCSI Clock */
ncr53c7xx_init(tpnt, 0, 710, address, 0, IRQ_AMIGA_PORTS, DMA_NONE,
options, clock);
return 1;
}
#ifdef CONFIG_ZORRO
static struct {
zorro_id id;
unsigned long offset;
int absolute; /* offset is absolute address */
} amiga7xx_table[] = {
{ .id = ZORRO_PROD_PHASE5_BLIZZARD_603E_PLUS, .offset = 0xf40000,
.absolute = 1 },
{ .id = ZORRO_PROD_MACROSYSTEMS_WARP_ENGINE_40xx, .offset = 0x40000 },
{ .id = ZORRO_PROD_CBM_A4091_1, .offset = 0x800000 },
{ .id = ZORRO_PROD_CBM_A4091_2, .offset = 0x800000 },
{ .id = ZORRO_PROD_GVP_GFORCE_040_060, .offset = 0x40000 },
{ 0 }
};
static int __init amiga7xx_zorro_detect(struct scsi_host_template *tpnt)
{
int num = 0, i;
struct zorro_dev *z = NULL;
unsigned long address;
while ((z = zorro_find_device(ZORRO_WILDCARD, z))) {
for (i = 0; amiga7xx_table[i].id; i++)
if (z->id == amiga7xx_table[i].id)
break;
if (!amiga7xx_table[i].id)
continue;
if (amiga7xx_table[i].absolute)
address = amiga7xx_table[i].offset;
else
address = z->resource.start + amiga7xx_table[i].offset;
num += amiga7xx_register_one(tpnt, address);
}
return num;
}
#endif /* CONFIG_ZORRO */
int __init amiga7xx_detect(struct scsi_host_template *tpnt)
{
static unsigned char called = 0;
int num = 0;
if (called || !MACH_IS_AMIGA)
return 0;
tpnt->proc_name = "Amiga7xx";
if (AMIGAHW_PRESENT(A4000_SCSI))
num += amiga7xx_register_one(tpnt, 0xdd0040);
#ifdef CONFIG_ZORRO
num += amiga7xx_zorro_detect(tpnt);
#endif
called = 1;
return num;
}
static int amiga7xx_release(struct Scsi_Host *shost)
{
if (shost->irq)
free_irq(shost->irq, NULL);
if (shost->dma_channel != 0xff)
free_dma(shost->dma_channel);
if (shost->io_port && shost->n_io_port)
release_region(shost->io_port, shost->n_io_port);
scsi_unregister(shost);
return 0;
}
static struct scsi_host_template driver_template = {
.name = "Amiga NCR53c710 SCSI",
.detect = amiga7xx_detect,
.release = amiga7xx_release,
.queuecommand = NCR53c7xx_queue_command,
.abort = NCR53c7xx_abort,
.reset = NCR53c7xx_reset,
.can_queue = 24,
.this_id = 7,
.sg_tablesize = 63,
.cmd_per_lun = 3,
.use_clustering = DISABLE_CLUSTERING
};
#include "scsi_module.c"


@ -1,23 +0,0 @@
#ifndef AMIGA7XX_H
#include <linux/types.h>
int amiga7xx_detect(struct scsi_host_template *);
const char *NCR53c7x0_info(void);
int NCR53c7xx_queue_command(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
int NCR53c7xx_abort(Scsi_Cmnd *);
int NCR53c7x0_release (struct Scsi_Host *);
int NCR53c7xx_reset(Scsi_Cmnd *, unsigned int);
void NCR53c7x0_intr(int irq, void *dev_id);
#ifndef CMD_PER_LUN
#define CMD_PER_LUN 3
#endif
#ifndef CAN_QUEUE
#define CAN_QUEUE 24
#endif
#include <scsi/scsicam.h>
#endif /* AMIGA7XX_H */


@ -48,9 +48,10 @@ struct class_device_attribute;
#define ARCMSR_MAX_OUTSTANDING_CMD 256
#define ARCMSR_MAX_FREECCB_NUM 288
#define ARCMSR_DRIVER_VERSION "Driver Version 1.20.00.13"
#define ARCMSR_DRIVER_VERSION "Driver Version 1.20.00.14"
#define ARCMSR_SCSI_INITIATOR_ID 255
#define ARCMSR_MAX_XFER_SECTORS 512
#define ARCMSR_MAX_XFER_SECTORS_B 4096
#define ARCMSR_MAX_TARGETID 17
#define ARCMSR_MAX_TARGETLUN 8
#define ARCMSR_MAX_CMD_PERLUN ARCMSR_MAX_OUTSTANDING_CMD
@ -469,4 +470,3 @@ extern void arcmsr_post_Qbuffer(struct AdapterControlBlock *acb);
extern struct class_device_attribute *arcmsr_host_attrs[];
extern int arcmsr_alloc_sysfs_attr(struct AdapterControlBlock *acb);
void arcmsr_free_sysfs_attr(struct AdapterControlBlock *acb);


@ -57,6 +57,7 @@
#include <linux/dma-mapping.h>
#include <linux/timer.h>
#include <linux/pci.h>
#include <linux/aer.h>
#include <asm/dma.h>
#include <asm/io.h>
#include <asm/system.h>
@ -71,7 +72,7 @@
#include "arcmsr.h"
MODULE_AUTHOR("Erich Chen <erich@areca.com.tw>");
MODULE_DESCRIPTION("ARECA (ARC11xx/12xx) SATA RAID HOST Adapter");
MODULE_DESCRIPTION("ARECA (ARC11xx/12xx/13xx/16xx) SATA/SAS RAID HOST Adapter");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(ARCMSR_DRIVER_VERSION);
@ -93,7 +94,9 @@ static void arcmsr_flush_adapter_cache(struct AdapterControlBlock *acb);
static uint8_t arcmsr_wait_msgint_ready(struct AdapterControlBlock *acb);
static const char *arcmsr_info(struct Scsi_Host *);
static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb);
static pci_ers_result_t arcmsr_pci_error_detected(struct pci_dev *pdev,
pci_channel_state_t state);
static pci_ers_result_t arcmsr_pci_slot_reset(struct pci_dev *pdev);
static int arcmsr_adjust_disk_queue_depth(struct scsi_device *sdev, int queue_depth)
{
if (queue_depth > ARCMSR_MAX_CMD_PERLUN)
@ -104,7 +107,8 @@ static int arcmsr_adjust_disk_queue_depth(struct scsi_device *sdev, int queue_de
static struct scsi_host_template arcmsr_scsi_host_template = {
.module = THIS_MODULE,
.name = "ARCMSR ARECA SATA RAID HOST Adapter" ARCMSR_DRIVER_VERSION,
.name = "ARCMSR ARECA SATA/SAS RAID HOST Adapter"
ARCMSR_DRIVER_VERSION,
.info = arcmsr_info,
.queuecommand = arcmsr_queue_command,
.eh_abort_handler = arcmsr_abort,
@ -119,6 +123,10 @@ static struct scsi_host_template arcmsr_scsi_host_template = {
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = arcmsr_host_attrs,
};
static struct pci_error_handlers arcmsr_pci_error_handlers = {
.error_detected = arcmsr_pci_error_detected,
.slot_reset = arcmsr_pci_slot_reset,
};
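The AER plumbing follows the standard pci_error_handlers contract: the PCI core calls .error_detected when the link reports a fault and, if asked, .slot_reset after resetting the slot. The arcmsr_pci_error_detected body is not shown in this hunk; a conventional sketch of what such a handler does (an assumption about the flow, not the driver's exact code):

	static pci_ers_result_t arcmsr_pci_error_detected(struct pci_dev *pdev,
							  pci_channel_state_t state)
	{
		switch (state) {
		case pci_channel_io_perm_failure:
			return PCI_ERS_RESULT_DISCONNECT;	/* device unrecoverable */
		default:
			return PCI_ERS_RESULT_NEED_RESET;	/* ask the core for slot_reset */
		}
	}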
static struct pci_device_id arcmsr_device_id_table[] = {
{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1110)},
@ -144,7 +152,8 @@ static struct pci_driver arcmsr_pci_driver = {
.id_table = arcmsr_device_id_table,
.probe = arcmsr_probe,
.remove = arcmsr_remove,
.shutdown = arcmsr_shutdown
.shutdown = arcmsr_shutdown,
.err_handler = &arcmsr_pci_error_handlers,
};
static irqreturn_t arcmsr_do_interrupt(int irq, void *dev_id)
@ -328,6 +337,8 @@ static int arcmsr_probe(struct pci_dev *pdev,
arcmsr_iop_init(acb);
pci_set_drvdata(pdev, host);
if (strncmp(acb->firm_version, "V1.42", 5) >= 0)
host->max_sectors= ARCMSR_MAX_XFER_SECTORS_B;
error = scsi_add_host(host, &pdev->dev);
if (error)
@ -338,6 +349,7 @@ static int arcmsr_probe(struct pci_dev *pdev,
goto out_free_sysfs;
scsi_scan_host(host);
pci_enable_pcie_error_reporting(pdev);
return 0;
out_free_sysfs:
out_free_irq:
@ -369,19 +381,9 @@ static void arcmsr_abort_allcmd(struct AdapterControlBlock *acb)
static void arcmsr_pci_unmap_dma(struct CommandControlBlock *ccb)
{
struct AdapterControlBlock *acb = ccb->acb;
struct scsi_cmnd *pcmd = ccb->pcmd;
if (pcmd->use_sg != 0) {
struct scatterlist *sl;
sl = (struct scatterlist *)pcmd->request_buffer;
pci_unmap_sg(acb->pdev, sl, pcmd->use_sg, pcmd->sc_data_direction);
}
else if (pcmd->request_bufflen != 0)
pci_unmap_single(acb->pdev,
pcmd->SCp.dma_handle,
pcmd->request_bufflen, pcmd->sc_data_direction);
scsi_dma_unmap(pcmd);
}
static void arcmsr_ccb_complete(struct CommandControlBlock *ccb, int stand_flag)
@ -498,7 +500,7 @@ static void arcmsr_enable_outbound_ints(struct AdapterControlBlock *acb,
static void arcmsr_flush_adapter_cache(struct AdapterControlBlock *acb)
{
struct MessageUnit __iomem *reg=acb->pmu;
struct MessageUnit __iomem *reg = acb->pmu;
writel(ARCMSR_INBOUND_MESG0_FLUSH_CACHE, &reg->inbound_msgaddr0);
if (arcmsr_wait_msgint_ready(acb))
@ -551,6 +553,7 @@ static void arcmsr_build_ccb(struct AdapterControlBlock *acb,
int8_t *psge = (int8_t *)&arcmsr_cdb->u;
uint32_t address_lo, address_hi;
int arccdbsize = 0x30;
int nseg;
ccb->pcmd = pcmd;
memset(arcmsr_cdb, 0, sizeof (struct ARCMSR_CDB));
@ -561,20 +564,20 @@ static void arcmsr_build_ccb(struct AdapterControlBlock *acb,
arcmsr_cdb->CdbLength = (uint8_t)pcmd->cmd_len;
arcmsr_cdb->Context = (unsigned long)arcmsr_cdb;
memcpy(arcmsr_cdb->Cdb, pcmd->cmnd, pcmd->cmd_len);
if (pcmd->use_sg) {
int length, sgcount, i, cdb_sgcount = 0;
struct scatterlist *sl;
/* Get Scatter Gather List from scsiport. */
sl = (struct scatterlist *) pcmd->request_buffer;
sgcount = pci_map_sg(acb->pdev, sl, pcmd->use_sg,
pcmd->sc_data_direction);
nseg = scsi_dma_map(pcmd);
BUG_ON(nseg < 0);
if (nseg) {
int length, i, cdb_sgcount = 0;
struct scatterlist *sg;
/* map stor port SG list to our iop SG List. */
for (i = 0; i < sgcount; i++) {
scsi_for_each_sg(pcmd, sg, nseg, i) {
/* Get the physical address of the current data pointer */
length = cpu_to_le32(sg_dma_len(sl));
address_lo = cpu_to_le32(dma_addr_lo32(sg_dma_address(sl)));
address_hi = cpu_to_le32(dma_addr_hi32(sg_dma_address(sl)));
length = cpu_to_le32(sg_dma_len(sg));
address_lo = cpu_to_le32(dma_addr_lo32(sg_dma_address(sg)));
address_hi = cpu_to_le32(dma_addr_hi32(sg_dma_address(sg)));
if (address_hi == 0) {
struct SG32ENTRY *pdma_sg = (struct SG32ENTRY *)psge;
@ -591,32 +594,12 @@ static void arcmsr_build_ccb(struct AdapterControlBlock *acb,
psge += sizeof (struct SG64ENTRY);
arccdbsize += sizeof (struct SG64ENTRY);
}
sl++;
cdb_sgcount++;
}
arcmsr_cdb->sgcount = (uint8_t)cdb_sgcount;
arcmsr_cdb->DataLength = pcmd->request_bufflen;
arcmsr_cdb->DataLength = scsi_bufflen(pcmd);
if ( arccdbsize > 256)
arcmsr_cdb->Flags |= ARCMSR_CDB_FLAG_SGL_BSIZE;
} else if (pcmd->request_bufflen) {
dma_addr_t dma_addr;
dma_addr = pci_map_single(acb->pdev, pcmd->request_buffer,
pcmd->request_bufflen, pcmd->sc_data_direction);
pcmd->SCp.dma_handle = dma_addr;
address_lo = cpu_to_le32(dma_addr_lo32(dma_addr));
address_hi = cpu_to_le32(dma_addr_hi32(dma_addr));
if (address_hi == 0) {
struct SG32ENTRY *pdma_sg = (struct SG32ENTRY *)psge;
pdma_sg->address = address_lo;
pdma_sg->length = pcmd->request_bufflen;
} else {
struct SG64ENTRY *pdma_sg = (struct SG64ENTRY *)psge;
pdma_sg->addresshigh = address_hi;
pdma_sg->address = address_lo;
pdma_sg->length = pcmd->request_bufflen|IS_SG64_ADDR;
}
arcmsr_cdb->sgcount = 1;
arcmsr_cdb->DataLength = pcmd->request_bufflen;
}
if (pcmd->sc_data_direction == DMA_TO_DEVICE ) {
arcmsr_cdb->Flags |= ARCMSR_CDB_FLAG_WRITE;
@ -747,7 +730,7 @@ static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb)
int id, lun;
/*
****************************************************************
** areca cdb command done
** areca cdb command done
****************************************************************
*/
while (1) {
@ -758,20 +741,20 @@ static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb)
(flag_ccb << 5));
if ((ccb->acb != acb) || (ccb->startdone != ARCMSR_CCB_START)) {
if (ccb->startdone == ARCMSR_CCB_ABORTED) {
struct scsi_cmnd *abortcmd=ccb->pcmd;
struct scsi_cmnd *abortcmd = ccb->pcmd;
if (abortcmd) {
abortcmd->result |= DID_ABORT << 16;
arcmsr_ccb_complete(ccb, 1);
printk(KERN_NOTICE
"arcmsr%d: ccb='0x%p' isr got aborted command \n"
"arcmsr%d: ccb ='0x%p' isr got aborted command \n"
, acb->host->host_no, ccb);
}
continue;
}
printk(KERN_NOTICE
"arcmsr%d: isr get an illegal ccb command done acb='0x%p'"
"ccb='0x%p' ccbacb='0x%p' startdone = 0x%x"
" ccboutstandingcount=%d \n"
"arcmsr%d: isr get an illegal ccb command done acb = '0x%p'"
"ccb = '0x%p' ccbacb = '0x%p' startdone = 0x%x"
" ccboutstandingcount = %d \n"
, acb->host->host_no
, acb
, ccb
@ -791,7 +774,7 @@ static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb)
switch(ccb->arcmsr_cdb.DeviceStatus) {
case ARCMSR_DEV_SELECT_TIMEOUT: {
acb->devstate[id][lun] = ARECA_RAID_GONE;
ccb->pcmd->result = DID_TIME_OUT << 16;
ccb->pcmd->result = DID_NO_CONNECT << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
@ -810,8 +793,8 @@ static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb)
break;
default:
printk(KERN_NOTICE
"arcmsr%d: scsi id=%d lun=%d"
" isr get command error done,"
"arcmsr%d: scsi id = %d lun = %d"
" isr get command error done, "
"but got unknown DeviceStatus = 0x%x \n"
, acb->host->host_no
, id
@ -848,24 +831,21 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb, struct scsi_
struct CMD_MESSAGE_FIELD *pcmdmessagefld;
int retvalue = 0, transfer_len = 0;
char *buffer;
struct scatterlist *sg;
uint32_t controlcode = (uint32_t ) cmd->cmnd[5] << 24 |
(uint32_t ) cmd->cmnd[6] << 16 |
(uint32_t ) cmd->cmnd[7] << 8 |
(uint32_t ) cmd->cmnd[8];
/* 4 bytes: Areca io control code */
if (cmd->use_sg) {
struct scatterlist *sg = (struct scatterlist *)cmd->request_buffer;
buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
if (cmd->use_sg > 1) {
retvalue = ARCMSR_MESSAGE_FAIL;
goto message_out;
}
transfer_len += sg->length;
} else {
buffer = cmd->request_buffer;
transfer_len = cmd->request_bufflen;
sg = scsi_sglist(cmd);
buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
if (scsi_sg_count(cmd) > 1) {
retvalue = ARCMSR_MESSAGE_FAIL;
goto message_out;
}
transfer_len += sg->length;
if (transfer_len > sizeof(struct CMD_MESSAGE_FIELD)) {
retvalue = ARCMSR_MESSAGE_FAIL;
goto message_out;
@ -1057,12 +1037,9 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb, struct scsi_
retvalue = ARCMSR_MESSAGE_FAIL;
}
message_out:
if (cmd->use_sg) {
struct scatterlist *sg;
sg = scsi_sglist(cmd);
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
sg = (struct scatterlist *) cmd->request_buffer;
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
}
return retvalue;
}
@ -1085,6 +1062,7 @@ static void arcmsr_handle_virtual_command(struct AdapterControlBlock *acb,
case INQUIRY: {
unsigned char inqdata[36];
char *buffer;
struct scatterlist *sg;
if (cmd->device->lun) {
cmd->result = (DID_TIME_OUT << 16);
@ -1096,7 +1074,7 @@ static void arcmsr_handle_virtual_command(struct AdapterControlBlock *acb,
inqdata[1] = 0;
/* rem media bit & Dev Type Modifier */
inqdata[2] = 0;
/* ISO,ECMA,& ANSI versions */
/* ISO, ECMA, & ANSI versions */
inqdata[4] = 31;
/* length of additional data */
strncpy(&inqdata[8], "Areca ", 8);
@ -1104,21 +1082,14 @@ static void arcmsr_handle_virtual_command(struct AdapterControlBlock *acb,
strncpy(&inqdata[16], "RAID controller ", 16);
/* Product Identification */
strncpy(&inqdata[32], "R001", 4); /* Product Revision */
if (cmd->use_sg) {
struct scatterlist *sg;
sg = (struct scatterlist *) cmd->request_buffer;
buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
} else {
buffer = cmd->request_buffer;
}
sg = scsi_sglist(cmd);
buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
memcpy(buffer, inqdata, sizeof(inqdata));
if (cmd->use_sg) {
struct scatterlist *sg;
sg = scsi_sglist(cmd);
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
sg = (struct scatterlist *) cmd->request_buffer;
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
}
cmd->scsi_done(cmd);
}
break;
@ -1153,7 +1124,7 @@ static int arcmsr_queue_command(struct scsi_cmnd *cmd,
, acb->host->host_no);
return SCSI_MLQUEUE_HOST_BUSY;
}
if(target == 16) {
if (target == 16) {
/* virtual device for iop message transfer */
arcmsr_handle_virtual_command(acb, cmd);
return 0;
@ -1166,7 +1137,7 @@ static int arcmsr_queue_command(struct scsi_cmnd *cmd,
printk(KERN_NOTICE
"arcmsr%d: block 'read/write'"
"command with gone raid volume"
" Cmd=%2x, TargetId=%d, Lun=%d \n"
" Cmd = %2x, TargetId = %d, Lun = %d \n"
, acb->host->host_no
, cmd->cmnd[0]
, target, lun);
@ -1257,7 +1228,7 @@ static void arcmsr_polling_ccbdone(struct AdapterControlBlock *acb,
if ((ccb->startdone == ARCMSR_CCB_ABORTED) ||
(ccb == poll_ccb)) {
printk(KERN_NOTICE
"arcmsr%d: scsi id=%d lun=%d ccb='0x%p'"
"arcmsr%d: scsi id = %d lun = %d ccb = '0x%p'"
" poll command abort successfully \n"
, acb->host->host_no
, ccb->pcmd->device->id
@ -1270,8 +1241,8 @@ static void arcmsr_polling_ccbdone(struct AdapterControlBlock *acb,
}
printk(KERN_NOTICE
"arcmsr%d: polling get an illegal ccb"
" command done ccb='0x%p'"
"ccboutstandingcount=%d \n"
" command done ccb ='0x%p'"
"ccboutstandingcount = %d \n"
, acb->host->host_no
, ccb
, atomic_read(&acb->ccboutstandingcount));
@ -1288,7 +1259,7 @@ static void arcmsr_polling_ccbdone(struct AdapterControlBlock *acb,
switch(ccb->arcmsr_cdb.DeviceStatus) {
case ARCMSR_DEV_SELECT_TIMEOUT: {
acb->devstate[id][lun] = ARECA_RAID_GONE;
ccb->pcmd->result = DID_TIME_OUT << 16;
ccb->pcmd->result = DID_NO_CONNECT << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
@ -1307,7 +1278,7 @@ static void arcmsr_polling_ccbdone(struct AdapterControlBlock *acb,
break;
default:
printk(KERN_NOTICE
"arcmsr%d: scsi id=%d lun=%d"
"arcmsr%d: scsi id = %d lun = %d"
" polling and getting command error done"
"but got unknown DeviceStatus = 0x%x \n"
, acb->host->host_no
@ -1322,6 +1293,94 @@ static void arcmsr_polling_ccbdone(struct AdapterControlBlock *acb,
}
}
}
static void arcmsr_done4_abort_postqueue(struct AdapterControlBlock *acb)
{
int i = 0, found = 0;
int id, lun;
uint32_t flag_ccb, outbound_intstatus;
struct MessageUnit __iomem *reg = acb->pmu;
struct CommandControlBlock *ccb;
/*clear and abort all outbound posted Q*/
while (((flag_ccb = readl(&reg->outbound_queueport)) != 0xFFFFFFFF) &&
(i++ < 256)){
ccb = (struct CommandControlBlock *)(acb->vir2phy_offset +
(flag_ccb << 5));
if (ccb){
if ((ccb->acb != acb)||(ccb->startdone != \
ARCMSR_CCB_START)){
printk(KERN_NOTICE "arcmsr%d: polling get \
an illegal ccb" "command done ccb = '0x%p'""ccboutstandingcount = %d \n",
acb->host->host_no, ccb,
atomic_read(&acb->ccboutstandingcount));
continue;
}
id = ccb->pcmd->device->id;
lun = ccb->pcmd->device->lun;
if (!(flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR)){
if (acb->devstate[id][lun] == ARECA_RAID_GONE)
acb->devstate[id][lun] = ARECA_RAID_GOOD;
ccb->pcmd->result = DID_OK << 16;
arcmsr_ccb_complete(ccb, 1);
}
else {
switch(ccb->arcmsr_cdb.DeviceStatus) {
case ARCMSR_DEV_SELECT_TIMEOUT: {
acb->devstate[id][lun] = ARECA_RAID_GONE;
ccb->pcmd->result = DID_NO_CONNECT << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
case ARCMSR_DEV_ABORTED:
case ARCMSR_DEV_INIT_FAIL: {
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_BAD_TARGET << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
case ARCMSR_DEV_CHECK_CONDITION: {
acb->devstate[id][lun] =
ARECA_RAID_GOOD;
arcmsr_report_sense_info(ccb);
arcmsr_ccb_complete(ccb, 1);
}
break;
default:
printk(KERN_NOTICE
"arcmsr%d: scsi id = %d \
lun = %d""polling and \
getting command error \
done""but got unknown \
DeviceStatus = 0x%x \n",
acb->host->host_no, id,
lun, ccb->arcmsr_cdb.DeviceStatus);
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_BAD_TARGET << 16;
arcmsr_ccb_complete(ccb, 1);
break;
}
}
found = 1;
}
}
if (found){
outbound_intstatus = readl(&reg->outbound_intstatus) & \
acb->outbound_int_enable;
writel(outbound_intstatus, &reg->outbound_intstatus);
/*clear interrupt*/
}
return;
}
static void arcmsr_iop_init(struct AdapterControlBlock *acb)
{
@ -1355,7 +1414,6 @@ static void arcmsr_iop_init(struct AdapterControlBlock *acb)
static void arcmsr_iop_reset(struct AdapterControlBlock *acb)
{
struct MessageUnit __iomem *reg = acb->pmu;
struct CommandControlBlock *ccb;
uint32_t intmask_org;
int i = 0;
@ -1368,21 +1426,17 @@ static void arcmsr_iop_reset(struct AdapterControlBlock *acb)
/* disable all outbound interrupt */
intmask_org = arcmsr_disable_outbound_ints(acb);
/* clear all outbound posted Q */
for (i = 0; i < ARCMSR_MAX_OUTSTANDING_CMD; i++)
readl(&reg->outbound_queueport);
arcmsr_done4_abort_postqueue(acb);
for (i = 0; i < ARCMSR_MAX_FREECCB_NUM; i++) {
ccb = acb->pccb_pool[i];
if ((ccb->startdone == ARCMSR_CCB_START) ||
(ccb->startdone == ARCMSR_CCB_ABORTED)) {
if (ccb->startdone == ARCMSR_CCB_START) {
ccb->startdone = ARCMSR_CCB_ABORTED;
ccb->pcmd->result = DID_ABORT << 16;
arcmsr_ccb_complete(ccb, 1);
}
}
/* enable all outbound interrupt */
arcmsr_enable_outbound_ints(acb, intmask_org);
}
atomic_set(&acb->ccboutstandingcount, 0);
}
static int arcmsr_bus_reset(struct scsi_cmnd *cmd)
@ -1428,10 +1482,9 @@ static int arcmsr_abort(struct scsi_cmnd *cmd)
int i = 0;
printk(KERN_NOTICE
"arcmsr%d: abort device command of scsi id=%d lun=%d \n",
"arcmsr%d: abort device command of scsi id = %d lun = %d \n",
acb->host->host_no, cmd->device->id, cmd->device->lun);
acb->num_aborts++;
/*
************************************************
** the all interrupt service routine is locked
@ -1486,10 +1539,306 @@ static const char *arcmsr_info(struct Scsi_Host *host)
type = "X-TYPE";
break;
}
sprintf(buf, "Areca %s Host Adapter RAID Controller%s\n %s",
sprintf(buf, "Areca %s Host Adapter RAID Controller%s\n %s",
type, raid6 ? "( RAID6 capable)" : "",
ARCMSR_DRIVER_VERSION);
return buf;
}
static pci_ers_result_t arcmsr_pci_slot_reset(struct pci_dev *pdev)
{
struct Scsi_Host *host;
struct AdapterControlBlock *acb;
uint8_t bus, dev_fun;
int error;
error = pci_enable_device(pdev);
if (error)
return PCI_ERS_RESULT_DISCONNECT;
pci_set_master(pdev);
host = scsi_host_alloc(&arcmsr_scsi_host_template,
sizeof(struct AdapterControlBlock));
if (!host)
return PCI_ERS_RESULT_DISCONNECT;
acb = (struct AdapterControlBlock *)host->hostdata;
memset(acb, 0, sizeof (struct AdapterControlBlock));
error = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
if (error) {
error = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
if (error) {
printk(KERN_WARNING
"scsi%d: No suitable DMA mask available\n",
host->host_no);
return PCI_ERS_RESULT_DISCONNECT;
}
}
bus = pdev->bus->number;
dev_fun = pdev->devfn;
acb->pdev = pdev;
acb->host = host;
host->max_sectors = ARCMSR_MAX_XFER_SECTORS;
host->max_lun = ARCMSR_MAX_TARGETLUN;
host->max_id = ARCMSR_MAX_TARGETID;/*16:8*/
host->max_cmd_len = 16; /* needed for 64-bit LBA, i.e. volumes over 2 TB */
host->sg_tablesize = ARCMSR_MAX_SG_ENTRIES;
host->can_queue = ARCMSR_MAX_FREECCB_NUM; /* max simultaneous cmds */
host->cmd_per_lun = ARCMSR_MAX_CMD_PERLUN;
host->this_id = ARCMSR_SCSI_INITIATOR_ID;
host->unique_id = (bus << 8) | dev_fun;
host->irq = pdev->irq;
error = pci_request_regions(pdev, "arcmsr");
if (error)
return PCI_ERS_RESULT_DISCONNECT;
acb->pmu = ioremap(pci_resource_start(pdev, 0),
pci_resource_len(pdev, 0));
if (!acb->pmu) {
printk(KERN_NOTICE "arcmsr%d: memory mapping region failed\n",
acb->host->host_no);
return PCI_ERS_RESULT_DISCONNECT;
}
acb->acb_flags |= (ACB_F_MESSAGE_WQBUFFER_CLEARED |
ACB_F_MESSAGE_RQBUFFER_CLEARED |
ACB_F_MESSAGE_WQBUFFER_READED);
acb->acb_flags &= ~ACB_F_SCSISTOPADAPTER;
INIT_LIST_HEAD(&acb->ccb_free_list);
error = arcmsr_alloc_ccb_pool(acb);
if (error)
return PCI_ERS_RESULT_DISCONNECT;
error = request_irq(pdev->irq, arcmsr_do_interrupt,
IRQF_DISABLED | IRQF_SHARED, "arcmsr", acb);
if (error)
return PCI_ERS_RESULT_DISCONNECT;
arcmsr_iop_init(acb);
if (strncmp(acb->firm_version, "V1.42", 5) >= 0)
host->max_sectors = ARCMSR_MAX_XFER_SECTORS_B;
pci_set_drvdata(pdev, host);
error = scsi_add_host(host, &pdev->dev);
if (error)
return PCI_ERS_RESULT_DISCONNECT;
error = arcmsr_alloc_sysfs_attr(acb);
if (error)
return PCI_ERS_RESULT_DISCONNECT;
scsi_scan_host(host);
return PCI_ERS_RESULT_RECOVERED;
}
static void arcmsr_pci_ers_need_reset_forepart(struct pci_dev *pdev)
{
struct Scsi_Host *host = pci_get_drvdata(pdev);
struct AdapterControlBlock *acb = (struct AdapterControlBlock *) host->hostdata;
struct MessageUnit __iomem *reg = acb->pmu;
struct CommandControlBlock *ccb;
/*clear and abort all outbound posted Q*/
int i = 0, found = 0;
int id, lun;
uint32_t flag_ccb, outbound_intstatus;
while (((flag_ccb = readl(&reg->outbound_queueport)) != 0xFFFFFFFF) &&
(i++ < 256)){
ccb = (struct CommandControlBlock *)(acb->vir2phy_offset
+ (flag_ccb << 5));
if (ccb){
if ((ccb->acb != acb)||(ccb->startdone !=
ARCMSR_CCB_START)){
printk(KERN_NOTICE "arcmsr%d: polling got an illegal ccb,"
" command done ccb = '0x%p', ccboutstandingcount = %d\n",
acb->host->host_no, ccb,
atomic_read(&acb->ccboutstandingcount));
continue;
}
id = ccb->pcmd->device->id;
lun = ccb->pcmd->device->lun;
if (!(flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR)) {
if (acb->devstate[id][lun] ==
ARECA_RAID_GONE)
acb->devstate[id][lun] =
ARECA_RAID_GOOD;
ccb->pcmd->result = DID_OK << 16;
arcmsr_ccb_complete(ccb, 1);
}
else {
switch(ccb->arcmsr_cdb.DeviceStatus) {
case ARCMSR_DEV_SELECT_TIMEOUT: {
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_NO_CONNECT << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
case ARCMSR_DEV_ABORTED:
case ARCMSR_DEV_INIT_FAIL: {
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_BAD_TARGET << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
case ARCMSR_DEV_CHECK_CONDITION: {
acb->devstate[id][lun] =
ARECA_RAID_GOOD;
arcmsr_report_sense_info(ccb);
arcmsr_ccb_complete(ccb, 1);
}
break;
default:
printk(KERN_NOTICE
"arcmsr%d: scsi id = %d lun = %d"
" polling got a command error done,"
" but with unknown DeviceStatus = 0x%x\n",
acb->host->host_no, id, lun,
ccb->arcmsr_cdb.DeviceStatus);
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_BAD_TARGET << 16;
arcmsr_ccb_complete(ccb, 1);
break;
}
}
found = 1;
}
}
if (found){
outbound_intstatus = readl(&reg->outbound_intstatus) &
acb->outbound_int_enable;
writel(outbound_intstatus, &reg->outbound_intstatus);
/*clear interrupt*/
}
return;
}
static void arcmsr_pci_ers_disconnect_forepart(struct pci_dev *pdev)
{
struct Scsi_Host *host = pci_get_drvdata(pdev);
struct AdapterControlBlock *acb = (struct AdapterControlBlock *) host->hostdata;
struct MessageUnit __iomem *reg = acb->pmu;
struct CommandControlBlock *ccb;
/*clear and abort all outbound posted Q*/
int i = 0, found = 0;
int id, lun;
uint32_t flag_ccb, outbound_intstatus;
while (((flag_ccb = readl(&reg->outbound_queueport)) != 0xFFFFFFFF) &&
(i++ < 256)){
ccb = (struct CommandControlBlock *)(acb->vir2phy_offset +
(flag_ccb << 5));
if (ccb){
if ((ccb->acb != acb)||(ccb->startdone !=
ARCMSR_CCB_START)){
printk(KERN_NOTICE
"arcmsr%d: polling got an illegal ccb,"
" command done ccb = '0x%p',"
" ccboutstandingcount = %d\n",
acb->host->host_no, ccb,
atomic_read(&acb->ccboutstandingcount));
continue;
}
id = ccb->pcmd->device->id;
lun = ccb->pcmd->device->lun;
if (!(flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR)) {
if (acb->devstate[id][lun] == ARECA_RAID_GONE)
acb->devstate[id][lun] = ARECA_RAID_GOOD;
ccb->pcmd->result = DID_OK << 16;
arcmsr_ccb_complete(ccb, 1);
}
else {
switch(ccb->arcmsr_cdb.DeviceStatus) {
case ARCMSR_DEV_SELECT_TIMEOUT: {
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_NO_CONNECT << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
case ARCMSR_DEV_ABORTED:
case ARCMSR_DEV_INIT_FAIL: {
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_BAD_TARGET << 16;
arcmsr_ccb_complete(ccb, 1);
}
break;
case ARCMSR_DEV_CHECK_CONDITION: {
acb->devstate[id][lun] =
ARECA_RAID_GOOD;
arcmsr_report_sense_info(ccb);
arcmsr_ccb_complete(ccb, 1);
}
break;
default:
printk(KERN_NOTICE
"arcmsr%d: scsi id = %d lun = %d"
" polling got a command error done,"
" but with unknown DeviceStatus = 0x%x\n",
acb->host->host_no,
id, lun, ccb->arcmsr_cdb.DeviceStatus);
acb->devstate[id][lun] =
ARECA_RAID_GONE;
ccb->pcmd->result =
DID_BAD_TARGET << 16;
arcmsr_ccb_complete(ccb, 1);
break;
}
}
found = 1;
}
}
if (found){
outbound_intstatus = readl(&reg->outbound_intstatus) &
acb->outbound_int_enable;
writel(outbound_intstatus, &reg->outbound_intstatus);
/*clear interrupt*/
}
return;
}
static pci_ers_result_t arcmsr_pci_error_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
switch (state) {
case pci_channel_io_frozen:
arcmsr_pci_ers_need_reset_forepart(pdev);
return PCI_ERS_RESULT_NEED_RESET;
case pci_channel_io_perm_failure:
arcmsr_pci_ers_disconnect_forepart(pdev);
return PCI_ERS_RESULT_DISCONNECT;
break;
default:
return PCI_ERS_RESULT_NEED_RESET;
}
}
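For orientation: handlers like the two above are published through struct pci_error_handlers and attached to the driver via its err_handler member. A sketch, assuming the usual arcmsr pci_driver definition (the driver struct itself is outside this hunk):

static struct pci_error_handlers arcmsr_pci_error_handlers = {
	.error_detected	= arcmsr_pci_error_detected,
	.slot_reset	= arcmsr_pci_slot_reset,
};

static struct pci_driver arcmsr_pci_driver = {
	.name		= "arcmsr",
	/* .id_table, .probe, .remove ... */
	.err_handler	= &arcmsr_pci_error_handlers,
};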

View File

@ -1,76 +0,0 @@
/*
* Detection routine for the NCR53c710 based BVME6000 SCSI Controllers for Linux.
*
* Based on work by Alan Hourihane
*/
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/blkdev.h>
#include <linux/zorro.h>
#include <asm/setup.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/bvme6000hw.h>
#include <asm/irq.h>
#include "scsi.h"
#include <scsi/scsi_host.h>
#include "53c7xx.h"
#include "bvme6000.h"
#include <linux/stat.h>
int bvme6000_scsi_detect(struct scsi_host_template *tpnt)
{
static unsigned char called = 0;
int clock;
long long options;
if (called)
return 0;
if (!MACH_IS_BVME6000)
return 0;
tpnt->proc_name = "BVME6000";
options = OPTION_MEMORY_MAPPED|OPTION_DEBUG_TEST1|OPTION_INTFLY|OPTION_SYNCHRONOUS|OPTION_ALWAYS_SYNCHRONOUS|OPTION_DISCONNECT;
clock = 40000000; /* 66MHz SCSI Clock */
ncr53c7xx_init(tpnt, 0, 710, (unsigned long)BVME_NCR53C710_BASE,
0, BVME_IRQ_SCSI, DMA_NONE,
options, clock);
called = 1;
return 1;
}
static int bvme6000_scsi_release(struct Scsi_Host *shost)
{
if (shost->irq)
free_irq(shost->irq, NULL);
if (shost->dma_channel != 0xff)
free_dma(shost->dma_channel);
if (shost->io_port && shost->n_io_port)
release_region(shost->io_port, shost->n_io_port);
scsi_unregister(shost);
return 0;
}
static struct scsi_host_template driver_template = {
.name = "BVME6000 NCR53c710 SCSI",
.detect = bvme6000_scsi_detect,
.release = bvme6000_scsi_release,
.queuecommand = NCR53c7xx_queue_command,
.abort = NCR53c7xx_abort,
.reset = NCR53c7xx_reset,
.can_queue = 24,
.this_id = 7,
.sg_tablesize = 63,
.cmd_per_lun = 3,
.use_clustering = DISABLE_CLUSTERING
};
#include "scsi_module.c"

View File

@ -1,24 +0,0 @@
#ifndef BVME6000_SCSI_H
#define BVME6000_SCSI_H
#include <linux/types.h>
int bvme6000_scsi_detect(struct scsi_host_template *);
const char *NCR53c7x0_info(void);
int NCR53c7xx_queue_command(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
int NCR53c7xx_abort(Scsi_Cmnd *);
int NCR53c7x0_release (struct Scsi_Host *);
int NCR53c7xx_reset(Scsi_Cmnd *, unsigned int);
void NCR53c7x0_intr(int irq, void *dev_id);
#ifndef CMD_PER_LUN
#define CMD_PER_LUN 3
#endif
#ifndef CAN_QUEUE
#define CAN_QUEUE 24
#endif
#include <scsi/scsicam.h>
#endif /* BVME6000_SCSI_H */

View File

@ -0,0 +1,135 @@
/*
* Detection routine for the NCR53c710 based BVME6000 SCSI Controllers for Linux.
*
* Based on work by Alan Hourihane and Kars de Jong
*
* Rewritten to use 53c700.c by Richard Hirst <richard@sleepie.demon.co.uk>
*/
#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <asm/bvme6000hw.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_transport.h>
#include <scsi/scsi_transport_spi.h>
#include "53c700.h"
MODULE_AUTHOR("Richard Hirst <richard@sleepie.demon.co.uk>");
MODULE_DESCRIPTION("BVME6000 NCR53C710 driver");
MODULE_LICENSE("GPL");
static struct scsi_host_template bvme6000_scsi_driver_template = {
.name = "BVME6000 NCR53c710 SCSI",
.proc_name = "BVME6000",
.this_id = 7,
.module = THIS_MODULE,
};
static struct platform_device *bvme6000_scsi_device;
static __devinit int
bvme6000_probe(struct device *dev)
{
struct Scsi_Host * host = NULL;
struct NCR_700_Host_Parameters *hostdata;
if (!MACH_IS_BVME6000)
goto out;
hostdata = kmalloc(sizeof(struct NCR_700_Host_Parameters), GFP_KERNEL);
if (hostdata == NULL) {
printk(KERN_ERR "bvme6000-scsi: "
"Failed to allocate host data\n");
goto out;
}
memset(hostdata, 0, sizeof(struct NCR_700_Host_Parameters));
/* Fill in the required pieces of hostdata */
hostdata->base = (void __iomem *)BVME_NCR53C710_BASE;
hostdata->clock = 40; /* XXX - depends on the CPU clock! */
hostdata->chip710 = 1;
hostdata->dmode_extra = DMODE_FC2;
hostdata->dcntl_extra = EA_710;
hostdata->ctest7_extra = CTEST7_TT1;
/* and register the chip */
host = NCR_700_detect(&bvme6000_scsi_driver_template, hostdata, dev);
if (!host) {
printk(KERN_ERR "bvme6000-scsi: No host detected; "
"board configuration problem?\n");
goto out_free;
}
host->base = BVME_NCR53C710_BASE;
host->this_id = 7;
host->irq = BVME_IRQ_SCSI;
if (request_irq(BVME_IRQ_SCSI, NCR_700_intr, 0, "bvme6000-scsi",
host)) {
printk(KERN_ERR "bvme6000-scsi: request_irq failed\n");
goto out_put_host;
}
scsi_scan_host(host);
return 0;
out_put_host:
scsi_host_put(host);
out_free:
kfree(hostdata);
out:
return -ENODEV;
}
static __devexit int
bvme6000_device_remove(struct device *dev)
{
struct Scsi_Host *host = dev_to_shost(dev);
struct NCR_700_Host_Parameters *hostdata = shost_priv(host);
scsi_remove_host(host);
NCR_700_release(host);
kfree(hostdata);
free_irq(host->irq, host);
return 0;
}
static struct device_driver bvme6000_scsi_driver = {
.name = "bvme6000-scsi",
.bus = &platform_bus_type,
.probe = bvme6000_probe,
.remove = __devexit_p(bvme6000_device_remove),
};
static int __init bvme6000_scsi_init(void)
{
int err;
err = driver_register(&bvme6000_scsi_driver);
if (err)
return err;
bvme6000_scsi_device = platform_device_register_simple("bvme6000-scsi",
-1, NULL, 0);
if (IS_ERR(bvme6000_scsi_device)) {
driver_unregister(&bvme6000_scsi_driver);
return PTR_ERR(bvme6000_scsi_device);
}
return 0;
}
static void __exit bvme6000_scsi_exit(void)
{
platform_device_unregister(bvme6000_scsi_device);
driver_unregister(&bvme6000_scsi_driver);
}
module_init(bvme6000_scsi_init);
module_exit(bvme6000_scsi_exit);

View File

@ -979,6 +979,7 @@ static void send_srb(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb)
static void build_srb(struct scsi_cmnd *cmd, struct DeviceCtlBlk *dcb,
struct ScsiReqBlk *srb)
{
int nseg;
enum dma_data_direction dir = cmd->sc_data_direction;
dprintkdbg(DBG_0, "build_srb: (pid#%li) <%02i-%i>\n",
cmd->pid, dcb->target_id, dcb->target_lun);
@ -1000,27 +1001,30 @@ static void build_srb(struct scsi_cmnd *cmd, struct DeviceCtlBlk *dcb,
srb->scsi_phase = PH_BUS_FREE; /* initial phase */
srb->end_message = 0;
if (dir == PCI_DMA_NONE || !cmd->request_buffer) {
nseg = scsi_dma_map(cmd);
BUG_ON(nseg < 0);
if (dir == PCI_DMA_NONE || !nseg) {
dprintkdbg(DBG_0,
"build_srb: [0] len=%d buf=%p use_sg=%d !MAP=%08x\n",
cmd->bufflen, cmd->request_buffer,
cmd->use_sg, srb->segment_x[0].address);
} else if (cmd->use_sg) {
cmd->bufflen, scsi_sglist(cmd), scsi_sg_count(cmd),
srb->segment_x[0].address);
} else {
int i;
u32 reqlen = cmd->request_bufflen;
struct scatterlist *sl = (struct scatterlist *)
cmd->request_buffer;
u32 reqlen = scsi_bufflen(cmd);
struct scatterlist *sg;
struct SGentry *sgp = srb->segment_x;
srb->sg_count = pci_map_sg(dcb->acb->dev, sl, cmd->use_sg,
dir);
dprintkdbg(DBG_0,
"build_srb: [n] len=%d buf=%p use_sg=%d segs=%d\n",
reqlen, cmd->request_buffer, cmd->use_sg,
srb->sg_count);
for (i = 0; i < srb->sg_count; i++) {
u32 busaddr = (u32)sg_dma_address(&sl[i]);
u32 seglen = (u32)sl[i].length;
srb->sg_count = nseg;
dprintkdbg(DBG_0,
"build_srb: [n] len=%d buf=%p use_sg=%d segs=%d\n",
reqlen, scsi_sglist(cmd), scsi_sg_count(cmd),
srb->sg_count);
scsi_for_each_sg(cmd, sg, srb->sg_count, i) {
u32 busaddr = (u32)sg_dma_address(sg);
u32 seglen = (u32)sg->length;
sgp[i].address = busaddr;
sgp[i].length = seglen;
srb->total_xfer_length += seglen;
@ -1050,23 +1054,6 @@ static void build_srb(struct scsi_cmnd *cmd, struct DeviceCtlBlk *dcb,
dprintkdbg(DBG_SG, "build_srb: [n] map sg %p->%08x(%05x)\n",
srb->segment_x, srb->sg_bus_addr, SEGMENTX_LEN);
} else {
srb->total_xfer_length = cmd->request_bufflen;
srb->sg_count = 1;
srb->segment_x[0].address =
pci_map_single(dcb->acb->dev, cmd->request_buffer,
srb->total_xfer_length, dir);
/* Fixup for WIDE padding - make sure length is even */
if (dcb->sync_period & WIDE_SYNC && srb->total_xfer_length % 2)
srb->total_xfer_length++;
srb->segment_x[0].length = srb->total_xfer_length;
dprintkdbg(DBG_0,
"build_srb: [1] len=%d buf=%p use_sg=%d map=%08x\n",
srb->total_xfer_length, cmd->request_buffer,
cmd->use_sg, srb->segment_x[0].address);
}
srb->request_length = srb->total_xfer_length;
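The dc395x hunks above are one instance of the tree-wide data buffer accessor conversion. A driver-neutral sketch of the pattern follows; my_fill_descriptor() is a hypothetical hardware-specific helper, everything else is the real 2.6.23 accessor API.

#include <scsi/scsi_cmnd.h>

static void my_fill_descriptor(int idx, dma_addr_t addr, unsigned int len);

static int example_map(struct scsi_cmnd *cmd)
{
	struct scatterlist *sg;
	int i, nseg;

	nseg = scsi_dma_map(cmd);	/* 0 = no data, < 0 = mapping error */
	if (nseg <= 0)
		return nseg;
	scsi_for_each_sg(cmd, sg, nseg, i)
		my_fill_descriptor(i, sg_dma_address(sg), sg_dma_len(sg));
	return nseg;	/* balanced by scsi_dma_unmap(cmd) on completion */
}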
@ -2128,7 +2115,7 @@ static void data_out_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
/*clear_fifo(acb, "DOP1"); */
/* KG: What is this supposed to be useful for? WIDE padding stuff? */
if (d_left_counter == 1 && dcb->sync_period & WIDE_SYNC
&& srb->cmd->request_bufflen % 2) {
&& scsi_bufflen(srb->cmd) % 2) {
d_left_counter = 0;
dprintkl(KERN_INFO,
"data_out_phase0: Discard 1 byte (0x%02x)\n",
@ -2159,7 +2146,7 @@ static void data_out_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
sg_update_list(srb, d_left_counter);
/* KG: Most ugly hack! Apparently, this works around a chip bug */
if ((srb->segment_x[srb->sg_index].length ==
diff && srb->cmd->use_sg)
diff && scsi_sg_count(srb->cmd))
|| ((oldxferred & ~PAGE_MASK) ==
(PAGE_SIZE - diff))
) {
@ -2289,19 +2276,15 @@ static void data_in_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
unsigned char *virt, *base = NULL;
unsigned long flags = 0;
size_t len = left_io;
size_t offset = srb->request_length - left_io;
local_irq_save(flags);
/* Assumption: it's inside one page as it's at most 4 bytes and
I just assume it's on a 4-byte boundary */
base = scsi_kmap_atomic_sg(scsi_sglist(srb->cmd),
srb->sg_count, &offset, &len);
virt = base + offset;
if (srb->cmd->use_sg) {
size_t offset = srb->request_length - left_io;
local_irq_save(flags);
/* Assumption: it's inside one page as it's at most 4 bytes and
I just assume it's on a 4-byte boundary */
base = scsi_kmap_atomic_sg((struct scatterlist *)srb->cmd->request_buffer,
srb->sg_count, &offset, &len);
virt = base + offset;
} else {
virt = srb->cmd->request_buffer + srb->cmd->request_bufflen - left_io;
len = left_io;
}
left_io -= len;
while (len) {
@ -2341,10 +2324,8 @@ static void data_in_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
DC395x_write8(acb, TRM_S1040_SCSI_CONFIG2, 0);
}
if (srb->cmd->use_sg) {
scsi_kunmap_atomic_sg(base);
local_irq_restore(flags);
}
scsi_kunmap_atomic_sg(base);
local_irq_restore(flags);
}
/*printk(" %08x", *(u32*)(bus_to_virt (addr))); */
/*srb->total_xfer_length = 0; */
@ -2455,7 +2436,7 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
*/
srb->state |= SRB_DATA_XFER;
DC395x_write32(acb, TRM_S1040_DMA_XHIGHADDR, 0);
if (srb->cmd->use_sg) { /* with S/G */
if (scsi_sg_count(srb->cmd)) { /* with S/G */
io_dir |= DMACMD_SG;
DC395x_write32(acb, TRM_S1040_DMA_XLOWADDR,
srb->sg_bus_addr +
@ -2513,18 +2494,14 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
unsigned char *virt, *base = NULL;
unsigned long flags = 0;
size_t len = left_io;
size_t offset = srb->request_length - left_io;
local_irq_save(flags);
/* Again, max 4 bytes */
base = scsi_kmap_atomic_sg(scsi_sglist(srb->cmd),
srb->sg_count, &offset, &len);
virt = base + offset;
if (srb->cmd->use_sg) {
size_t offset = srb->request_length - left_io;
local_irq_save(flags);
/* Again, max 4 bytes */
base = scsi_kmap_atomic_sg((struct scatterlist *)srb->cmd->request_buffer,
srb->sg_count, &offset, &len);
virt = base + offset;
} else {
virt = srb->cmd->request_buffer + srb->cmd->request_bufflen - left_io;
len = left_io;
}
left_io -= len;
while (len--) {
@ -2536,10 +2513,8 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
sg_subtract_one(srb);
}
if (srb->cmd->use_sg) {
scsi_kunmap_atomic_sg(base);
local_irq_restore(flags);
}
scsi_kunmap_atomic_sg(base);
local_irq_restore(flags);
}
if (srb->dcb->sync_period & WIDE_SYNC) {
if (ln % 2) {
@ -3295,7 +3270,8 @@ static void pci_unmap_srb(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb)
{
struct scsi_cmnd *cmd = srb->cmd;
enum dma_data_direction dir = cmd->sc_data_direction;
if (cmd->use_sg && dir != PCI_DMA_NONE) {
if (scsi_sg_count(cmd) && dir != PCI_DMA_NONE) {
/* unmap DC395x SG list */
dprintkdbg(DBG_SG, "pci_unmap_srb: list=%08x(%05x)\n",
srb->sg_bus_addr, SEGMENTX_LEN);
@ -3303,16 +3279,9 @@ static void pci_unmap_srb(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb)
SEGMENTX_LEN,
PCI_DMA_TODEVICE);
dprintkdbg(DBG_SG, "pci_unmap_srb: segs=%i buffer=%p\n",
cmd->use_sg, cmd->request_buffer);
scsi_sg_count(cmd), scsi_sglist(cmd));
/* unmap the sg segments */
pci_unmap_sg(acb->dev,
(struct scatterlist *)cmd->request_buffer,
cmd->use_sg, dir);
} else if (cmd->request_buffer && dir != PCI_DMA_NONE) {
dprintkdbg(DBG_SG, "pci_unmap_srb: buffer=%08x(%05x)\n",
srb->segment_x[0].address, cmd->request_bufflen);
pci_unmap_single(acb->dev, srb->segment_x[0].address,
cmd->request_bufflen, dir);
scsi_dma_unmap(cmd);
}
}
@ -3352,8 +3321,8 @@ static void srb_done(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
dprintkdbg(DBG_1, "srb_done: (pid#%li) <%02i-%i>\n", srb->cmd->pid,
srb->cmd->device->id, srb->cmd->device->lun);
dprintkdbg(DBG_SG, "srb_done: srb=%p sg=%i(%i/%i) buf=%p\n",
srb, cmd->use_sg, srb->sg_index, srb->sg_count,
cmd->request_buffer);
srb, scsi_sg_count(cmd), srb->sg_index, srb->sg_count,
scsi_sglist(cmd));
status = srb->target_status;
if (srb->flag & AUTO_REQSENSE) {
dprintkdbg(DBG_0, "srb_done: AUTO_REQSENSE1\n");
@ -3482,16 +3451,10 @@ static void srb_done(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
}
}
if (dir != PCI_DMA_NONE) {
if (cmd->use_sg)
pci_dma_sync_sg_for_cpu(acb->dev,
(struct scatterlist *)cmd->
request_buffer, cmd->use_sg, dir);
else if (cmd->request_buffer)
pci_dma_sync_single_for_cpu(acb->dev,
srb->segment_x[0].address,
cmd->request_bufflen, dir);
}
if (dir != PCI_DMA_NONE && scsi_sg_count(cmd))
pci_dma_sync_sg_for_cpu(acb->dev, scsi_sglist(cmd),
scsi_sg_count(cmd), dir);
ckc_only = 0;
/* Check Error Conditions */
ckc_e:
@ -3500,19 +3463,15 @@ static void srb_done(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
unsigned char *base = NULL;
struct ScsiInqData *ptr;
unsigned long flags = 0;
struct scatterlist* sg = scsi_sglist(cmd);
size_t offset = 0, len = sizeof(struct ScsiInqData);
if (cmd->use_sg) {
struct scatterlist* sg = (struct scatterlist *)cmd->request_buffer;
size_t offset = 0, len = sizeof(struct ScsiInqData);
local_irq_save(flags);
base = scsi_kmap_atomic_sg(sg, cmd->use_sg, &offset, &len);
ptr = (struct ScsiInqData *)(base + offset);
} else
ptr = (struct ScsiInqData *)(cmd->request_buffer);
local_irq_save(flags);
base = scsi_kmap_atomic_sg(sg, scsi_sg_count(cmd), &offset, &len);
ptr = (struct ScsiInqData *)(base + offset);
if (!ckc_only && (cmd->result & RES_DID) == 0
&& cmd->cmnd[2] == 0 && cmd->request_bufflen >= 8
&& cmd->cmnd[2] == 0 && scsi_bufflen(cmd) >= 8
&& dir != PCI_DMA_NONE && ptr && (ptr->Vers & 0x07) >= 2)
dcb->inquiry7 = ptr->Flags;
@ -3527,14 +3486,12 @@ static void srb_done(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
}
}
if (cmd->use_sg) {
scsi_kunmap_atomic_sg(base);
local_irq_restore(flags);
}
scsi_kunmap_atomic_sg(base);
local_irq_restore(flags);
}
/* Here is the info for Doug Gilbert's sg3 ... */
cmd->resid = srb->total_xfer_length;
scsi_set_resid(cmd, srb->total_xfer_length);
/* This may be interpreted by sb. or not ... */
cmd->SCp.this_residual = srb->total_xfer_length;
cmd->SCp.buffers_residual = 0;

View File

@ -2078,12 +2078,13 @@ static s32 adpt_scsi_to_i2o(adpt_hba* pHba, struct scsi_cmnd* cmd, struct adpt_d
u32 *lenptr;
int direction;
int scsidir;
int nseg;
u32 len;
u32 reqlen;
s32 rcode;
memset(msg, 0, sizeof(msg));
len = cmd->request_bufflen;
len = scsi_bufflen(cmd);
direction = 0x00000000;
scsidir = 0x00000000; // DATA NO XFER
@ -2140,21 +2141,21 @@ static s32 adpt_scsi_to_i2o(adpt_hba* pHba, struct scsi_cmnd* cmd, struct adpt_d
lenptr=mptr++; /* Remember me - fill in when we know */
reqlen = 14; // SINGLE SGE
/* Now fill in the SGList and command */
if(cmd->use_sg) {
struct scatterlist *sg = (struct scatterlist *)cmd->request_buffer;
int sg_count = pci_map_sg(pHba->pDev, sg, cmd->use_sg,
cmd->sc_data_direction);
nseg = scsi_dma_map(cmd);
BUG_ON(nseg < 0);
if (nseg) {
struct scatterlist *sg;
len = 0;
for(i = 0 ; i < sg_count; i++) {
scsi_for_each_sg(cmd, sg, nseg, i) {
*mptr++ = direction|0x10000000|sg_dma_len(sg);
len+=sg_dma_len(sg);
*mptr++ = sg_dma_address(sg);
sg++;
/* Make this an end of list */
if (i == nseg - 1)
mptr[-2] = direction|0xD0000000|sg_dma_len(sg);
}
/* Make this an end of list */
mptr[-2] = direction|0xD0000000|sg_dma_len(sg-1);
reqlen = mptr - msg;
*lenptr = len;
@ -2163,16 +2164,8 @@ static s32 adpt_scsi_to_i2o(adpt_hba* pHba, struct scsi_cmnd* cmd, struct adpt_d
len, cmd->underflow);
}
} else {
*lenptr = len = cmd->request_bufflen;
if(len == 0) {
reqlen = 12;
} else {
*mptr++ = 0xD0000000|direction|cmd->request_bufflen;
*mptr++ = pci_map_single(pHba->pDev,
cmd->request_buffer,
cmd->request_bufflen,
cmd->sc_data_direction);
}
*lenptr = len = 0;
reqlen = 12;
}
/* Stick the headers on */
@ -2232,7 +2225,7 @@ static s32 adpt_i2o_to_scsi(void __iomem *reply, struct scsi_cmnd* cmd)
hba_status = detailed_status >> 8;
// calculate resid for sg
cmd->resid = cmd->request_bufflen - readl(reply+5);
scsi_set_resid(cmd, scsi_bufflen(cmd) - readl(reply+5));
pHba = (adpt_hba*) cmd->device->host->hostdata[0];
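scsi_set_resid() above replaces the open-coded cmd->resid store; for orientation, the 2.6.23-era accessor pair is just a thin wrapper (sketch of the header definitions, not new driver code):

static inline void scsi_set_resid(struct scsi_cmnd *cmd, int resid)
{
	cmd->resid = resid;
}

static inline int scsi_get_resid(struct scsi_cmnd *cmd)
{
	return cmd->resid;
}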

View File

@ -1609,8 +1609,9 @@ static int eata2x_detect(struct scsi_host_template *tpnt)
static void map_dma(unsigned int i, struct hostdata *ha)
{
unsigned int k, count, pci_dir;
struct scatterlist *sgpnt;
unsigned int k, pci_dir;
int count;
struct scatterlist *sg;
struct mscp *cpp;
struct scsi_cmnd *SCpnt;
@ -1625,38 +1626,19 @@ static void map_dma(unsigned int i, struct hostdata *ha)
cpp->sense_len = sizeof SCpnt->sense_buffer;
if (!SCpnt->use_sg) {
/* If we get here with PCI_DMA_NONE, pci_map_single triggers a BUG() */
if (!SCpnt->request_bufflen)
pci_dir = PCI_DMA_BIDIRECTIONAL;
if (SCpnt->request_buffer)
cpp->data_address = H2DEV(pci_map_single(ha->pdev,
SCpnt->request_buffer,
SCpnt->request_bufflen,
pci_dir));
cpp->data_len = H2DEV(SCpnt->request_bufflen);
return;
}
sgpnt = (struct scatterlist *)SCpnt->request_buffer;
count = pci_map_sg(ha->pdev, sgpnt, SCpnt->use_sg, pci_dir);
for (k = 0; k < count; k++) {
cpp->sglist[k].address = H2DEV(sg_dma_address(&sgpnt[k]));
cpp->sglist[k].num_bytes = H2DEV(sg_dma_len(&sgpnt[k]));
count = scsi_dma_map(SCpnt);
BUG_ON(count < 0);
scsi_for_each_sg(SCpnt, sg, count, k) {
cpp->sglist[k].address = H2DEV(sg_dma_address(sg));
cpp->sglist[k].num_bytes = H2DEV(sg_dma_len(sg));
}
cpp->sg = 1;
cpp->data_address = H2DEV(pci_map_single(ha->pdev, cpp->sglist,
SCpnt->use_sg *
scsi_sg_count(SCpnt) *
sizeof(struct sg_list),
pci_dir));
cpp->data_len = H2DEV((SCpnt->use_sg * sizeof(struct sg_list)));
cpp->data_len = H2DEV((scsi_sg_count(SCpnt) * sizeof(struct sg_list)));
}
static void unmap_dma(unsigned int i, struct hostdata *ha)
@ -1673,9 +1655,7 @@ static void unmap_dma(unsigned int i, struct hostdata *ha)
pci_unmap_single(ha->pdev, DEV2H(cpp->sense_addr),
DEV2H(cpp->sense_len), PCI_DMA_FROMDEVICE);
if (SCpnt->use_sg)
pci_unmap_sg(ha->pdev, SCpnt->request_buffer, SCpnt->use_sg,
pci_dir);
scsi_dma_unmap(SCpnt);
if (!DEV2H(cpp->data_len))
pci_dir = PCI_DMA_BIDIRECTIONAL;
@ -1700,9 +1680,9 @@ static void sync_dma(unsigned int i, struct hostdata *ha)
DEV2H(cpp->sense_len),
PCI_DMA_FROMDEVICE);
if (SCpnt->use_sg)
pci_dma_sync_sg_for_cpu(ha->pdev, SCpnt->request_buffer,
SCpnt->use_sg, pci_dir);
if (scsi_sg_count(SCpnt))
pci_dma_sync_sg_for_cpu(ha->pdev, scsi_sglist(SCpnt),
scsi_sg_count(SCpnt), pci_dir);
if (!DEV2H(cpp->data_len))
pci_dir = PCI_DMA_BIDIRECTIONAL;
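sync_dma() above hands the mapped S/G buffers back to the CPU before the completion path inspects them. Sketch of the sync pair; example_sync() is a made-up name, the two pci_dma_sync_sg_* calls are the real API:

#include <linux/pci.h>
#include <linux/scatterlist.h>

static void example_sync(struct pci_dev *pdev, struct scatterlist *sg,
		int nents, int dir)
{
	pci_dma_sync_sg_for_cpu(pdev, sg, nents, dir);
	/* ... CPU reads the DMA'd data (sense, INQUIRY, ...) ... */
	pci_dma_sync_sg_for_device(pdev, sg, nents, dir);
}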

View File

@ -324,17 +324,14 @@ static void esp_reset_esp(struct esp *esp)
static void esp_map_dma(struct esp *esp, struct scsi_cmnd *cmd)
{
struct esp_cmd_priv *spriv = ESP_CMD_PRIV(cmd);
struct scatterlist *sg = cmd->request_buffer;
struct scatterlist *sg = scsi_sglist(cmd);
int dir = cmd->sc_data_direction;
int total, i;
if (dir == DMA_NONE)
return;
BUG_ON(cmd->use_sg == 0);
spriv->u.num_sg = esp->ops->map_sg(esp, sg,
cmd->use_sg, dir);
spriv->u.num_sg = esp->ops->map_sg(esp, sg, scsi_sg_count(cmd), dir);
spriv->cur_residue = sg_dma_len(sg);
spriv->cur_sg = sg;
@ -407,8 +404,7 @@ static void esp_unmap_dma(struct esp *esp, struct scsi_cmnd *cmd)
if (dir == DMA_NONE)
return;
esp->ops->unmap_sg(esp, cmd->request_buffer,
spriv->u.num_sg, dir);
esp->ops->unmap_sg(esp, scsi_sglist(cmd), spriv->u.num_sg, dir);
}
static void esp_save_pointers(struct esp *esp, struct esp_cmd_entry *ent)
@ -921,7 +917,7 @@ static void esp_event_queue_full(struct esp *esp, struct esp_cmd_entry *ent)
static int esp_queuecommand(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
{
struct scsi_device *dev = cmd->device;
struct esp *esp = host_to_esp(dev->host);
struct esp *esp = shost_priv(dev->host);
struct esp_cmd_priv *spriv;
struct esp_cmd_entry *ent;
@ -2357,7 +2353,7 @@ EXPORT_SYMBOL(scsi_esp_unregister);
static int esp_slave_alloc(struct scsi_device *dev)
{
struct esp *esp = host_to_esp(dev->host);
struct esp *esp = shost_priv(dev->host);
struct esp_target_data *tp = &esp->target[dev->id];
struct esp_lun_data *lp;
@ -2381,7 +2377,7 @@ static int esp_slave_alloc(struct scsi_device *dev)
static int esp_slave_configure(struct scsi_device *dev)
{
struct esp *esp = host_to_esp(dev->host);
struct esp *esp = shost_priv(dev->host);
struct esp_target_data *tp = &esp->target[dev->id];
int goal_tags, queue_depth;
@ -2423,7 +2419,7 @@ static void esp_slave_destroy(struct scsi_device *dev)
static int esp_eh_abort_handler(struct scsi_cmnd *cmd)
{
struct esp *esp = host_to_esp(cmd->device->host);
struct esp *esp = shost_priv(cmd->device->host);
struct esp_cmd_entry *ent, *tmp;
struct completion eh_done;
unsigned long flags;
@ -2539,7 +2535,7 @@ out_failure:
static int esp_eh_bus_reset_handler(struct scsi_cmnd *cmd)
{
struct esp *esp = host_to_esp(cmd->device->host);
struct esp *esp = shost_priv(cmd->device->host);
struct completion eh_reset;
unsigned long flags;
@ -2575,7 +2571,7 @@ static int esp_eh_bus_reset_handler(struct scsi_cmnd *cmd)
/* All bets are off, reset the entire device. */
static int esp_eh_host_reset_handler(struct scsi_cmnd *cmd)
{
struct esp *esp = host_to_esp(cmd->device->host);
struct esp *esp = shost_priv(cmd->device->host);
unsigned long flags;
spin_lock_irqsave(esp->host->host_lock, flags);
@ -2615,7 +2611,7 @@ EXPORT_SYMBOL(scsi_esp_template);
static void esp_get_signalling(struct Scsi_Host *host)
{
struct esp *esp = host_to_esp(host);
struct esp *esp = shost_priv(host);
enum spi_signal_type type;
if (esp->flags & ESP_FLAG_DIFFERENTIAL)
@ -2629,7 +2625,7 @@ static void esp_get_signalling(struct Scsi_Host *host)
static void esp_set_offset(struct scsi_target *target, int offset)
{
struct Scsi_Host *host = dev_to_shost(target->dev.parent);
struct esp *esp = host_to_esp(host);
struct esp *esp = shost_priv(host);
struct esp_target_data *tp = &esp->target[target->id];
tp->nego_goal_offset = offset;
@ -2639,7 +2635,7 @@ static void esp_set_offset(struct scsi_target *target, int offset)
static void esp_set_period(struct scsi_target *target, int period)
{
struct Scsi_Host *host = dev_to_shost(target->dev.parent);
struct esp *esp = host_to_esp(host);
struct esp *esp = shost_priv(host);
struct esp_target_data *tp = &esp->target[target->id];
tp->nego_goal_period = period;
@ -2649,7 +2645,7 @@ static void esp_set_period(struct scsi_target *target, int period)
static void esp_set_width(struct scsi_target *target, int width)
{
struct Scsi_Host *host = dev_to_shost(target->dev.parent);
struct esp *esp = host_to_esp(host);
struct esp *esp = shost_priv(host);
struct esp_target_data *tp = &esp->target[target->id];
tp->nego_goal_width = (width ? 1 : 0);

View File

@ -517,8 +517,6 @@ struct esp {
struct sbus_dma *dma;
};
#define host_to_esp(host) ((struct esp *)(host)->hostdata)
/* A front-end driver for the ESP chip should do the following in
* it's device probe routine:
* 1) Allocate the host and private area using scsi_host_alloc()

View File

@ -410,6 +410,8 @@ static irqreturn_t do_fdomain_16x0_intr( int irq, void *dev_id );
static char * fdomain = NULL;
module_param(fdomain, charp, 0);
#ifndef PCMCIA
static unsigned long addresses[] = {
0xc8000,
0xca000,
@ -426,6 +428,8 @@ static unsigned short ports[] = { 0x140, 0x150, 0x160, 0x170 };
static unsigned short ints[] = { 3, 5, 10, 11, 12, 14, 15, 0 };
#endif /* !PCMCIA */
/*
READ THIS BEFORE YOU ADD A SIGNATURE!
@ -458,6 +462,8 @@ static unsigned short ints[] = { 3, 5, 10, 11, 12, 14, 15, 0 };
*/
#ifndef PCMCIA
static struct signature {
const char *signature;
int sig_offset;
@ -503,6 +509,8 @@ static struct signature {
#define SIGNATURE_COUNT ARRAY_SIZE(signatures)
#endif /* !PCMCIA */
static void print_banner( struct Scsi_Host *shpnt )
{
if (!shpnt) return; /* This won't ever happen */
@ -633,6 +641,8 @@ static int fdomain_test_loopback( void )
return 0;
}
#ifndef PCMCIA
/* fdomain_get_irq assumes that we have a valid MCA ID for a
TMC-1660/TMC-1680 Future Domain board. Now, check to be sure the
bios_base matches these ports. If someone was unlucky enough to have
@ -667,7 +677,6 @@ static int fdomain_get_irq( int base )
static int fdomain_isa_detect( int *irq, int *iobase )
{
#ifndef PCMCIA
int i, j;
int base = 0xdeadbeef;
int flag = 0;
@ -786,11 +795,22 @@ found:
*iobase = base;
return 1; /* success */
#else
return 0;
#endif
}
#else /* PCMCIA */
static int fdomain_isa_detect( int *irq, int *iobase )
{
if (irq)
*irq = 0;
if (iobase)
*iobase = 0;
return 0;
}
#endif /* !PCMCIA */
/* PCI detection function: int fdomain_pci_bios_detect(int* irq, int*
iobase) This function gets the Interrupt Level and I/O base address from
the PCI configuration registers. */
@ -1345,16 +1365,15 @@ static irqreturn_t do_fdomain_16x0_intr(int irq, void *dev_id)
#if ERRORS_ONLY
if (current_SC->cmnd[0] == REQUEST_SENSE && !current_SC->SCp.Status) {
if ((unsigned char)(*((char *)current_SC->request_buffer+2)) & 0x0f) {
char *buf = scsi_sglist(current_SC);
if ((unsigned char)(*(buf + 2)) & 0x0f) {
unsigned char key;
unsigned char code;
unsigned char qualifier;
key = (unsigned char)(*((char *)current_SC->request_buffer + 2))
& 0x0f;
code = (unsigned char)(*((char *)current_SC->request_buffer + 12));
qualifier = (unsigned char)(*((char *)current_SC->request_buffer
+ 13));
key = (unsigned char)(*(buf + 2)) & 0x0f;
code = (unsigned char)(*(buf + 12));
qualifier = (unsigned char)(*(buf + 13));
if (key != UNIT_ATTENTION
&& !(key == NOT_READY
@ -1405,8 +1424,8 @@ static int fdomain_16x0_queue(struct scsi_cmnd *SCpnt,
printk( "queue: target = %d cmnd = 0x%02x pieces = %d size = %u\n",
SCpnt->target,
*(unsigned char *)SCpnt->cmnd,
SCpnt->use_sg,
SCpnt->request_bufflen );
scsi_sg_count(SCpnt),
scsi_bufflen(SCpnt));
#endif
fdomain_make_bus_idle();
@ -1416,20 +1435,19 @@ static int fdomain_16x0_queue(struct scsi_cmnd *SCpnt,
/* Initialize static data */
if (current_SC->use_sg) {
current_SC->SCp.buffer =
(struct scatterlist *)current_SC->request_buffer;
current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page) + current_SC->SCp.buffer->offset;
current_SC->SCp.this_residual = current_SC->SCp.buffer->length;
current_SC->SCp.buffers_residual = current_SC->use_sg - 1;
if (scsi_sg_count(current_SC)) {
current_SC->SCp.buffer = scsi_sglist(current_SC);
current_SC->SCp.ptr = page_address(current_SC->SCp.buffer->page)
+ current_SC->SCp.buffer->offset;
current_SC->SCp.this_residual = current_SC->SCp.buffer->length;
current_SC->SCp.buffers_residual = scsi_sg_count(current_SC) - 1;
} else {
current_SC->SCp.ptr = (char *)current_SC->request_buffer;
current_SC->SCp.this_residual = current_SC->request_bufflen;
current_SC->SCp.buffer = NULL;
current_SC->SCp.buffers_residual = 0;
current_SC->SCp.ptr = 0;
current_SC->SCp.this_residual = 0;
current_SC->SCp.buffer = NULL;
current_SC->SCp.buffers_residual = 0;
}
current_SC->SCp.Status = 0;
current_SC->SCp.Message = 0;
current_SC->SCp.have_data_in = 0;
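The page_address(...->page) + ...->offset expression in the hunk above is the standard way to derive a scatterlist element's kernel virtual address (valid for lowmem pages); later kernels wrap the same arithmetic as sg_virt(). Sketch, as a hypothetical helper:

#include <linux/mm.h>
#include <linux/scatterlist.h>

static inline void *example_sg_virt(struct scatterlist *sg)
{
	return page_address(sg->page) + sg->offset;
}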
@ -1472,8 +1490,8 @@ static void print_info(struct scsi_cmnd *SCpnt)
SCpnt->SCp.phase,
SCpnt->device->id,
*(unsigned char *)SCpnt->cmnd,
SCpnt->use_sg,
SCpnt->request_bufflen );
scsi_sg_count(SCpnt),
scsi_bufflen(SCpnt));
printk( "sent_command = %d, have_data_in = %d, timeout = %d\n",
SCpnt->SCp.sent_command,
SCpnt->SCp.have_data_in,

View File

@ -876,7 +876,7 @@ static int __init gdth_search_pci(gdth_pci_str *pcistr)
/* Vortex only makes RAID controllers.
* We do not really want to specify all 550 ids here, so wildcard match.
*/
static struct pci_device_id gdthtable[] __attribute_used__ = {
static struct pci_device_id gdthtable[] __maybe_unused = {
{PCI_VENDOR_ID_VORTEX,PCI_ANY_ID,PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_INTEL,PCI_DEVICE_ID_INTEL_SRC,PCI_ANY_ID,PCI_ANY_ID},
{PCI_VENDOR_ID_INTEL,PCI_DEVICE_ID_INTEL_SRC_XSCALE,PCI_ANY_ID,PCI_ANY_ID},
@ -1955,7 +1955,7 @@ static int __init gdth_search_drives(int hanum)
for (j = 0; j < 12; ++j)
rtc[j] = CMOS_READ(j);
} while (rtc[0] != CMOS_READ(0));
spin_lock_irqrestore(&rtc_lock, flags);
spin_unlock_irqrestore(&rtc_lock, flags);
TRACE2(("gdth_search_drives(): RTC: %x/%x/%x\n",*(ulong32 *)&rtc[0],
*(ulong32 *)&rtc[4], *(ulong32 *)&rtc[8]));
/* 3. send to controller firmware */
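The gdth one-liner above fixes an unlock that had been typed as a second lock. The intended pattern, as a standalone sketch using the real rtc_lock/CMOS_READ interfaces:

#include <linux/spinlock.h>
#include <linux/mc146818rtc.h>	/* rtc_lock, CMOS_READ */

static void example_read_rtc(unsigned char rtc[12])
{
	unsigned long flags;
	int j;

	spin_lock_irqsave(&rtc_lock, flags);
	do {
		for (j = 0; j < 12; ++j)
			rtc[j] = CMOS_READ(j);
	} while (rtc[0] != CMOS_READ(0));
	spin_unlock_irqrestore(&rtc_lock, flags);	/* the corrected call */
}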

View File

@ -339,20 +339,8 @@ static void hptiop_host_request_callback(struct hptiop_hba *hba, u32 tag)
scp = hba->reqs[tag].scp;
if (HPT_SCP(scp)->mapped) {
if (scp->use_sg)
pci_unmap_sg(hba->pcidev,
(struct scatterlist *)scp->request_buffer,
scp->use_sg,
scp->sc_data_direction
);
else
pci_unmap_single(hba->pcidev,
HPT_SCP(scp)->dma_handle,
scp->request_bufflen,
scp->sc_data_direction
);
}
if (HPT_SCP(scp)->mapped)
scsi_dma_unmap(scp);
switch (le32_to_cpu(req->header.result)) {
case IOP_RESULT_SUCCESS:
@ -448,43 +436,26 @@ static int hptiop_buildsgl(struct scsi_cmnd *scp, struct hpt_iopsg *psg)
{
struct Scsi_Host *host = scp->device->host;
struct hptiop_hba *hba = (struct hptiop_hba *)host->hostdata;
struct scatterlist *sglist = (struct scatterlist *)scp->request_buffer;
struct scatterlist *sg;
int idx, nseg;
/*
* though we'll not get non-use_sg fields anymore,
* keep use_sg checking anyway
*/
if (scp->use_sg) {
int idx;
nseg = scsi_dma_map(scp);
BUG_ON(nseg < 0);
if (!nseg)
return 0;
HPT_SCP(scp)->sgcnt = pci_map_sg(hba->pcidev,
sglist, scp->use_sg,
scp->sc_data_direction);
HPT_SCP(scp)->mapped = 1;
BUG_ON(HPT_SCP(scp)->sgcnt > hba->max_sg_descriptors);
HPT_SCP(scp)->sgcnt = nseg;
HPT_SCP(scp)->mapped = 1;
for (idx = 0; idx < HPT_SCP(scp)->sgcnt; idx++) {
psg[idx].pci_address =
cpu_to_le64(sg_dma_address(&sglist[idx]));
psg[idx].size = cpu_to_le32(sg_dma_len(&sglist[idx]));
psg[idx].eot = (idx == HPT_SCP(scp)->sgcnt - 1) ?
cpu_to_le32(1) : 0;
}
BUG_ON(HPT_SCP(scp)->sgcnt > hba->max_sg_descriptors);
return HPT_SCP(scp)->sgcnt;
} else {
HPT_SCP(scp)->dma_handle = pci_map_single(
hba->pcidev,
scp->request_buffer,
scp->request_bufflen,
scp->sc_data_direction
);
HPT_SCP(scp)->mapped = 1;
psg->pci_address = cpu_to_le64(HPT_SCP(scp)->dma_handle);
psg->size = cpu_to_le32(scp->request_bufflen);
psg->eot = cpu_to_le32(1);
return 1;
scsi_for_each_sg(scp, sg, HPT_SCP(scp)->sgcnt, idx) {
psg[idx].pci_address = cpu_to_le64(sg_dma_address(sg));
psg[idx].size = cpu_to_le32(sg_dma_len(sg));
psg[idx].eot = (idx == HPT_SCP(scp)->sgcnt - 1) ?
cpu_to_le32(1) : 0;
}
return HPT_SCP(scp)->sgcnt;
}
static int hptiop_queuecommand(struct scsi_cmnd *scp,
@ -529,9 +500,8 @@ static int hptiop_queuecommand(struct scsi_cmnd *scp,
req = (struct hpt_iop_request_scsi_command *)_req->req_virt;
/* build S/G table */
if (scp->request_bufflen)
sg_count = hptiop_buildsgl(scp, req->sg_list);
else
sg_count = hptiop_buildsgl(scp, req->sg_list);
if (!sg_count)
HPT_SCP(scp)->mapped = 0;
req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
@ -540,7 +510,7 @@ static int hptiop_queuecommand(struct scsi_cmnd *scp,
req->header.context = cpu_to_le32(IOPMU_QUEUE_ADDR_HOST_BIT |
(u32)_req->index);
req->header.context_hi32 = 0;
req->dataxfer_length = cpu_to_le32(scp->request_bufflen);
req->dataxfer_length = cpu_to_le32(scsi_bufflen(scp));
req->channel = scp->device->channel;
req->target = scp->device->id;
req->lun = scp->device->lun;

File diff suppressed because it is too large

View File

@ -1,21 +0,0 @@
/*
* Low Level Driver for the IBM Microchannel SCSI Subsystem
* (Headerfile, see Documentation/scsi/ibmmca.txt for description of the
* IBM MCA SCSI-driver.
* For use under the GNU General Public License within the Linux-kernel project.
* This include file works only correctly with kernel 2.4.0 or higher!!! */
#ifndef _IBMMCA_H
#define _IBMMCA_H
/* Common forward declarations for all Linux-versions: */
/* Interfaces to the midlevel Linux SCSI driver */
static int ibmmca_detect (struct scsi_host_template *);
static int ibmmca_release (struct Scsi_Host *);
static int ibmmca_queuecommand (Scsi_Cmnd *, void (*done) (Scsi_Cmnd *));
static int ibmmca_abort (Scsi_Cmnd *);
static int ibmmca_host_reset (Scsi_Cmnd *);
static int ibmmca_biosparam (struct scsi_device *, struct block_device *, sector_t, int *);
#endif /* _IBMMCA_H */

View File

@ -173,9 +173,8 @@ static void release_event_pool(struct event_pool *pool,
}
}
if (in_use)
printk(KERN_WARNING
"ibmvscsi: releasing event pool with %d "
"events still in use?\n", in_use);
dev_warn(hostdata->dev, "releasing event pool with %d "
"events still in use?\n", in_use);
kfree(pool->events);
dma_free_coherent(hostdata->dev,
pool->size * sizeof(*pool->iu_storage),
@ -210,15 +209,13 @@ static void free_event_struct(struct event_pool *pool,
struct srp_event_struct *evt)
{
if (!valid_event_struct(pool, evt)) {
printk(KERN_ERR
"ibmvscsi: Freeing invalid event_struct %p "
"(not in pool %p)\n", evt, pool->events);
dev_err(evt->hostdata->dev, "Freeing invalid event_struct %p "
"(not in pool %p)\n", evt, pool->events);
return;
}
if (atomic_inc_return(&evt->free) != 1) {
printk(KERN_ERR
"ibmvscsi: Freeing event_struct %p "
"which is not in use!\n", evt);
dev_err(evt->hostdata->dev, "Freeing event_struct %p "
"which is not in use!\n", evt);
return;
}
}
@ -353,20 +350,19 @@ static void unmap_cmd_data(struct srp_cmd *cmd,
}
}
static int map_sg_list(int num_entries,
struct scatterlist *sg,
static int map_sg_list(struct scsi_cmnd *cmd, int nseg,
struct srp_direct_buf *md)
{
int i;
struct scatterlist *sg;
u64 total_length = 0;
for (i = 0; i < num_entries; ++i) {
scsi_for_each_sg(cmd, sg, nseg, i) {
struct srp_direct_buf *descr = md + i;
struct scatterlist *sg_entry = &sg[i];
descr->va = sg_dma_address(sg_entry);
descr->len = sg_dma_len(sg_entry);
descr->va = sg_dma_address(sg);
descr->len = sg_dma_len(sg);
descr->key = 0;
total_length += sg_dma_len(sg_entry);
total_length += sg_dma_len(sg);
}
return total_length;
}
@ -387,40 +383,37 @@ static int map_sg_data(struct scsi_cmnd *cmd,
int sg_mapped;
u64 total_length = 0;
struct scatterlist *sg = cmd->request_buffer;
struct srp_direct_buf *data =
(struct srp_direct_buf *) srp_cmd->add_data;
struct srp_indirect_buf *indirect =
(struct srp_indirect_buf *) data;
sg_mapped = dma_map_sg(dev, sg, cmd->use_sg, DMA_BIDIRECTIONAL);
if (sg_mapped == 0)
sg_mapped = scsi_dma_map(cmd);
if (!sg_mapped)
return 1;
else if (sg_mapped < 0)
return 0;
else if (sg_mapped > SG_ALL) {
printk(KERN_ERR
"ibmvscsi: More than %d mapped sg entries, got %d\n",
SG_ALL, sg_mapped);
return 0;
}
set_srp_direction(cmd, srp_cmd, sg_mapped);
/* special case; we can use a single direct descriptor */
if (sg_mapped == 1) {
data->va = sg_dma_address(&sg[0]);
data->len = sg_dma_len(&sg[0]);
data->key = 0;
map_sg_list(cmd, sg_mapped, data);
return 1;
}
if (sg_mapped > SG_ALL) {
printk(KERN_ERR
"ibmvscsi: More than %d mapped sg entries, got %d\n",
SG_ALL, sg_mapped);
return 0;
}
indirect->table_desc.va = 0;
indirect->table_desc.len = sg_mapped * sizeof(struct srp_direct_buf);
indirect->table_desc.key = 0;
if (sg_mapped <= MAX_INDIRECT_BUFS) {
total_length = map_sg_list(sg_mapped, sg,
total_length = map_sg_list(cmd, sg_mapped,
&indirect->desc_list[0]);
indirect->len = total_length;
return 1;
@ -429,60 +422,26 @@ static int map_sg_data(struct scsi_cmnd *cmd,
/* get indirect table */
if (!evt_struct->ext_list) {
evt_struct->ext_list = (struct srp_direct_buf *)
dma_alloc_coherent(dev,
dma_alloc_coherent(dev,
SG_ALL * sizeof(struct srp_direct_buf),
&evt_struct->ext_list_token, 0);
if (!evt_struct->ext_list) {
printk(KERN_ERR
"ibmvscsi: Can't allocate memory for indirect table\n");
sdev_printk(KERN_ERR, cmd->device,
"Can't allocate memory for indirect table\n");
return 0;
}
}
total_length = map_sg_list(sg_mapped, sg, evt_struct->ext_list);
total_length = map_sg_list(cmd, sg_mapped, evt_struct->ext_list);
indirect->len = total_length;
indirect->table_desc.va = evt_struct->ext_list_token;
indirect->table_desc.len = sg_mapped * sizeof(indirect->desc_list[0]);
memcpy(indirect->desc_list, evt_struct->ext_list,
MAX_INDIRECT_BUFS * sizeof(struct srp_direct_buf));
return 1;
}
/**
* map_single_data: - Maps memory and initializes memory descriptor fields
* @cmd: struct scsi_cmnd with the memory to be mapped
* @srp_cmd: srp_cmd that contains the memory descriptor
* @dev: device for which to map dma memory
*
* Called by map_data_for_srp_cmd() when building srp cmd from scsi cmd.
* Returns 1 on success.
*/
static int map_single_data(struct scsi_cmnd *cmd,
struct srp_cmd *srp_cmd, struct device *dev)
{
struct srp_direct_buf *data =
(struct srp_direct_buf *) srp_cmd->add_data;
data->va =
dma_map_single(dev, cmd->request_buffer,
cmd->request_bufflen,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(data->va)) {
printk(KERN_ERR
"ibmvscsi: Unable to map request_buffer for command!\n");
return 0;
}
data->len = cmd->request_bufflen;
data->key = 0;
set_srp_direction(cmd, srp_cmd, 1);
return 1;
}
/**
* map_data_for_srp_cmd: - Calls functions to map data for srp cmds
* @cmd: struct scsi_cmnd with the memory to be mapped
@ -503,23 +462,83 @@ static int map_data_for_srp_cmd(struct scsi_cmnd *cmd,
case DMA_NONE:
return 1;
case DMA_BIDIRECTIONAL:
printk(KERN_ERR
"ibmvscsi: Can't map DMA_BIDIRECTIONAL to read/write\n");
sdev_printk(KERN_ERR, cmd->device,
"Can't map DMA_BIDIRECTIONAL to read/write\n");
return 0;
default:
printk(KERN_ERR
"ibmvscsi: Unknown data direction 0x%02x; can't map!\n",
cmd->sc_data_direction);
sdev_printk(KERN_ERR, cmd->device,
"Unknown data direction 0x%02x; can't map!\n",
cmd->sc_data_direction);
return 0;
}
if (!cmd->request_buffer)
return 1;
if (cmd->use_sg)
return map_sg_data(cmd, evt_struct, srp_cmd, dev);
return map_single_data(cmd, srp_cmd, dev);
return map_sg_data(cmd, evt_struct, srp_cmd, dev);
}
/**
* purge_requests: Our virtual adapter just shut down. purge any sent requests
* @hostdata: the adapter
*/
static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code)
{
struct srp_event_struct *tmp_evt, *pos;
unsigned long flags;
spin_lock_irqsave(hostdata->host->host_lock, flags);
list_for_each_entry_safe(tmp_evt, pos, &hostdata->sent, list) {
list_del(&tmp_evt->list);
del_timer(&tmp_evt->timer);
if (tmp_evt->cmnd) {
tmp_evt->cmnd->result = (error_code << 16);
unmap_cmd_data(&tmp_evt->iu.srp.cmd,
tmp_evt,
tmp_evt->hostdata->dev);
if (tmp_evt->cmnd_done)
tmp_evt->cmnd_done(tmp_evt->cmnd);
} else if (tmp_evt->done)
tmp_evt->done(tmp_evt);
free_event_struct(&tmp_evt->hostdata->pool, tmp_evt);
}
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
}
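purge_requests() walks hostdata->sent with list_for_each_entry_safe() because it deletes entries mid-walk; the _safe variant caches the next node before the loop body runs, so list_del() on the current node is legal. Generic sketch (struct item and drain() are hypothetical):

#include <linux/list.h>

struct item {
	struct list_head list;
};

static void drain(struct list_head *head)
{
	struct item *it, *next;

	list_for_each_entry_safe(it, next, head, list) {
		list_del(&it->list);	/* safe: 'next' was saved first */
		/* ... complete and free 'it' ... */
	}
}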
/**
* ibmvscsi_reset_host - Reset the connection to the server
* @hostdata: struct ibmvscsi_host_data to reset
*/
static void ibmvscsi_reset_host(struct ibmvscsi_host_data *hostdata)
{
scsi_block_requests(hostdata->host);
atomic_set(&hostdata->request_limit, 0);
purge_requests(hostdata, DID_ERROR);
if ((ibmvscsi_reset_crq_queue(&hostdata->queue, hostdata)) ||
(ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0)) ||
(vio_enable_interrupts(to_vio_dev(hostdata->dev)))) {
atomic_set(&hostdata->request_limit, -1);
dev_err(hostdata->dev, "error after reset\n");
}
scsi_unblock_requests(hostdata->host);
}
/**
* ibmvscsi_timeout - Internal command timeout handler
* @evt_struct: struct srp_event_struct that timed out
*
* Called when an internally generated command times out
*/
static void ibmvscsi_timeout(struct srp_event_struct *evt_struct)
{
struct ibmvscsi_host_data *hostdata = evt_struct->hostdata;
dev_err(hostdata->dev, "Command timed out (%x). Resetting connection\n",
evt_struct->iu.srp.cmd.opcode);
ibmvscsi_reset_host(hostdata);
}
/* ------------------------------------------------------------
* Routines for sending and receiving SRPs
*/
@ -527,12 +546,14 @@ static int map_data_for_srp_cmd(struct scsi_cmnd *cmd,
* ibmvscsi_send_srp_event: - Transforms event to u64 array and calls send_crq()
* @evt_struct: evt_struct to be sent
* @hostdata: ibmvscsi_host_data of host
* @timeout: timeout in seconds - 0 means do not time command
*
* Returns the value returned from ibmvscsi_send_crq(). (Zero for success)
* Note that this routine assumes that host_lock is held for synchronization
*/
static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
struct ibmvscsi_host_data *hostdata)
struct ibmvscsi_host_data *hostdata,
unsigned long timeout)
{
u64 *crq_as_u64 = (u64 *) &evt_struct->crq;
int request_status;
@ -588,12 +609,20 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
*/
list_add_tail(&evt_struct->list, &hostdata->sent);
init_timer(&evt_struct->timer);
if (timeout) {
evt_struct->timer.data = (unsigned long) evt_struct;
evt_struct->timer.expires = jiffies + (timeout * HZ);
evt_struct->timer.function = (void (*)(unsigned long))ibmvscsi_timeout;
add_timer(&evt_struct->timer);
}
if ((rc =
ibmvscsi_send_crq(hostdata, crq_as_u64[0], crq_as_u64[1])) != 0) {
list_del(&evt_struct->list);
del_timer(&evt_struct->timer);
printk(KERN_ERR "ibmvscsi: send error %d\n",
rc);
dev_err(hostdata->dev, "send error %d\n", rc);
atomic_inc(&hostdata->request_limit);
goto send_error;
}
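The timeout plumbing added above uses the classic pre-timer_setup() kernel timer API. Minimal sketch; my_expire() and arm_example() are placeholder names:

#include <linux/timer.h>
#include <linux/jiffies.h>

static void my_expire(unsigned long data)
{
	/* 'data' carries back whatever was stored in timer->data */
}

static void arm_example(struct timer_list *t, void *ctx, unsigned long secs)
{
	init_timer(t);
	t->data = (unsigned long)ctx;
	t->expires = jiffies + secs * HZ;
	t->function = my_expire;
	add_timer(t);	/* cancelled with del_timer(t) if completion wins */
}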
@ -634,9 +663,8 @@ static void handle_cmd_rsp(struct srp_event_struct *evt_struct)
if (unlikely(rsp->opcode != SRP_RSP)) {
if (printk_ratelimit())
printk(KERN_WARNING
"ibmvscsi: bad SRP RSP type %d\n",
rsp->opcode);
dev_warn(evt_struct->hostdata->dev,
"bad SRP RSP type %d\n", rsp->opcode);
}
if (cmnd) {
@ -650,9 +678,9 @@ static void handle_cmd_rsp(struct srp_event_struct *evt_struct)
evt_struct->hostdata->dev);
if (rsp->flags & SRP_RSP_FLAG_DOOVER)
cmnd->resid = rsp->data_out_res_cnt;
scsi_set_resid(cmnd, rsp->data_out_res_cnt);
else if (rsp->flags & SRP_RSP_FLAG_DIOVER)
cmnd->resid = rsp->data_in_res_cnt;
scsi_set_resid(cmnd, rsp->data_in_res_cnt);
}
if (evt_struct->cmnd_done)
@ -697,7 +725,7 @@ static int ibmvscsi_queuecommand(struct scsi_cmnd *cmnd,
srp_cmd->lun = ((u64) lun) << 48;
if (!map_data_for_srp_cmd(cmnd, evt_struct, srp_cmd, hostdata->dev)) {
printk(KERN_ERR "ibmvscsi: couldn't convert cmd to srp_cmd\n");
sdev_printk(KERN_ERR, cmnd->device, "couldn't convert cmd to srp_cmd\n");
free_event_struct(&hostdata->pool, evt_struct);
return SCSI_MLQUEUE_HOST_BUSY;
}
@ -722,7 +750,7 @@ static int ibmvscsi_queuecommand(struct scsi_cmnd *cmnd,
offsetof(struct srp_indirect_buf, desc_list);
}
return ibmvscsi_send_srp_event(evt_struct, hostdata);
return ibmvscsi_send_srp_event(evt_struct, hostdata, 0);
}
/* ------------------------------------------------------------
@ -744,16 +772,16 @@ static void adapter_info_rsp(struct srp_event_struct *evt_struct)
DMA_BIDIRECTIONAL);
if (evt_struct->xfer_iu->mad.adapter_info.common.status) {
printk("ibmvscsi: error %d getting adapter info\n",
evt_struct->xfer_iu->mad.adapter_info.common.status);
dev_err(hostdata->dev, "error %d getting adapter info\n",
evt_struct->xfer_iu->mad.adapter_info.common.status);
} else {
printk("ibmvscsi: host srp version: %s, "
"host partition %s (%d), OS %d, max io %u\n",
hostdata->madapter_info.srp_version,
hostdata->madapter_info.partition_name,
hostdata->madapter_info.partition_number,
hostdata->madapter_info.os_type,
hostdata->madapter_info.port_max_txu[0]);
dev_info(hostdata->dev, "host srp version: %s, "
"host partition %s (%d), OS %d, max io %u\n",
hostdata->madapter_info.srp_version,
hostdata->madapter_info.partition_name,
hostdata->madapter_info.partition_number,
hostdata->madapter_info.os_type,
hostdata->madapter_info.port_max_txu[0]);
if (hostdata->madapter_info.port_max_txu[0])
hostdata->host->max_sectors =
@ -761,11 +789,10 @@ static void adapter_info_rsp(struct srp_event_struct *evt_struct)
if (hostdata->madapter_info.os_type == 3 &&
strcmp(hostdata->madapter_info.srp_version, "1.6a") <= 0) {
printk("ibmvscsi: host (Ver. %s) doesn't support large"
"transfers\n",
hostdata->madapter_info.srp_version);
printk("ibmvscsi: limiting scatterlists to %d\n",
MAX_INDIRECT_BUFS);
dev_err(hostdata->dev, "host (Ver. %s) doesn't support large transfers\n",
hostdata->madapter_info.srp_version);
dev_err(hostdata->dev, "limiting scatterlists to %d\n",
MAX_INDIRECT_BUFS);
hostdata->host->sg_tablesize = MAX_INDIRECT_BUFS;
}
}
@ -784,19 +811,20 @@ static void send_mad_adapter_info(struct ibmvscsi_host_data *hostdata)
{
struct viosrp_adapter_info *req;
struct srp_event_struct *evt_struct;
unsigned long flags;
dma_addr_t addr;
evt_struct = get_event_struct(&hostdata->pool);
if (!evt_struct) {
printk(KERN_ERR "ibmvscsi: couldn't allocate an event "
"for ADAPTER_INFO_REQ!\n");
dev_err(hostdata->dev,
"couldn't allocate an event for ADAPTER_INFO_REQ!\n");
return;
}
init_event_struct(evt_struct,
adapter_info_rsp,
VIOSRP_MAD_FORMAT,
init_timeout * HZ);
init_timeout);
req = &evt_struct->iu.mad.adapter_info;
memset(req, 0x00, sizeof(*req));
@ -809,20 +837,20 @@ static void send_mad_adapter_info(struct ibmvscsi_host_data *hostdata)
DMA_BIDIRECTIONAL);
if (dma_mapping_error(req->buffer)) {
printk(KERN_ERR
"ibmvscsi: Unable to map request_buffer "
"for adapter_info!\n");
dev_err(hostdata->dev, "Unable to map request_buffer for adapter_info!\n");
free_event_struct(&hostdata->pool, evt_struct);
return;
}
if (ibmvscsi_send_srp_event(evt_struct, hostdata)) {
printk(KERN_ERR "ibmvscsi: couldn't send ADAPTER_INFO_REQ!\n");
spin_lock_irqsave(hostdata->host->host_lock, flags);
if (ibmvscsi_send_srp_event(evt_struct, hostdata, init_timeout * 2)) {
dev_err(hostdata->dev, "couldn't send ADAPTER_INFO_REQ!\n");
dma_unmap_single(hostdata->dev,
addr,
sizeof(hostdata->madapter_info),
DMA_BIDIRECTIONAL);
}
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
};
/**
@ -839,24 +867,23 @@ static void login_rsp(struct srp_event_struct *evt_struct)
case SRP_LOGIN_RSP: /* it worked! */
break;
case SRP_LOGIN_REJ: /* refused! */
printk(KERN_INFO "ibmvscsi: SRP_LOGIN_REJ reason %u\n",
evt_struct->xfer_iu->srp.login_rej.reason);
dev_info(hostdata->dev, "SRP_LOGIN_REJ reason %u\n",
evt_struct->xfer_iu->srp.login_rej.reason);
/* Login failed. */
atomic_set(&hostdata->request_limit, -1);
return;
default:
printk(KERN_ERR
"ibmvscsi: Invalid login response typecode 0x%02x!\n",
evt_struct->xfer_iu->srp.login_rsp.opcode);
dev_err(hostdata->dev, "Invalid login response typecode 0x%02x!\n",
evt_struct->xfer_iu->srp.login_rsp.opcode);
/* Login failed. */
atomic_set(&hostdata->request_limit, -1);
return;
}
printk(KERN_INFO "ibmvscsi: SRP_LOGIN succeeded\n");
dev_info(hostdata->dev, "SRP_LOGIN succeeded\n");
if (evt_struct->xfer_iu->srp.login_rsp.req_lim_delta < 0)
printk(KERN_ERR "ibmvscsi: Invalid request_limit.\n");
dev_err(hostdata->dev, "Invalid request_limit.\n");
/* Now we know what the real request-limit is.
* This value is set rather than added to request_limit because
@ -885,15 +912,14 @@ static int send_srp_login(struct ibmvscsi_host_data *hostdata)
struct srp_login_req *login;
struct srp_event_struct *evt_struct = get_event_struct(&hostdata->pool);
if (!evt_struct) {
printk(KERN_ERR
"ibmvscsi: couldn't allocate an event for login req!\n");
dev_err(hostdata->dev, "couldn't allocate an event for login req!\n");
return FAILED;
}
init_event_struct(evt_struct,
login_rsp,
VIOSRP_SRP_FORMAT,
init_timeout * HZ);
init_timeout);
login = &evt_struct->iu.srp.login_req;
memset(login, 0x00, sizeof(struct srp_login_req));
@ -907,9 +933,9 @@ static int send_srp_login(struct ibmvscsi_host_data *hostdata)
*/
atomic_set(&hostdata->request_limit, 1);
rc = ibmvscsi_send_srp_event(evt_struct, hostdata);
rc = ibmvscsi_send_srp_event(evt_struct, hostdata, init_timeout * 2);
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
printk("ibmvscsic: sent SRP login\n");
dev_info(hostdata->dev, "sent SRP login\n");
return rc;
};
@ -958,20 +984,20 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
if (!found_evt) {
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
return FAILED;
return SUCCESS;
}
evt = get_event_struct(&hostdata->pool);
if (evt == NULL) {
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
printk(KERN_ERR "ibmvscsi: failed to allocate abort event\n");
sdev_printk(KERN_ERR, cmd->device, "failed to allocate abort event\n");
return FAILED;
}
init_event_struct(evt,
sync_completion,
VIOSRP_SRP_FORMAT,
init_timeout * HZ);
init_timeout);
tsk_mgmt = &evt->iu.srp.tsk_mgmt;
@ -982,15 +1008,16 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
tsk_mgmt->tsk_mgmt_func = SRP_TSK_ABORT_TASK;
tsk_mgmt->task_tag = (u64) found_evt;
printk(KERN_INFO "ibmvscsi: aborting command. lun 0x%lx, tag 0x%lx\n",
tsk_mgmt->lun, tsk_mgmt->task_tag);
sdev_printk(KERN_INFO, cmd->device, "aborting command. lun 0x%lx, tag 0x%lx\n",
tsk_mgmt->lun, tsk_mgmt->task_tag);
evt->sync_srp = &srp_rsp;
init_completion(&evt->comp);
rsp_rc = ibmvscsi_send_srp_event(evt, hostdata);
rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
if (rsp_rc != 0) {
printk(KERN_ERR "ibmvscsi: failed to send abort() event\n");
sdev_printk(KERN_ERR, cmd->device,
"failed to send abort() event. rc=%d\n", rsp_rc);
return FAILED;
}
@ -999,9 +1026,8 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
/* make sure we got a good response */
if (unlikely(srp_rsp.srp.rsp.opcode != SRP_RSP)) {
if (printk_ratelimit())
printk(KERN_WARNING
"ibmvscsi: abort bad SRP RSP type %d\n",
srp_rsp.srp.rsp.opcode);
sdev_printk(KERN_WARNING, cmd->device, "abort bad SRP RSP type %d\n",
srp_rsp.srp.rsp.opcode);
return FAILED;
}
@ -1012,10 +1038,9 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
if (rsp_rc) {
if (printk_ratelimit())
printk(KERN_WARNING
"ibmvscsi: abort code %d for task tag 0x%lx\n",
rsp_rc,
tsk_mgmt->task_tag);
sdev_printk(KERN_WARNING, cmd->device,
"abort code %d for task tag 0x%lx\n",
rsp_rc, tsk_mgmt->task_tag);
return FAILED;
}
@ -1034,15 +1059,13 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
if (found_evt == NULL) {
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
printk(KERN_INFO
"ibmvscsi: aborted task tag 0x%lx completed\n",
tsk_mgmt->task_tag);
sdev_printk(KERN_INFO, cmd->device, "aborted task tag 0x%lx completed\n",
tsk_mgmt->task_tag);
return SUCCESS;
}
printk(KERN_INFO
"ibmvscsi: successfully aborted task tag 0x%lx\n",
tsk_mgmt->task_tag);
sdev_printk(KERN_INFO, cmd->device, "successfully aborted task tag 0x%lx\n",
tsk_mgmt->task_tag);
cmd->result = (DID_ABORT << 16);
list_del(&found_evt->list);
@ -1076,14 +1099,14 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
evt = get_event_struct(&hostdata->pool);
if (evt == NULL) {
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
printk(KERN_ERR "ibmvscsi: failed to allocate reset event\n");
sdev_printk(KERN_ERR, cmd->device, "failed to allocate reset event\n");
return FAILED;
}
init_event_struct(evt,
sync_completion,
VIOSRP_SRP_FORMAT,
init_timeout * HZ);
init_timeout);
tsk_mgmt = &evt->iu.srp.tsk_mgmt;
@ -1093,15 +1116,16 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
tsk_mgmt->lun = ((u64) lun) << 48;
tsk_mgmt->tsk_mgmt_func = SRP_TSK_LUN_RESET;
printk(KERN_INFO "ibmvscsi: resetting device. lun 0x%lx\n",
tsk_mgmt->lun);
sdev_printk(KERN_INFO, cmd->device, "resetting device. lun 0x%lx\n",
tsk_mgmt->lun);
evt->sync_srp = &srp_rsp;
init_completion(&evt->comp);
rsp_rc = ibmvscsi_send_srp_event(evt, hostdata);
rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
if (rsp_rc != 0) {
printk(KERN_ERR "ibmvscsi: failed to send reset event\n");
sdev_printk(KERN_ERR, cmd->device,
"failed to send reset event. rc=%d\n", rsp_rc);
return FAILED;
}
@ -1110,9 +1134,8 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
/* make sure we got a good response */
if (unlikely(srp_rsp.srp.rsp.opcode != SRP_RSP)) {
if (printk_ratelimit())
printk(KERN_WARNING
"ibmvscsi: reset bad SRP RSP type %d\n",
srp_rsp.srp.rsp.opcode);
sdev_printk(KERN_WARNING, cmd->device, "reset bad SRP RSP type %d\n",
srp_rsp.srp.rsp.opcode);
return FAILED;
}
@ -1123,9 +1146,9 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
if (rsp_rc) {
if (printk_ratelimit())
printk(KERN_WARNING
"ibmvscsi: reset code %d for task tag 0x%lx\n",
rsp_rc, tsk_mgmt->task_tag);
sdev_printk(KERN_WARNING, cmd->device,
"reset code %d for task tag 0x%lx\n",
rsp_rc, tsk_mgmt->task_tag);
return FAILED;
}
@ -1154,32 +1177,30 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
}
/**
* purge_requests: Our virtual adapter just shut down. purge any sent requests
* @hostdata: the adapter
*/
static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code)
* ibmvscsi_eh_host_reset_handler - Reset the connection to the server
* @cmd: struct scsi_cmnd having problems
*/
static int ibmvscsi_eh_host_reset_handler(struct scsi_cmnd *cmd)
{
struct srp_event_struct *tmp_evt, *pos;
unsigned long flags;
unsigned long wait_switch = 0;
struct ibmvscsi_host_data *hostdata =
(struct ibmvscsi_host_data *)cmd->device->host->hostdata;
spin_lock_irqsave(hostdata->host->host_lock, flags);
list_for_each_entry_safe(tmp_evt, pos, &hostdata->sent, list) {
list_del(&tmp_evt->list);
if (tmp_evt->cmnd) {
tmp_evt->cmnd->result = (error_code << 16);
unmap_cmd_data(&tmp_evt->iu.srp.cmd,
tmp_evt,
tmp_evt->hostdata->dev);
if (tmp_evt->cmnd_done)
tmp_evt->cmnd_done(tmp_evt->cmnd);
} else {
if (tmp_evt->done) {
tmp_evt->done(tmp_evt);
}
}
free_event_struct(&tmp_evt->hostdata->pool, tmp_evt);
dev_err(hostdata->dev, "Resetting connection due to error recovery\n");
ibmvscsi_reset_host(hostdata);
for (wait_switch = jiffies + (init_timeout * HZ);
time_before(jiffies, wait_switch) &&
atomic_read(&hostdata->request_limit) < 2;) {
msleep(10);
}
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
if (atomic_read(&hostdata->request_limit) <= 0)
return FAILED;
return SUCCESS;
}
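
The reset handler above waits for the request limit to recover with a bounded poll; a hedged sketch of that jiffies-deadline idiom (helper name hypothetical):

#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <asm/atomic.h>

static int wait_for_limit(atomic_t *request_limit, unsigned long secs)
{
	unsigned long deadline = jiffies + secs * HZ;

	while (time_before(jiffies, deadline) &&
	       atomic_read(request_limit) < 2)
		msleep(10);		/* sleep between polls, don't spin */

	return atomic_read(request_limit) > 0 ? 0 : -ETIMEDOUT;
}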
/**
@ -1191,6 +1212,7 @@ static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code)
void ibmvscsi_handle_crq(struct viosrp_crq *crq,
struct ibmvscsi_host_data *hostdata)
{
long rc;
unsigned long flags;
struct srp_event_struct *evt_struct =
(struct srp_event_struct *)crq->IU_data_ptr;
@ -1198,27 +1220,25 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
case 0xC0: /* initialization */
switch (crq->format) {
case 0x01: /* Initialization message */
printk(KERN_INFO "ibmvscsi: partner initialized\n");
dev_info(hostdata->dev, "partner initialized\n");
/* Send back a response */
if (ibmvscsi_send_crq(hostdata,
0xC002000000000000LL, 0) == 0) {
if ((rc = ibmvscsi_send_crq(hostdata,
0xC002000000000000LL, 0)) == 0) {
/* Now login */
send_srp_login(hostdata);
} else {
printk(KERN_ERR
"ibmvscsi: Unable to send init rsp\n");
dev_err(hostdata->dev, "Unable to send init rsp. rc=%ld\n", rc);
}
break;
case 0x02: /* Initialization response */
printk(KERN_INFO
"ibmvscsi: partner initialization complete\n");
dev_info(hostdata->dev, "partner initialization complete\n");
/* Now login */
send_srp_login(hostdata);
break;
default:
printk(KERN_ERR "ibmvscsi: unknown crq message type\n");
dev_err(hostdata->dev, "unknown crq message type: %d\n", crq->format);
}
return;
case 0xFF: /* Hypervisor telling us the connection is closed */
@ -1226,8 +1246,7 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
atomic_set(&hostdata->request_limit, 0);
if (crq->format == 0x06) {
/* We need to re-setup the interpartition connection */
printk(KERN_INFO
"ibmvscsi: Re-enabling adapter!\n");
dev_info(hostdata->dev, "Re-enabling adapter!\n");
purge_requests(hostdata, DID_REQUEUE);
if ((ibmvscsi_reenable_crq_queue(&hostdata->queue,
hostdata)) ||
@ -1235,14 +1254,11 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
0xC001000000000000LL, 0))) {
atomic_set(&hostdata->request_limit,
-1);
printk(KERN_ERR
"ibmvscsi: error after"
" enable\n");
dev_err(hostdata->dev, "error after enable\n");
}
} else {
printk(KERN_INFO
"ibmvscsi: Virtual adapter failed rc %d!\n",
crq->format);
dev_err(hostdata->dev, "Virtual adapter failed rc %d!\n",
crq->format);
purge_requests(hostdata, DID_ERROR);
if ((ibmvscsi_reset_crq_queue(&hostdata->queue,
@ -1251,8 +1267,7 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
0xC001000000000000LL, 0))) {
atomic_set(&hostdata->request_limit,
-1);
printk(KERN_ERR
"ibmvscsi: error after reset\n");
dev_err(hostdata->dev, "error after reset\n");
}
}
scsi_unblock_requests(hostdata->host);
@ -1260,9 +1275,8 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
case 0x80: /* real payload */
break;
default:
printk(KERN_ERR
"ibmvscsi: got an invalid message type 0x%02x\n",
crq->valid);
dev_err(hostdata->dev, "got an invalid message type 0x%02x\n",
crq->valid);
return;
}
@ -1271,16 +1285,14 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
* actually sent
*/
if (!valid_event_struct(&hostdata->pool, evt_struct)) {
printk(KERN_ERR
"ibmvscsi: returned correlation_token 0x%p is invalid!\n",
dev_err(hostdata->dev, "returned correlation_token 0x%p is invalid!\n",
(void *)crq->IU_data_ptr);
return;
}
if (atomic_read(&evt_struct->free)) {
printk(KERN_ERR
"ibmvscsi: received duplicate correlation_token 0x%p!\n",
(void *)crq->IU_data_ptr);
dev_err(hostdata->dev, "received duplicate correlation_token 0x%p!\n",
(void *)crq->IU_data_ptr);
return;
}
@ -1288,11 +1300,12 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
atomic_add(evt_struct->xfer_iu->srp.rsp.req_lim_delta,
&hostdata->request_limit);
del_timer(&evt_struct->timer);
if (evt_struct->done)
evt_struct->done(evt_struct);
else
printk(KERN_ERR
"ibmvscsi: returned done() is NULL; not running it!\n");
dev_err(hostdata->dev, "returned done() is NULL; not running it!\n");
/*
* Lock the host_lock before messing with these structures, since we
@ -1313,20 +1326,20 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
{
struct viosrp_host_config *host_config;
struct srp_event_struct *evt_struct;
unsigned long flags;
dma_addr_t addr;
int rc;
evt_struct = get_event_struct(&hostdata->pool);
if (!evt_struct) {
printk(KERN_ERR
"ibmvscsi: could't allocate event for HOST_CONFIG!\n");
dev_err(hostdata->dev, "couldn't allocate event for HOST_CONFIG!\n");
return -1;
}
init_event_struct(evt_struct,
sync_completion,
VIOSRP_MAD_FORMAT,
init_timeout * HZ);
init_timeout);
host_config = &evt_struct->iu.mad.host_config;
@ -1339,14 +1352,15 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(host_config->buffer)) {
printk(KERN_ERR
"ibmvscsi: dma_mapping error " "getting host config\n");
dev_err(hostdata->dev, "dma_mapping error getting host config\n");
free_event_struct(&hostdata->pool, evt_struct);
return -1;
}
init_completion(&evt_struct->comp);
rc = ibmvscsi_send_srp_event(evt_struct, hostdata);
spin_lock_irqsave(hostdata->host->host_lock, flags);
rc = ibmvscsi_send_srp_event(evt_struct, hostdata, init_timeout * 2);
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
if (rc == 0)
wait_for_completion(&evt_struct->comp);
dma_unmap_single(hostdata->dev, addr, length, DMA_BIDIRECTIONAL);
@ -1375,6 +1389,23 @@ static int ibmvscsi_slave_configure(struct scsi_device *sdev)
return 0;
}
/**
* ibmvscsi_change_queue_depth - Change the device's queue depth
* @sdev: scsi device struct
* @qdepth: depth to set
*
* Return value:
* actual depth set
**/
static int ibmvscsi_change_queue_depth(struct scsi_device *sdev, int qdepth)
{
if (qdepth > IBMVSCSI_MAX_CMDS_PER_LUN)
qdepth = IBMVSCSI_MAX_CMDS_PER_LUN;
scsi_adjust_queue_depth(sdev, 0, qdepth);
return sdev->queue_depth;
}
/* ------------------------------------------------------------
* sysfs attributes
*/
@ -1520,7 +1551,9 @@ static struct scsi_host_template driver_template = {
.queuecommand = ibmvscsi_queuecommand,
.eh_abort_handler = ibmvscsi_eh_abort_handler,
.eh_device_reset_handler = ibmvscsi_eh_device_reset_handler,
.eh_host_reset_handler = ibmvscsi_eh_host_reset_handler,
.slave_configure = ibmvscsi_slave_configure,
.change_queue_depth = ibmvscsi_change_queue_depth,
.cmd_per_lun = 16,
.can_queue = IBMVSCSI_MAX_REQUESTS_DEFAULT,
.this_id = -1,
@ -1545,7 +1578,7 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
driver_template.can_queue = max_requests;
host = scsi_host_alloc(&driver_template, sizeof(*hostdata));
if (!host) {
printk(KERN_ERR "ibmvscsi: couldn't allocate host data\n");
dev_err(&vdev->dev, "couldn't allocate host data\n");
goto scsi_host_alloc_failed;
}
@ -1559,11 +1592,11 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
rc = ibmvscsi_init_crq_queue(&hostdata->queue, hostdata, max_requests);
if (rc != 0 && rc != H_RESOURCE) {
printk(KERN_ERR "ibmvscsi: couldn't initialize crq\n");
dev_err(&vdev->dev, "couldn't initialize crq. rc=%d\n", rc);
goto init_crq_failed;
}
if (initialize_event_pool(&hostdata->pool, max_requests, hostdata) != 0) {
printk(KERN_ERR "ibmvscsi: couldn't initialize event pool\n");
dev_err(&vdev->dev, "couldn't initialize event pool\n");
goto init_pool_failed;
}

View File

@ -45,6 +45,7 @@ struct Scsi_Host;
#define MAX_INDIRECT_BUFS 10
#define IBMVSCSI_MAX_REQUESTS_DEFAULT 100
#define IBMVSCSI_MAX_CMDS_PER_LUN 64
/* ------------------------------------------------------------
* Data Structures
@ -69,6 +70,7 @@ struct srp_event_struct {
union viosrp_iu iu;
void (*cmnd_done) (struct scsi_cmnd *);
struct completion comp;
struct timer_list timer;
union viosrp_iu *sync_srp;
struct srp_direct_buf *ext_list;
dma_addr_t ext_list_token;
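
The new timer field supports per-request timeouts; a hedged sketch of the 2.6-era timer pattern it implies (names hypothetical), armed at send time and deleted on completion, as the del_timer() call in ibmvscsi_handle_crq() above suggests:

#include <linux/kernel.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

struct my_event {
	struct timer_list timer;	/* fires if no response arrives */
};

static void my_timeout_fn(unsigned long data)
{
	struct my_event *evt = (struct my_event *)data;

	printk(KERN_ERR "request %p timed out\n", evt);	/* fail or retry here */
}

static void my_arm_timeout(struct my_event *evt, unsigned long secs)
{
	setup_timer(&evt->timer, my_timeout_fn, (unsigned long)evt);
	mod_timer(&evt->timer, jiffies + secs * HZ);
	/* the completion path pairs this with del_timer(&evt->timer) */
}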

View File

@ -177,7 +177,7 @@ static void set_adapter_info(struct ibmvscsi_host_data *hostdata)
memset(&hostdata->madapter_info, 0x00,
sizeof(hostdata->madapter_info));
printk(KERN_INFO "rpa_vscsi: SPR_VERSION: %s\n", SRP_VERSION);
dev_info(hostdata->dev, "SRP_VERSION: %s\n", SRP_VERSION);
strcpy(hostdata->madapter_info.srp_version, SRP_VERSION);
strncpy(hostdata->madapter_info.partition_name, partition_name,
@ -232,25 +232,24 @@ int ibmvscsi_init_crq_queue(struct crq_queue *queue,
if (rc == 2) {
/* Adapter is good, but other end is not ready */
printk(KERN_WARNING "ibmvscsi: Partner adapter not ready\n");
dev_warn(hostdata->dev, "Partner adapter not ready\n");
retrc = 0;
} else if (rc != 0) {
printk(KERN_WARNING "ibmvscsi: Error %d opening adapter\n", rc);
dev_warn(hostdata->dev, "Error %d opening adapter\n", rc);
goto reg_crq_failed;
}
if (request_irq(vdev->irq,
ibmvscsi_handle_event,
0, "ibmvscsi", (void *)hostdata) != 0) {
printk(KERN_ERR "ibmvscsi: couldn't register irq 0x%x\n",
vdev->irq);
dev_err(hostdata->dev, "couldn't register irq 0x%x\n",
vdev->irq);
goto req_irq_failed;
}
rc = vio_enable_interrupts(vdev);
if (rc != 0) {
printk(KERN_ERR "ibmvscsi: Error %d enabling interrupts!!!\n",
rc);
dev_err(hostdata->dev, "Error %d enabling interrupts!!!\n", rc);
goto req_irq_failed;
}
@ -294,7 +293,7 @@ int ibmvscsi_reenable_crq_queue(struct crq_queue *queue,
} while ((rc == H_IN_PROGRESS) || (rc == H_BUSY) || (H_IS_LONG_BUSY(rc)));
if (rc)
printk(KERN_ERR "ibmvscsi: Error %d enabling adapter\n", rc);
dev_err(hostdata->dev, "Error %d enabling adapter\n", rc);
return rc;
}
@ -327,10 +326,9 @@ int ibmvscsi_reset_crq_queue(struct crq_queue *queue,
queue->msg_token, PAGE_SIZE);
if (rc == 2) {
/* Adapter is good, but other end is not ready */
printk(KERN_WARNING "ibmvscsi: Partner adapter not ready\n");
dev_warn(hostdata->dev, "Partner adapter not ready\n");
} else if (rc != 0) {
printk(KERN_WARNING
"ibmvscsi: couldn't register crq--rc 0x%x\n", rc);
dev_warn(hostdata->dev, "couldn't register crq--rc 0x%x\n", rc);
}
return rc;
}

File diff suppressed because it is too large

View File

@ -4,6 +4,8 @@
* Copyright (c) 1994-1998 Initio Corporation
* All rights reserved.
*
* Cleanups (c) Copyright 2007 Red Hat <alan@redhat.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
@ -18,27 +20,6 @@
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
*
* --------------------------------------------------------------------------
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification, immediately at the beginning of the file.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* Where this Software is combined with software released under the terms of
* the GNU General Public License ("GPL") and the terms of the GPL would require the
* combined work to also be released under the terms of the GPL, the terms
* and conditions of this License will apply in addition to those of the
* GPL with the exception of any terms or conditions of this License that
* conflict with, or are expressly prohibited by, the GPL.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
@ -56,17 +37,6 @@
#include <linux/types.h>
#define ULONG unsigned long
#define USHORT unsigned short
#define UCHAR unsigned char
#define BYTE unsigned char
#define WORD unsigned short
#define DWORD unsigned long
#define UBYTE unsigned char
#define UWORD unsigned short
#define UDWORD unsigned long
#define U32 u32
#define TOTAL_SG_ENTRY 32
#define MAX_SUPPORTED_ADAPTERS 8
#define MAX_OFFSET 15
@ -368,55 +338,55 @@ typedef struct {
/************************************************************************/
/* Scatter-Gather Element Structure */
/************************************************************************/
typedef struct SG_Struc {
U32 SG_Ptr; /* Data Pointer */
U32 SG_Len; /* Data Length */
} SG;
struct sg_entry {
u32 data; /* Data Pointer */
u32 len; /* Data Length */
};
/***********************************************************************
SCSI Control Block
************************************************************************/
typedef struct Scsi_Ctrl_Blk {
struct Scsi_Ctrl_Blk *SCB_NxtScb;
UBYTE SCB_Status; /*4 */
UBYTE SCB_NxtStat; /*5 */
UBYTE SCB_Mode; /*6 */
UBYTE SCB_Msgin; /*7 SCB_Res0 */
UWORD SCB_SGIdx; /*8 */
UWORD SCB_SGMax; /*A */
struct scsi_ctrl_blk {
struct scsi_ctrl_blk *next;
u8 status; /*4 */
u8 next_state; /*5 */
u8 mode; /*6 */
u8 msgin; /*7 SCB_Res0 */
u16 sgidx; /*8 */
u16 sgmax; /*A */
#ifdef ALPHA
U32 SCB_Reserved[2]; /*C */
u32 reserved[2]; /*C */
#else
U32 SCB_Reserved[3]; /*C */
u32 reserved[3]; /*C */
#endif
U32 SCB_XferLen; /*18 Current xfer len */
U32 SCB_TotXLen; /*1C Total xfer len */
U32 SCB_PAddr; /*20 SCB phy. Addr. */
u32 xferlen; /*18 Current xfer len */
u32 totxlen; /*1C Total xfer len */
u32 paddr; /*20 SCB phy. Addr. */
UBYTE SCB_Opcode; /*24 SCB command code */
UBYTE SCB_Flags; /*25 SCB Flags */
UBYTE SCB_Target; /*26 Target Id */
UBYTE SCB_Lun; /*27 Lun */
U32 SCB_BufPtr; /*28 Data Buffer Pointer */
U32 SCB_BufLen; /*2C Data Allocation Length */
UBYTE SCB_SGLen; /*30 SG list # */
UBYTE SCB_SenseLen; /*31 Sense Allocation Length */
UBYTE SCB_HaStat; /*32 */
UBYTE SCB_TaStat; /*33 */
UBYTE SCB_CDBLen; /*34 CDB Length */
UBYTE SCB_Ident; /*35 Identify */
UBYTE SCB_TagMsg; /*36 Tag Message */
UBYTE SCB_TagId; /*37 Queue Tag */
UBYTE SCB_CDB[12]; /*38 */
U32 SCB_SGPAddr; /*44 SG List/Sense Buf phy. Addr. */
U32 SCB_SensePtr; /*48 Sense data pointer */
void (*SCB_Post) (BYTE *, BYTE *); /*4C POST routine */
struct scsi_cmnd *SCB_Srb; /*50 SRB Pointer */
SG SCB_SGList[TOTAL_SG_ENTRY]; /*54 Start of SG list */
} SCB;
u8 opcode; /*24 SCB command code */
u8 flags; /*25 SCB Flags */
u8 target; /*26 Target Id */
u8 lun; /*27 Lun */
u32 bufptr; /*28 Data Buffer Pointer */
u32 buflen; /*2C Data Allocation Length */
u8 sglen; /*30 SG list # */
u8 senselen; /*31 Sense Allocation Length */
u8 hastat; /*32 */
u8 tastat; /*33 */
u8 cdblen; /*34 CDB Length */
u8 ident; /*35 Identify */
u8 tagmsg; /*36 Tag Message */
u8 tagid; /*37 Queue Tag */
u8 cdb[12]; /*38 */
u32 sgpaddr; /*44 SG List/Sense Buf phy. Addr. */
u32 senseptr; /*48 Sense data pointer */
void (*post) (u8 *, u8 *); /*4C POST routine */
struct scsi_cmnd *srb; /*50 SRB Pointer */
struct sg_entry sglist[TOTAL_SG_ENTRY]; /*54 Start of SG list */
};
/* Bit Definition for SCB_Status */
/* Bit Definition for status */
#define SCB_RENT 0x01
#define SCB_PEND 0x02
#define SCB_CONTIG 0x04 /* Contingent Allegiance */
@ -425,17 +395,17 @@ typedef struct Scsi_Ctrl_Blk {
#define SCB_DONE 0x20
/* Opcodes of SCB_Opcode */
/* Opcodes for opcode */
#define ExecSCSI 0x1
#define BusDevRst 0x2
#define AbortCmd 0x3
/* Bit Definition for SCB_Mode */
/* Bit Definition for mode */
#define SCM_RSENS 0x01 /* request sense mode */
/* Bit Definition for SCB_Flags */
/* Bit Definition for flags */
#define SCF_DONE 0x01
#define SCF_POST 0x02
#define SCF_SENSE 0x04
@ -492,15 +462,14 @@ typedef struct Scsi_Ctrl_Blk {
Target Device Control Structure
**********************************************************************/
typedef struct Tar_Ctrl_Struc {
UWORD TCS_Flags; /* 0 */
UBYTE TCS_JS_Period; /* 2 */
UBYTE TCS_SConfig0; /* 3 */
UWORD TCS_DrvFlags; /* 4 */
UBYTE TCS_DrvHead; /* 6 */
UBYTE TCS_DrvSector; /* 7 */
} TCS;
struct target_control {
u16 flags;
u8 js_period;
u8 sconfig0;
u16 drv_flags;
u8 heads;
u8 sectors;
};
/***********************************************************************
Target Device Control Structure
@ -523,62 +492,53 @@ typedef struct Tar_Ctrl_Struc {
#define TCF_DRV_EN_TAG 0x0800
#define TCF_DRV_255_63 0x0400
typedef struct I91u_Adpt_Struc {
UWORD ADPT_BIOS; /* 0 */
UWORD ADPT_BASE; /* 1 */
UBYTE ADPT_Bus; /* 2 */
UBYTE ADPT_Device; /* 3 */
UBYTE ADPT_INTR; /* 4 */
} INI_ADPT_STRUCT;
/***********************************************************************
Host Adapter Control Structure
************************************************************************/
typedef struct Ha_Ctrl_Struc {
UWORD HCS_Base; /* 00 */
UWORD HCS_BIOS; /* 02 */
UBYTE HCS_Intr; /* 04 */
UBYTE HCS_SCSI_ID; /* 05 */
UBYTE HCS_MaxTar; /* 06 */
UBYTE HCS_NumScbs; /* 07 */
struct initio_host {
u16 addr; /* 00 */
u16 bios_addr; /* 02 */
u8 irq; /* 04 */
u8 scsi_id; /* 05 */
u8 max_tar; /* 06 */
u8 num_scbs; /* 07 */
UBYTE HCS_Flags; /* 08 */
UBYTE HCS_Index; /* 09 */
UBYTE HCS_HaId; /* 0A */
UBYTE HCS_Config; /* 0B */
UWORD HCS_IdMask; /* 0C */
UBYTE HCS_Semaph; /* 0E */
UBYTE HCS_Phase; /* 0F */
UBYTE HCS_JSStatus0; /* 10 */
UBYTE HCS_JSInt; /* 11 */
UBYTE HCS_JSStatus1; /* 12 */
UBYTE HCS_SConf1; /* 13 */
u8 flags; /* 08 */
u8 index; /* 09 */
u8 ha_id; /* 0A */
u8 config; /* 0B */
u16 idmask; /* 0C */
u8 semaph; /* 0E */
u8 phase; /* 0F */
u8 jsstatus0; /* 10 */
u8 jsint; /* 11 */
u8 jsstatus1; /* 12 */
u8 sconf1; /* 13 */
UBYTE HCS_Msg[8]; /* 14 */
SCB *HCS_NxtAvail; /* 1C */
SCB *HCS_Scb; /* 20 */
SCB *HCS_ScbEnd; /* 24 */
SCB *HCS_NxtPend; /* 28 */
SCB *HCS_NxtContig; /* 2C */
SCB *HCS_ActScb; /* 30 */
TCS *HCS_ActTcs; /* 34 */
u8 msg[8]; /* 14 */
struct scsi_ctrl_blk *next_avail; /* 1C */
struct scsi_ctrl_blk *scb; /* 20 */
struct scsi_ctrl_blk *scb_end; /* 24 */ /*UNUSED*/
struct scsi_ctrl_blk *next_pending; /* 28 */
struct scsi_ctrl_blk *next_contig; /* 2C */ /*UNUSED*/
struct scsi_ctrl_blk *active; /* 30 */
struct target_control *active_tc; /* 34 */
SCB *HCS_FirstAvail; /* 38 */
SCB *HCS_LastAvail; /* 3C */
SCB *HCS_FirstPend; /* 40 */
SCB *HCS_LastPend; /* 44 */
SCB *HCS_FirstBusy; /* 48 */
SCB *HCS_LastBusy; /* 4C */
SCB *HCS_FirstDone; /* 50 */
SCB *HCS_LastDone; /* 54 */
UBYTE HCS_MaxTags[16]; /* 58 */
UBYTE HCS_ActTags[16]; /* 68 */
TCS HCS_Tcs[MAX_TARGETS]; /* 78 */
spinlock_t HCS_AvailLock;
spinlock_t HCS_SemaphLock;
struct scsi_ctrl_blk *first_avail; /* 38 */
struct scsi_ctrl_blk *last_avail; /* 3C */
struct scsi_ctrl_blk *first_pending; /* 40 */
struct scsi_ctrl_blk *last_pending; /* 44 */
struct scsi_ctrl_blk *first_busy; /* 48 */
struct scsi_ctrl_blk *last_busy; /* 4C */
struct scsi_ctrl_blk *first_done; /* 50 */
struct scsi_ctrl_blk *last_done; /* 54 */
u8 max_tags[16]; /* 58 */
u8 act_tags[16]; /* 68 */
struct target_control targets[MAX_TARGETS]; /* 78 */
spinlock_t avail_lock;
spinlock_t semaph_lock;
struct pci_dev *pci_dev;
} HCS;
};
/* Bit Definition for HCB_Config */
#define HCC_SCSI_RESET 0x01
@ -599,47 +559,47 @@ typedef struct Ha_Ctrl_Struc {
*******************************************************************/
typedef struct _NVRAM_SCSI { /* SCSI channel configuration */
UCHAR NVM_ChSCSIID; /* 0Ch -> Channel SCSI ID */
UCHAR NVM_ChConfig1; /* 0Dh -> Channel config 1 */
UCHAR NVM_ChConfig2; /* 0Eh -> Channel config 2 */
UCHAR NVM_NumOfTarg; /* 0Fh -> Number of SCSI target */
u8 NVM_ChSCSIID; /* 0Ch -> Channel SCSI ID */
u8 NVM_ChConfig1; /* 0Dh -> Channel config 1 */
u8 NVM_ChConfig2; /* 0Eh -> Channel config 2 */
u8 NVM_NumOfTarg; /* 0Fh -> Number of SCSI target */
/* SCSI target configuration */
UCHAR NVM_Targ0Config; /* 10h -> Target 0 configuration */
UCHAR NVM_Targ1Config; /* 11h -> Target 1 configuration */
UCHAR NVM_Targ2Config; /* 12h -> Target 2 configuration */
UCHAR NVM_Targ3Config; /* 13h -> Target 3 configuration */
UCHAR NVM_Targ4Config; /* 14h -> Target 4 configuration */
UCHAR NVM_Targ5Config; /* 15h -> Target 5 configuration */
UCHAR NVM_Targ6Config; /* 16h -> Target 6 configuration */
UCHAR NVM_Targ7Config; /* 17h -> Target 7 configuration */
UCHAR NVM_Targ8Config; /* 18h -> Target 8 configuration */
UCHAR NVM_Targ9Config; /* 19h -> Target 9 configuration */
UCHAR NVM_TargAConfig; /* 1Ah -> Target A configuration */
UCHAR NVM_TargBConfig; /* 1Bh -> Target B configuration */
UCHAR NVM_TargCConfig; /* 1Ch -> Target C configuration */
UCHAR NVM_TargDConfig; /* 1Dh -> Target D configuration */
UCHAR NVM_TargEConfig; /* 1Eh -> Target E configuration */
UCHAR NVM_TargFConfig; /* 1Fh -> Target F configuration */
u8 NVM_Targ0Config; /* 10h -> Target 0 configuration */
u8 NVM_Targ1Config; /* 11h -> Target 1 configuration */
u8 NVM_Targ2Config; /* 12h -> Target 2 configuration */
u8 NVM_Targ3Config; /* 13h -> Target 3 configuration */
u8 NVM_Targ4Config; /* 14h -> Target 4 configuration */
u8 NVM_Targ5Config; /* 15h -> Target 5 configuration */
u8 NVM_Targ6Config; /* 16h -> Target 6 configuration */
u8 NVM_Targ7Config; /* 17h -> Target 7 configuration */
u8 NVM_Targ8Config; /* 18h -> Target 8 configuration */
u8 NVM_Targ9Config; /* 19h -> Target 9 configuration */
u8 NVM_TargAConfig; /* 1Ah -> Target A configuration */
u8 NVM_TargBConfig; /* 1Bh -> Target B configuration */
u8 NVM_TargCConfig; /* 1Ch -> Target C configuration */
u8 NVM_TargDConfig; /* 1Dh -> Target D configuration */
u8 NVM_TargEConfig; /* 1Eh -> Target E configuration */
u8 NVM_TargFConfig; /* 1Fh -> Target F configuration */
} NVRAM_SCSI;
typedef struct _NVRAM {
/*----------header ---------------*/
USHORT NVM_Signature; /* 0,1: Signature */
UCHAR NVM_Size; /* 2: Size of data structure */
UCHAR NVM_Revision; /* 3: Revision of data structure */
u16 NVM_Signature; /* 0,1: Signature */
u8 NVM_Size; /* 2: Size of data structure */
u8 NVM_Revision; /* 3: Revision of data structure */
/* ----Host Adapter Structure ---- */
UCHAR NVM_ModelByte0; /* 4: Model number (byte 0) */
UCHAR NVM_ModelByte1; /* 5: Model number (byte 1) */
UCHAR NVM_ModelInfo; /* 6: Model information */
UCHAR NVM_NumOfCh; /* 7: Number of SCSI channel */
UCHAR NVM_BIOSConfig1; /* 8: BIOS configuration 1 */
UCHAR NVM_BIOSConfig2; /* 9: BIOS configuration 2 */
UCHAR NVM_HAConfig1; /* A: Host adapter configuration 1 */
UCHAR NVM_HAConfig2; /* B: Host adapter configuration 2 */
u8 NVM_ModelByte0; /* 4: Model number (byte 0) */
u8 NVM_ModelByte1; /* 5: Model number (byte 1) */
u8 NVM_ModelInfo; /* 6: Model information */
u8 NVM_NumOfCh; /* 7: Number of SCSI channel */
u8 NVM_BIOSConfig1; /* 8: BIOS configuration 1 */
u8 NVM_BIOSConfig2; /* 9: BIOS configuration 2 */
u8 NVM_HAConfig1; /* A: Host adapter configuration 1 */
u8 NVM_HAConfig2; /* B: Host adapter configuration 2 */
NVRAM_SCSI NVM_SCSIInfo[2];
UCHAR NVM_reserved[10];
u8 NVM_reserved[10];
/* ---------- CheckSum ---------- */
USHORT NVM_CheckSum; /* 0x3E, 0x3F: Checksum of NVRam */
u16 NVM_CheckSum; /* 0x3E, 0x3F: Checksum of NVRam */
} NVRAM, *PNVRAM;
/* Bios Configuration for nvram->BIOSConfig1 */
@ -681,19 +641,6 @@ typedef struct _NVRAM {
#define DISC_ALLOW 0xC0 /* Disconnect is allowed */
#define SCSICMD_RequestSense 0x03
typedef struct _HCSinfo {
ULONG base;
UCHAR vec;
UCHAR bios; /* High byte of BIOS address */
USHORT BaseAndBios; /* high byte: pHcsInfo->bios,low byte:pHcsInfo->base */
} HCSINFO;
#define TUL_RD(x,y) (UCHAR)(inb( (int)((ULONG)(x+y)) ))
#define TUL_RDLONG(x,y) (ULONG)(inl((int)((ULONG)(x+y)) ))
#define TUL_WR( adr,data) outb( (UCHAR)(data), (int)(adr))
#define TUL_WRSHORT(adr,data) outw( (UWORD)(data), (int)(adr))
#define TUL_WRLONG( adr,data) outl( (ULONG)(data), (int)(adr))
#define SCSI_ABORT_SNOOZE 0
#define SCSI_ABORT_SUCCESS 1
#define SCSI_ABORT_PENDING 2

View File

@ -539,32 +539,6 @@ struct ipr_cmnd *ipr_get_free_ipr_cmnd(struct ipr_ioa_cfg *ioa_cfg)
return ipr_cmd;
}
/**
* ipr_unmap_sglist - Unmap scatterlist if mapped
* @ioa_cfg: ioa config struct
* @ipr_cmd: ipr command struct
*
* Return value:
* nothing
**/
static void ipr_unmap_sglist(struct ipr_ioa_cfg *ioa_cfg,
struct ipr_cmnd *ipr_cmd)
{
struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
if (ipr_cmd->dma_use_sg) {
if (scsi_cmd->use_sg > 0) {
pci_unmap_sg(ioa_cfg->pdev, scsi_cmd->request_buffer,
scsi_cmd->use_sg,
scsi_cmd->sc_data_direction);
} else {
pci_unmap_single(ioa_cfg->pdev, ipr_cmd->dma_handle,
scsi_cmd->request_bufflen,
scsi_cmd->sc_data_direction);
}
}
}
/**
* ipr_mask_and_clear_interrupts - Mask all and clear specified interrupts
* @ioa_cfg: ioa config struct
@ -677,7 +651,7 @@ static void ipr_scsi_eh_done(struct ipr_cmnd *ipr_cmd)
scsi_cmd->result |= (DID_ERROR << 16);
ipr_unmap_sglist(ioa_cfg, ipr_cmd);
scsi_dma_unmap(ipr_cmd->scsi_cmd);
scsi_cmd->scsi_done(scsi_cmd);
list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
}
@ -4298,93 +4272,55 @@ static irqreturn_t ipr_isr(int irq, void *devp)
static int ipr_build_ioadl(struct ipr_ioa_cfg *ioa_cfg,
struct ipr_cmnd *ipr_cmd)
{
int i;
struct scatterlist *sglist;
int i, nseg;
struct scatterlist *sg;
u32 length;
u32 ioadl_flags = 0;
struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
length = scsi_cmd->request_bufflen;
if (length == 0)
length = scsi_bufflen(scsi_cmd);
if (!length)
return 0;
if (scsi_cmd->use_sg) {
ipr_cmd->dma_use_sg = pci_map_sg(ioa_cfg->pdev,
scsi_cmd->request_buffer,
scsi_cmd->use_sg,
scsi_cmd->sc_data_direction);
if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE) {
ioadl_flags = IPR_IOADL_FLAGS_WRITE;
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
ioarcb->write_data_transfer_length = cpu_to_be32(length);
ioarcb->write_ioadl_len =
cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
} else if (scsi_cmd->sc_data_direction == DMA_FROM_DEVICE) {
ioadl_flags = IPR_IOADL_FLAGS_READ;
ioarcb->read_data_transfer_length = cpu_to_be32(length);
ioarcb->read_ioadl_len =
cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
}
sglist = scsi_cmd->request_buffer;
if (ipr_cmd->dma_use_sg <= ARRAY_SIZE(ioarcb->add_data.u.ioadl)) {
ioadl = ioarcb->add_data.u.ioadl;
ioarcb->write_ioadl_addr =
cpu_to_be32(be32_to_cpu(ioarcb->ioarcb_host_pci_addr) +
offsetof(struct ipr_ioarcb, add_data));
ioarcb->read_ioadl_addr = ioarcb->write_ioadl_addr;
}
for (i = 0; i < ipr_cmd->dma_use_sg; i++) {
ioadl[i].flags_and_data_len =
cpu_to_be32(ioadl_flags | sg_dma_len(&sglist[i]));
ioadl[i].address =
cpu_to_be32(sg_dma_address(&sglist[i]));
}
if (likely(ipr_cmd->dma_use_sg)) {
ioadl[i-1].flags_and_data_len |=
cpu_to_be32(IPR_IOADL_FLAGS_LAST);
return 0;
} else
dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n");
} else {
if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE) {
ioadl_flags = IPR_IOADL_FLAGS_WRITE;
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
ioarcb->write_data_transfer_length = cpu_to_be32(length);
ioarcb->write_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
} else if (scsi_cmd->sc_data_direction == DMA_FROM_DEVICE) {
ioadl_flags = IPR_IOADL_FLAGS_READ;
ioarcb->read_data_transfer_length = cpu_to_be32(length);
ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
}
ipr_cmd->dma_handle = pci_map_single(ioa_cfg->pdev,
scsi_cmd->request_buffer, length,
scsi_cmd->sc_data_direction);
if (likely(!pci_dma_mapping_error(ipr_cmd->dma_handle))) {
ioadl = ioarcb->add_data.u.ioadl;
ioarcb->write_ioadl_addr =
cpu_to_be32(be32_to_cpu(ioarcb->ioarcb_host_pci_addr) +
offsetof(struct ipr_ioarcb, add_data));
ioarcb->read_ioadl_addr = ioarcb->write_ioadl_addr;
ipr_cmd->dma_use_sg = 1;
ioadl[0].flags_and_data_len =
cpu_to_be32(ioadl_flags | length | IPR_IOADL_FLAGS_LAST);
ioadl[0].address = cpu_to_be32(ipr_cmd->dma_handle);
return 0;
} else
dev_err(&ioa_cfg->pdev->dev, "pci_map_single failed!\n");
nseg = scsi_dma_map(scsi_cmd);
if (nseg < 0) {
dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n");
return -1;
}
return -1;
ipr_cmd->dma_use_sg = nseg;
if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE) {
ioadl_flags = IPR_IOADL_FLAGS_WRITE;
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
ioarcb->write_data_transfer_length = cpu_to_be32(length);
ioarcb->write_ioadl_len =
cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
} else if (scsi_cmd->sc_data_direction == DMA_FROM_DEVICE) {
ioadl_flags = IPR_IOADL_FLAGS_READ;
ioarcb->read_data_transfer_length = cpu_to_be32(length);
ioarcb->read_ioadl_len =
cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
}
if (ipr_cmd->dma_use_sg <= ARRAY_SIZE(ioarcb->add_data.u.ioadl)) {
ioadl = ioarcb->add_data.u.ioadl;
ioarcb->write_ioadl_addr =
cpu_to_be32(be32_to_cpu(ioarcb->ioarcb_host_pci_addr) +
offsetof(struct ipr_ioarcb, add_data));
ioarcb->read_ioadl_addr = ioarcb->write_ioadl_addr;
}
scsi_for_each_sg(scsi_cmd, sg, ipr_cmd->dma_use_sg, i) {
ioadl[i].flags_and_data_len =
cpu_to_be32(ioadl_flags | sg_dma_len(sg));
ioadl[i].address = cpu_to_be32(sg_dma_address(sg));
}
ioadl[i-1].flags_and_data_len |= cpu_to_be32(IPR_IOADL_FLAGS_LAST);
return 0;
}
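
The rewrite above is an instance of the data-buffer-accessor pattern this merge applies across drivers; a minimal generic sketch, with the hardware hand-off left as a comment and the function name hypothetical:

#include <scsi/scsi_cmnd.h>

static int my_build_sglist(struct scsi_cmnd *cmd)
{
	struct scatterlist *sg;
	int i, nseg;

	if (!scsi_bufflen(cmd))		/* no data phase */
		return 0;

	nseg = scsi_dma_map(cmd);	/* DMA-maps scsi_sglist(cmd) */
	if (nseg < 0)
		return nseg;		/* mapping failed */

	scsi_for_each_sg(cmd, sg, nseg, i) {
		/* program sg_dma_address(sg) / sg_dma_len(sg) into the HBA */
	}
	return 0;
}

Completion paths pair scsi_dma_map() with scsi_dma_unmap() and report residuals through scsi_set_resid(), which is exactly what the ipr_scsi_done() hunk above switches to.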
/**
@ -4447,7 +4383,7 @@ static void ipr_erp_done(struct ipr_cmnd *ipr_cmd)
res->needs_sync_complete = 1;
res->in_erp = 0;
}
ipr_unmap_sglist(ioa_cfg, ipr_cmd);
scsi_dma_unmap(ipr_cmd->scsi_cmd);
list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
scsi_cmd->scsi_done(scsi_cmd);
}
@ -4825,7 +4761,7 @@ static void ipr_erp_start(struct ipr_ioa_cfg *ioa_cfg,
break;
}
ipr_unmap_sglist(ioa_cfg, ipr_cmd);
scsi_dma_unmap(ipr_cmd->scsi_cmd);
list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
scsi_cmd->scsi_done(scsi_cmd);
}
@ -4846,10 +4782,10 @@ static void ipr_scsi_done(struct ipr_cmnd *ipr_cmd)
struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
scsi_cmd->resid = be32_to_cpu(ipr_cmd->ioasa.residual_data_len);
scsi_set_resid(scsi_cmd, be32_to_cpu(ipr_cmd->ioasa.residual_data_len));
if (likely(IPR_IOASC_SENSE_KEY(ioasc) == 0)) {
ipr_unmap_sglist(ioa_cfg, ipr_cmd);
scsi_dma_unmap(ipr_cmd->scsi_cmd);
list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
scsi_cmd->scsi_done(scsi_cmd);
} else

View File

@ -211,19 +211,6 @@ module_param(ips, charp, 0);
#warning "This driver has only been tested on the x86/ia64/x86_64 platforms"
#endif
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,5,0)
#include <linux/blk.h>
#include "sd.h"
#define IPS_LOCK_SAVE(lock,flags) spin_lock_irqsave(&io_request_lock,flags)
#define IPS_UNLOCK_RESTORE(lock,flags) spin_unlock_irqrestore(&io_request_lock,flags)
#ifndef __devexit_p
#define __devexit_p(x) x
#endif
#else
#define IPS_LOCK_SAVE(lock,flags) do{spin_lock(lock);(void)flags;}while(0)
#define IPS_UNLOCK_RESTORE(lock,flags) do{spin_unlock(lock);(void)flags;}while(0)
#endif
#define IPS_DMA_DIR(scb) ((!scb->scsi_cmd || ips_is_passthru(scb->scsi_cmd) || \
DMA_NONE == scb->scsi_cmd->sc_data_direction) ? \
PCI_DMA_BIDIRECTIONAL : \
@ -381,24 +368,13 @@ static struct scsi_host_template ips_driver_template = {
.eh_abort_handler = ips_eh_abort,
.eh_host_reset_handler = ips_eh_reset,
.proc_name = "ips",
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,5,0)
.proc_info = ips_proc_info,
.slave_configure = ips_slave_configure,
#else
.proc_info = ips_proc24_info,
.select_queue_depths = ips_select_queue_depth,
#endif
.bios_param = ips_biosparam,
.this_id = -1,
.sg_tablesize = IPS_MAX_SG,
.cmd_per_lun = 3,
.use_clustering = ENABLE_CLUSTERING,
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)
.use_new_eh_code = 1,
#endif
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,20) && LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)
.highmem_io = 1,
#endif
};
@ -731,7 +707,7 @@ ips_release(struct Scsi_Host *sh)
/* free IRQ */
free_irq(ha->irq, ha);
IPS_REMOVE_HOST(sh);
scsi_remove_host(sh);
scsi_host_put(sh);
ips_released_controllers++;
@ -813,7 +789,6 @@ int ips_eh_abort(struct scsi_cmnd *SC)
ips_ha_t *ha;
ips_copp_wait_item_t *item;
int ret;
unsigned long cpu_flags;
struct Scsi_Host *host;
METHOD_TRACE("ips_eh_abort", 1);
@ -830,7 +805,7 @@ int ips_eh_abort(struct scsi_cmnd *SC)
if (!ha->active)
return (FAILED);
IPS_LOCK_SAVE(host->host_lock, cpu_flags);
spin_lock(host->host_lock);
/* See if the command is on the copp queue */
item = ha->copp_waitlist.head;
@ -851,7 +826,7 @@ int ips_eh_abort(struct scsi_cmnd *SC)
ret = (FAILED);
}
IPS_UNLOCK_RESTORE(host->host_lock, cpu_flags);
spin_unlock(host->host_lock);
return ret;
}
@ -1129,7 +1104,7 @@ static int ips_queue(struct scsi_cmnd *SC, void (*done) (struct scsi_cmnd *))
/* A Reset IOCTL is only sent by the boot CD in extreme cases. */
/* There can never be any system activity ( network or disk ), but check */
/* anyway just as a good practice. */
pt = (ips_passthru_t *) SC->request_buffer;
pt = (ips_passthru_t *) scsi_sglist(SC);
if ((pt->CoppCP.cmd.reset.op_code == IPS_CMD_RESET_CHANNEL) &&
(pt->CoppCP.cmd.reset.adapter_flag == 1)) {
if (ha->scb_activelist.count != 0) {
@ -1176,18 +1151,10 @@ static int ips_queue(struct scsi_cmnd *SC, void (*done) (struct scsi_cmnd *))
/* Set bios geometry for the controller */
/* */
/****************************************************************************/
static int
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)
ips_biosparam(Disk * disk, kdev_t dev, int geom[])
{
ips_ha_t *ha = (ips_ha_t *) disk->device->host->hostdata;
unsigned long capacity = disk->capacity;
#else
ips_biosparam(struct scsi_device *sdev, struct block_device *bdev,
sector_t capacity, int geom[])
static int ips_biosparam(struct scsi_device *sdev, struct block_device *bdev,
sector_t capacity, int geom[])
{
ips_ha_t *ha = (ips_ha_t *) sdev->host->hostdata;
#endif
int heads;
int sectors;
int cylinders;
@ -1225,70 +1192,6 @@ ips_biosparam(struct scsi_device *sdev, struct block_device *bdev,
return (0);
}
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)
/* ips_proc24_info is a wrapper around ips_proc_info *
* for compatibility with the 2.4 scsi parameters */
static int
ips_proc24_info(char *buffer, char **start, off_t offset, int length,
int hostno, int func)
{
int i;
for (i = 0; i < ips_next_controller; i++) {
if (ips_sh[i] && ips_sh[i]->host_no == hostno) {
return ips_proc_info(ips_sh[i], buffer, start,
offset, length, func);
}
}
return -EINVAL;
}
/****************************************************************************/
/* */
/* Routine Name: ips_select_queue_depth */
/* */
/* Routine Description: */
/* */
/* Select queue depths for the devices on the controller */
/* */
/****************************************************************************/
static void
ips_select_queue_depth(struct Scsi_Host *host, struct scsi_device * scsi_devs)
{
struct scsi_device *device;
ips_ha_t *ha;
int count = 0;
int min;
ha = IPS_HA(host);
min = ha->max_cmds / 4;
for (device = scsi_devs; device; device = device->next) {
if (device->host == host) {
if ((device->channel == 0) && (device->type == 0))
count++;
}
}
for (device = scsi_devs; device; device = device->next) {
if (device->host == host) {
if ((device->channel == 0) && (device->type == 0)) {
device->queue_depth =
(ha->max_cmds - 1) / count;
if (device->queue_depth < min)
device->queue_depth = min;
} else {
device->queue_depth = 2;
}
if (device->queue_depth < 2)
device->queue_depth = 2;
}
}
}
#else
/****************************************************************************/
/* */
/* Routine Name: ips_slave_configure */
@ -1316,7 +1219,6 @@ ips_slave_configure(struct scsi_device * SDptr)
SDptr->skip_ms_page_3f = 1;
return 0;
}
#endif
/****************************************************************************/
/* */
@ -1331,7 +1233,6 @@ static irqreturn_t
do_ipsintr(int irq, void *dev_id)
{
ips_ha_t *ha;
unsigned long cpu_flags;
struct Scsi_Host *host;
int irqstatus;
@ -1347,16 +1248,16 @@ do_ipsintr(int irq, void *dev_id)
return IRQ_HANDLED;
}
IPS_LOCK_SAVE(host->host_lock, cpu_flags);
spin_lock(host->host_lock);
if (!ha->active) {
IPS_UNLOCK_RESTORE(host->host_lock, cpu_flags);
spin_unlock(host->host_lock);
return IRQ_HANDLED;
}
irqstatus = (*ha->func.intr) (ha);
IPS_UNLOCK_RESTORE(host->host_lock, cpu_flags);
spin_unlock(host->host_lock);
/* start the next command */
ips_next(ha, IPS_INTR_ON);
@ -1606,30 +1507,22 @@ static int ips_is_passthru(struct scsi_cmnd *SC)
if ((SC->cmnd[0] == IPS_IOCTL_COMMAND) &&
(SC->device->channel == 0) &&
(SC->device->id == IPS_ADAPTER_ID) &&
(SC->device->lun == 0) && SC->request_buffer) {
if ((!SC->use_sg) && SC->request_bufflen &&
(((char *) SC->request_buffer)[0] == 'C') &&
(((char *) SC->request_buffer)[1] == 'O') &&
(((char *) SC->request_buffer)[2] == 'P') &&
(((char *) SC->request_buffer)[3] == 'P'))
return 1;
else if (SC->use_sg) {
struct scatterlist *sg = SC->request_buffer;
char *buffer;
(SC->device->lun == 0) && scsi_sglist(SC)) {
struct scatterlist *sg = scsi_sglist(SC);
char *buffer;
/* kmap_atomic() ensures addressability of the user buffer.*/
/* local_irq_save() protects the KM_IRQ0 address slot. */
local_irq_save(flags);
buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
if (buffer && buffer[0] == 'C' && buffer[1] == 'O' &&
buffer[2] == 'P' && buffer[3] == 'P') {
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
local_irq_restore(flags);
return 1;
}
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
local_irq_restore(flags);
}
/* kmap_atomic() ensures addressability of the user buffer.*/
/* local_irq_save() protects the KM_IRQ0 address slot. */
local_irq_save(flags);
buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
if (buffer && buffer[0] == 'C' && buffer[1] == 'O' &&
buffer[2] == 'P' && buffer[3] == 'P') {
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
local_irq_restore(flags);
return 1;
}
kunmap_atomic(buffer - sg->offset, KM_IRQ0);
local_irq_restore(flags);
}
return 0;
}
@ -1680,18 +1573,14 @@ ips_make_passthru(ips_ha_t *ha, struct scsi_cmnd *SC, ips_scb_t *scb, int intr)
{
ips_passthru_t *pt;
int length = 0;
int ret;
int i, ret;
struct scatterlist *sg = scsi_sglist(SC);
METHOD_TRACE("ips_make_passthru", 1);
if (!SC->use_sg) {
length = SC->request_bufflen;
} else {
struct scatterlist *sg = SC->request_buffer;
int i;
for (i = 0; i < SC->use_sg; i++)
length += sg[i].length;
}
scsi_for_each_sg(SC, sg, scsi_sg_count(SC), i)
length += sg[i].length;
if (length < sizeof (ips_passthru_t)) {
/* wrong size */
DEBUG_VAR(1, "(%s%d) Passthru structure wrong size",
@ -2115,7 +2004,7 @@ ips_cleanup_passthru(ips_ha_t * ha, ips_scb_t * scb)
METHOD_TRACE("ips_cleanup_passthru", 1);
if ((!scb) || (!scb->scsi_cmd) || (!scb->scsi_cmd->request_buffer)) {
if ((!scb) || (!scb->scsi_cmd) || (!scsi_sglist(scb->scsi_cmd))) {
DEBUG_VAR(1, "(%s%d) couldn't cleanup after passthru",
ips_name, ha->host_num);
@ -2730,7 +2619,6 @@ ips_next(ips_ha_t * ha, int intr)
struct scsi_cmnd *q;
ips_copp_wait_item_t *item;
int ret;
unsigned long cpu_flags = 0;
struct Scsi_Host *host;
METHOD_TRACE("ips_next", 1);
@ -2742,7 +2630,7 @@ ips_next(ips_ha_t * ha, int intr)
* this command won't time out
*/
if (intr == IPS_INTR_ON)
IPS_LOCK_SAVE(host->host_lock, cpu_flags);
spin_lock(host->host_lock);
if ((ha->subsys->param[3] & 0x300000)
&& (ha->scb_activelist.count == 0)) {
@ -2769,14 +2657,14 @@ ips_next(ips_ha_t * ha, int intr)
item = ips_removeq_copp_head(&ha->copp_waitlist);
ha->num_ioctl++;
if (intr == IPS_INTR_ON)
IPS_UNLOCK_RESTORE(host->host_lock, cpu_flags);
spin_unlock(host->host_lock);
scb->scsi_cmd = item->scsi_cmd;
kfree(item);
ret = ips_make_passthru(ha, scb->scsi_cmd, scb, intr);
if (intr == IPS_INTR_ON)
IPS_LOCK_SAVE(host->host_lock, cpu_flags);
spin_lock(host->host_lock);
switch (ret) {
case IPS_FAILURE:
if (scb->scsi_cmd) {
@ -2846,7 +2734,7 @@ ips_next(ips_ha_t * ha, int intr)
SC = ips_removeq_wait(&ha->scb_waitlist, q);
if (intr == IPS_INTR_ON)
IPS_UNLOCK_RESTORE(host->host_lock, cpu_flags); /* Unlock HA after command is taken off queue */
spin_unlock(host->host_lock); /* Unlock HA after command is taken off queue */
SC->result = DID_OK;
SC->host_scribble = NULL;
@ -2866,41 +2754,26 @@ ips_next(ips_ha_t * ha, int intr)
/* copy in the CDB */
memcpy(scb->cdb, SC->cmnd, SC->cmd_len);
/* Now handle the data buffer */
if (SC->use_sg) {
scb->sg_count = scsi_dma_map(SC);
BUG_ON(scb->sg_count < 0);
if (scb->sg_count) {
struct scatterlist *sg;
int i;
sg = SC->request_buffer;
scb->sg_count = pci_map_sg(ha->pcidev, sg, SC->use_sg,
SC->sc_data_direction);
scb->flags |= IPS_SCB_MAP_SG;
for (i = 0; i < scb->sg_count; i++) {
scsi_for_each_sg(SC, sg, scb->sg_count, i) {
if (ips_fill_scb_sg_single
(ha, sg_dma_address(&sg[i]), scb, i,
sg_dma_len(&sg[i])) < 0)
(ha, sg_dma_address(sg), scb, i,
sg_dma_len(sg)) < 0)
break;
}
scb->dcdb.transfer_length = scb->data_len;
} else {
if (SC->request_bufflen) {
scb->data_busaddr =
pci_map_single(ha->pcidev,
SC->request_buffer,
SC->request_bufflen,
SC->sc_data_direction);
scb->flags |= IPS_SCB_MAP_SINGLE;
ips_fill_scb_sg_single(ha, scb->data_busaddr,
scb, 0,
SC->request_bufflen);
scb->dcdb.transfer_length = scb->data_len;
} else {
scb->data_busaddr = 0L;
scb->sg_len = 0;
scb->data_len = 0;
scb->dcdb.transfer_length = 0;
}
scb->data_busaddr = 0L;
scb->sg_len = 0;
scb->data_len = 0;
scb->dcdb.transfer_length = 0;
}
scb->dcdb.cmd_attribute =
@ -2919,7 +2792,7 @@ ips_next(ips_ha_t * ha, int intr)
scb->dcdb.transfer_length = 0;
}
if (intr == IPS_INTR_ON)
IPS_LOCK_SAVE(host->host_lock, cpu_flags);
spin_lock(host->host_lock);
ret = ips_send_cmd(ha, scb);
@ -2958,7 +2831,7 @@ ips_next(ips_ha_t * ha, int intr)
} /* end while */
if (intr == IPS_INTR_ON)
IPS_UNLOCK_RESTORE(host->host_lock, cpu_flags);
spin_unlock(host->host_lock);
}
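
With 2.4 support gone, the IPS_LOCK_SAVE()/IPS_UNLOCK_RESTORE() wrappers above collapse to plain spin_lock()/spin_unlock(); a sketch of the resulting idiom (lock and function names hypothetical), valid because these paths already run with a known interrupt state:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_queue_lock);

static void my_take_from_queue(void)
{
	spin_lock(&my_queue_lock);	/* no irqsave: irq state handled by the caller */
	/* ... manipulate shared wait queues ... */
	spin_unlock(&my_queue_lock);
}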
/****************************************************************************/
@ -3377,52 +3250,32 @@ ips_done(ips_ha_t * ha, ips_scb_t * scb)
* the rest of the data and continue.
*/
if ((scb->breakup) || (scb->sg_break)) {
struct scatterlist *sg;
int sg_dma_index, ips_sg_index = 0;
/* we had a data breakup */
scb->data_len = 0;
if (scb->sg_count) {
/* S/G request */
struct scatterlist *sg;
int ips_sg_index = 0;
int sg_dma_index;
sg = scsi_sglist(scb->scsi_cmd);
sg = scb->scsi_cmd->request_buffer;
/* Spin forward to last dma chunk */
sg_dma_index = scb->breakup;
/* Spin forward to last dma chunk */
sg_dma_index = scb->breakup;
/* Take care of possible partial on last chunk */
ips_fill_scb_sg_single(ha,
sg_dma_address(&sg[sg_dma_index]),
scb, ips_sg_index++,
sg_dma_len(&sg[sg_dma_index]));
/* Take care of possible partial on last chunk */
ips_fill_scb_sg_single(ha,
sg_dma_address(&sg
[sg_dma_index]),
scb, ips_sg_index++,
sg_dma_len(&sg
[sg_dma_index]));
for (; sg_dma_index < scb->sg_count;
sg_dma_index++) {
if (ips_fill_scb_sg_single
(ha,
sg_dma_address(&sg[sg_dma_index]),
scb, ips_sg_index++,
sg_dma_len(&sg[sg_dma_index])) < 0)
break;
}
} else {
/* Non S/G Request */
(void) ips_fill_scb_sg_single(ha,
scb->
data_busaddr +
(scb->sg_break *
ha->max_xfer),
scb, 0,
scb->scsi_cmd->
request_bufflen -
(scb->sg_break *
ha->max_xfer));
}
for (; sg_dma_index < scsi_sg_count(scb->scsi_cmd);
sg_dma_index++) {
if (ips_fill_scb_sg_single
(ha,
sg_dma_address(&sg[sg_dma_index]),
scb, ips_sg_index++,
sg_dma_len(&sg[sg_dma_index])) < 0)
break;
}
scb->dcdb.transfer_length = scb->data_len;
scb->dcdb.cmd_attribute |=
@ -3653,32 +3506,27 @@ ips_send_wait(ips_ha_t * ha, ips_scb_t * scb, int timeout, int intr)
static void
ips_scmd_buf_write(struct scsi_cmnd *scmd, void *data, unsigned int count)
{
if (scmd->use_sg) {
int i;
unsigned int min_cnt, xfer_cnt;
char *cdata = (char *) data;
unsigned char *buffer;
unsigned long flags;
struct scatterlist *sg = scmd->request_buffer;
for (i = 0, xfer_cnt = 0;
(i < scmd->use_sg) && (xfer_cnt < count); i++) {
min_cnt = min(count - xfer_cnt, sg[i].length);
int i;
unsigned int min_cnt, xfer_cnt;
char *cdata = (char *) data;
unsigned char *buffer;
unsigned long flags;
struct scatterlist *sg = scsi_sglist(scmd);
/* kmap_atomic() ensures addressability of the data buffer.*/
/* local_irq_save() protects the KM_IRQ0 address slot. */
local_irq_save(flags);
buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset;
memcpy(buffer, &cdata[xfer_cnt], min_cnt);
kunmap_atomic(buffer - sg[i].offset, KM_IRQ0);
local_irq_restore(flags);
for (i = 0, xfer_cnt = 0;
(i < scsi_sg_count(scmd)) && (xfer_cnt < count); i++) {
min_cnt = min(count - xfer_cnt, sg[i].length);
xfer_cnt += min_cnt;
}
/* kmap_atomic() ensures addressability of the data buffer.*/
/* local_irq_save() protects the KM_IRQ0 address slot. */
local_irq_save(flags);
buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset;
memcpy(buffer, &cdata[xfer_cnt], min_cnt);
kunmap_atomic(buffer - sg[i].offset, KM_IRQ0);
local_irq_restore(flags);
} else {
unsigned int min_cnt = min(count, scmd->request_bufflen);
memcpy(scmd->request_buffer, data, min_cnt);
}
xfer_cnt += min_cnt;
}
}
/****************************************************************************/
@ -3691,32 +3539,27 @@ ips_scmd_buf_write(struct scsi_cmnd *scmd, void *data, unsigned int count)
static void
ips_scmd_buf_read(struct scsi_cmnd *scmd, void *data, unsigned int count)
{
if (scmd->use_sg) {
int i;
unsigned int min_cnt, xfer_cnt;
char *cdata = (char *) data;
unsigned char *buffer;
unsigned long flags;
struct scatterlist *sg = scmd->request_buffer;
for (i = 0, xfer_cnt = 0;
(i < scmd->use_sg) && (xfer_cnt < count); i++) {
min_cnt = min(count - xfer_cnt, sg[i].length);
int i;
unsigned int min_cnt, xfer_cnt;
char *cdata = (char *) data;
unsigned char *buffer;
unsigned long flags;
struct scatterlist *sg = scsi_sglist(scmd);
/* kmap_atomic() ensures addressability of the data buffer.*/
/* local_irq_save() protects the KM_IRQ0 address slot. */
local_irq_save(flags);
buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset;
memcpy(&cdata[xfer_cnt], buffer, min_cnt);
kunmap_atomic(buffer - sg[i].offset, KM_IRQ0);
local_irq_restore(flags);
for (i = 0, xfer_cnt = 0;
(i < scsi_sg_count(scmd)) && (xfer_cnt < count); i++) {
min_cnt = min(count - xfer_cnt, sg[i].length);
xfer_cnt += min_cnt;
}
/* kmap_atomic() ensures addressability of the data buffer.*/
/* local_irq_save() protects the KM_IRQ0 address slot. */
local_irq_save(flags);
buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset;
memcpy(&cdata[xfer_cnt], buffer, min_cnt);
kunmap_atomic(buffer - sg[i].offset, KM_IRQ0);
local_irq_restore(flags);
} else {
unsigned int min_cnt = min(count, scmd->request_bufflen);
memcpy(data, scmd->request_buffer, min_cnt);
}
xfer_cnt += min_cnt;
}
}
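
Both the buf_write and buf_read conversions above rely on the same per-entry kmap_atomic() copy; a hedged sketch of that idiom against the 2.6.22-era scatterlist API (sg->page, KM_IRQ0; helper name hypothetical):

#include <linux/kernel.h>
#include <linux/highmem.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

static void copy_from_sg_entry(struct scatterlist *sg, void *dst,
			       unsigned int len)
{
	unsigned long flags;
	unsigned char *vaddr;

	local_irq_save(flags);		/* protect the KM_IRQ0 kmap slot */
	vaddr = kmap_atomic(sg->page, KM_IRQ0) + sg->offset;
	memcpy(dst, vaddr, len);
	kunmap_atomic(vaddr - sg->offset, KM_IRQ0);
	local_irq_restore(flags);
}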
/****************************************************************************/
@ -4350,7 +4193,7 @@ ips_rdcap(ips_ha_t * ha, ips_scb_t * scb)
METHOD_TRACE("ips_rdcap", 1);
if (scb->scsi_cmd->request_bufflen < 8)
if (scsi_bufflen(scb->scsi_cmd) < 8)
return (0);
cap.lba =
@ -4735,8 +4578,7 @@ ips_freescb(ips_ha_t * ha, ips_scb_t * scb)
METHOD_TRACE("ips_freescb", 1);
if (scb->flags & IPS_SCB_MAP_SG)
pci_unmap_sg(ha->pcidev, scb->scsi_cmd->request_buffer,
scb->scsi_cmd->use_sg, IPS_DMA_DIR(scb));
scsi_dma_unmap(scb->scsi_cmd);
else if (scb->flags & IPS_SCB_MAP_SINGLE)
pci_unmap_single(ha->pcidev, scb->data_busaddr, scb->data_len,
IPS_DMA_DIR(scb));
@ -7004,7 +6846,6 @@ ips_register_scsi(int index)
kfree(oldha);
ips_sh[index] = sh;
ips_ha[index] = ha;
IPS_SCSI_SET_DEVICE(sh, ha);
/* Store away needed values for later use */
sh->io_port = ha->io_addr;
@ -7016,17 +6857,16 @@ ips_register_scsi(int index)
sh->cmd_per_lun = sh->hostt->cmd_per_lun;
sh->unchecked_isa_dma = sh->hostt->unchecked_isa_dma;
sh->use_clustering = sh->hostt->use_clustering;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,7)
sh->max_sectors = 128;
#endif
sh->max_id = ha->ntargets;
sh->max_lun = ha->nlun;
sh->max_channel = ha->nbus - 1;
sh->can_queue = ha->max_cmds - 1;
IPS_ADD_HOST(sh, NULL);
scsi_add_host(sh, NULL);
scsi_scan_host(sh);
return 0;
}
@ -7069,7 +6909,7 @@ ips_module_init(void)
return -ENODEV;
ips_driver_template.module = THIS_MODULE;
ips_order_controllers();
if (IPS_REGISTER_HOSTS(&ips_driver_template)) {
if (!ips_detect(&ips_driver_template)) {
pci_unregister_driver(&ips_pci_driver);
return -ENODEV;
}
@ -7087,7 +6927,6 @@ ips_module_init(void)
static void __exit
ips_module_exit(void)
{
IPS_UNREGISTER_HOSTS(&ips_driver_template);
pci_unregister_driver(&ips_pci_driver);
unregister_reboot_notifier(&ips_notifier);
}
@ -7436,15 +7275,9 @@ ips_init_phase2(int index)
return SUCCESS;
}
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,9)
MODULE_LICENSE("GPL");
#endif
MODULE_DESCRIPTION("IBM ServeRAID Adapter Driver " IPS_VER_STRING);
#ifdef MODULE_VERSION
MODULE_VERSION(IPS_VER_STRING);
#endif
/*

View File

@ -58,10 +58,6 @@
/*
* Some handy macros
*/
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,20) || defined CONFIG_HIGHIO
#define IPS_HIGHIO
#endif
#define IPS_HA(x) ((ips_ha_t *) x->hostdata)
#define IPS_COMMAND_ID(ha, scb) (int) (scb - ha->scbs)
#define IPS_IS_TROMBONE(ha) (((ha->device_id == IPS_DEVICEID_COPPERHEAD) && \
@ -84,38 +80,8 @@
#define IPS_SGLIST_SIZE(ha) (IPS_USE_ENH_SGLIST(ha) ? \
sizeof(IPS_ENH_SG_LIST) : sizeof(IPS_STD_SG_LIST))
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,4,4)
#define pci_set_dma_mask(dev,mask) ( mask > 0xffffffff ? 1:0 )
#define scsi_set_pci_device(sh,dev) (0)
#endif
#ifndef IRQ_NONE
typedef void irqreturn_t;
#define IRQ_NONE
#define IRQ_HANDLED
#define IRQ_RETVAL(x)
#endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)
#define IPS_REGISTER_HOSTS(SHT) scsi_register_module(MODULE_SCSI_HA,SHT)
#define IPS_UNREGISTER_HOSTS(SHT) scsi_unregister_module(MODULE_SCSI_HA,SHT)
#define IPS_ADD_HOST(shost,device)
#define IPS_REMOVE_HOST(shost)
#define IPS_SCSI_SET_DEVICE(sh,ha) scsi_set_pci_device(sh, (ha)->pcidev)
#define IPS_PRINTK(level, pcidev, format, arg...) \
printk(level "%s %s:" format , "ips" , \
(pcidev)->slot_name , ## arg)
#define scsi_host_alloc(sh,size) scsi_register(sh,size)
#define scsi_host_put(sh) scsi_unregister(sh)
#else
#define IPS_REGISTER_HOSTS(SHT) (!ips_detect(SHT))
#define IPS_UNREGISTER_HOSTS(SHT)
#define IPS_ADD_HOST(shost,device) do { scsi_add_host(shost,device); scsi_scan_host(shost); } while (0)
#define IPS_REMOVE_HOST(shost) scsi_remove_host(shost)
#define IPS_SCSI_SET_DEVICE(sh,ha) do { } while (0)
#define IPS_PRINTK(level, pcidev, format, arg...) \
#define IPS_PRINTK(level, pcidev, format, arg...) \
dev_printk(level , &((pcidev)->dev) , format , ## arg)
#endif
#define MDELAY(n) \
do { \
@ -134,7 +100,7 @@
#define pci_dma_hi32(a) ((a >> 16) >> 16)
#define pci_dma_lo32(a) (a & 0xffffffff)
#if (BITS_PER_LONG > 32) || (defined CONFIG_HIGHMEM64G && defined IPS_HIGHIO)
#if (BITS_PER_LONG > 32) || defined(CONFIG_HIGHMEM64G)
#define IPS_ENABLE_DMA64 (1)
#else
#define IPS_ENABLE_DMA64 (0)
@ -451,16 +417,10 @@
/*
* Scsi_Host Template
*/
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)
static int ips_proc24_info(char *, char **, off_t, int, int, int);
static void ips_select_queue_depth(struct Scsi_Host *, struct scsi_device *);
static int ips_biosparam(Disk *disk, kdev_t dev, int geom[]);
#else
static int ips_proc_info(struct Scsi_Host *, char *, char **, off_t, int, int);
static int ips_biosparam(struct scsi_device *sdev, struct block_device *bdev,
sector_t capacity, int geom[]);
static int ips_slave_configure(struct scsi_device *SDptr);
#endif
/*
* Raid Command Formats

View File

@ -29,14 +29,15 @@
#include <linux/types.h>
#include <linux/list.h>
#include <linux/inet.h>
#include <linux/file.h>
#include <linux/blkdev.h>
#include <linux/crypto.h>
#include <linux/delay.h>
#include <linux/kfifo.h>
#include <linux/scatterlist.h>
#include <linux/mutex.h>
#include <net/tcp.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi.h>
#include <scsi/scsi_transport_iscsi.h>
@ -109,7 +110,7 @@ iscsi_hdr_digest(struct iscsi_conn *conn, struct iscsi_buf *buf,
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
crypto_hash_digest(&tcp_conn->tx_hash, &buf->sg, buf->sg.length, crc);
buf->sg.length = tcp_conn->hdr_size;
buf->sg.length += sizeof(u32);
}
static inline int
@ -211,16 +212,14 @@ iscsi_tcp_cleanup_ctask(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
static int
iscsi_data_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
{
int rc;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
struct iscsi_data_rsp *rhdr = (struct iscsi_data_rsp *)tcp_conn->in.hdr;
struct iscsi_session *session = conn->session;
struct scsi_cmnd *sc = ctask->sc;
int datasn = be32_to_cpu(rhdr->datasn);
rc = iscsi_check_assign_cmdsn(session, (struct iscsi_nopin*)rhdr);
if (rc)
return rc;
iscsi_update_cmdsn(session, (struct iscsi_nopin*)rhdr);
/*
* setup Data-In byte counter (gets decremented..)
*/
@ -229,31 +228,36 @@ iscsi_data_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
if (tcp_conn->in.datalen == 0)
return 0;
if (ctask->datasn != datasn)
if (tcp_ctask->exp_datasn != datasn) {
debug_tcp("%s: ctask->exp_datasn(%d) != rhdr->datasn(%d)\n",
__FUNCTION__, tcp_ctask->exp_datasn, datasn);
return ISCSI_ERR_DATASN;
}
ctask->datasn++;
tcp_ctask->exp_datasn++;
tcp_ctask->data_offset = be32_to_cpu(rhdr->offset);
if (tcp_ctask->data_offset + tcp_conn->in.datalen > ctask->total_length)
if (tcp_ctask->data_offset + tcp_conn->in.datalen > scsi_bufflen(sc)) {
debug_tcp("%s: data_offset(%d) + data_len(%d) > total_length_in(%d)\n",
__FUNCTION__, tcp_ctask->data_offset,
tcp_conn->in.datalen, scsi_bufflen(sc));
return ISCSI_ERR_DATA_OFFSET;
}
if (rhdr->flags & ISCSI_FLAG_DATA_STATUS) {
struct scsi_cmnd *sc = ctask->sc;
conn->exp_statsn = be32_to_cpu(rhdr->statsn) + 1;
if (rhdr->flags & ISCSI_FLAG_DATA_UNDERFLOW) {
int res_count = be32_to_cpu(rhdr->residual_count);
if (res_count > 0 &&
res_count <= sc->request_bufflen) {
sc->resid = res_count;
res_count <= scsi_bufflen(sc)) {
scsi_set_resid(sc, res_count);
sc->result = (DID_OK << 16) | rhdr->cmd_status;
} else
sc->result = (DID_BAD_TARGET << 16) |
rhdr->cmd_status;
} else if (rhdr->flags & ISCSI_FLAG_DATA_OVERFLOW) {
sc->resid = be32_to_cpu(rhdr->residual_count);
scsi_set_resid(sc, be32_to_cpu(rhdr->residual_count));
sc->result = (DID_OK << 16) | rhdr->cmd_status;
} else
sc->result = (DID_OK << 16) | rhdr->cmd_status;
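/*
 * Editor's sketch, not part of the commit: how the residual set via
 * scsi_set_resid() above relates to the actual transfer length. The
 * accessor names mirror the mid-layer API; the sample values are
 * hypothetical. Compiles standalone as userspace C.
 */
#include <stdio.h>

static unsigned int actual_xfer_len(unsigned int bufflen, unsigned int resid)
{
	/* resid counts the bytes that were requested but not transferred */
	return bufflen - resid;
}

int main(void)
{
	unsigned int bufflen = 4096;	/* what scsi_bufflen(sc) would return */
	unsigned int resid = 512;	/* scsi_get_resid(sc) after UNDERFLOW */

	printf("transferred %u of %u bytes\n",
	       actual_xfer_len(bufflen, resid), bufflen);
	return 0;
}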
@ -281,6 +285,8 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
{
struct iscsi_data *hdr;
struct scsi_cmnd *sc = ctask->sc;
int i, sg_count = 0;
struct scatterlist *sg;
hdr = &r2t->dtask.hdr;
memset(hdr, 0, sizeof(struct iscsi_data));
@ -308,39 +314,30 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
iscsi_buf_init_iov(&r2t->headbuf, (char*)hdr,
sizeof(struct iscsi_hdr));
	if (sc->use_sg) {
		int i, sg_count = 0;
		struct scatterlist *sg = sc->request_buffer;
		r2t->sg = NULL;
		for (i = 0; i < sc->use_sg; i++, sg += 1) {
			/* FIXME: prefetch ? */
			if (sg_count + sg->length > r2t->data_offset) {
				int page_offset;
				/* sg page found! */
				/* offset within this page */
				page_offset = r2t->data_offset - sg_count;
				/* fill in this buffer */
				iscsi_buf_init_sg(&r2t->sendbuf, sg);
				r2t->sendbuf.sg.offset += page_offset;
				r2t->sendbuf.sg.length -= page_offset;
				/* xmit logic will continue with next one */
				r2t->sg = sg + 1;
				break;
			}
			sg_count += sg->length;
		}
		BUG_ON(r2t->sg == NULL);
	} else {
		iscsi_buf_init_iov(&r2t->sendbuf,
			    (char*)sc->request_buffer + r2t->data_offset,
			    r2t->data_count);
		r2t->sg = NULL;
	}
	sg = scsi_sglist(sc);
	r2t->sg = NULL;
	for (i = 0; i < scsi_sg_count(sc); i++, sg += 1) {
		/* FIXME: prefetch ? */
		if (sg_count + sg->length > r2t->data_offset) {
			int page_offset;
			/* sg page found! */
			/* offset within this page */
			page_offset = r2t->data_offset - sg_count;
			/* fill in this buffer */
			iscsi_buf_init_sg(&r2t->sendbuf, sg);
			r2t->sendbuf.sg.offset += page_offset;
			r2t->sendbuf.sg.length -= page_offset;
			/* xmit logic will continue with next one */
			r2t->sg = sg + 1;
			break;
		}
		sg_count += sg->length;
	}
	BUG_ON(r2t->sg == NULL);
}
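/*
 * Editor's sketch, not part of the commit: the offset-to-segment walk that
 * iscsi_solicit_data_init() performs on the scatterlist, modeled in plain
 * userspace C. "struct segment" stands in for struct scatterlist; the
 * lengths and the probe offset are hypothetical.
 */
#include <stdio.h>

struct segment { unsigned int length; };

/*
 * Return the index of the segment containing byte 'offset' and, via
 * *within, the offset inside that segment; -1 if offset is past the end.
 */
static int find_segment(const struct segment *sg, int nseg,
			unsigned int offset, unsigned int *within)
{
	unsigned int consumed = 0;
	int i;

	for (i = 0; i < nseg; i++) {
		if (consumed + sg[i].length > offset) {
			*within = offset - consumed;
			return i;
		}
		consumed += sg[i].length;
	}
	return -1;
}

int main(void)
{
	struct segment sg[3] = { {1024}, {2048}, {4096} };
	unsigned int within;
	int idx = find_segment(sg, 3, 2500, &within);

	printf("offset 2500 -> segment %d, byte %u within it\n", idx, within);
	return 0;
}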
/**
@ -365,17 +362,16 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
return ISCSI_ERR_DATALEN;
}
if (tcp_ctask->exp_r2tsn && tcp_ctask->exp_r2tsn != r2tsn)
if (tcp_ctask->exp_datasn != r2tsn){
debug_tcp("%s: ctask->exp_datasn(%d) != rhdr->r2tsn(%d)\n",
__FUNCTION__, tcp_ctask->exp_datasn, r2tsn);
return ISCSI_ERR_R2TSN;
rc = iscsi_check_assign_cmdsn(session, (struct iscsi_nopin*)rhdr);
if (rc)
return rc;
/* FIXME: use R2TSN to detect missing R2T */
}
/* fill-in new R2T associated with the task */
spin_lock(&session->lock);
iscsi_update_cmdsn(session, (struct iscsi_nopin*)rhdr);
if (!ctask->sc || ctask->mtask ||
session->state != ISCSI_STATE_LOGGED_IN) {
printk(KERN_INFO "iscsi_tcp: dropping R2T itt %d in "
@ -401,11 +397,11 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
r2t->data_length, session->max_burst);
r2t->data_offset = be32_to_cpu(rhdr->data_offset);
if (r2t->data_offset + r2t->data_length > ctask->total_length) {
if (r2t->data_offset + r2t->data_length > scsi_bufflen(ctask->sc)) {
spin_unlock(&session->lock);
printk(KERN_ERR "iscsi_tcp: invalid R2T with data len %u at "
"offset %u and total length %d\n", r2t->data_length,
r2t->data_offset, ctask->total_length);
r2t->data_offset, scsi_bufflen(ctask->sc));
return ISCSI_ERR_DATALEN;
}
@ -414,9 +410,9 @@ iscsi_r2t_rsp(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
iscsi_solicit_data_init(conn, ctask, r2t);
tcp_ctask->exp_r2tsn = r2tsn + 1;
tcp_ctask->exp_datasn = r2tsn + 1;
__kfifo_put(tcp_ctask->r2tqueue, (void*)&r2t, sizeof(void*));
tcp_ctask->xmstate |= XMSTATE_SOL_HDR;
tcp_ctask->xmstate |= XMSTATE_SOL_HDR_INIT;
list_move_tail(&ctask->running, &conn->xmitqueue);
scsi_queue_work(session->host, &conn->xmitwork);
@ -600,7 +596,7 @@ iscsi_ctask_copy(struct iscsi_tcp_conn *tcp_conn, struct iscsi_cmd_task *ctask,
{
struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
int buf_left = buf_size - (tcp_conn->data_copied + offset);
int size = min(tcp_conn->in.copy, buf_left);
unsigned size = min(tcp_conn->in.copy, buf_left);
int rc;
size = min(size, ctask->data_count);
@ -609,7 +605,7 @@ iscsi_ctask_copy(struct iscsi_tcp_conn *tcp_conn, struct iscsi_cmd_task *ctask,
size, tcp_conn->in.offset, tcp_conn->in.copied);
BUG_ON(size <= 0);
BUG_ON(tcp_ctask->sent + size > ctask->total_length);
BUG_ON(tcp_ctask->sent + size > scsi_bufflen(ctask->sc));
rc = skb_copy_bits(tcp_conn->in.skb, tcp_conn->in.offset,
(char*)buf + (offset + tcp_conn->data_copied), size);
@ -707,25 +703,8 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn)
BUG_ON((void*)ctask != sc->SCp.ptr);
/*
* copying Data-In into the Scsi_Cmnd
*/
if (!sc->use_sg) {
i = ctask->data_count;
rc = iscsi_ctask_copy(tcp_conn, ctask, sc->request_buffer,
sc->request_bufflen,
tcp_ctask->data_offset);
if (rc == -EAGAIN)
return rc;
if (conn->datadgst_en)
iscsi_recv_digest_update(tcp_conn, sc->request_buffer,
i);
rc = 0;
goto done;
}
offset = tcp_ctask->data_offset;
sg = sc->request_buffer;
sg = scsi_sglist(sc);
if (tcp_ctask->data_offset)
for (i = 0; i < tcp_ctask->sg_count; i++)
@ -734,7 +713,7 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn)
if (offset < 0)
offset = 0;
for (i = tcp_ctask->sg_count; i < sc->use_sg; i++) {
for (i = tcp_ctask->sg_count; i < scsi_sg_count(sc); i++) {
char *dest;
dest = kmap_atomic(sg[i].page, KM_SOFTIRQ0);
@ -779,7 +758,6 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn)
}
BUG_ON(ctask->data_count);
done:
/* check for non-exceptional status */
if (tcp_conn->in.hdr->flags & ISCSI_FLAG_DATA_STATUS) {
debug_scsi("done [sc %lx res %d itt 0x%x flags 0x%x]\n",
@ -895,11 +873,27 @@ more:
}
}
if (tcp_conn->in_progress == IN_PROGRESS_DDIGEST_RECV) {
if (tcp_conn->in_progress == IN_PROGRESS_DDIGEST_RECV &&
tcp_conn->in.copy) {
uint32_t recv_digest;
debug_tcp("extra data_recv offset %d copy %d\n",
tcp_conn->in.offset, tcp_conn->in.copy);
if (!tcp_conn->data_copied) {
if (tcp_conn->in.padding) {
debug_tcp("padding -> %d\n",
tcp_conn->in.padding);
memset(pad, 0, tcp_conn->in.padding);
sg_init_one(&sg, pad, tcp_conn->in.padding);
crypto_hash_update(&tcp_conn->rx_hash,
&sg, sg.length);
}
crypto_hash_final(&tcp_conn->rx_hash,
(u8 *) &tcp_conn->in.datadgst);
debug_tcp("rx digest 0x%x\n", tcp_conn->in.datadgst);
}
rc = iscsi_tcp_copy(conn, sizeof(uint32_t));
if (rc) {
if (rc == -EAGAIN)
@ -924,8 +918,7 @@ more:
}
if (tcp_conn->in_progress == IN_PROGRESS_DATA_RECV &&
tcp_conn->in.copy) {
debug_tcp("data_recv offset %d copy %d\n",
tcp_conn->in.offset, tcp_conn->in.copy);
@ -936,24 +929,32 @@ more:
iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
return 0;
}
tcp_conn->in.copy -= tcp_conn->in.padding;
tcp_conn->in.offset += tcp_conn->in.padding;
if (conn->datadgst_en) {
if (tcp_conn->in.padding) {
debug_tcp("padding -> %d\n",
tcp_conn->in.padding);
memset(pad, 0, tcp_conn->in.padding);
sg_init_one(&sg, pad, tcp_conn->in.padding);
crypto_hash_update(&tcp_conn->rx_hash,
&sg, sg.length);
}
crypto_hash_final(&tcp_conn->rx_hash,
(u8 *) &tcp_conn->in.datadgst);
debug_tcp("rx digest 0x%x\n", tcp_conn->in.datadgst);
if (tcp_conn->in.padding)
tcp_conn->in_progress = IN_PROGRESS_PAD_RECV;
else if (conn->datadgst_en)
tcp_conn->in_progress = IN_PROGRESS_DDIGEST_RECV;
tcp_conn->data_copied = 0;
} else
else
tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
tcp_conn->data_copied = 0;
}
if (tcp_conn->in_progress == IN_PROGRESS_PAD_RECV &&
tcp_conn->in.copy) {
int copylen = min(tcp_conn->in.padding - tcp_conn->data_copied,
tcp_conn->in.copy);
tcp_conn->in.copy -= copylen;
tcp_conn->in.offset += copylen;
tcp_conn->data_copied += copylen;
if (tcp_conn->data_copied != tcp_conn->in.padding)
tcp_conn->in_progress = IN_PROGRESS_PAD_RECV;
else if (conn->datadgst_en)
tcp_conn->in_progress = IN_PROGRESS_DDIGEST_RECV;
else
tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
tcp_conn->data_copied = 0;
}
debug_tcp("f, processed %d from out of %d padding %d\n",
@ -1215,7 +1216,6 @@ iscsi_solicit_data_cont(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
struct iscsi_r2t_info *r2t, int left)
{
struct iscsi_data *hdr;
struct scsi_cmnd *sc = ctask->sc;
int new_offset;
hdr = &r2t->dtask.hdr;
@ -1245,15 +1245,8 @@ iscsi_solicit_data_cont(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask,
if (iscsi_buf_left(&r2t->sendbuf))
return;
if (sc->use_sg) {
iscsi_buf_init_sg(&r2t->sendbuf, r2t->sg);
r2t->sg += 1;
} else {
iscsi_buf_init_iov(&r2t->sendbuf,
(char*)sc->request_buffer + new_offset,
r2t->data_count);
r2t->sg = NULL;
}
iscsi_buf_init_sg(&r2t->sendbuf, r2t->sg);
r2t->sg += 1;
}
static void iscsi_set_padding(struct iscsi_tcp_cmd_task *tcp_ctask,
@ -1277,41 +1270,10 @@ static void iscsi_set_padding(struct iscsi_tcp_cmd_task *tcp_ctask,
static void
iscsi_tcp_cmd_init(struct iscsi_cmd_task *ctask)
{
struct scsi_cmnd *sc = ctask->sc;
struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
BUG_ON(__kfifo_len(tcp_ctask->r2tqueue));
tcp_ctask->sent = 0;
tcp_ctask->sg_count = 0;
if (sc->sc_data_direction == DMA_TO_DEVICE) {
tcp_ctask->xmstate = XMSTATE_W_HDR;
tcp_ctask->exp_r2tsn = 0;
BUG_ON(ctask->total_length == 0);
if (sc->use_sg) {
struct scatterlist *sg = sc->request_buffer;
iscsi_buf_init_sg(&tcp_ctask->sendbuf, sg);
tcp_ctask->sg = sg + 1;
tcp_ctask->bad_sg = sg + sc->use_sg;
} else {
iscsi_buf_init_iov(&tcp_ctask->sendbuf,
sc->request_buffer,
sc->request_bufflen);
tcp_ctask->sg = NULL;
tcp_ctask->bad_sg = NULL;
}
debug_scsi("cmd [itt 0x%x total %d imm_data %d "
"unsol count %d, unsol offset %d]\n",
ctask->itt, ctask->total_length, ctask->imm_count,
ctask->unsol_count, ctask->unsol_offset);
} else
tcp_ctask->xmstate = XMSTATE_R_HDR;
iscsi_buf_init_iov(&tcp_ctask->headbuf, (char*)ctask->hdr,
sizeof(struct iscsi_hdr));
tcp_ctask->xmstate = XMSTATE_CMD_HDR_INIT;
}
/**
@ -1324,9 +1286,11 @@ iscsi_tcp_cmd_init(struct iscsi_cmd_task *ctask)
* call it again later, or recover. '0' return code means successful
* xmit.
*
* Management xmit state machine consists of two states:
* IN_PROGRESS_IMM_HEAD - PDU Header xmit in progress
* IN_PROGRESS_IMM_DATA - PDU Data xmit in progress
* Management xmit state machine consists of these states:
* XMSTATE_IMM_HDR_INIT - calculate digest of PDU Header
* XMSTATE_IMM_HDR - PDU Header xmit in progress
* XMSTATE_IMM_DATA - PDU Data xmit in progress
* XMSTATE_IDLE - management PDU is done
**/
static int
iscsi_tcp_mtask_xmit(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
@ -1337,23 +1301,34 @@ iscsi_tcp_mtask_xmit(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
debug_scsi("mtask deq [cid %d state %x itt 0x%x]\n",
conn->id, tcp_mtask->xmstate, mtask->itt);
if (tcp_mtask->xmstate & XMSTATE_IMM_HDR) {
tcp_mtask->xmstate &= ~XMSTATE_IMM_HDR;
if (mtask->data_count)
if (tcp_mtask->xmstate & XMSTATE_IMM_HDR_INIT) {
iscsi_buf_init_iov(&tcp_mtask->headbuf, (char*)mtask->hdr,
sizeof(struct iscsi_hdr));
if (mtask->data_count) {
tcp_mtask->xmstate |= XMSTATE_IMM_DATA;
iscsi_buf_init_iov(&tcp_mtask->sendbuf,
(char*)mtask->data,
mtask->data_count);
}
if (conn->c_stage != ISCSI_CONN_INITIAL_STAGE &&
conn->stop_stage != STOP_CONN_RECOVER &&
conn->hdrdgst_en)
iscsi_hdr_digest(conn, &tcp_mtask->headbuf,
(u8*)tcp_mtask->hdrext);
tcp_mtask->sent = 0;
tcp_mtask->xmstate &= ~XMSTATE_IMM_HDR_INIT;
tcp_mtask->xmstate |= XMSTATE_IMM_HDR;
}
if (tcp_mtask->xmstate & XMSTATE_IMM_HDR) {
rc = iscsi_sendhdr(conn, &tcp_mtask->headbuf,
mtask->data_count);
if (rc) {
tcp_mtask->xmstate |= XMSTATE_IMM_HDR;
if (mtask->data_count)
tcp_mtask->xmstate &= ~XMSTATE_IMM_DATA;
if (rc)
return rc;
}
tcp_mtask->xmstate &= ~XMSTATE_IMM_HDR;
}
if (tcp_mtask->xmstate & XMSTATE_IMM_DATA) {
@ -1387,55 +1362,67 @@ iscsi_tcp_mtask_xmit(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
return 0;
}
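/*
 * Editor's sketch, not part of the commit: the resumable, flag-driven xmit
 * pattern the XMSTATE_* bits implement above. send_step() "blocks" when its
 * budget runs out to show that re-entering resumes at the recorded stage.
 * All names here are illustrative, not kernel APIs.
 */
#include <stdio.h>

#define ST_HDR_INIT 0x1
#define ST_HDR_XMIT 0x2
#define ST_DATA     0x4

static int budget;		/* bytes the pretend "socket" accepts per call */

static int send_step(const char *what)
{
	if (budget-- <= 0) {
		printf("  %s: would block (-EAGAIN)\n", what);
		return -1;
	}
	printf("  %s: sent\n", what);
	return 0;
}

static int xmit(unsigned int *state)
{
	if (*state & ST_HDR_INIT) {
		/* one-time setup: build header, compute digest */
		*state &= ~ST_HDR_INIT;
		*state |= ST_HDR_XMIT;
	}
	if (*state & ST_HDR_XMIT) {
		if (send_step("header"))
			return -1;	/* state untouched: resume here */
		*state &= ~ST_HDR_XMIT;
		*state |= ST_DATA;
	}
	if (*state & ST_DATA) {
		if (send_step("data"))
			return -1;
		*state &= ~ST_DATA;
	}
	return 0;
}

int main(void)
{
	unsigned int state = ST_HDR_INIT;

	budget = 1;			/* first pass sends only the header */
	while (xmit(&state))
		budget = 2;		/* "socket" drains; try again */
	printf("done, state=%#x\n", state);
	return 0;
}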
static inline int
iscsi_send_read_hdr(struct iscsi_conn *conn,
struct iscsi_tcp_cmd_task *tcp_ctask)
{
int rc;
tcp_ctask->xmstate &= ~XMSTATE_R_HDR;
if (conn->hdrdgst_en)
iscsi_hdr_digest(conn, &tcp_ctask->headbuf,
(u8*)tcp_ctask->hdrext);
rc = iscsi_sendhdr(conn, &tcp_ctask->headbuf, 0);
if (!rc) {
BUG_ON(tcp_ctask->xmstate != XMSTATE_IDLE);
return 0; /* wait for Data-In */
}
tcp_ctask->xmstate |= XMSTATE_R_HDR;
return rc;
}
static inline int
iscsi_send_write_hdr(struct iscsi_conn *conn,
struct iscsi_cmd_task *ctask)
static int
iscsi_send_cmd_hdr(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
{
struct scsi_cmnd *sc = ctask->sc;
struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data;
int rc;
int rc = 0;
tcp_ctask->xmstate &= ~XMSTATE_W_HDR;
if (conn->hdrdgst_en)
iscsi_hdr_digest(conn, &tcp_ctask->headbuf,
(u8*)tcp_ctask->hdrext);
rc = iscsi_sendhdr(conn, &tcp_ctask->headbuf, ctask->imm_count);
if (rc) {
tcp_ctask->xmstate |= XMSTATE_W_HDR;
return rc;
}
if (tcp_ctask->xmstate & XMSTATE_CMD_HDR_INIT) {
tcp_ctask->sent = 0;
tcp_ctask->sg_count = 0;
tcp_ctask->exp_datasn = 0;
if (ctask->imm_count) {
tcp_ctask->xmstate |= XMSTATE_IMM_DATA;
iscsi_set_padding(tcp_ctask, ctask->imm_count);
if (sc->sc_data_direction == DMA_TO_DEVICE) {
struct scatterlist *sg = scsi_sglist(sc);
if (ctask->conn->datadgst_en) {
iscsi_data_digest_init(ctask->conn->dd_data, tcp_ctask);
tcp_ctask->immdigest = 0;
iscsi_buf_init_sg(&tcp_ctask->sendbuf, sg);
tcp_ctask->sg = sg + 1;
tcp_ctask->bad_sg = sg + scsi_sg_count(sc);
debug_scsi("cmd [itt 0x%x total %d imm_data %d "
"unsol count %d, unsol offset %d]\n",
ctask->itt, scsi_bufflen(sc),
ctask->imm_count, ctask->unsol_count,
ctask->unsol_offset);
}
iscsi_buf_init_iov(&tcp_ctask->headbuf, (char*)ctask->hdr,
sizeof(struct iscsi_hdr));
if (conn->hdrdgst_en)
iscsi_hdr_digest(conn, &tcp_ctask->headbuf,
(u8*)tcp_ctask->hdrext);
tcp_ctask->xmstate &= ~XMSTATE_CMD_HDR_INIT;
tcp_ctask->xmstate |= XMSTATE_CMD_HDR_XMIT;
}
if (ctask->unsol_count)
tcp_ctask->xmstate |= XMSTATE_UNS_HDR | XMSTATE_UNS_INIT;
return 0;
if (tcp_ctask->xmstate & XMSTATE_CMD_HDR_XMIT) {
rc = iscsi_sendhdr(conn, &tcp_ctask->headbuf, ctask->imm_count);
if (rc)
return rc;
tcp_ctask->xmstate &= ~XMSTATE_CMD_HDR_XMIT;
if (sc->sc_data_direction != DMA_TO_DEVICE)
return 0;
if (ctask->imm_count) {
tcp_ctask->xmstate |= XMSTATE_IMM_DATA;
iscsi_set_padding(tcp_ctask, ctask->imm_count);
if (ctask->conn->datadgst_en) {
iscsi_data_digest_init(ctask->conn->dd_data,
tcp_ctask);
tcp_ctask->immdigest = 0;
}
}
if (ctask->unsol_count)
tcp_ctask->xmstate |=
XMSTATE_UNS_HDR | XMSTATE_UNS_INIT;
}
return rc;
}
static int
@ -1624,9 +1611,7 @@ static int iscsi_send_sol_pdu(struct iscsi_conn *conn,
struct iscsi_data_task *dtask;
int left, rc;
if (tcp_ctask->xmstate & XMSTATE_SOL_HDR) {
tcp_ctask->xmstate &= ~XMSTATE_SOL_HDR;
tcp_ctask->xmstate |= XMSTATE_SOL_DATA;
if (tcp_ctask->xmstate & XMSTATE_SOL_HDR_INIT) {
if (!tcp_ctask->r2t) {
spin_lock_bh(&session->lock);
__kfifo_get(tcp_ctask->r2tqueue, (void*)&tcp_ctask->r2t,
@ -1640,12 +1625,19 @@ send_hdr:
if (conn->hdrdgst_en)
iscsi_hdr_digest(conn, &r2t->headbuf,
(u8*)dtask->hdrext);
tcp_ctask->xmstate &= ~XMSTATE_SOL_HDR_INIT;
tcp_ctask->xmstate |= XMSTATE_SOL_HDR;
}
if (tcp_ctask->xmstate & XMSTATE_SOL_HDR) {
r2t = tcp_ctask->r2t;
dtask = &r2t->dtask;
rc = iscsi_sendhdr(conn, &r2t->headbuf, r2t->data_count);
if (rc) {
tcp_ctask->xmstate &= ~XMSTATE_SOL_DATA;
tcp_ctask->xmstate |= XMSTATE_SOL_HDR;
if (rc)
return rc;
}
tcp_ctask->xmstate &= ~XMSTATE_SOL_HDR;
tcp_ctask->xmstate |= XMSTATE_SOL_DATA;
if (conn->datadgst_en) {
iscsi_data_digest_init(conn->dd_data, tcp_ctask);
@ -1677,8 +1669,6 @@ send_hdr:
left = r2t->data_length - r2t->sent;
if (left) {
iscsi_solicit_data_cont(conn, ctask, r2t, left);
tcp_ctask->xmstate |= XMSTATE_SOL_DATA;
tcp_ctask->xmstate &= ~XMSTATE_SOL_HDR;
goto send_hdr;
}
@ -1693,8 +1683,6 @@ send_hdr:
if (__kfifo_get(tcp_ctask->r2tqueue, (void*)&r2t,
sizeof(void*))) {
tcp_ctask->r2t = r2t;
tcp_ctask->xmstate |= XMSTATE_SOL_DATA;
tcp_ctask->xmstate &= ~XMSTATE_SOL_HDR;
spin_unlock_bh(&session->lock);
goto send_hdr;
}
@ -1703,6 +1691,46 @@ send_hdr:
return 0;
}
/**
* iscsi_tcp_ctask_xmit - xmit normal PDU task
* @conn: iscsi connection
* @ctask: iscsi command task
*
* Notes:
* The function can return -EAGAIN in which case caller must
* call it again later, or recover. '0' return code means successful
* xmit.
* The function is divided into logical helpers (above) for the different
* xmit stages.
*
*iscsi_send_cmd_hdr()
* XMSTATE_CMD_HDR_INIT - prepare Header and Data buffers Calculate
* Header Digest
* XMSTATE_CMD_HDR_XMIT - Transmit header in progress
*
*iscsi_send_padding
* XMSTATE_W_PAD - Prepare and send padding
* XMSTATE_W_RESEND_PAD - retry sending padding
*
*iscsi_send_digest
* XMSTATE_W_RESEND_DATA_DIGEST - Finalize and send Data Digest
* XMSTATE_W_RESEND_DATA_DIGEST - retry sending digest
*
*iscsi_send_unsol_hdr
* XMSTATE_UNS_INIT - prepare unsolicited data header and digest
* XMSTATE_UNS_HDR - send unsolicited data header
*
*iscsi_send_unsol_pdu
* XMSTATE_UNS_DATA - unsolicited data xmit in progress
*
*iscsi_send_sol_pdu
* XMSTATE_SOL_HDR_INIT - solicit data header and digest initialize
* XMSTATE_SOL_HDR - send solicit header
* XMSTATE_SOL_DATA - send solicit data
*
*iscsi_tcp_ctask_xmit
* XMSTATE_IMM_DATA - xmit management data (??)
**/
static int
iscsi_tcp_ctask_xmit(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
{
@ -1712,20 +1740,11 @@ iscsi_tcp_ctask_xmit(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask)
debug_scsi("ctask deq [cid %d xmstate %x itt 0x%x]\n",
conn->id, tcp_ctask->xmstate, ctask->itt);
/*
* serialize with TMF AbortTask
*/
if (ctask->mtask)
rc = iscsi_send_cmd_hdr(conn, ctask);
if (rc)
return rc;
if (tcp_ctask->xmstate & XMSTATE_R_HDR)
return iscsi_send_read_hdr(conn, tcp_ctask);
if (tcp_ctask->xmstate & XMSTATE_W_HDR) {
rc = iscsi_send_write_hdr(conn, ctask);
if (rc)
return rc;
}
if (ctask->sc->sc_data_direction != DMA_TO_DEVICE)
return 0;
if (tcp_ctask->xmstate & XMSTATE_IMM_DATA) {
rc = iscsi_send_data(ctask, &tcp_ctask->sendbuf, &tcp_ctask->sg,
@ -1810,18 +1829,22 @@ tcp_conn_alloc_fail:
static void
iscsi_tcp_release_conn(struct iscsi_conn *conn)
{
struct iscsi_session *session = conn->session;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct socket *sock = tcp_conn->sock;
if (!tcp_conn->sock)
if (!sock)
return;
sock_hold(tcp_conn->sock->sk);
sock_hold(sock->sk);
iscsi_conn_restore_callbacks(tcp_conn);
sock_put(tcp_conn->sock->sk);
sock_put(sock->sk);
sock_release(tcp_conn->sock);
spin_lock_bh(&session->lock);
tcp_conn->sock = NULL;
conn->recv_lock = NULL;
spin_unlock_bh(&session->lock);
sockfd_put(sock);
}
static void
@ -1852,6 +1875,46 @@ iscsi_tcp_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
tcp_conn->hdr_size = sizeof(struct iscsi_hdr);
}
static int iscsi_tcp_get_addr(struct iscsi_conn *conn, struct socket *sock,
char *buf, int *port,
int (*getname)(struct socket *, struct sockaddr *,
int *addrlen))
{
struct sockaddr_storage *addr;
struct sockaddr_in6 *sin6;
struct sockaddr_in *sin;
int rc = 0, len;
addr = kmalloc(sizeof(*addr), GFP_KERNEL);
if (!addr)
return -ENOMEM;
if (getname(sock, (struct sockaddr *) addr, &len)) {
rc = -ENODEV;
goto free_addr;
}
switch (addr->ss_family) {
case AF_INET:
sin = (struct sockaddr_in *)addr;
spin_lock_bh(&conn->session->lock);
sprintf(buf, NIPQUAD_FMT, NIPQUAD(sin->sin_addr.s_addr));
*port = be16_to_cpu(sin->sin_port);
spin_unlock_bh(&conn->session->lock);
break;
case AF_INET6:
sin6 = (struct sockaddr_in6 *)addr;
spin_lock_bh(&conn->session->lock);
sprintf(buf, NIP6_FMT, NIP6(sin6->sin6_addr));
*port = be16_to_cpu(sin6->sin6_port);
spin_unlock_bh(&conn->session->lock);
break;
}
free_addr:
kfree(addr);
return rc;
}
static int
iscsi_tcp_conn_bind(struct iscsi_cls_session *cls_session,
struct iscsi_cls_conn *cls_conn, uint64_t transport_eph,
@ -1869,10 +1932,24 @@ iscsi_tcp_conn_bind(struct iscsi_cls_session *cls_session,
printk(KERN_ERR "iscsi_tcp: sockfd_lookup failed %d\n", err);
return -EEXIST;
}
/*
* Copy these values now, because if the session is dropped,
* userspace may still want to query them, and they will also
* be used for the reconnect.
*/
err = iscsi_tcp_get_addr(conn, sock, conn->portal_address,
&conn->portal_port, kernel_getpeername);
if (err)
goto free_socket;
err = iscsi_tcp_get_addr(conn, sock, conn->local_address,
&conn->local_port, kernel_getsockname);
if (err)
goto free_socket;
err = iscsi_conn_bind(cls_session, cls_conn, is_leading);
if (err)
return err;
goto free_socket;
/* bind iSCSI connection and socket */
tcp_conn->sock = sock;
@ -1896,25 +1973,19 @@ iscsi_tcp_conn_bind(struct iscsi_cls_session *cls_session,
* set receive state machine into initial state
*/
tcp_conn->in_progress = IN_PROGRESS_WAIT_HEADER;
return 0;
free_socket:
sockfd_put(sock);
return err;
}
/* called with host lock */
static void
iscsi_tcp_mgmt_init(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask,
char *data, uint32_t data_size)
iscsi_tcp_mgmt_init(struct iscsi_conn *conn, struct iscsi_mgmt_task *mtask)
{
struct iscsi_tcp_mgmt_task *tcp_mtask = mtask->dd_data;
iscsi_buf_init_iov(&tcp_mtask->headbuf, (char*)mtask->hdr,
sizeof(struct iscsi_hdr));
tcp_mtask->xmstate = XMSTATE_IMM_HDR;
tcp_mtask->sent = 0;
if (mtask->data_count)
iscsi_buf_init_iov(&tcp_mtask->sendbuf, (char*)mtask->data,
mtask->data_count);
tcp_mtask->xmstate = XMSTATE_IMM_HDR_INIT;
}
static int
@ -2026,41 +2097,18 @@ iscsi_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
enum iscsi_param param, char *buf)
{
struct iscsi_conn *conn = cls_conn->dd_data;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct inet_sock *inet;
struct ipv6_pinfo *np;
struct sock *sk;
int len;
switch(param) {
case ISCSI_PARAM_CONN_PORT:
mutex_lock(&conn->xmitmutex);
if (!tcp_conn->sock) {
mutex_unlock(&conn->xmitmutex);
return -EINVAL;
}
inet = inet_sk(tcp_conn->sock->sk);
len = sprintf(buf, "%hu\n", be16_to_cpu(inet->dport));
mutex_unlock(&conn->xmitmutex);
spin_lock_bh(&conn->session->lock);
len = sprintf(buf, "%hu\n", conn->portal_port);
spin_unlock_bh(&conn->session->lock);
break;
case ISCSI_PARAM_CONN_ADDRESS:
mutex_lock(&conn->xmitmutex);
if (!tcp_conn->sock) {
mutex_unlock(&conn->xmitmutex);
return -EINVAL;
}
sk = tcp_conn->sock->sk;
if (sk->sk_family == PF_INET) {
inet = inet_sk(sk);
len = sprintf(buf, NIPQUAD_FMT "\n",
NIPQUAD(inet->daddr));
} else {
np = inet6_sk(sk);
len = sprintf(buf, NIP6_FMT "\n", NIP6(np->daddr));
}
mutex_unlock(&conn->xmitmutex);
spin_lock_bh(&conn->session->lock);
len = sprintf(buf, "%s\n", conn->portal_address);
spin_unlock_bh(&conn->session->lock);
break;
default:
return iscsi_conn_get_param(cls_conn, param, buf);
@ -2069,6 +2117,29 @@ iscsi_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
return len;
}
static int
iscsi_tcp_host_get_param(struct Scsi_Host *shost, enum iscsi_host_param param,
char *buf)
{
struct iscsi_session *session = iscsi_hostdata(shost->hostdata);
int len;
switch (param) {
case ISCSI_HOST_PARAM_IPADDRESS:
spin_lock_bh(&session->lock);
if (!session->leadconn)
len = -ENODEV;
else
len = sprintf(buf, "%s\n",
session->leadconn->local_address);
spin_unlock_bh(&session->lock);
break;
default:
return iscsi_host_get_param(shost, param, buf);
}
return len;
}
static void
iscsi_conn_get_stats(struct iscsi_cls_conn *cls_conn, struct iscsi_stats *stats)
{
@ -2096,6 +2167,7 @@ iscsi_conn_get_stats(struct iscsi_cls_conn *cls_conn, struct iscsi_stats *stats)
static struct iscsi_cls_session *
iscsi_tcp_session_create(struct iscsi_transport *iscsit,
struct scsi_transport_template *scsit,
uint16_t cmds_max, uint16_t qdepth,
uint32_t initial_cmdsn, uint32_t *hostno)
{
struct iscsi_cls_session *cls_session;
@ -2103,7 +2175,7 @@ iscsi_tcp_session_create(struct iscsi_transport *iscsit,
uint32_t hn;
int cmd_i;
cls_session = iscsi_session_setup(iscsit, scsit,
cls_session = iscsi_session_setup(iscsit, scsit, cmds_max, qdepth,
sizeof(struct iscsi_tcp_cmd_task),
sizeof(struct iscsi_tcp_mgmt_task),
initial_cmdsn, &hn);
@ -2142,17 +2214,24 @@ static void iscsi_tcp_session_destroy(struct iscsi_cls_session *cls_session)
iscsi_session_teardown(cls_session);
}
static int iscsi_tcp_slave_configure(struct scsi_device *sdev)
{
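/*
 * Editor's note, an assumption not spelled out in the patch: a DMA
 * alignment mask of 0 lifts the block layer's default 511-byte mask on
 * user buffers; since iscsi_tcp moves data with the CPU over a socket
 * rather than by DMA, buffers at any byte alignment are acceptable.
 */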
blk_queue_dma_alignment(sdev->request_queue, 0);
return 0;
}
static struct scsi_host_template iscsi_sht = {
.name = "iSCSI Initiator over TCP/IP",
.queuecommand = iscsi_queuecommand,
.change_queue_depth = iscsi_change_queue_depth,
.can_queue = ISCSI_XMIT_CMDS_MAX - 1,
.can_queue = ISCSI_DEF_XMIT_CMDS_MAX - 1,
.sg_tablesize = ISCSI_SG_TABLESIZE,
.max_sectors = 0xFFFF,
.cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
.eh_abort_handler = iscsi_eh_abort,
.eh_host_reset_handler = iscsi_eh_host_reset,
.use_clustering = DISABLE_CLUSTERING,
.slave_configure = iscsi_tcp_slave_configure,
.proc_name = "iscsi_tcp",
.this_id = -1,
};
@ -2179,8 +2258,12 @@ static struct iscsi_transport iscsi_tcp_transport = {
ISCSI_EXP_STATSN |
ISCSI_PERSISTENT_PORT |
ISCSI_PERSISTENT_ADDRESS |
ISCSI_TARGET_NAME |
ISCSI_TPGT,
ISCSI_TARGET_NAME | ISCSI_TPGT |
ISCSI_USERNAME | ISCSI_PASSWORD |
ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN,
.host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS |
ISCSI_HOST_INITIATOR_NAME |
ISCSI_HOST_NETDEV_NAME,
.host_template = &iscsi_sht,
.conndata_size = sizeof(struct iscsi_conn),
.max_conn = 1,
@ -2197,6 +2280,9 @@ static struct iscsi_transport iscsi_tcp_transport = {
.get_session_param = iscsi_session_get_param,
.start_conn = iscsi_conn_start,
.stop_conn = iscsi_tcp_conn_stop,
/* iscsi host params */
.get_host_param = iscsi_tcp_host_get_param,
.set_host_param = iscsi_host_set_param,
/* IO */
.send_pdu = iscsi_conn_send_pdu,
.get_stats = iscsi_conn_get_stats,

View File

@ -29,11 +29,12 @@
#define IN_PROGRESS_HEADER_GATHER 0x1
#define IN_PROGRESS_DATA_RECV 0x2
#define IN_PROGRESS_DDIGEST_RECV 0x3
#define IN_PROGRESS_PAD_RECV 0x4
/* xmit state machine */
#define XMSTATE_IDLE 0x0
#define XMSTATE_R_HDR 0x1
#define XMSTATE_W_HDR 0x2
#define XMSTATE_CMD_HDR_INIT 0x1
#define XMSTATE_CMD_HDR_XMIT 0x2
#define XMSTATE_IMM_HDR 0x4
#define XMSTATE_IMM_DATA 0x8
#define XMSTATE_UNS_INIT 0x10
@ -44,6 +45,8 @@
#define XMSTATE_W_PAD 0x200
#define XMSTATE_W_RESEND_PAD 0x400
#define XMSTATE_W_RESEND_DATA_DIGEST 0x800
#define XMSTATE_IMM_HDR_INIT 0x1000
#define XMSTATE_SOL_HDR_INIT 0x2000
#define ISCSI_PAD_LEN 4
#define ISCSI_SG_TABLESIZE SG_ALL
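/*
 * Editor's sketch, not part of the commit: iSCSI PDU data segments are
 * padded to a multiple of ISCSI_PAD_LEN (4) bytes; this is the usual mask
 * arithmetic for computing the pad count. Compiles as userspace C.
 */
#include <stdio.h>

#define ISCSI_PAD_LEN 4

static unsigned int pad_bytes(unsigned int len)
{
	return (ISCSI_PAD_LEN - (len & (ISCSI_PAD_LEN - 1))) &
	       (ISCSI_PAD_LEN - 1);
}

int main(void)
{
	unsigned int len;

	for (len = 4093; len <= 4096; len++)
		printf("len %u -> pad %u\n", len, pad_bytes(len));
	return 0;
}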
@ -152,7 +155,7 @@ struct iscsi_tcp_cmd_task {
struct scatterlist *sg; /* per-cmd SG list */
struct scatterlist *bad_sg; /* assert statement */
int sg_count; /* SG's to process */
uint32_t exp_r2tsn;
uint32_t exp_datasn; /* expected target's R2TSN/DataSN */
int data_offset;
struct iscsi_r2t_info *r2t; /* in progress R2T */
struct iscsi_queue r2tpool;

View File

@ -1,6 +1,6 @@
/* jazz_esp.c: ESP front-end for MIPS JAZZ systems.
*
* Copyright (C) 2007 Thomas Bogendörfer (tsbogend@alpha.franken.de)
*/
#include <linux/kernel.h>
@ -143,7 +143,7 @@ static int __devinit esp_jazz_probe(struct platform_device *dev)
goto fail;
host->max_id = 8;
esp = host_to_esp(host);
esp = shost_priv(host);
esp->host = host;
esp->dev = dev;

File diff suppressed because it is too large

View File

@ -76,8 +76,8 @@ static void sas_scsi_task_done(struct sas_task *task)
hs = DID_NO_CONNECT;
break;
case SAS_DATA_UNDERRUN:
sc->resid = ts->residual;
if (sc->request_bufflen - sc->resid < sc->underflow)
scsi_set_resid(sc, ts->residual);
if (scsi_bufflen(sc) - scsi_get_resid(sc) < sc->underflow)
hs = DID_ERROR;
break;
case SAS_DATA_OVERRUN:
@ -161,9 +161,9 @@ static struct sas_task *sas_create_task(struct scsi_cmnd *cmd,
task->ssp_task.task_attr = sas_scsi_get_task_attr(cmd);
memcpy(task->ssp_task.cdb, cmd->cmnd, 16);
task->scatter = cmd->request_buffer;
task->num_scatter = cmd->use_sg;
task->total_xfer_len = cmd->request_bufflen;
task->scatter = scsi_sglist(cmd);
task->num_scatter = scsi_sg_count(cmd);
task->total_xfer_len = scsi_bufflen(cmd);
task->data_dir = cmd->sc_data_direction;
task->task_done = sas_scsi_task_done;
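/*
 * Editor's sketch, not part of the commit: the shape of the data buffer
 * accessor conversion applied across this series. The "was:" fields and
 * the accessor names mirror the kernel API of this era; the structs are
 * reduced stand-ins so the example compiles on its own as userspace C.
 */
#include <stdio.h>

struct scatterlist_stub { unsigned int length; };

struct scsi_cmnd_stub {
	struct scatterlist_stub *sglist;	/* was: request_buffer */
	unsigned short sg_count;		/* was: use_sg */
	unsigned int bufflen;			/* was: request_bufflen */
	unsigned int resid;			/* was: ->resid, now set via
						 * scsi_set_resid() */
};

/* Accessor-style helpers, mirroring scsi_sglist()/scsi_sg_count()/
 * scsi_bufflen()/scsi_set_resid(). */
static struct scatterlist_stub *stub_sglist(struct scsi_cmnd_stub *sc)
{ return sc->sglist; }
static unsigned short stub_sg_count(struct scsi_cmnd_stub *sc)
{ return sc->sg_count; }
static unsigned int stub_bufflen(struct scsi_cmnd_stub *sc)
{ return sc->bufflen; }
static void stub_set_resid(struct scsi_cmnd_stub *sc, unsigned int resid)
{ sc->resid = resid; }

int main(void)
{
	struct scatterlist_stub sg[2] = { {2048}, {2048} };
	struct scsi_cmnd_stub sc = { sg, 2, 4096, 0 };

	/* LLDD code now reads through accessors only: */
	printf("segments=%u total=%u first_len=%u\n",
	       stub_sg_count(&sc), stub_bufflen(&sc),
	       stub_sglist(&sc)[0].length);
	stub_set_resid(&sc, 512);
	printf("resid=%u\n", sc.resid);
	return 0;
}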

View File

@ -1,7 +1,7 @@
#/*******************************************************************
# * This file is part of the Emulex Linux Device Driver for *
# * Fibre Channel Host Bus Adapters. *
# * Copyright (C) 2004-2005 Emulex. All rights reserved. *
# * Copyright (C) 2004-2006 Emulex. All rights reserved. *
# * EMULEX and SLI are trademarks of Emulex. *
# * www.emulex.com *
# * *
@ -27,4 +27,5 @@ endif
obj-$(CONFIG_SCSI_LPFC) := lpfc.o
lpfc-objs := lpfc_mem.o lpfc_sli.o lpfc_ct.o lpfc_els.o lpfc_hbadisc.o \
lpfc_init.o lpfc_mbox.o lpfc_nportdisc.o lpfc_scsi.o lpfc_attr.o
lpfc_init.o lpfc_mbox.o lpfc_nportdisc.o lpfc_scsi.o lpfc_attr.o \
lpfc_vport.o lpfc_debugfs.o

View File

@ -19,8 +19,9 @@
* included with this package. *
*******************************************************************/
struct lpfc_sli2_slim;
#include <scsi/scsi_host.h>
struct lpfc_sli2_slim;
#define LPFC_MAX_TARGET 256 /* max number of targets supported */
#define LPFC_MAX_DISC_THREADS 64 /* max outstanding discovery els
@ -32,6 +33,20 @@ struct lpfc_sli2_slim;
#define LPFC_IOCB_LIST_CNT 2250 /* list of IOCBs for fast-path usage. */
#define LPFC_Q_RAMP_UP_INTERVAL 120 /* lun q_depth ramp up interval */
/*
* The following time intervals are used for adjusting SCSI device
* queue depths when there is a driver resource error or a firmware
* resource error.
*/
#define QUEUE_RAMP_DOWN_INTERVAL (1 * HZ) /* 1 Second */
#define QUEUE_RAMP_UP_INTERVAL (300 * HZ) /* 5 minutes */
/* Number of exchanges reserved for discovery to complete */
#define LPFC_DISC_IOCB_BUFF_COUNT 20
#define LPFC_HB_MBOX_INTERVAL 5 /* Heart beat interval in seconds. */
#define LPFC_HB_MBOX_TIMEOUT 30 /* Heart beat timeout in seconds. */
/* Define macros for 64 bit support */
#define putPaddrLow(addr) ((uint32_t) (0xffffffff & (u64)(addr)))
#define putPaddrHigh(addr) ((uint32_t) (0xffffffff & (((u64)(addr))>>32)))
@ -61,6 +76,11 @@ struct lpfc_dma_pool {
uint32_t current_count;
};
struct hbq_dmabuf {
struct lpfc_dmabuf dbuf;
uint32_t tag;
};
/* Priority bit. Set value to exceed low water mark in lpfc_mem. */
#define MEM_PRI 0x100
@ -90,6 +110,29 @@ typedef struct lpfc_vpd {
uint32_t sli2FwRev;
uint8_t sli2FwName[16];
} rev;
struct {
#ifdef __BIG_ENDIAN_BITFIELD
uint32_t rsvd2 :24; /* Reserved */
uint32_t cmv : 1; /* Configure Max VPIs */
uint32_t ccrp : 1; /* Config Command Ring Polling */
uint32_t csah : 1; /* Configure Synchronous Abort Handling */
uint32_t chbs : 1; /* Configure Host Backing store */
uint32_t cinb : 1; /* Enable Interrupt Notification Block */
uint32_t cerbm : 1; /* Configure Enhanced Receive Buf Mgmt */
uint32_t cmx : 1; /* Configure Max XRIs */
uint32_t cmr : 1; /* Configure Max RPIs */
#else /* __LITTLE_ENDIAN */
uint32_t cmr : 1; /* Configure Max RPIs */
uint32_t cmx : 1; /* Configure Max XRIs */
uint32_t cerbm : 1; /* Configure Enhanced Receive Buf Mgmt */
uint32_t cinb : 1; /* Enable Interrupt Notification Block */
uint32_t chbs : 1; /* Configure Host Backing store */
uint32_t csah : 1; /* Configure Synchronous Abort Handling */
uint32_t ccrp : 1; /* Config Command Ring Polling */
uint32_t cmv : 1; /* Configure Max VPIs */
uint32_t rsvd2 :24; /* Reserved */
#endif
} sli3Feat;
} lpfc_vpd_t;
struct lpfc_scsi_buf;
@ -122,6 +165,7 @@ struct lpfc_stats {
uint32_t elsRcvRPS;
uint32_t elsRcvRPL;
uint32_t elsXmitFLOGI;
uint32_t elsXmitFDISC;
uint32_t elsXmitPLOGI;
uint32_t elsXmitPRLI;
uint32_t elsXmitADISC;
@ -165,96 +209,71 @@ struct lpfc_sysfs_mbox {
struct lpfcMboxq * mbox;
};
struct lpfc_hba {
struct lpfc_sli sli;
struct lpfc_sli2_slim *slim2p;
dma_addr_t slim2p_mapping;
uint16_t pci_cfg_value;
struct lpfc_hba;
int32_t hba_state;
#define LPFC_STATE_UNKNOWN 0 /* HBA state is unknown */
#define LPFC_WARM_START 1 /* HBA state after selective reset */
#define LPFC_INIT_START 2 /* Initial state after board reset */
#define LPFC_INIT_MBX_CMDS 3 /* Initialize HBA with mbox commands */
#define LPFC_LINK_DOWN 4 /* HBA initialized, link is down */
#define LPFC_LINK_UP 5 /* Link is up - issue READ_LA */
#define LPFC_LOCAL_CFG_LINK 6 /* local NPORT Id configured */
#define LPFC_FLOGI 7 /* FLOGI sent to Fabric */
#define LPFC_FABRIC_CFG_LINK 8 /* Fabric assigned NPORT Id
configured */
#define LPFC_NS_REG 9 /* Register with NameServer */
#define LPFC_NS_QRY 10 /* Query NameServer for NPort ID list */
#define LPFC_BUILD_DISC_LIST 11 /* Build ADISC and PLOGI lists for
* device authentication / discovery */
#define LPFC_DISC_AUTH 12 /* Processing ADISC list */
#define LPFC_CLEAR_LA 13 /* authentication cmplt - issue
CLEAR_LA */
#define LPFC_HBA_READY 32
#define LPFC_HBA_ERROR -1
enum discovery_state {
LPFC_VPORT_UNKNOWN = 0, /* vport state is unknown */
LPFC_VPORT_FAILED = 1, /* vport has failed */
LPFC_LOCAL_CFG_LINK = 6, /* local NPORT Id configured */
LPFC_FLOGI = 7, /* FLOGI sent to Fabric */
LPFC_FDISC = 8, /* FDISC sent for vport */
LPFC_FABRIC_CFG_LINK = 9, /* Fabric assigned NPORT Id
* configured */
LPFC_NS_REG = 10, /* Register with NameServer */
LPFC_NS_QRY = 11, /* Query NameServer for NPort ID list */
LPFC_BUILD_DISC_LIST = 12, /* Build ADISC and PLOGI lists for
* device authentication / discovery */
LPFC_DISC_AUTH = 13, /* Processing ADISC list */
LPFC_VPORT_READY = 32,
};
int32_t stopped; /* HBA has not been restarted since last ERATT */
uint8_t fc_linkspeed; /* Link speed after last READ_LA */
enum hba_state {
LPFC_LINK_UNKNOWN = 0, /* HBA state is unknown */
LPFC_WARM_START = 1, /* HBA state after selective reset */
LPFC_INIT_START = 2, /* Initial state after board reset */
LPFC_INIT_MBX_CMDS = 3, /* Initialize HBA with mbox commands */
LPFC_LINK_DOWN = 4, /* HBA initialized, link is down */
LPFC_LINK_UP = 5, /* Link is up - issue READ_LA */
LPFC_CLEAR_LA = 6, /* authentication cmplt - issue
* CLEAR_LA */
LPFC_HBA_READY = 32,
LPFC_HBA_ERROR = -1
};
uint32_t fc_eventTag; /* event tag for link attention */
uint32_t fc_prli_sent; /* cntr for outstanding PRLIs */
struct lpfc_vport {
struct list_head listentry;
struct lpfc_hba *phba;
uint8_t port_type;
#define LPFC_PHYSICAL_PORT 1
#define LPFC_NPIV_PORT 2
#define LPFC_FABRIC_PORT 3
enum discovery_state port_state;
uint32_t num_disc_nodes; /*in addition to hba_state */
uint16_t vpi;
struct timer_list fc_estabtmo; /* link establishment timer */
struct timer_list fc_disctmo; /* Discovery rescue timer */
struct timer_list fc_fdmitmo; /* fdmi timer */
/* These fields used to be binfo */
struct lpfc_name fc_nodename; /* fc nodename */
struct lpfc_name fc_portname; /* fc portname */
uint32_t fc_pref_DID; /* preferred D_ID */
uint8_t fc_pref_ALPA; /* preferred AL_PA */
uint32_t fc_edtov; /* E_D_TOV timer value */
uint32_t fc_arbtov; /* ARB_TOV timer value */
uint32_t fc_ratov; /* R_A_TOV timer value */
uint32_t fc_rttov; /* R_T_TOV timer value */
uint32_t fc_altov; /* AL_TOV timer value */
uint32_t fc_crtov; /* C_R_TOV timer value */
uint32_t fc_citov; /* C_I_TOV timer value */
uint32_t fc_myDID; /* fibre channel S_ID */
uint32_t fc_prevDID; /* previous fibre channel S_ID */
struct serv_parm fc_sparam; /* buffer for our service parameters */
struct serv_parm fc_fabparam; /* fabric service parameters buffer */
uint8_t alpa_map[128]; /* AL_PA map from READ_LA */
uint8_t fc_ns_retry; /* retries for fabric nameserver */
uint32_t fc_nlp_cnt; /* outstanding NODELIST requests */
uint32_t fc_rscn_id_cnt; /* count of RSCNs payloads in list */
struct lpfc_dmabuf *fc_rscn_id_list[FC_MAX_HOLD_RSCN];
uint32_t lmt;
uint32_t fc_flag; /* FC flags */
#define FC_PT2PT 0x1 /* pt2pt with no fabric */
#define FC_PT2PT_PLOGI 0x2 /* pt2pt initiate PLOGI */
#define FC_DISC_TMO 0x4 /* Discovery timer running */
#define FC_PUBLIC_LOOP 0x8 /* Public loop */
#define FC_LBIT 0x10 /* LOGIN bit in loopinit set */
#define FC_RSCN_MODE 0x20 /* RSCN cmd rcv'ed */
#define FC_NLP_MORE 0x40 /* More node to process in node tbl */
#define FC_OFFLINE_MODE 0x80 /* Interface is offline for diag */
#define FC_FABRIC 0x100 /* We are fabric attached */
#define FC_ESTABLISH_LINK 0x200 /* Reestablish Link */
#define FC_RSCN_DISCOVERY 0x400 /* Authenticate all devices after RSCN*/
#define FC_BLOCK_MGMT_IO 0x800 /* Don't allow mgmt mbx or iocb cmds */
#define FC_LOADING 0x1000 /* HBA in process of loading drvr */
#define FC_UNLOADING 0x2000 /* HBA in process of unloading drvr */
#define FC_SCSI_SCAN_TMO 0x4000 /* scsi scan timer running */
#define FC_ABORT_DISCOVERY 0x8000 /* we want to abort discovery */
#define FC_NDISC_ACTIVE 0x10000 /* NPort discovery active */
#define FC_BYPASSED_MODE 0x20000 /* NPort is in bypassed mode */
#define FC_LOOPBACK_MODE 0x40000 /* NPort is in Loopback mode */
/* This flag is set while issuing */
/* INIT_LINK mailbox command */
#define FC_IGNORE_ERATT 0x80000 /* intr handler should ignore ERATT */
uint32_t fc_topology; /* link topology, from LINK INIT */
struct lpfc_stats fc_stat;
/* Several of these flags are HBA centric and should be moved to
* phba->link_flag (e.g. FC_PTP, FC_PUBLIC_LOOP)
*/
#define FC_PT2PT 0x1 /* pt2pt with no fabric */
#define FC_PT2PT_PLOGI 0x2 /* pt2pt initiate PLOGI */
#define FC_DISC_TMO 0x4 /* Discovery timer running */
#define FC_PUBLIC_LOOP 0x8 /* Public loop */
#define FC_LBIT 0x10 /* LOGIN bit in loopinit set */
#define FC_RSCN_MODE 0x20 /* RSCN cmd rcv'ed */
#define FC_NLP_MORE 0x40 /* More node to process in node tbl */
#define FC_OFFLINE_MODE 0x80 /* Interface is offline for diag */
#define FC_FABRIC 0x100 /* We are fabric attached */
#define FC_ESTABLISH_LINK 0x200 /* Reestablish Link */
#define FC_RSCN_DISCOVERY 0x400 /* Auth all devices after RSCN */
#define FC_SCSI_SCAN_TMO 0x4000 /* scsi scan timer running */
#define FC_ABORT_DISCOVERY 0x8000 /* we want to abort discovery */
#define FC_NDISC_ACTIVE 0x10000 /* NPort discovery active */
#define FC_BYPASSED_MODE 0x20000 /* NPort is in bypassed mode */
#define FC_RFF_NOT_SUPPORTED 0x40000 /* RFF_ID was rejected by switch */
#define FC_VPORT_NEEDS_REG_VPI 0x80000 /* Needs to have its vpi registered */
#define FC_RSCN_DEFERRED 0x100000 /* A deferred RSCN being processed */
struct list_head fc_nodes;
@ -267,10 +286,131 @@ struct lpfc_hba {
uint16_t fc_map_cnt;
uint16_t fc_npr_cnt;
uint16_t fc_unused_cnt;
struct serv_parm fc_sparam; /* buffer for our service parameters */
uint32_t fc_myDID; /* fibre channel S_ID */
uint32_t fc_prevDID; /* previous fibre channel S_ID */
int32_t stopped; /* HBA has not been restarted since last ERATT */
uint8_t fc_linkspeed; /* Link speed after last READ_LA */
uint32_t num_disc_nodes; /*in addition to hba_state */
uint32_t fc_nlp_cnt; /* outstanding NODELIST requests */
uint32_t fc_rscn_id_cnt; /* count of RSCNs payloads in list */
struct lpfc_dmabuf *fc_rscn_id_list[FC_MAX_HOLD_RSCN];
struct lpfc_name fc_nodename; /* fc nodename */
struct lpfc_name fc_portname; /* fc portname */
struct lpfc_work_evt disc_timeout_evt;
struct timer_list fc_disctmo; /* Discovery rescue timer */
uint8_t fc_ns_retry; /* retries for fabric nameserver */
uint32_t fc_prli_sent; /* cntr for outstanding PRLIs */
spinlock_t work_port_lock;
uint32_t work_port_events; /* Timeout to be handled */
#define WORKER_DISC_TMO 0x1 /* vport: Discovery timeout */
#define WORKER_ELS_TMO 0x2 /* vport: ELS timeout */
#define WORKER_FDMI_TMO 0x4 /* vport: FDMI timeout */
#define WORKER_MBOX_TMO 0x100 /* hba: MBOX timeout */
#define WORKER_HB_TMO 0x200 /* hba: Heart beat timeout */
#define WORKER_FABRIC_BLOCK_TMO 0x400 /* hba: fabric block timout */
#define WORKER_RAMP_DOWN_QUEUE 0x800 /* hba: Decrease Q depth */
#define WORKER_RAMP_UP_QUEUE 0x1000 /* hba: Increase Q depth */
struct timer_list fc_fdmitmo;
struct timer_list els_tmofunc;
int unreg_vpi_cmpl;
uint8_t load_flag;
#define FC_LOADING 0x1 /* HBA in process of loading drvr */
#define FC_UNLOADING 0x2 /* HBA in process of unloading drvr */
char *vname; /* Application assigned name */
struct fc_vport *fc_vport;
#ifdef CONFIG_LPFC_DEBUG_FS
struct dentry *debug_disc_trc;
struct dentry *debug_nodelist;
struct dentry *vport_debugfs_root;
struct lpfc_disc_trc *disc_trc;
atomic_t disc_trc_cnt;
#endif
};
struct hbq_s {
uint16_t entry_count; /* Current number of HBQ slots */
uint32_t next_hbqPutIdx; /* Index to next HBQ slot to use */
uint32_t hbqPutIdx; /* HBQ slot to use */
uint32_t local_hbqGetIdx; /* Local copy of Get index from Port */
};
#define LPFC_MAX_HBQS 16
/* this matches the position in the lpfc_hbq_defs array */
#define LPFC_ELS_HBQ 0
struct lpfc_hba {
struct lpfc_sli sli;
uint32_t sli_rev; /* SLI2 or SLI3 */
uint32_t sli3_options; /* Mask of enabled SLI3 options */
#define LPFC_SLI3_ENABLED 0x01
#define LPFC_SLI3_HBQ_ENABLED 0x02
#define LPFC_SLI3_NPIV_ENABLED 0x04
#define LPFC_SLI3_VPORT_TEARDOWN 0x08
uint32_t iocb_cmd_size;
uint32_t iocb_rsp_size;
enum hba_state link_state;
uint32_t link_flag; /* link state flags */
#define LS_LOOPBACK_MODE 0x1 /* NPort is in Loopback mode */
/* This flag is set while issuing */
/* INIT_LINK mailbox command */
#define LS_NPIV_FAB_SUPPORTED 0x2 /* Fabric supports NPIV */
#define LS_IGNORE_ERATT 0x3 /* intr handler should ignore ERATT */
struct lpfc_sli2_slim *slim2p;
struct lpfc_dmabuf hbqslimp;
dma_addr_t slim2p_mapping;
uint16_t pci_cfg_value;
uint8_t work_found;
#define LPFC_MAX_WORKER_ITERATION 4
uint8_t fc_linkspeed; /* Link speed after last READ_LA */
uint32_t fc_eventTag; /* event tag for link attention */
struct timer_list fc_estabtmo; /* link establishment timer */
/* These fields used to be binfo */
uint32_t fc_pref_DID; /* preferred D_ID */
uint8_t fc_pref_ALPA; /* preferred AL_PA */
uint32_t fc_edtov; /* E_D_TOV timer value */
uint32_t fc_arbtov; /* ARB_TOV timer value */
uint32_t fc_ratov; /* R_A_TOV timer value */
uint32_t fc_rttov; /* R_T_TOV timer value */
uint32_t fc_altov; /* AL_TOV timer value */
uint32_t fc_crtov; /* C_R_TOV timer value */
uint32_t fc_citov; /* C_I_TOV timer value */
struct serv_parm fc_fabparam; /* fabric service parameters buffer */
uint8_t alpa_map[128]; /* AL_PA map from READ_LA */
uint32_t lmt;
uint32_t fc_topology; /* link topology, from LINK INIT */
struct lpfc_stats fc_stat;
struct lpfc_nodelist fc_fcpnodev; /* nodelist entry for no device */
uint32_t nport_event_cnt; /* timestamp for nlplist entry */
uint32_t wwnn[2];
uint8_t wwnn[8];
uint8_t wwpn[8];
uint32_t RandomData[7];
uint32_t cfg_log_verbose;
@ -278,6 +418,9 @@ struct lpfc_hba {
uint32_t cfg_nodev_tmo;
uint32_t cfg_devloss_tmo;
uint32_t cfg_hba_queue_depth;
uint32_t cfg_peer_port_login;
uint32_t cfg_vport_restrict_login;
uint32_t cfg_npiv_enable;
uint32_t cfg_fcp_class;
uint32_t cfg_use_adisc;
uint32_t cfg_ack0;
@ -304,22 +447,20 @@ struct lpfc_hba {
lpfc_vpd_t vpd; /* vital product data */
struct Scsi_Host *host;
struct pci_dev *pcidev;
struct list_head work_list;
uint32_t work_ha; /* Host Attention Bits for WT */
uint32_t work_ha_mask; /* HA Bits owned by WT */
uint32_t work_hs; /* HS stored in case of ERRAT */
uint32_t work_status[2]; /* Extra status from SLIM */
uint32_t work_hba_events; /* Timeout to be handled */
#define WORKER_DISC_TMO 0x1 /* Discovery timeout */
#define WORKER_ELS_TMO 0x2 /* ELS timeout */
#define WORKER_MBOX_TMO 0x4 /* MBOX timeout */
#define WORKER_FDMI_TMO 0x8 /* FDMI timeout */
wait_queue_head_t *work_wait;
struct task_struct *worker_thread;
struct list_head hbq_buffer_list;
uint32_t hbq_count; /* Count of configured HBQs */
struct hbq_s hbqs[LPFC_MAX_HBQS]; /* local copy of hbq indicies */
unsigned long pci_bar0_map; /* Physical address for PCI BAR0 */
unsigned long pci_bar2_map; /* Physical address for PCI BAR2 */
void __iomem *slim_memmap_p; /* Kernel memory mapped address for
@ -334,6 +475,10 @@ struct lpfc_hba {
reg */
void __iomem *HCregaddr; /* virtual address for host ctl reg */
struct lpfc_hgp __iomem *host_gp; /* Host side get/put pointers */
uint32_t __iomem *hbq_put; /* Address in SLIM to HBQ put ptrs */
uint32_t *hbq_get; /* Host mem address of HBQ get ptrs */
int brd_no; /* FC board number */
char SerialNumber[32]; /* adapter Serial Number */
@ -353,7 +498,6 @@ struct lpfc_hba {
uint8_t soft_wwn_enable;
struct timer_list fcp_poll_timer;
struct timer_list els_tmofunc;
/*
* stat counters
@ -370,31 +514,69 @@ struct lpfc_hba {
uint32_t total_scsi_bufs;
struct list_head lpfc_iocb_list;
uint32_t total_iocbq_bufs;
spinlock_t hbalock;
/* pci_mem_pools */
struct pci_pool *lpfc_scsi_dma_buf_pool;
struct pci_pool *lpfc_mbuf_pool;
struct pci_pool *lpfc_hbq_pool;
struct lpfc_dma_pool lpfc_mbuf_safety_pool;
mempool_t *mbox_mem_pool;
mempool_t *nlp_mem_pool;
struct fc_host_statistics link_stats;
struct list_head port_list;
struct lpfc_vport *pport; /* physical lpfc_vport pointer */
uint16_t max_vpi; /* Maximum virtual nports */
#define LPFC_MAX_VPI 100 /* Max number of VPorts supported */
unsigned long *vpi_bmask; /* vpi allocation table */
/* Data structure used by fabric iocb scheduler */
struct list_head fabric_iocb_list;
atomic_t fabric_iocb_count;
struct timer_list fabric_block_timer;
unsigned long bit_flags;
#define FABRIC_COMANDS_BLOCKED 0
atomic_t num_rsrc_err;
atomic_t num_cmd_success;
unsigned long last_rsrc_error_time;
unsigned long last_ramp_down_time;
unsigned long last_ramp_up_time;
#ifdef CONFIG_LPFC_DEBUG_FS
struct dentry *hba_debugfs_root;
atomic_t debugfs_vport_count;
#endif
/* Fields used for heart beat. */
unsigned long last_completion_time;
struct timer_list hb_tmofunc;
uint8_t hb_outstanding;
};
static inline void
lpfc_set_loopback_flag(struct lpfc_hba *phba) {
if (phba->cfg_topology == FLAGS_LOCAL_LB)
phba->fc_flag |= FC_LOOPBACK_MODE;
else
phba->fc_flag &= ~FC_LOOPBACK_MODE;
static inline struct Scsi_Host *
lpfc_shost_from_vport(struct lpfc_vport *vport)
{
return container_of((void *) vport, struct Scsi_Host, hostdata[0]);
}
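/*
 * Editor's sketch, not part of the commit: why the container_of() above
 * recovers the Scsi_Host. The vport lives in the hostdata[] flexible
 * array at the tail of the host structure, so the two pointers convert
 * both ways. Stand-in structs; compiles as userspace C.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct host {
	int host_no;
	unsigned long hostdata[];	/* driver-private area, as in Scsi_Host */
};

struct vport { int vpi; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
	/* host and private area are one allocation, as in scsi_host_alloc() */
	struct host *shost = malloc(sizeof(*shost) + sizeof(struct vport));
	struct vport *vp;
	struct host *back;

	if (!shost)
		return 1;
	vp = (struct vport *)shost->hostdata;
	shost->host_no = 7;
	vp->vpi = 3;

	/* inverse mapping, same idea as lpfc_shost_from_vport() */
	back = container_of((void *)vp, struct host, hostdata[0]);
	printf("host_no=%d vpi=%d roundtrip_ok=%d\n",
	       back->host_no, vp->vpi, back == shost);
	free(shost);
	return 0;
}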
struct rnidrsp {
void *buf;
uint32_t uniqueid;
struct list_head list;
uint32_t data;
};
static inline void
lpfc_set_loopback_flag(struct lpfc_hba *phba)
{
if (phba->cfg_topology == FLAGS_LOCAL_LB)
phba->link_flag |= LS_LOOPBACK_MODE;
else
phba->link_flag &= ~LS_LOOPBACK_MODE;
}
static inline int
lpfc_is_link_up(struct lpfc_hba *phba)
{
return phba->link_state == LPFC_LINK_UP ||
phba->link_state == LPFC_CLEAR_LA ||
phba->link_state == LPFC_HBA_READY;
}
#define FC_REG_DUMP_EVENT 0x10 /* Register for Dump events */

File diff suppressed because it is too large

View File

@ -23,92 +23,114 @@ typedef int (*node_filter)(struct lpfc_nodelist *ndlp, void *param);
struct fc_rport;
void lpfc_dump_mem(struct lpfc_hba *, LPFC_MBOXQ_t *, uint16_t);
void lpfc_read_nv(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_heart_beat(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_read_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb,
struct lpfc_dmabuf *mp);
void lpfc_clear_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_issue_clear_la(struct lpfc_hba *phba, struct lpfc_vport *vport);
void lpfc_config_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *, int);
void lpfc_read_config(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_read_lnk_stat(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_reg_login(struct lpfc_hba *, uint32_t, uint8_t *, LPFC_MBOXQ_t *,
uint32_t);
void lpfc_unreg_login(struct lpfc_hba *, uint32_t, LPFC_MBOXQ_t *);
void lpfc_unreg_did(struct lpfc_hba *, uint32_t, LPFC_MBOXQ_t *);
int lpfc_reg_login(struct lpfc_hba *, uint16_t, uint32_t, uint8_t *,
LPFC_MBOXQ_t *, uint32_t);
void lpfc_unreg_login(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
void lpfc_unreg_did(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
void lpfc_reg_vpi(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
void lpfc_unreg_vpi(struct lpfc_hba *, uint16_t, LPFC_MBOXQ_t *);
void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
void lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove);
int lpfc_linkdown(struct lpfc_hba *);
void lpfc_mbx_cmpl_read_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_clear_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_dequeue_node(struct lpfc_hba *, struct lpfc_nodelist *);
void lpfc_nlp_set_state(struct lpfc_hba *, struct lpfc_nodelist *, int);
void lpfc_drop_node(struct lpfc_hba *, struct lpfc_nodelist *);
void lpfc_set_disctmo(struct lpfc_hba *);
int lpfc_can_disctmo(struct lpfc_hba *);
int lpfc_unreg_rpi(struct lpfc_hba *, struct lpfc_nodelist *);
void lpfc_dequeue_node(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_nlp_set_state(struct lpfc_vport *, struct lpfc_nodelist *, int);
void lpfc_drop_node(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_set_disctmo(struct lpfc_vport *);
int lpfc_can_disctmo(struct lpfc_vport *);
int lpfc_unreg_rpi(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_unreg_all_rpis(struct lpfc_vport *);
void lpfc_unreg_default_rpis(struct lpfc_vport *);
void lpfc_issue_reg_vpi(struct lpfc_hba *, struct lpfc_vport *);
int lpfc_check_sli_ndlp(struct lpfc_hba *, struct lpfc_sli_ring *,
struct lpfc_iocbq *, struct lpfc_nodelist *);
void lpfc_nlp_init(struct lpfc_hba *, struct lpfc_nodelist *, uint32_t);
struct lpfc_iocbq *, struct lpfc_nodelist *);
void lpfc_nlp_init(struct lpfc_vport *, struct lpfc_nodelist *, uint32_t);
struct lpfc_nodelist *lpfc_nlp_get(struct lpfc_nodelist *);
int lpfc_nlp_put(struct lpfc_nodelist *);
struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_hba *, uint32_t);
void lpfc_disc_list_loopmap(struct lpfc_hba *);
void lpfc_disc_start(struct lpfc_hba *);
void lpfc_disc_flush_list(struct lpfc_hba *);
struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_vport *, uint32_t);
void lpfc_disc_list_loopmap(struct lpfc_vport *);
void lpfc_disc_start(struct lpfc_vport *);
void lpfc_disc_flush_list(struct lpfc_vport *);
void lpfc_cleanup_discovery_resources(struct lpfc_vport *);
void lpfc_disc_timeout(unsigned long);
struct lpfc_nodelist *__lpfc_findnode_rpi(struct lpfc_hba * phba, uint16_t rpi);
struct lpfc_nodelist *lpfc_findnode_rpi(struct lpfc_hba * phba, uint16_t rpi);
struct lpfc_nodelist *__lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
struct lpfc_nodelist *lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
void lpfc_worker_wake_up(struct lpfc_hba *);
int lpfc_workq_post_event(struct lpfc_hba *, void *, void *, uint32_t);
int lpfc_do_work(void *);
int lpfc_disc_state_machine(struct lpfc_hba *, struct lpfc_nodelist *, void *,
int lpfc_disc_state_machine(struct lpfc_vport *, struct lpfc_nodelist *, void *,
uint32_t);
int lpfc_check_sparm(struct lpfc_hba *, struct lpfc_nodelist *,
struct serv_parm *, uint32_t);
int lpfc_els_abort(struct lpfc_hba *, struct lpfc_nodelist * ndlp);
int lpfc_els_abort_flogi(struct lpfc_hba *);
int lpfc_initial_flogi(struct lpfc_hba *);
int lpfc_issue_els_plogi(struct lpfc_hba *, uint32_t, uint8_t);
int lpfc_issue_els_prli(struct lpfc_hba *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_adisc(struct lpfc_hba *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_logo(struct lpfc_hba *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_scr(struct lpfc_hba *, uint32_t, uint8_t);
int lpfc_els_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
int lpfc_els_rsp_acc(struct lpfc_hba *, uint32_t, struct lpfc_iocbq *,
struct lpfc_nodelist *, LPFC_MBOXQ_t *, uint8_t);
int lpfc_els_rsp_reject(struct lpfc_hba *, uint32_t, struct lpfc_iocbq *,
void lpfc_register_new_vport(struct lpfc_hba *, struct lpfc_vport *,
struct lpfc_nodelist *);
int lpfc_els_rsp_adisc_acc(struct lpfc_hba *, struct lpfc_iocbq *,
void lpfc_do_scr_ns_plogi(struct lpfc_hba *, struct lpfc_vport *);
int lpfc_check_sparm(struct lpfc_vport *, struct lpfc_nodelist *,
struct serv_parm *, uint32_t);
int lpfc_els_abort(struct lpfc_hba *, struct lpfc_nodelist *);
int lpfc_els_chk_latt(struct lpfc_vport *);
int lpfc_els_abort_flogi(struct lpfc_hba *);
int lpfc_initial_flogi(struct lpfc_vport *);
int lpfc_initial_fdisc(struct lpfc_vport *);
int lpfc_issue_els_fdisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_plogi(struct lpfc_vport *, uint32_t, uint8_t);
int lpfc_issue_els_prli(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_adisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_logo(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_npiv_logo(struct lpfc_vport *, struct lpfc_nodelist *);
int lpfc_issue_els_scr(struct lpfc_vport *, uint32_t, uint8_t);
int lpfc_els_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
int lpfc_ct_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
int lpfc_els_rsp_acc(struct lpfc_vport *, uint32_t, struct lpfc_iocbq *,
struct lpfc_nodelist *, LPFC_MBOXQ_t *, uint8_t);
int lpfc_els_rsp_reject(struct lpfc_vport *, uint32_t, struct lpfc_iocbq *,
struct lpfc_nodelist *, LPFC_MBOXQ_t *);
int lpfc_els_rsp_adisc_acc(struct lpfc_vport *, struct lpfc_iocbq *,
struct lpfc_nodelist *);
int lpfc_els_rsp_prli_acc(struct lpfc_hba *, struct lpfc_iocbq *,
int lpfc_els_rsp_prli_acc(struct lpfc_vport *, struct lpfc_iocbq *,
struct lpfc_nodelist *);
void lpfc_cancel_retry_delay_tmo(struct lpfc_hba *, struct lpfc_nodelist *);
void lpfc_cancel_retry_delay_tmo(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_els_retry_delay(unsigned long);
void lpfc_els_retry_delay_handler(struct lpfc_nodelist *);
void lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *);
void lpfc_els_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
struct lpfc_iocbq *);
int lpfc_els_handle_rscn(struct lpfc_hba *);
int lpfc_els_flush_rscn(struct lpfc_hba *);
int lpfc_rscn_payload_check(struct lpfc_hba *, uint32_t);
void lpfc_els_flush_cmd(struct lpfc_hba *);
int lpfc_els_disc_adisc(struct lpfc_hba *);
int lpfc_els_disc_plogi(struct lpfc_hba *);
int lpfc_els_handle_rscn(struct lpfc_vport *);
void lpfc_els_flush_rscn(struct lpfc_vport *);
int lpfc_rscn_payload_check(struct lpfc_vport *, uint32_t);
void lpfc_els_flush_cmd(struct lpfc_vport *);
int lpfc_els_disc_adisc(struct lpfc_vport *);
int lpfc_els_disc_plogi(struct lpfc_vport *);
void lpfc_els_timeout(unsigned long);
void lpfc_els_timeout_handler(struct lpfc_hba *);
void lpfc_els_timeout_handler(struct lpfc_vport *);
void lpfc_hb_timeout(unsigned long);
void lpfc_hb_timeout_handler(struct lpfc_hba *);
void lpfc_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
struct lpfc_iocbq *);
int lpfc_ns_cmd(struct lpfc_hba *, struct lpfc_nodelist *, int);
int lpfc_fdmi_cmd(struct lpfc_hba *, struct lpfc_nodelist *, int);
int lpfc_ns_cmd(struct lpfc_vport *, int, uint8_t, uint32_t);
int lpfc_fdmi_cmd(struct lpfc_vport *, struct lpfc_nodelist *, int);
void lpfc_fdmi_tmo(unsigned long);
void lpfc_fdmi_tmo_handler(struct lpfc_hba *);
void lpfc_fdmi_timeout_handler(struct lpfc_vport *vport);
int lpfc_config_port_prep(struct lpfc_hba *);
int lpfc_config_port_post(struct lpfc_hba *);
@ -136,16 +158,23 @@ void lpfc_config_port(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_kill_board(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_mbox_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
LPFC_MBOXQ_t *lpfc_mbox_get(struct lpfc_hba *);
void lpfc_mbox_cmpl_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_mbox_tmo_val(struct lpfc_hba *, int);
void lpfc_config_hbq(struct lpfc_hba *, struct lpfc_hbq_init *, uint32_t ,
LPFC_MBOXQ_t *);
struct lpfc_hbq_entry * lpfc_sli_next_hbq_slot(struct lpfc_hba *, uint32_t);
int lpfc_mem_alloc(struct lpfc_hba *);
void lpfc_mem_free(struct lpfc_hba *);
void lpfc_stop_vport_timers(struct lpfc_vport *);
void lpfc_poll_timeout(unsigned long ptr);
void lpfc_poll_start_timer(struct lpfc_hba * phba);
void lpfc_sli_poll_fcp_ring(struct lpfc_hba * hba);
struct lpfc_iocbq * lpfc_sli_get_iocbq(struct lpfc_hba *);
void lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
void __lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
uint16_t lpfc_sli_next_iotag(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
void lpfc_reset_barrier(struct lpfc_hba * phba);
@@ -154,6 +183,7 @@ int lpfc_sli_brdkill(struct lpfc_hba *);
int lpfc_sli_brdreset(struct lpfc_hba *);
int lpfc_sli_brdrestart(struct lpfc_hba *);
int lpfc_sli_hba_setup(struct lpfc_hba *);
int lpfc_sli_host_down(struct lpfc_vport *);
int lpfc_sli_hba_down(struct lpfc_hba *);
int lpfc_sli_issue_mbox(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);
int lpfc_sli_handle_mb_event(struct lpfc_hba *);
@@ -164,27 +194,36 @@ void lpfc_sli_def_mbox_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_sli_issue_iocb(struct lpfc_hba *, struct lpfc_sli_ring *,
struct lpfc_iocbq *, uint32_t);
void lpfc_sli_pcimem_bcopy(void *, void *, uint32_t);
-int lpfc_sli_abort_iocb_ring(struct lpfc_hba *, struct lpfc_sli_ring *);
+void lpfc_sli_abort_iocb_ring(struct lpfc_hba *, struct lpfc_sli_ring *);
int lpfc_sli_ringpostbuf_put(struct lpfc_hba *, struct lpfc_sli_ring *,
struct lpfc_dmabuf *);
struct lpfc_dmabuf *lpfc_sli_ringpostbuf_get(struct lpfc_hba *,
struct lpfc_sli_ring *,
dma_addr_t);
int lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *, uint32_t);
int lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *, uint32_t);
void lpfc_sli_hbqbuf_free_all(struct lpfc_hba *);
struct hbq_dmabuf *lpfc_sli_hbqbuf_find(struct lpfc_hba *, uint32_t);
int lpfc_sli_hbq_size(void);
int lpfc_sli_issue_abort_iotag(struct lpfc_hba *, struct lpfc_sli_ring *,
struct lpfc_iocbq *);
int lpfc_sli_sum_iocb(struct lpfc_hba *, struct lpfc_sli_ring *, uint16_t,
-		uint64_t, lpfc_ctx_cmd);
+		uint64_t, lpfc_ctx_cmd);
int lpfc_sli_abort_iocb(struct lpfc_hba *, struct lpfc_sli_ring *, uint16_t,
-		uint64_t, uint32_t, lpfc_ctx_cmd);
+		uint64_t, uint32_t, lpfc_ctx_cmd);
void lpfc_mbox_timeout(unsigned long);
void lpfc_mbox_timeout_handler(struct lpfc_hba *);
-struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_hba *, uint32_t);
-struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_hba *, struct lpfc_name *);
+struct lpfc_nodelist *__lpfc_find_node(struct lpfc_vport *, node_filter,
+		void *);
+struct lpfc_nodelist *lpfc_find_node(struct lpfc_vport *, node_filter, void *);
+struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_vport *, uint32_t);
+struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_vport *,
+		struct lpfc_name *);
int lpfc_sli_issue_mbox_wait(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq,
-		uint32_t timeout);
+		uint32_t timeout);
int lpfc_sli_issue_iocb_wait(struct lpfc_hba * phba,
struct lpfc_sli_ring * pring,
@@ -195,25 +234,56 @@ void lpfc_sli_abort_fcp_cmpl(struct lpfc_hba * phba,
struct lpfc_iocbq * cmdiocb,
struct lpfc_iocbq * rspiocb);
void *lpfc_hbq_alloc(struct lpfc_hba *, int, dma_addr_t *);
void lpfc_hbq_free(struct lpfc_hba *, void *, dma_addr_t);
void lpfc_sli_free_hbq(struct lpfc_hba *, struct hbq_dmabuf *);
void *lpfc_mbuf_alloc(struct lpfc_hba *, int, dma_addr_t *);
void __lpfc_mbuf_free(struct lpfc_hba *, void *, dma_addr_t);
void lpfc_mbuf_free(struct lpfc_hba *, void *, dma_addr_t);
void lpfc_in_buf_free(struct lpfc_hba *, struct lpfc_dmabuf *);
/* Function prototypes. */
const char* lpfc_info(struct Scsi_Host *);
void lpfc_scan_start(struct Scsi_Host *);
int lpfc_scan_finished(struct Scsi_Host *, unsigned long);
void lpfc_get_cfgparam(struct lpfc_hba *);
-int lpfc_alloc_sysfs_attr(struct lpfc_hba *);
-void lpfc_free_sysfs_attr(struct lpfc_hba *);
-extern struct class_device_attribute *lpfc_host_attrs[];
+int lpfc_alloc_sysfs_attr(struct lpfc_vport *);
+void lpfc_free_sysfs_attr(struct lpfc_vport *);
+extern struct class_device_attribute *lpfc_hba_attrs[];
extern struct scsi_host_template lpfc_template;
extern struct fc_function_template lpfc_transport_functions;
extern struct fc_function_template lpfc_vport_transport_functions;
extern int lpfc_sli_mode;
-void lpfc_get_hba_sym_node_name(struct lpfc_hba * phba, uint8_t * symbp);
+int lpfc_vport_symbolic_node_name(struct lpfc_vport *, char *, size_t);
void lpfc_terminate_rport_io(struct fc_rport *);
void lpfc_dev_loss_tmo_callbk(struct fc_rport *rport);
struct lpfc_vport *lpfc_create_port(struct lpfc_hba *, int, struct fc_vport *);
int lpfc_vport_disable(struct fc_vport *fc_vport, bool disable);
void lpfc_mbx_unreg_vpi(struct lpfc_vport *);
void destroy_port(struct lpfc_vport *);
int lpfc_get_instance(void);
void lpfc_host_attrib_init(struct Scsi_Host *);
extern void lpfc_debugfs_initialize(struct lpfc_vport *);
extern void lpfc_debugfs_terminate(struct lpfc_vport *);
extern void lpfc_debugfs_disc_trc(struct lpfc_vport *, int, char *, uint32_t,
uint32_t, uint32_t);
/* Interface exported by fabric iocb scheduler */
int lpfc_issue_fabric_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
void lpfc_fabric_abort_vport(struct lpfc_vport *);
void lpfc_fabric_abort_nport(struct lpfc_nodelist *);
void lpfc_fabric_abort_hba(struct lpfc_hba *);
void lpfc_fabric_abort_flogi(struct lpfc_hba *);
void lpfc_fabric_block_timeout(unsigned long);
void lpfc_unblock_fabric_iocbs(struct lpfc_hba *);
void lpfc_adjust_queue_depth(struct lpfc_hba *);
void lpfc_ramp_down_queue_handler(struct lpfc_hba *);
void lpfc_ramp_up_queue_handler(struct lpfc_hba *);
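
The prototypes above complete this header's conversion from HBA-scoped to
vport-scoped entry points: node lookups and ELS commands now take a
struct lpfc_vport rather than the whole struct lpfc_hba. As a minimal
illustrative sketch (not part of this diff), two of the converted
interfaces could be combined as follows; Fabric_DID (the well-known
fabric controller address) and the retry count of 0 are assumptions made
for illustration only:

/*
 * Illustrative sketch only -- not from this patch. Shows the new
 * vport-scoped calling pattern for the interfaces declared above.
 */
static int example_fabric_plogi(struct lpfc_vport *vport)
{
	struct lpfc_nodelist *ndlp;

	/* Per-vport node lookup replaces the old per-HBA lookup */
	ndlp = lpfc_findnode_did(vport, Fabric_DID);
	if (!ndlp)
		return -ENODEV;

	/* Issue the PLOGI on this vport; the last argument is the
	 * retry count (assumed 0 here). */
	return lpfc_issue_els_plogi(vport, Fabric_DID, 0);
}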
#define ScsiResult(host_code, scsi_code) (((host_code) << 16) | scsi_code)
#define HBA_EVENT_RSCN 5
#define HBA_EVENT_LINK_UP 2
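
The ScsiResult() macro above packs the host byte into bits 16-23 of
scsi_cmnd->result, leaving the SCSI status byte in the low bits; note
that the scsi_code argument is not parenthesized in the macro body, so
callers should pass a simple expression. A usage sketch (assumed, not
taken from this diff; example_done_path() is a hypothetical helper):

/*
 * Usage sketch (assumed): completing a command with a host-transport
 * result and a SCSI status. DID_OK and SAM_STAT_GOOD come from
 * <scsi/scsi.h>.
 */
static void example_done_path(struct scsi_cmnd *cmd)
{
	/* host byte in bits 16-23, SCSI status in bits 0-7 */
	cmd->result = ScsiResult(DID_OK, SAM_STAT_GOOD);
	cmd->scsi_done(cmd);	/* hand the command back to the midlayer */
}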

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff