Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1075 commits)
  myri10ge: update driver version number to 1.4.3-1.369
  r8169: add shutdown handler
  r8169: preliminary 8168d support
  r8169: support additional 8168cp chipset
  r8169: change default behavior for mildly identified 8168c chipsets
  r8169: add a new 8168cp flavor
  r8169: add a new 8168c flavor (bis)
  r8169: add a new 8168c flavor
  r8169: sync existing 8168 device hardware start sequences with vendor driver
  r8169: 8168b Tx performance tweak
  r8169: make room for more specific 8168 hardware start procedure
  r8169: shuffle some registers handling around (8168 operation only)
  r8169: new phy init parameters for the 8168b
  r8169: update phy init parameters
  r8169: wake up the PHY of the 8168
  af_key: fix SADB_X_SPDDELETE response
  ath9k: Fix return code when ath9k_hw_setpower() fails on reset
  ath9k: remove nasty FAIL macro from ath9k_hw_reset()
  gre: minor cleanups in netlink interface
  gre: fix copy and paste error
  ...
Linus Torvalds 2008-10-11 09:33:18 -07:00
commit 4dd9ec4946
893 changed files with 91029 additions and 41742 deletions


@ -145,7 +145,6 @@ usage should require reading the full document.
this though and the recommendation to allow only a single
interface in STA mode at first!
</para>
!Finclude/net/mac80211.h ieee80211_if_types
!Finclude/net/mac80211.h ieee80211_if_init_conf
!Finclude/net/mac80211.h ieee80211_if_conf
</chapter>
@ -177,8 +176,7 @@ usage should require reading the full document.
<title>functions/definitions</title>
!Finclude/net/mac80211.h ieee80211_rx_status
!Finclude/net/mac80211.h mac80211_rx_flags
!Finclude/net/mac80211.h ieee80211_tx_control
!Finclude/net/mac80211.h ieee80211_tx_status_flags
!Finclude/net/mac80211.h ieee80211_tx_info
!Finclude/net/mac80211.h ieee80211_rx
!Finclude/net/mac80211.h ieee80211_rx_irqsafe
!Finclude/net/mac80211.h ieee80211_tx_status
@ -189,12 +187,11 @@ usage should require reading the full document.
!Finclude/net/mac80211.h ieee80211_ctstoself_duration
!Finclude/net/mac80211.h ieee80211_generic_frame_duration
!Finclude/net/mac80211.h ieee80211_get_hdrlen_from_skb
!Finclude/net/mac80211.h ieee80211_get_hdrlen
!Finclude/net/mac80211.h ieee80211_hdrlen
!Finclude/net/mac80211.h ieee80211_wake_queue
!Finclude/net/mac80211.h ieee80211_stop_queue
!Finclude/net/mac80211.h ieee80211_start_queues
!Finclude/net/mac80211.h ieee80211_stop_queues
!Finclude/net/mac80211.h ieee80211_wake_queues
!Finclude/net/mac80211.h ieee80211_stop_queues
</sect1>
</chapter>
@ -230,8 +227,7 @@ usage should require reading the full document.
<title>Multiple queues and QoS support</title>
<para>TBD</para>
!Finclude/net/mac80211.h ieee80211_tx_queue_params
!Finclude/net/mac80211.h ieee80211_tx_queue_stats_data
!Finclude/net/mac80211.h ieee80211_tx_queue
!Finclude/net/mac80211.h ieee80211_tx_queue_stats
</chapter>
<chapter id="AP">


@ -6,6 +6,24 @@ be removed from this file.
---------------------------
What: old static regulatory information and ieee80211_regdom module parameter
When: 2.6.29
Why: The old regulatory infrastructure has been replaced with a new one
which does not require statically defined regulatory domains. We do
not want to keep static regulatory domains in the kernel due to the
dynamic nature of regulatory law and localization. We kept around
the old static definitions for the regulatory domains of:
* US
* JP
* EU
and used the US by default when CONFIG_WIRELESS_OLD_REGULATORY was
set. We also kept around the ieee80211_regdom module parameter in case
some applications were relying on it. Changing regulatory domains
can now be done instead by using nl80211, as is done with iw.
Who: Luis R. Rodriguez <lrodriguez@atheros.com>
---------------------------
What: dev->power.power_state
When: July 2007
Why: Broken design for runtime control over driver power states, confusing
@ -232,6 +250,9 @@ What (Why):
- xt_mark match revision 0
(superseded by xt_mark match revision 1)
- xt_recent: the old ipt_recent proc dir
(superseded by /proc/net/xt_recent)
When: January 2009 or Linux 2.7.0, whichever comes first
Why: Superseded by newer revisions or modules
Who: Jan Engelhardt <jengelh@computergmbh.de>


@ -0,0 +1,46 @@
Copyright (c) 2003-2008 QLogic Corporation
QLogic Linux Networking HBA Driver
This program includes a device driver for Linux 2.6 that may be
distributed with QLogic hardware specific firmware binary file.
You may modify and redistribute the device driver code under the
GNU General Public License as published by the Free Software
Foundation (version 2 or a later version).
You may redistribute the hardware specific firmware binary file
under the following terms:
1. Redistribution of source code (only if applicable),
must retain the above copyright notice, this list of
conditions and the following disclaimer.
2. Redistribution in binary form must reproduce the above
copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other
materials provided with the distribution.
3. The name of QLogic Corporation may not be used to
endorse or promote products derived from this software
without specific prior written permission
REGARDLESS OF WHAT LICENSING MECHANISM IS USED OR APPLICABLE,
THIS PROGRAM IS PROVIDED BY QLOGIC CORPORATION "AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
USER ACKNOWLEDGES AND AGREES THAT USE OF THIS PROGRAM WILL NOT
CREATE OR GIVE GROUNDS FOR A LICENSE BY IMPLICATION, ESTOPPEL, OR
OTHERWISE IN ANY INTELLECTUAL PROPERTY RIGHTS (PATENT, COPYRIGHT,
TRADE SECRET, MASK WORK, OR OTHER PROPRIETARY RIGHT) EMBODIED IN
ANY OTHER QLOGIC HARDWARE OR SOFTWARE EITHER SOLELY OR IN
COMBINATION WITH THIS PROGRAM.


@ -35,8 +35,9 @@ This file contains
6.1 general settings
6.2 local loopback of sent frames
6.3 CAN controller hardware filters
6.4 currently supported CAN hardware
6.5 todo
6.4 The virtual CAN driver (vcan)
6.5 currently supported CAN hardware
6.6 todo
7 Credits
@ -584,7 +585,42 @@ solution for a couple of reasons:
@133MHz with four SJA1000 CAN controllers from 2002 under heavy bus
load without any problems ...
6.4 currently supported CAN hardware (September 2007)
6.4 The virtual CAN driver (vcan)
Similar to the network loopback devices, vcan offers a virtual local
CAN interface. A fully qualified address on CAN consists of
- a unique CAN Identifier (CAN ID)
- the CAN bus this CAN ID is transmitted on (e.g. can0)
so in common use cases more than one virtual CAN interface is needed.
The virtual CAN interfaces allow the transmission and reception of CAN
frames without real CAN controller hardware. Virtual CAN network
devices are usually named 'vcanX', like vcan0 vcan1 vcan2 ...
When compiled as a module the virtual CAN driver module is called vcan.ko
Since Linux Kernel version 2.6.24 the vcan driver supports the Kernel
netlink interface to create vcan network devices. The creation and
removal of vcan network devices can be managed with the ip(8) tool:
- Create a virtual CAN network interface:
ip link add type vcan
- Create a virtual CAN network interface with a specific name 'vcan42':
ip link add dev vcan42 type vcan
- Remove a (virtual CAN) network interface 'vcan42':
ip link del vcan42
The tool 'vcan' from the SocketCAN SVN repository on BerliOS is obsolete.
Virtual CAN network device creation in older Kernels:
In Linux Kernel versions < 2.6.24 the vcan driver creates 4 vcan
netdevices at module load time by default. This value can be changed
with the module parameter 'numdev'. E.g. 'modprobe vcan numdev=8'
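
To make this concrete, here is a minimal sketch of sending a single CAN
frame through 'vcan0' with a PF_CAN raw socket (error handling omitted;
it assumes the interface has been created and brought up as shown above):

#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
struct sockaddr_can addr;
struct ifreq ifr;
struct can_frame frame;

strcpy(ifr.ifr_name, "vcan0");
ioctl(s, SIOCGIFINDEX, &ifr);      /* resolve the interface index */

memset(&addr, 0, sizeof(addr));
addr.can_family = AF_CAN;
addr.can_ifindex = ifr.ifr_ifindex;
bind(s, (struct sockaddr *)&addr, sizeof(addr));

frame.can_id  = 0x123;             /* the CAN identifier */
frame.can_dlc = 2;                 /* two bytes of payload */
frame.data[0] = 0xde;
frame.data[1] = 0xad;
write(s, &frame, sizeof(frame));   /* the frame shows up on vcan0 */

close(s);

A second process bound to the same interface (e.g. the candump tool from
the SocketCAN userspace utilities) will then receive the frame locally.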
6.5 currently supported CAN hardware
On the project website http://developer.berlios.de/projects/socketcan
there are different drivers available:
@ -603,7 +639,7 @@ solution for a couple of reasons:
Please check the Mailing Lists on the berlios OSS project website.
6.5 todo (September 2007)
6.6 todo
The configuration interface for CAN network drivers is still an open
issue that has not been finalized in the socketcan project. Also the


@ -24,4 +24,56 @@ netif_{start|stop|wake}_subqueue() functions to manage each queue while the
device is still operational. netdev->queue_lock is still used when the device
comes online or when it's completely shut down (unregister_netdev(), etc.).
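
As an illustration, a multiqueue-aware driver's transmit path might
flow-control one hardware queue at a time along these lines (a sketch
only; mydrv_xmit(), mydrv_ring_full() and mydrv_ring_has_room() are
hypothetical driver helpers):

static int mydrv_xmit(struct sk_buff *skb, struct net_device *dev)
{
	u16 queue = skb_get_queue_mapping(skb);

	/* ... post the skb to the hardware ring backing 'queue' ... */

	if (mydrv_ring_full(dev, queue))
		netif_stop_subqueue(dev, queue);  /* stop only this queue */
	return NETDEV_TX_OK;
}

/* later, from the TX completion path for that ring: */
if (mydrv_ring_has_room(dev, queue))
	netif_wake_subqueue(dev, queue);

The other hardware queues keep transmitting while one of them is
stopped, which is the whole point of the per-subqueue interface.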
Author: Peter P. Waskiewicz Jr. <peter.p.waskiewicz.jr@intel.com>
Section 2: Qdisc support for multiqueue devices
-----------------------------------------------
Currently two qdiscs are optimized for multiqueue devices. The first is the
default pfifo_fast qdisc. This qdisc supports one qdisc per hardware queue.
A new round-robin qdisc, sch_multiq, also supports multiple hardware queues. The
qdisc is responsible for classifying the skbs and then directing them to
bands and queues based on the value in skb->queue_mapping. Use this field in
the base driver to determine which queue to send the skb to.
sch_multiq has been added for hardware that wishes to avoid head-of-line
blocking. It will cycle through the bands and verify that the hardware queue
associated with the band is not stopped prior to dequeuing a packet.
On qdisc load, the number of bands is based on the number of queues on the
hardware. Once the association is made, any skb with skb->queue_mapping set
will be queued to the band associated with the hardware queue.
Section 3: Brief howto using MULTIQ for multiqueue devices
---------------------------------------------------------------
The userspace command 'tc', part of the iproute2 package, is used to configure
qdiscs. To add the MULTIQ qdisc to your network device, assuming the device
is called eth0, run the following command:
# tc qdisc add dev eth0 root handle 1: multiq
The qdisc will allocate a number of bands equal to the number of queues that
the device reports, and bring the qdisc online. Assuming eth0 has 4 Tx
queues, the band mapping would look like:
band 0 => queue 0
band 1 => queue 1
band 2 => queue 2
band 3 => queue 3
Traffic will begin flowing through each queue based on either the simple_tx_hash
function or based on netdev->select_queue() if you have it defined.
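
If you do provide your own select_queue() hook, it simply returns the
queue index to use for a given skb. A sketch (mydrv_select_queue() is a
hypothetical name; any value below dev->real_num_tx_queues is valid):

static u16 mydrv_select_queue(struct net_device *dev, struct sk_buff *skb)
{
	/* e.g. steer by socket priority */
	return (u16)(skb->priority % dev->real_num_tx_queues);
}

...
dev->select_queue = mydrv_select_queue;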
The behavior of tc filters remains the same. However, a new tc action,
skbedit, has been added. Assuming you wanted to route all traffic to a
specific host, for example 192.168.0.3, through a specific queue, you could use
this action and establish a filter such as:
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
match ip dst 192.168.0.3 \
action skbedit queue_mapping 3
Author: Alexander Duyck <alexander.h.duyck@intel.com>
Original Author: Peter P. Waskiewicz Jr. <peter.p.waskiewicz.jr@intel.com>


@ -0,0 +1,175 @@
Linux Phonet protocol family
============================
Introduction
------------
Phonet is a packet protocol used by Nokia cellular modems for both IPC
and RPC. With the Linux Phonet socket family, Linux host processes can
receive and send messages from/to the modem, or any other external
device attached to the modem. The modem takes care of routing.
Phonet packets can be exchanged through various hardware connections
depending on the device, such as:
- USB with the CDC Phonet interface,
- infrared,
- Bluetooth,
- an RS232 serial port (with a dedicated "FBUS" line discipline),
- the SSI bus with some TI OMAP processors.
Packet format
-------------
Phonet packets have a common header as follows:
struct phonethdr {
uint8_t pn_media; /* Media type (link-layer identifier) */
uint8_t pn_rdev; /* Receiver device ID */
uint8_t pn_sdev; /* Sender device ID */
uint8_t pn_res; /* Resource ID or function */
uint16_t pn_length; /* Big-endian message byte length (minus 6) */
uint8_t pn_robj; /* Receiver object ID */
uint8_t pn_sobj; /* Sender object ID */
};
On Linux, the link-layer header includes the pn_media byte (see below).
The next 7 bytes are part of the network-layer header.
The device ID is split: the 6 higher-order bits constitute the device
address, while the 2 lower-order bits are used for multiplexing, as are
the 8-bit object identifiers. As such, Phonet can be considered as a
network layer with 6 bits of address space and 10 bits for transport
protocol (much like port numbers in IP world).
The modem always has address number zero. All other devices have their
own 6-bit address.
Link layer
----------
Phonet links are always point-to-point links. The link layer header
consists of a single Phonet media type byte. It uniquely identifies the
link through which the packet is transmitted, from the modem's
perspective. Each Phonet network device shall prepend and set the media
type byte as appropriate. For convenience, a common phonet_header_ops
link-layer header operations structure is provided. It sets the
media type according to the network device hardware address.
Linux Phonet network interfaces support a dedicated link-layer packet
type (ETH_P_PHONET), which is outside the Ethernet type range. They can
only send and receive Phonet packets.
The virtual TUN tunnel device driver can also be used for Phonet. This
requires IFF_TUN mode, _without_ the IFF_NO_PI flag. In this case,
there is no link-layer header, so there is no Phonet media type byte.
Note that Phonet interfaces are not allowed to re-order packets, so
only the (default) Linux FIFO qdisc should be used with them.
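
For reference, a minimal sketch of creating such a TUN interface from
userspace could look as follows (the name 'phonet0' is only an example):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

struct ifreq ifr;
int fd = open("/dev/net/tun", O_RDWR);

memset(&ifr, 0, sizeof(ifr));
strcpy(ifr.ifr_name, "phonet0");   /* example interface name */
ifr.ifr_flags = IFF_TUN;           /* IFF_NO_PI deliberately not set */
ioctl(fd, TUNSETIFF, &ifr);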
Network layer
-------------
The Phonet socket address family maps the Phonet packet header:
struct sockaddr_pn {
sa_family_t spn_family; /* AF_PHONET */
uint8_t spn_obj; /* Object ID */
uint8_t spn_dev; /* Device ID */
uint8_t spn_resource; /* Resource or function */
uint8_t spn_zero[...]; /* Padding */
};
The resource field is only used when sending and receiving;
it is ignored by bind() and getsockname().
Low-level datagram protocol
---------------------------
Applications can send Phonet messages using the Phonet datagram socket
protocol from the PF_PHONET family. Each socket is bound to one of the
2^10 object IDs available, and can send and receive packets with any
other peer.
struct sockaddr_pn addr = { .spn_family = AF_PHONET, };
ssize_t len;
socklen_t addrlen = sizeof(addr);
int fd;
fd = socket(PF_PHONET, SOCK_DGRAM, 0);
bind(fd, (struct sockaddr *)&addr, sizeof(addr));
/* ... */
sendto(fd, msg, msglen, 0, (struct sockaddr *)&addr, sizeof(addr));
len = recvfrom(fd, buf, sizeof(buf), 0,
(struct sockaddr *)&addr, &addrlen);
This protocol follows the SOCK_DGRAM connection-less semantics.
However, connect() and getpeername() are not supported, as they did
not seem useful with Phonet usage (they could be added easily).
Phonet Pipe protocol
--------------------
The Phonet Pipe protocol is a simple sequenced packets protocol
with end-to-end congestion control. It uses the passive listening
socket paradigm. The listening socket is bound to a unique free object
ID. Each listening socket can handle up to 255 simultaneous
connections, one per accept()'d socket.
int lfd, cfd;
lfd = socket(PF_PHONET, SOCK_SEQPACKET, PN_PROTO_PIPE);
listen (lfd, INT_MAX);
/* ... */
cfd = accept(lfd, NULL, NULL);
for (;;)
{
char buf[...];
ssize_t len = read(cfd, buf, sizeof(buf));
/* ... */
write(cfd, msg, msglen);
}
Connections are established between two endpoints by a "third party"
application. This means that both endpoints are passive, so connect()
is not possible.
WARNING:
When polling a connected pipe socket for writability, there is an
intrinsic race condition whereby writability might be lost between the
polling and the writing system calls. In this case, the socket will
block until writing becomes possible again, unless non-blocking mode
is enabled.
The pipe protocol provides two socket options at the SOL_PNPIPE level:
PNPIPE_ENCAP accepts one integer value (int) of:
PNPIPE_ENCAP_NONE: The socket operates normally (default).
PNPIPE_ENCAP_IP: The socket is used as a backend for a virtual IP
interface. This requires CAP_NET_ADMIN capability. GPRS data
support on Nokia modems can use this. Note that the socket cannot
be reliably poll()'d or read() from while in this mode.
PNPIPE_IFINDEX is a read-only integer value. It contains the
interface index of the network interface created by PNPIPE_ENCAP,
or zero if encapsulation is off.
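
As a sketch, enabling IP encapsulation on a connected pipe socket (cfd,
from the example above) and reading back the interface index could look
like this:

int encap = PNPIPE_ENCAP_IP;
int ifindex;
socklen_t optlen = sizeof(ifindex);

/* requires CAP_NET_ADMIN */
setsockopt(cfd, SOL_PNPIPE, PNPIPE_ENCAP, &encap, sizeof(encap));

getsockopt(cfd, SOL_PNPIPE, PNPIPE_IFINDEX, &ifindex, &optlen);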
Authors
-------
Linux Phonet was initially written by Sakari Ailus.
Other contributors include Mikä Liljeberg, Andras Domokos,
Carlos Chinea and Rémi Denis-Courmont.
Copyright (C) 2008 Nokia Corporation.


@ -0,0 +1,194 @@
Linux wireless regulatory documentation
---------------------------------------
This document gives a brief overview of how the Linux wireless
regulatory infrastructure works.
More up-to-date information can be obtained at the project's web page:
http://wireless.kernel.org/en/developers/Regulatory
Keeping regulatory domains in userspace
---------------------------------------
Due to the dynamic nature of regulatory domains we keep them
in userspace and provide a framework for userspace to upload
to the kernel one regulatory domain to be used as the central
core regulatory domain all wireless devices should adhere to.
How to get regulatory domains to the kernel
-------------------------------------------
Userspace gets a regulatory domain in the kernel by having
a userspace agent build it and send it via nl80211. Only
expected regulatory domains will be respected by the kernel.
A currently available userspace agent which can accomplish this
is CRDA - the central regulatory domain agent. It's documented here:
http://wireless.kernel.org/en/developers/Regulatory/CRDA
Essentially the kernel will send a udev event when it knows
it needs a new regulatory domain. A udev rule can be put in place
to trigger crda to send the respective regulatory domain for a
specific ISO/IEC 3166 alpha2.
Below is an example udev rule which can be used:
# Example file, should be put in /etc/udev/rules.d/regulatory.rules
KERNEL=="regulatory*", ACTION=="change", SUBSYSTEM=="platform", RUN+="/sbin/crda"
The alpha2 is passed to it in the COUNTRY environment variable.
Who asks for regulatory domains?
--------------------------------
* Users
Users can use iw:
http://wireless.kernel.org/en/users/Documentation/iw
An example:
# set regulatory domain to "Costa Rica"
iw reg set CR
This will request the kernel to set the regulatory domain to
the specified alpha2. The kernel in turn will then ask userspace
to provide a regulatory domain for the alpha2 specified by the user
by sending a uevent.
* Wireless subsystems for Country Information elements
The kernel will send a uevent to inform userspace a new
regulatory domain is required. More on this will be added
as its integration progresses.
* Drivers
If drivers determine they need a specific regulatory domain
set they can inform the wireless core using regulatory_hint().
They have two options -- they either provide an alpha2 so that
crda can provide back a regulatory domain for that country or
they can build their own regulatory domain based on internal
custom knowledge so the wireless core can respect it.
*Most* drivers will rely on the first mechanism of providing a
regulatory hint with an alpha2. For these drivers there is an additional
check that can be used to ensure compliance based on custom EEPROM
regulatory data. This additional check can be used by drivers by
registering on its struct wiphy a reg_notifier() callback. This notifier
is called when the core's regulatory domain has been changed. The driver
can use this to review the changes made and also review who made them
(driver, user, country IE) and determine what to allow based on its
internal EEPROM data. Device drivers wishing to be capable of world
roaming should use this callback. More on world roaming will be
added to this document when its support is enabled.
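
A skeleton of such a callback is sketched below. Treat it as an
assumption-laden illustration only: check the exact reg_notifier()
prototype in include/net/cfg80211.h, and mydrv_apply_eeprom_limits()
is a hypothetical driver helper:

static int mydrv_reg_notifier(struct wiphy *wiphy, enum reg_set_by setby)
{
	/* Review who changed the regulatory domain (core, user,
	 * driver or country IE) and clamp the resulting channels
	 * against our EEPROM calibration data. */
	mydrv_apply_eeprom_limits(wiphy, setby);
	return 0;
}

...
wiphy->reg_notifier = mydrv_reg_notifier;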
Device drivers that provide their own built-in regulatory domain
do not need a callback, as the channels registered by them are
the only ones that will be allowed and therefore *additional*
channels cannot be enabled.
Example code - drivers hinting an alpha2:
------------------------------------------
This example comes from the zd1211rw device driver. You can start
by having a mapping of your device's EEPROM country/regulatory
domain value to a specific alpha2 as follows:
static struct zd_reg_alpha2_map reg_alpha2_map[] = {
{ ZD_REGDOMAIN_FCC, "US" },
{ ZD_REGDOMAIN_IC, "CA" },
{ ZD_REGDOMAIN_ETSI, "DE" }, /* Generic ETSI, use most restrictive */
{ ZD_REGDOMAIN_JAPAN, "JP" },
{ ZD_REGDOMAIN_JAPAN_ADD, "JP" },
{ ZD_REGDOMAIN_SPAIN, "ES" },
{ ZD_REGDOMAIN_FRANCE, "FR" },
};
Then you can define a routine to map your read EEPROM value to an alpha2,
as follows:
static int zd_reg2alpha2(u8 regdomain, char *alpha2)
{
unsigned int i;
struct zd_reg_alpha2_map *reg_map;
for (i = 0; i < ARRAY_SIZE(reg_alpha2_map); i++) {
reg_map = &reg_alpha2_map[i];
if (regdomain == reg_map->reg) {
alpha2[0] = reg_map->alpha2[0];
alpha2[1] = reg_map->alpha2[1];
return 0;
}
}
return 1;
}
Lastly, if a match was found, you can then hint the discovered alpha2
to the core. You need to do this after you have registered your wiphy.
You are expected to do this during initialization.
r = zd_reg2alpha2(mac->regdomain, alpha2);
if (!r)
regulatory_hint(hw->wiphy, alpha2, NULL);
Example code - drivers providing a built in regulatory domain:
--------------------------------------------------------------
If you have regulatory information you can obtain from your
driver and you *need* to use it, we let you build a regulatory domain
structure and pass it to the wireless core. To do this you should
kmalloc() a structure big enough to hold your regulatory domain
structure and you should then fill it with your data. Finally you simply
call regulatory_hint() with the regulatory domain structure in it.
Below is a simple example, with a regulatory domain cached on the stack.
Your implementation may vary (read EEPROM cache instead, for example).
Example cache of some regulatory domain
struct ieee80211_regdomain mydriver_jp_regdom = {
.n_reg_rules = 3,
.alpha2 = "JP",
//.alpha2 = "99", /* If I have no alpha2 to map it to */
.reg_rules = {
/* IEEE 802.11b/g, channels 1..14 */
REG_RULE(2412-20, 2484+20, 40, 6, 20, 0),
/* IEEE 802.11a, channels 34..48 */
REG_RULE(5170-20, 5240+20, 40, 6, 20,
NL80211_RRF_PASSIVE_SCAN),
/* IEEE 802.11a, channels 52..64 */
REG_RULE(5260-20, 5320+20, 40, 6, 20,
NL80211_RRF_NO_IBSS |
NL80211_RRF_DFS),
}
};
Then in some part of your code after your wiphy has been registered:
int r;
struct ieee80211_regdomain *rd;
int size_of_regd;
int num_rules = mydriver_jp_regdom.n_reg_rules;
unsigned int i;
size_of_regd = sizeof(struct ieee80211_regdomain) +
(num_rules * sizeof(struct ieee80211_reg_rule));
rd = kzalloc(size_of_regd, GFP_KERNEL);
if (!rd)
return -ENOMEM;
memcpy(rd, &mydriver_jp_regdom, sizeof(struct ieee80211_regdomain));
for (i=0; i < num_rules; i++) {
memcpy(&rd->reg_rules[i], &mydriver_jp_regdom.reg_rules[i],
sizeof(struct ieee80211_reg_rule));
}
r = regulatory_hint(hw->wiphy, NULL, rd);
if (r) {
kfree(rd);
return r;
}


@ -0,0 +1,85 @@
Transparent proxy support
=========================
This feature adds Linux 2.2-like transparent proxy support to current kernels.
To use it, enable NETFILTER_TPROXY, the socket match and the TPROXY target in
your kernel config. You will need policy routing too, so be sure to enable that
as well.
1. Making non-local sockets work
================================
The idea is that you identify packets with destination address matching a local
socket on your box, set the packet mark to a certain value, and then match on that
value using policy routing to have those packets delivered locally:
# iptables -t mangle -N DIVERT
# iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# iptables -t mangle -A DIVERT -j MARK --set-mark 1
# iptables -t mangle -A DIVERT -j ACCEPT
# ip rule add fwmark 1 lookup 100
# ip route add local 0.0.0.0/0 dev lo table 100
Because of certain restrictions in the IPv4 routing output code you'll have to
modify your application to allow it to send datagrams _from_ non-local IP
addresses. All you have to do is enable the (SOL_IP, IP_TRANSPARENT) socket
option before calling bind:
fd = socket(AF_INET, SOCK_STREAM, 0);
/* - 8< -*/
int value = 1;
setsockopt(fd, SOL_IP, IP_TRANSPARENT, &value, sizeof(value));
/* - 8< -*/
name.sin_family = AF_INET;
name.sin_port = htons(0xCAFE);
name.sin_addr.s_addr = htonl(0xDEADBEEF);
bind(fd, &name, sizeof(name));
A trivial patch for netcat is available here:
http://people.netfilter.org/hidden/tproxy/netcat-ip_transparent-support.patch
2. Redirecting traffic
======================
Transparent proxying often involves "intercepting" traffic on a router. This is
usually done with the iptables REDIRECT target; however, there are serious
limitations of that method. One of the major issues is that it actually
modifies the packets to change the destination address -- which might not be
acceptable in certain situations. (Think of proxying UDP for example: you won't
be able to find out the original destination address. Even in the case
of TCP, getting the original destination address is racy.)
The 'TPROXY' target provides similar functionality without relying on NAT. Simply
add rules like this to the iptables ruleset above:
# iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
--tproxy-mark 0x1/0x1 --on-port 50080
Note that for this to work you'll have to modify the proxy to enable (SOL_IP,
IP_TRANSPARENT) for the listening socket.
3. Iptables extensions
======================
To use tproxy you'll need to have the 'socket' and 'TPROXY' modules
compiled for iptables. A patched version of iptables is available
here: http://git.balabit.hu/?p=bazsi/iptables-tproxy.git
4. Application support
======================
4.1. Squid
----------
Squid 3.HEAD has support built-in. To use it, pass
'--enable-linux-netfilter' to configure and set the 'tproxy' option on
the HTTP listener you redirect traffic to with the TPROXY iptables
target.
For more information please consult the following page on the Squid
wiki: http://wiki.squid-cache.org/Features/Tproxy4


@ -341,6 +341,8 @@ key that does nothing by itself, as well as any hot key that is type-specific
3.1 Guidelines for wireless device drivers
------------------------------------------
(in this text, rfkill->foo means the foo field of struct rfkill).
1. Each independent transmitter in a wireless device (usually there is only one
transmitter per device) should have a SINGLE rfkill class attached to it.
@ -363,10 +365,32 @@ This rule exists because users of the rfkill subsystem expect to get (and set,
when possible) the overall transmitter rfkill state, not of a particular rfkill
line.
5. During suspend, the rfkill class will attempt to soft-block the radio
through a call to rfkill->toggle_radio, and will try to restore its previous
state during resume. After a rfkill class is suspended, it will *not* call
rfkill->toggle_radio until it is resumed.
5. The wireless device driver MUST NOT leave the transmitter enabled during
suspend and hibernation unless:
5.1. The transmitter has to be enabled for some sort of functionality
like wake-on-wireless-packet or autonomous packet forwarding in a mesh
network, and that functionality is enabled for this suspend/hibernation
cycle.
AND
5.2. The device was not in a user-requested BLOCKED state before
the suspend (i.e. the driver must NOT unblock a device, not even
to support wake-on-wireless-packet or remain in the mesh).
In other words, there is absolutely no allowed scenario where a driver can
automatically take action to unblock a rfkill controller (obviously, this deals
with scenarios where soft-blocking or both soft and hard blocking is happening.
Scenarios where hardware rfkill lines are the only ones blocking the
transmitter are outside of this rule, since the wireless device driver does not
control its input hardware rfkill lines in the first place).
6. During resume, rfkill will try to restore its previous state.
7. After a rfkill class is suspended, it will *not* call rfkill->toggle_radio
until it is resumed.
Example of a WLAN wireless driver connected to the rfkill subsystem:
--------------------------------------------------------------------


@ -1048,6 +1048,13 @@ L: cbe-oss-dev@ozlabs.org
W: http://www.ibm.com/developerworks/power/cell/
S: Supported
CISCO 10G ETHERNET DRIVER
P: Scott Feldman
M: scofeldm@cisco.com
P: Joe Eykholt
M: jeykholt@cisco.com
S: Supported
CFAG12864B LCD DRIVER
P: Miguel Ojeda Sandonis
M: miguel.ojeda.sandonis@gmail.com
@ -2319,6 +2326,12 @@ L: video4linux-list@redhat.com
W: http://www.ivtvdriver.org
S: Maintained
JME NETWORK DRIVER
P: Guo-Fu Tseng
M: cooldavid@cooldavid.org
L: netdev@vger.kernel.org
S: Maintained
JOURNALLING FLASH FILE SYSTEM V2 (JFFS2)
P: David Woodhouse
M: dwmw2@infradead.org
@ -3384,6 +3397,13 @@ M: linux-driver@qlogic.com
L: netdev@vger.kernel.org
S: Supported
QLOGIC QLGE 10Gb ETHERNET DRIVER
P: Ron Mercer
M: linux-driver@qlogic.com
M: ron.mercer@qlogic.com
L: netdev@vger.kernel.org
S: Supported
QNX4 FILESYSTEM
P: Anders Larsen
M: al@alarsen.net
@ -4336,6 +4356,12 @@ L: linux-usb@vger.kernel.org
W: http://www.connecttech.com
S: Supported
USB SMSC95XX ETHERNET DRIVER
P: Steve Glendinning
M: steve.glendinning@smsc.com
L: netdev@vger.kernel.org
S: Supported
USB SN9C1xx DRIVER
P: Luca Risolia
M: luca.risolia@studio.unibo.it


@ -25,7 +25,7 @@
#include "common.h"
static struct mv643xx_eth_platform_data db88f6281_ge00_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
static struct mv_sata_platform_data db88f6281_sata_data = {


@ -30,7 +30,7 @@
#define RD88F6192_GPIO_USB_VBUS 10
static struct mv643xx_eth_platform_data rd88f6192_ge00_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
static struct mv_sata_platform_data rd88f6192_sata_data = {


@ -69,7 +69,7 @@ static struct platform_device rd88f6281_nand_flash = {
};
static struct mv643xx_eth_platform_data rd88f6281_ge00_data = {
.phy_addr = -1,
.phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000,
.duplex = DUPLEX_FULL,
};


@ -67,7 +67,7 @@ static struct platform_device lb88rc8480_boot_flash = {
};
static struct mv643xx_eth_platform_data lb88rc8480_ge0_data = {
.phy_addr = 1,
.phy_addr = MV643XX_ETH_PHY_ADDR(1),
.mac_addr = { 0x00, 0x50, 0x43, 0x11, 0x22, 0x33 },
};


@ -330,6 +330,7 @@ void __init mv78xx0_ge00_init(struct mv643xx_eth_platform_data *eth_data)
struct mv643xx_eth_shared_platform_data mv78xx0_ge01_shared_data = {
.t_clk = 0,
.dram = &mv78xx0_mbus_dram_info,
.shared_smi = &mv78xx0_ge00_shared,
};
static struct resource mv78xx0_ge01_shared_resources[] = {
@ -370,7 +371,6 @@ static struct platform_device mv78xx0_ge01 = {
void __init mv78xx0_ge01_init(struct mv643xx_eth_platform_data *eth_data)
{
eth_data->shared = &mv78xx0_ge01_shared;
eth_data->shared_smi = &mv78xx0_ge00_shared;
mv78xx0_ge01.dev.platform_data = eth_data;
platform_device_register(&mv78xx0_ge01_shared);
@ -384,6 +384,7 @@ void __init mv78xx0_ge01_init(struct mv643xx_eth_platform_data *eth_data)
struct mv643xx_eth_shared_platform_data mv78xx0_ge10_shared_data = {
.t_clk = 0,
.dram = &mv78xx0_mbus_dram_info,
.shared_smi = &mv78xx0_ge00_shared,
};
static struct resource mv78xx0_ge10_shared_resources[] = {
@ -424,7 +425,6 @@ static struct platform_device mv78xx0_ge10 = {
void __init mv78xx0_ge10_init(struct mv643xx_eth_platform_data *eth_data)
{
eth_data->shared = &mv78xx0_ge10_shared;
eth_data->shared_smi = &mv78xx0_ge00_shared;
mv78xx0_ge10.dev.platform_data = eth_data;
platform_device_register(&mv78xx0_ge10_shared);
@ -438,6 +438,7 @@ void __init mv78xx0_ge10_init(struct mv643xx_eth_platform_data *eth_data)
struct mv643xx_eth_shared_platform_data mv78xx0_ge11_shared_data = {
.t_clk = 0,
.dram = &mv78xx0_mbus_dram_info,
.shared_smi = &mv78xx0_ge00_shared,
};
static struct resource mv78xx0_ge11_shared_resources[] = {
@ -478,7 +479,6 @@ static struct platform_device mv78xx0_ge11 = {
void __init mv78xx0_ge11_init(struct mv643xx_eth_platform_data *eth_data)
{
eth_data->shared = &mv78xx0_ge11_shared;
eth_data->shared_smi = &mv78xx0_ge00_shared;
mv78xx0_ge11.dev.platform_data = eth_data;
platform_device_register(&mv78xx0_ge11_shared);


@ -19,19 +19,19 @@
#include "common.h"
static struct mv643xx_eth_platform_data db78x00_ge00_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
static struct mv643xx_eth_platform_data db78x00_ge01_data = {
.phy_addr = 9,
.phy_addr = MV643XX_ETH_PHY_ADDR(9),
};
static struct mv643xx_eth_platform_data db78x00_ge10_data = {
.phy_addr = -1,
.phy_addr = MV643XX_ETH_PHY_NONE,
};
static struct mv643xx_eth_platform_data db78x00_ge11_data = {
.phy_addr = -1,
.phy_addr = MV643XX_ETH_PHY_NONE,
};
static struct mv_sata_platform_data db78x00_sata_data = {


@ -285,7 +285,7 @@ subsys_initcall(db88f5281_pci_init);
* Ethernet
****************************************************************************/
static struct mv643xx_eth_platform_data db88f5281_eth_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
/*****************************************************************************


@ -79,7 +79,7 @@ subsys_initcall(dns323_pci_init);
*/
static struct mv643xx_eth_platform_data dns323_eth_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
/****************************************************************************


@ -161,7 +161,7 @@ subsys_initcall(kurobox_pro_pci_init);
****************************************************************************/
static struct mv643xx_eth_platform_data kurobox_pro_eth_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
/*****************************************************************************


@ -109,7 +109,7 @@ subsys_initcall(mss2_pci_init);
****************************************************************************/
static struct mv643xx_eth_platform_data mss2_eth_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
/*****************************************************************************


@ -39,7 +39,7 @@
* Ethernet
****************************************************************************/
static struct mv643xx_eth_platform_data mv2120_eth_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
static struct mv_sata_platform_data mv2120_sata_data = {


@ -88,7 +88,7 @@ static struct orion5x_mpp_mode rd88f5181l_fxo_mpp_modes[] __initdata = {
};
static struct mv643xx_eth_platform_data rd88f5181l_fxo_eth_data = {
.phy_addr = -1,
.phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000,
.duplex = DUPLEX_FULL,
};


@ -89,7 +89,7 @@ static struct orion5x_mpp_mode rd88f5181l_ge_mpp_modes[] __initdata = {
};
static struct mv643xx_eth_platform_data rd88f5181l_ge_eth_data = {
.phy_addr = -1,
.phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000,
.duplex = DUPLEX_FULL,
};


@ -221,7 +221,7 @@ subsys_initcall(rd88f5182_pci_init);
****************************************************************************/
static struct mv643xx_eth_platform_data rd88f5182_eth_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
/*****************************************************************************


@ -103,8 +103,7 @@ static struct platform_device ts78xx_nor_boot_flash = {
* Ethernet
****************************************************************************/
static struct mv643xx_eth_platform_data ts78xx_eth_data = {
.phy_addr = 0,
.force_phy_addr = 1,
.phy_addr = MV643XX_ETH_PHY_ADDR(0),
};
/*****************************************************************************


@ -48,7 +48,7 @@ void qnap_tsx09_power_off(void)
****************************************************************************/
struct mv643xx_eth_platform_data qnap_tsx09_eth_data = {
.phy_addr = 8,
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
static int __init qnap_tsx09_parse_hex_nibble(char n)


@ -92,7 +92,7 @@ static struct platform_device wnr854t_nor_flash = {
};
static struct mv643xx_eth_platform_data wnr854t_eth_data = {
.phy_addr = -1,
.phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000,
.duplex = DUPLEX_FULL,
};


@ -100,7 +100,7 @@ static struct platform_device wrt350n_v2_nor_flash = {
};
static struct mv643xx_eth_platform_data wrt350n_v2_eth_data = {
.phy_addr = -1,
.phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000,
.duplex = DUPLEX_FULL,
};


@ -68,6 +68,10 @@
#define SDR0_UART3 0x0123
#define SDR0_CUST0 0x4000
/* SDRs (460EX/460GT) */
#define SDR0_ETH_CFG 0x4103
#define SDR0_ETH_CFG_ECS 0x00000100 /* EMAC int clk source */
/*
* All those DCR register addresses are offsets from the base address
* for the SRAM0 controller (e.g. 0x20 on 440GX). The base address is


@ -137,7 +137,7 @@ static int __devinit ep8248e_mdio_probe(struct of_device *ofdev,
bus->irq[i] = -1;
bus->name = "ep8248e-mdio-bitbang";
bus->dev = &ofdev->dev;
bus->parent = &ofdev->dev;
snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
return mdiobus_register(bus);


@ -230,7 +230,7 @@ static int __devinit gpio_mdio_probe(struct of_device *ofdev,
if (!priv)
goto out;
new_bus = kzalloc(sizeof(struct mii_bus), GFP_KERNEL);
new_bus = mdiobus_alloc();
if (!new_bus)
goto out_free_priv;
@ -272,7 +272,7 @@ static int __devinit gpio_mdio_probe(struct of_device *ofdev,
prop = of_get_property(np, "mdio-pin", NULL);
priv->mdio_pin = *prop;
new_bus->dev = dev;
new_bus->parent = dev;
dev_set_drvdata(dev, new_bus);
err = mdiobus_register(new_bus);
@ -306,7 +306,7 @@ static int gpio_mdio_remove(struct of_device *dev)
kfree(bus->priv);
bus->priv = NULL;
kfree(bus);
mdiobus_free(bus);
return 0;
}


@ -293,10 +293,8 @@ static int __init mv64x60_eth_device_setup(struct device_node *np, int id,
return -ENODEV;
prop = of_get_property(phy, "reg", NULL);
if (prop) {
pdata.force_phy_addr = 1;
pdata.phy_addr = *prop;
}
if (prop)
pdata.phy_addr = MV643XX_ETH_PHY_ADDR(*prop);
of_node_put(phy);


@ -260,6 +260,9 @@ config ACPI_ASUS
config ACPI_TOSHIBA
tristate "Toshiba Laptop Extras"
depends on X86
select INPUT_POLLDEV
select NET
select RFKILL
select BACKLIGHT_CLASS_DEVICE
---help---
This driver adds support for access to certain system settings


@ -3,6 +3,7 @@
*
*
* Copyright (C) 2002-2004 John Belmonte
* Copyright (C) 2008 Philip Langdale
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -33,7 +34,7 @@
*
*/
#define TOSHIBA_ACPI_VERSION "0.18"
#define TOSHIBA_ACPI_VERSION "0.19"
#define PROC_INTERFACE_VERSION 1
#include <linux/kernel.h>
@ -42,6 +43,9 @@
#include <linux/types.h>
#include <linux/proc_fs.h>
#include <linux/backlight.h>
#include <linux/platform_device.h>
#include <linux/rfkill.h>
#include <linux/input-polldev.h>
#include <asm/uaccess.h>
@ -90,6 +94,7 @@ MODULE_LICENSE("GPL");
#define HCI_VIDEO_OUT 0x001c
#define HCI_HOTKEY_EVENT 0x001e
#define HCI_LCD_BRIGHTNESS 0x002a
#define HCI_WIRELESS 0x0056
/* field definitions */
#define HCI_LCD_BRIGHTNESS_BITS 3
@ -98,9 +103,14 @@ MODULE_LICENSE("GPL");
#define HCI_VIDEO_OUT_LCD 0x1
#define HCI_VIDEO_OUT_CRT 0x2
#define HCI_VIDEO_OUT_TV 0x4
#define HCI_WIRELESS_KILL_SWITCH 0x01
#define HCI_WIRELESS_BT_PRESENT 0x0f
#define HCI_WIRELESS_BT_ATTACH 0x40
#define HCI_WIRELESS_BT_POWER 0x80
static const struct acpi_device_id toshiba_device_ids[] = {
{"TOS6200", 0},
{"TOS6208", 0},
{"TOS1900", 0},
{"", 0},
};
@ -193,7 +203,7 @@ static acpi_status hci_raw(const u32 in[HCI_WORDS], u32 out[HCI_WORDS])
return status;
}
/* common hci tasks (get or set one value)
/* common hci tasks (get or set one or two values)
*
* In addition to the ACPI status, the HCI system returns a result which
* may be useful (such as "not supported").
@ -218,6 +228,152 @@ static acpi_status hci_read1(u32 reg, u32 * out1, u32 * result)
return status;
}
static acpi_status hci_write2(u32 reg, u32 in1, u32 in2, u32 *result)
{
u32 in[HCI_WORDS] = { HCI_SET, reg, in1, in2, 0, 0 };
u32 out[HCI_WORDS];
acpi_status status = hci_raw(in, out);
*result = (status == AE_OK) ? out[0] : HCI_FAILURE;
return status;
}
static acpi_status hci_read2(u32 reg, u32 *out1, u32 *out2, u32 *result)
{
u32 in[HCI_WORDS] = { HCI_GET, reg, *out1, *out2, 0, 0 };
u32 out[HCI_WORDS];
acpi_status status = hci_raw(in, out);
*out1 = out[2];
*out2 = out[3];
*result = (status == AE_OK) ? out[0] : HCI_FAILURE;
return status;
}
struct toshiba_acpi_dev {
struct platform_device *p_dev;
struct rfkill *rfk_dev;
struct input_polled_dev *poll_dev;
const char *bt_name;
const char *rfk_name;
bool last_rfk_state;
struct mutex mutex;
};
static struct toshiba_acpi_dev toshiba_acpi = {
.bt_name = "Toshiba Bluetooth",
.rfk_name = "Toshiba RFKill Switch",
.last_rfk_state = false,
};
/* Bluetooth rfkill handlers */
static u32 hci_get_bt_present(bool *present)
{
u32 hci_result;
u32 value, value2;
value = 0;
value2 = 0;
hci_read2(HCI_WIRELESS, &value, &value2, &hci_result);
if (hci_result == HCI_SUCCESS)
*present = (value & HCI_WIRELESS_BT_PRESENT) ? true : false;
return hci_result;
}
static u32 hci_get_bt_on(bool *on)
{
u32 hci_result;
u32 value, value2;
value = 0;
value2 = 0x0001;
hci_read2(HCI_WIRELESS, &value, &value2, &hci_result);
if (hci_result == HCI_SUCCESS)
*on = (value & HCI_WIRELESS_BT_POWER) &&
(value & HCI_WIRELESS_BT_ATTACH);
return hci_result;
}
static u32 hci_get_radio_state(bool *radio_state)
{
u32 hci_result;
u32 value, value2;
value = 0;
value2 = 0x0001;
hci_read2(HCI_WIRELESS, &value, &value2, &hci_result);
*radio_state = value & HCI_WIRELESS_KILL_SWITCH;
return hci_result;
}
static int bt_rfkill_toggle_radio(void *data, enum rfkill_state state)
{
u32 result1, result2;
u32 value;
bool radio_state;
struct toshiba_acpi_dev *dev = data;
value = (state == RFKILL_STATE_UNBLOCKED);
if (hci_get_radio_state(&radio_state) != HCI_SUCCESS)
return -EFAULT;
switch (state) {
case RFKILL_STATE_UNBLOCKED:
if (!radio_state)
return -EPERM;
break;
case RFKILL_STATE_SOFT_BLOCKED:
break;
default:
return -EINVAL;
}
mutex_lock(&dev->mutex);
hci_write2(HCI_WIRELESS, value, HCI_WIRELESS_BT_POWER, &result1);
hci_write2(HCI_WIRELESS, value, HCI_WIRELESS_BT_ATTACH, &result2);
mutex_unlock(&dev->mutex);
if (result1 != HCI_SUCCESS || result2 != HCI_SUCCESS)
return -EFAULT;
return 0;
}
static void bt_poll_rfkill(struct input_polled_dev *poll_dev)
{
bool state_changed;
bool new_rfk_state;
bool value;
u32 hci_result;
struct toshiba_acpi_dev *dev = poll_dev->private;
hci_result = hci_get_radio_state(&value);
if (hci_result != HCI_SUCCESS)
return; /* Can't do anything useful */
new_rfk_state = value;
mutex_lock(&dev->mutex);
state_changed = new_rfk_state != dev->last_rfk_state;
dev->last_rfk_state = new_rfk_state;
mutex_unlock(&dev->mutex);
if (unlikely(state_changed)) {
rfkill_force_state(dev->rfk_dev,
new_rfk_state ?
RFKILL_STATE_SOFT_BLOCKED :
RFKILL_STATE_HARD_BLOCKED);
input_report_switch(poll_dev->input, SW_RFKILL_ALL,
new_rfk_state);
}
}
static struct proc_dir_entry *toshiba_proc_dir /*= 0*/ ;
static struct backlight_device *toshiba_backlight_device;
static int force_fan;
@ -547,6 +703,14 @@ static struct backlight_ops toshiba_backlight_data = {
static void toshiba_acpi_exit(void)
{
if (toshiba_acpi.poll_dev) {
input_unregister_polled_device(toshiba_acpi.poll_dev);
input_free_polled_device(toshiba_acpi.poll_dev);
}
if (toshiba_acpi.rfk_dev)
rfkill_unregister(toshiba_acpi.rfk_dev);
if (toshiba_backlight_device)
backlight_device_unregister(toshiba_backlight_device);
@ -555,6 +719,8 @@ static void toshiba_acpi_exit(void)
if (toshiba_proc_dir)
remove_proc_entry(PROC_TOSHIBA, acpi_root_dir);
platform_device_unregister(toshiba_acpi.p_dev);
return;
}
@ -562,6 +728,10 @@ static int __init toshiba_acpi_init(void)
{
acpi_status status = AE_OK;
u32 hci_result;
bool bt_present;
bool bt_on;
bool radio_on;
int ret = 0;
if (acpi_disabled)
return -ENODEV;
@ -578,6 +748,18 @@ static int __init toshiba_acpi_init(void)
TOSHIBA_ACPI_VERSION);
printk(MY_INFO " HCI method: %s\n", method_hci);
mutex_init(&toshiba_acpi.mutex);
toshiba_acpi.p_dev = platform_device_register_simple("toshiba_acpi",
-1, NULL, 0);
if (IS_ERR(toshiba_acpi.p_dev)) {
ret = PTR_ERR(toshiba_acpi.p_dev);
printk(MY_ERR "unable to register platform device\n");
toshiba_acpi.p_dev = NULL;
toshiba_acpi_exit();
return ret;
}
force_fan = 0;
key_event_valid = 0;
@ -586,19 +768,23 @@ static int __init toshiba_acpi_init(void)
toshiba_proc_dir = proc_mkdir(PROC_TOSHIBA, acpi_root_dir);
if (!toshiba_proc_dir) {
status = AE_ERROR;
toshiba_acpi_exit();
return -ENODEV;
} else {
toshiba_proc_dir->owner = THIS_MODULE;
status = add_device();
if (ACPI_FAILURE(status))
remove_proc_entry(PROC_TOSHIBA, acpi_root_dir);
if (ACPI_FAILURE(status)) {
toshiba_acpi_exit();
return -ENODEV;
}
}
toshiba_backlight_device = backlight_device_register("toshiba",NULL,
toshiba_backlight_device = backlight_device_register("toshiba",
&toshiba_acpi.p_dev->dev,
NULL,
&toshiba_backlight_data);
if (IS_ERR(toshiba_backlight_device)) {
int ret = PTR_ERR(toshiba_backlight_device);
ret = PTR_ERR(toshiba_backlight_device);
printk(KERN_ERR "Could not register toshiba backlight device\n");
toshiba_backlight_device = NULL;
@ -607,7 +793,66 @@ static int __init toshiba_acpi_init(void)
}
toshiba_backlight_device->props.max_brightness = HCI_LCD_BRIGHTNESS_LEVELS - 1;
return (ACPI_SUCCESS(status)) ? 0 : -ENODEV;
/* Register rfkill switch for Bluetooth */
if (hci_get_bt_present(&bt_present) == HCI_SUCCESS && bt_present) {
toshiba_acpi.rfk_dev = rfkill_allocate(&toshiba_acpi.p_dev->dev,
RFKILL_TYPE_BLUETOOTH);
if (!toshiba_acpi.rfk_dev) {
printk(MY_ERR "unable to allocate rfkill device\n");
toshiba_acpi_exit();
return -ENOMEM;
}
toshiba_acpi.rfk_dev->name = toshiba_acpi.bt_name;
toshiba_acpi.rfk_dev->toggle_radio = bt_rfkill_toggle_radio;
toshiba_acpi.rfk_dev->user_claim_unsupported = 1;
toshiba_acpi.rfk_dev->data = &toshiba_acpi;
if (hci_get_bt_on(&bt_on) == HCI_SUCCESS && bt_on) {
toshiba_acpi.rfk_dev->state = RFKILL_STATE_UNBLOCKED;
} else if (hci_get_radio_state(&radio_on) == HCI_SUCCESS &&
radio_on) {
toshiba_acpi.rfk_dev->state = RFKILL_STATE_SOFT_BLOCKED;
} else {
toshiba_acpi.rfk_dev->state = RFKILL_STATE_HARD_BLOCKED;
}
ret = rfkill_register(toshiba_acpi.rfk_dev);
if (ret) {
printk(MY_ERR "unable to register rfkill device\n");
toshiba_acpi_exit();
return -ENOMEM;
}
}
/* Register input device for kill switch */
toshiba_acpi.poll_dev = input_allocate_polled_device();
if (!toshiba_acpi.poll_dev) {
printk(MY_ERR "unable to allocate kill-switch input device\n");
toshiba_acpi_exit();
return -ENOMEM;
}
toshiba_acpi.poll_dev->private = &toshiba_acpi;
toshiba_acpi.poll_dev->poll = bt_poll_rfkill;
toshiba_acpi.poll_dev->poll_interval = 1000; /* msecs */
toshiba_acpi.poll_dev->input->name = toshiba_acpi.rfk_name;
toshiba_acpi.poll_dev->input->id.bustype = BUS_HOST;
toshiba_acpi.poll_dev->input->id.vendor = 0x0930; /* Toshiba USB ID */
set_bit(EV_SW, toshiba_acpi.poll_dev->input->evbit);
set_bit(SW_RFKILL_ALL, toshiba_acpi.poll_dev->input->swbit);
input_report_switch(toshiba_acpi.poll_dev->input, SW_RFKILL_ALL, TRUE);
ret = input_register_polled_device(toshiba_acpi.poll_dev);
if (ret) {
printk(MY_ERR "unable to register kill-switch input device\n");
rfkill_free(toshiba_acpi.rfk_dev);
toshiba_acpi.rfk_dev = NULL;
toshiba_acpi_exit();
return ret;
}
return 0;
}
module_init(toshiba_acpi_init);


@ -1270,7 +1270,7 @@ static int comp_tx(struct eni_dev *eni_dev,int *pcr,int reserved,int *pre,
if (*pre < 3) (*pre)++; /* else fail later */
div = pre_div[*pre]*-*pcr;
DPRINTK("max div %d\n",div);
*res = (TS_CLOCK+div-1)/div-1;
*res = DIV_ROUND_UP(TS_CLOCK, div)-1;
}
if (*res < 0) *res = 0;
if (*res > MID_SEG_MAX_RATE) *res = MID_SEG_MAX_RATE;


@ -635,7 +635,7 @@ static int make_rate (const hrz_dev * dev, u32 c, rounding r,
// take care of rounding
switch (r) {
case round_down:
pre = (br+(c<<div)-1)/(c<<div);
pre = DIV_ROUND_UP(br, c<<div);
// but p must be non-zero
if (!pre)
pre = 1;
@ -668,7 +668,7 @@ static int make_rate (const hrz_dev * dev, u32 c, rounding r,
// take care of rounding
switch (r) {
case round_down:
pre = (br+(c<<div)-1)/(c<<div);
pre = DIV_ROUND_UP(br, c<<div);
break;
case round_nearest:
pre = (br+(c<<div)/2)/(c<<div);
@ -698,7 +698,7 @@ got_it:
if (bits)
*bits = (div<<CLOCK_SELECT_SHIFT) | (pre-1);
if (actual) {
*actual = (br + (pre<<div) - 1) / (pre<<div);
*actual = DIV_ROUND_UP(br, pre<<div);
PRINTD (DBG_QOS, "actual rate: %u", *actual);
}
return 0;
@ -1967,7 +1967,7 @@ static int __devinit hrz_init (hrz_dev * dev) {
// Set the max AAL5 cell count to be just enough to contain the
// largest AAL5 frame that the user wants to receive
wr_regw (dev, MAX_AAL5_CELL_COUNT_OFF,
(max_rx_size + ATM_AAL5_TRAILER + ATM_CELL_PAYLOAD - 1) / ATM_CELL_PAYLOAD);
DIV_ROUND_UP(max_rx_size + ATM_AAL5_TRAILER, ATM_CELL_PAYLOAD));
// Enable receive
wr_regw (dev, RX_CONFIG_OFF, rd_regw (dev, RX_CONFIG_OFF) | RX_ENABLE);


@ -1114,11 +1114,8 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
rpp = &vc->rcv.rx_pool;
__skb_queue_tail(&rpp->queue, skb);
rpp->len += skb->len;
if (!rpp->count++)
rpp->first = skb;
*rpp->last = skb;
rpp->last = &skb->next;
if (stat & SAR_RSQE_EPDU) {
unsigned char *l1l2;
@ -1145,7 +1142,7 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
atomic_inc(&vcc->stats->rx_err);
return;
}
if (rpp->count > 1) {
if (skb_queue_len(&rpp->queue) > 1) {
struct sk_buff *sb;
skb = dev_alloc_skb(rpp->len);
@ -1161,12 +1158,9 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
dev_kfree_skb(skb);
return;
}
sb = rpp->first;
for (i = 0; i < rpp->count; i++) {
skb_queue_walk(&rpp->queue, sb)
memcpy(skb_put(skb, sb->len),
sb->data, sb->len);
sb = sb->next;
}
recycle_rx_pool_skb(card, rpp);
@ -1180,7 +1174,6 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
return;
}
skb->next = NULL;
flush_rx_pool(card, rpp);
if (!atm_charge(vcc, skb->truesize)) {
@ -1918,25 +1911,18 @@ recycle_rx_skb(struct idt77252_dev *card, struct sk_buff *skb)
static void
flush_rx_pool(struct idt77252_dev *card, struct rx_pool *rpp)
{
skb_queue_head_init(&rpp->queue);
rpp->len = 0;
rpp->count = 0;
rpp->first = NULL;
rpp->last = &rpp->first;
}
static void
recycle_rx_pool_skb(struct idt77252_dev *card, struct rx_pool *rpp)
{
struct sk_buff *skb, *next;
int i;
struct sk_buff *skb, *tmp;
skb = rpp->first;
for (i = 0; i < rpp->count; i++) {
next = skb->next;
skb->next = NULL;
skb_queue_walk_safe(&rpp->queue, skb, tmp)
recycle_rx_skb(card, skb);
skb = next;
}
flush_rx_pool(card, rpp);
}
@ -2537,7 +2523,7 @@ idt77252_close(struct atm_vcc *vcc)
waitfor_idle(card);
spin_unlock_irqrestore(&card->cmd_lock, flags);
if (vc->rcv.rx_pool.count) {
if (skb_queue_len(&vc->rcv.rx_pool.queue) != 0) {
DPRINTK("%s: closing a VC with pending rx buffers.\n",
card->name);
@ -2970,7 +2956,7 @@ close_card_oam(struct idt77252_dev *card)
waitfor_idle(card);
spin_unlock_irqrestore(&card->cmd_lock, flags);
if (vc->rcv.rx_pool.count) {
if (skb_queue_len(&vc->rcv.rx_pool.queue) != 0) {
DPRINTK("%s: closing a VC "
"with pending rx buffers.\n",
card->name);


@ -173,10 +173,8 @@ struct scq_info
};
struct rx_pool {
struct sk_buff *first;
struct sk_buff **last;
struct sk_buff_head queue;
unsigned int len;
unsigned int count;
};
struct aal1 {


@ -496,8 +496,8 @@ static int open_rx_first(struct atm_vcc *vcc)
vcc->qos.rxtp.max_sdu = 65464;
/* fix this - we may want to receive 64kB SDUs
later */
cells = (vcc->qos.rxtp.max_sdu+ATM_AAL5_TRAILER+
ATM_CELL_PAYLOAD-1)/ATM_CELL_PAYLOAD;
cells = DIV_ROUND_UP(vcc->qos.rxtp.max_sdu + ATM_AAL5_TRAILER,
ATM_CELL_PAYLOAD);
zatm_vcc->pool = pool_index(cells*ATM_CELL_PAYLOAD);
}
else {
@ -820,7 +820,7 @@ static int alloc_shaper(struct atm_dev *dev,int *pcr,int min,int max,int ubr)
}
else {
i = 255;
m = (ATM_OC3_PCR*255+max-1)/max;
m = DIV_ROUND_UP(ATM_OC3_PCR*255, max);
}
}
if (i > m) {


@ -159,11 +159,8 @@ struct aoedev {
sector_t ssize;
struct timer_list timer;
spinlock_t lock;
struct sk_buff *sendq_hd; /* packets needing to be sent, list head */
struct sk_buff *sendq_tl;
struct sk_buff *skbpool_hd;
struct sk_buff *skbpool_tl;
int nskbpool;
struct sk_buff_head sendq;
struct sk_buff_head skbpool;
mempool_t *bufpool; /* for deadlock-free Buf allocation */
struct list_head bufq; /* queue of bios to work on */
struct buf *inprocess; /* the one we're currently working on */
@ -199,7 +196,7 @@ int aoedev_flush(const char __user *str, size_t size);
int aoenet_init(void);
void aoenet_exit(void);
void aoenet_xmit(struct sk_buff *);
void aoenet_xmit(struct sk_buff_head *);
int is_aoe_netif(struct net_device *ifp);
int set_aoe_iflist(const char __user *str, size_t size);


@ -158,9 +158,9 @@ aoeblk_release(struct inode *inode, struct file *filp)
static int
aoeblk_make_request(struct request_queue *q, struct bio *bio)
{
struct sk_buff_head queue;
struct aoedev *d;
struct buf *buf;
struct sk_buff *sl;
ulong flags;
blk_queue_bounce(q, &bio);
@ -213,11 +213,11 @@ aoeblk_make_request(struct request_queue *q, struct bio *bio)
list_add_tail(&buf->bufs, &d->bufq);
aoecmd_work(d);
sl = d->sendq_hd;
d->sendq_hd = d->sendq_tl = NULL;
__skb_queue_head_init(&queue);
skb_queue_splice_init(&d->sendq, &queue);
spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl);
aoenet_xmit(&queue);
return 0;
}


@ -9,6 +9,7 @@
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/smp_lock.h>
#include <linux/skbuff.h>
#include "aoe.h"
enum {
@ -103,7 +104,12 @@ loop:
spin_lock_irqsave(&d->lock, flags);
goto loop;
}
aoenet_xmit(skb);
if (skb) {
struct sk_buff_head queue;
__skb_queue_head_init(&queue);
__skb_queue_tail(&queue, skb);
aoenet_xmit(&queue);
}
aoecmd_cfg(major, minor);
return 0;
}


@ -114,29 +114,22 @@ ifrotate(struct aoetgt *t)
static void
skb_pool_put(struct aoedev *d, struct sk_buff *skb)
{
if (!d->skbpool_hd)
d->skbpool_hd = skb;
else
d->skbpool_tl->next = skb;
d->skbpool_tl = skb;
__skb_queue_tail(&d->skbpool, skb);
}
static struct sk_buff *
skb_pool_get(struct aoedev *d)
{
struct sk_buff *skb;
struct sk_buff *skb = skb_peek(&d->skbpool);
skb = d->skbpool_hd;
if (skb && atomic_read(&skb_shinfo(skb)->dataref) == 1) {
d->skbpool_hd = skb->next;
skb->next = NULL;
__skb_unlink(skb, &d->skbpool);
return skb;
}
if (d->nskbpool < NSKBPOOLMAX
&& (skb = new_skb(ETH_ZLEN))) {
d->nskbpool++;
if (skb_queue_len(&d->skbpool) < NSKBPOOLMAX &&
(skb = new_skb(ETH_ZLEN)))
return skb;
}
return NULL;
}
@ -293,29 +286,22 @@ aoecmd_ata_rw(struct aoedev *d)
skb->dev = t->ifp->nd;
skb = skb_clone(skb, GFP_ATOMIC);
if (skb) {
if (d->sendq_hd)
d->sendq_tl->next = skb;
else
d->sendq_hd = skb;
d->sendq_tl = skb;
}
if (skb)
__skb_queue_tail(&d->sendq, skb);
return 1;
}
/* some callers cannot sleep, and they can call this function,
* transmitting the packets later, when interrupts are on
*/
static struct sk_buff *
aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff **tail)
static void
aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff_head *queue)
{
struct aoe_hdr *h;
struct aoe_cfghdr *ch;
struct sk_buff *skb, *sl, *sl_tail;
struct sk_buff *skb;
struct net_device *ifp;
sl = sl_tail = NULL;
read_lock(&dev_base_lock);
for_each_netdev(&init_net, ifp) {
dev_hold(ifp);
@ -329,8 +315,7 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff **tail)
}
skb_put(skb, sizeof *h + sizeof *ch);
skb->dev = ifp;
if (sl_tail == NULL)
sl_tail = skb;
__skb_queue_tail(queue, skb);
h = (struct aoe_hdr *) skb_mac_header(skb);
memset(h, 0, sizeof *h + sizeof *ch);
@ -342,16 +327,10 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff **tail)
h->minor = aoeminor;
h->cmd = AOECMD_CFG;
skb->next = sl;
sl = skb;
cont:
dev_put(ifp);
}
read_unlock(&dev_base_lock);
if (tail != NULL)
*tail = sl_tail;
return sl;
}
static void
@ -406,11 +385,7 @@ resend(struct aoedev *d, struct aoetgt *t, struct frame *f)
skb = skb_clone(skb, GFP_ATOMIC);
if (skb == NULL)
return;
if (d->sendq_hd)
d->sendq_tl->next = skb;
else
d->sendq_hd = skb;
d->sendq_tl = skb;
__skb_queue_tail(&d->sendq, skb);
}
static int
@ -508,16 +483,15 @@ ata_scnt(unsigned char *packet) {
static void
rexmit_timer(ulong vp)
{
struct sk_buff_head queue;
struct aoedev *d;
struct aoetgt *t, **tt, **te;
struct aoeif *ifp;
struct frame *f, *e;
struct sk_buff *sl;
register long timeout;
ulong flags, n;
d = (struct aoedev *) vp;
sl = NULL;
/* timeout is always ~150% of the moving average */
timeout = d->rttavg;
@ -589,7 +563,7 @@ rexmit_timer(ulong vp)
}
}
if (d->sendq_hd) {
if (!skb_queue_empty(&d->sendq)) {
n = d->rttavg <<= 1;
if (n > MAXTIMER)
d->rttavg = MAXTIMER;
@ -600,15 +574,15 @@ rexmit_timer(ulong vp)
aoecmd_work(d);
}
sl = d->sendq_hd;
d->sendq_hd = d->sendq_tl = NULL;
__skb_queue_head_init(&queue);
skb_queue_splice_init(&d->sendq, &queue);
d->timer.expires = jiffies + TIMERTICK;
add_timer(&d->timer);
spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl);
aoenet_xmit(&queue);
}
/* enters with d->lock held */
@ -772,12 +746,12 @@ diskstats(struct gendisk *disk, struct bio *bio, ulong duration, sector_t sector
void
aoecmd_ata_rsp(struct sk_buff *skb)
{
struct sk_buff_head queue;
struct aoedev *d;
struct aoe_hdr *hin, *hout;
struct aoe_atahdr *ahin, *ahout;
struct frame *f;
struct buf *buf;
struct sk_buff *sl;
struct aoetgt *t;
struct aoeif *ifp;
register long n;
@ -898,21 +872,21 @@ aoecmd_ata_rsp(struct sk_buff *skb)
aoecmd_work(d);
xmit:
sl = d->sendq_hd;
d->sendq_hd = d->sendq_tl = NULL;
__skb_queue_head_init(&queue);
skb_queue_splice_init(&d->sendq, &queue);
spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl);
aoenet_xmit(&queue);
}
void
aoecmd_cfg(ushort aoemajor, unsigned char aoeminor)
{
struct sk_buff *sl;
struct sk_buff_head queue;
sl = aoecmd_cfg_pkts(aoemajor, aoeminor, NULL);
aoenet_xmit(sl);
__skb_queue_head_init(&queue);
aoecmd_cfg_pkts(aoemajor, aoeminor, &queue);
aoenet_xmit(&queue);
}
struct sk_buff *
@ -1081,7 +1055,12 @@ aoecmd_cfg_rsp(struct sk_buff *skb)
spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl);
if (sl) {
struct sk_buff_head queue;
__skb_queue_head_init(&queue);
__skb_queue_tail(&queue, sl);
aoenet_xmit(&queue);
}
}
void


@ -188,14 +188,12 @@ skbfree(struct sk_buff *skb)
static void
skbpoolfree(struct aoedev *d)
{
struct sk_buff *skb;
struct sk_buff *skb, *tmp;
while ((skb = d->skbpool_hd)) {
d->skbpool_hd = skb->next;
skb->next = NULL;
skb_queue_walk_safe(&d->skbpool, skb, tmp)
skbfree(skb);
}
d->skbpool_tl = NULL;
__skb_queue_head_init(&d->skbpool);
}
/* find it or malloc it */
@ -217,6 +215,8 @@ aoedev_by_sysminor_m(ulong sysminor)
goto out;
INIT_WORK(&d->work, aoecmd_sleepwork);
spin_lock_init(&d->lock);
skb_queue_head_init(&d->sendq);
skb_queue_head_init(&d->skbpool);
init_timer(&d->timer);
d->timer.data = (ulong) d;
d->timer.function = dummy_timer;


@ -7,6 +7,7 @@
#include <linux/hdreg.h>
#include <linux/blkdev.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include "aoe.h"
MODULE_LICENSE("GPL");


@ -95,13 +95,12 @@ mac_addr(char addr[6])
}
void
aoenet_xmit(struct sk_buff *sl)
aoenet_xmit(struct sk_buff_head *queue)
{
struct sk_buff *skb;
struct sk_buff *skb, *tmp;
while ((skb = sl)) {
sl = sl->next;
skb->next = skb->prev = NULL;
skb_queue_walk_safe(queue, skb, tmp) {
__skb_unlink(skb, queue);
dev_queue_xmit(skb);
}
}


@ -352,14 +352,14 @@ static int bcsp_flush(struct hci_uart *hu)
/* Remove ack'ed packets */
static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
{
struct sk_buff *skb, *tmp;
unsigned long flags;
struct sk_buff *skb;
int i, pkts_to_be_removed;
u8 seqno;
spin_lock_irqsave(&bcsp->unack.lock, flags);
pkts_to_be_removed = bcsp->unack.qlen;
pkts_to_be_removed = skb_queue_len(&bcsp->unack);
seqno = bcsp->msgq_txseq;
while (pkts_to_be_removed) {
@ -373,19 +373,19 @@ static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
BT_ERR("Peer acked invalid packet");
BT_DBG("Removing %u pkts out of %u, up to seqno %u",
pkts_to_be_removed, bcsp->unack.qlen, (seqno - 1) & 0x07);
pkts_to_be_removed, skb_queue_len(&bcsp->unack),
(seqno - 1) & 0x07);
for (i = 0, skb = ((struct sk_buff *) &bcsp->unack)->next; i < pkts_to_be_removed
&& skb != (struct sk_buff *) &bcsp->unack; i++) {
struct sk_buff *nskb;
i = 0;
skb_queue_walk_safe(&bcsp->unack, skb, tmp) {
if (i++ >= pkts_to_be_removed)
break;
nskb = skb->next;
__skb_unlink(skb, &bcsp->unack);
kfree_skb(skb);
skb = nskb;
}
if (bcsp->unack.qlen == 0)
if (skb_queue_empty(&bcsp->unack))
del_timer(&bcsp->tbcsp);
spin_unlock_irqrestore(&bcsp->unack.lock, flags);


@ -70,8 +70,8 @@ static inline void _urb_queue_head(struct _urb_queue *q, struct _urb *_urb)
{
unsigned long flags;
spin_lock_irqsave(&q->lock, flags);
/* _urb_unlink needs to know which spinlock to use, thus mb(). */
_urb->queue = q; mb(); list_add(&_urb->list, &q->head);
/* _urb_unlink needs to know which spinlock to use, thus smp_mb(). */
_urb->queue = q; smp_mb(); list_add(&_urb->list, &q->head);
spin_unlock_irqrestore(&q->lock, flags);
}
@ -79,8 +79,8 @@ static inline void _urb_queue_tail(struct _urb_queue *q, struct _urb *_urb)
{
unsigned long flags;
spin_lock_irqsave(&q->lock, flags);
/* _urb_unlink needs to know which spinlock to use, thus mb(). */
_urb->queue = q; mb(); list_add_tail(&_urb->list, &q->head);
/* _urb_unlink needs to know which spinlock to use, thus smp_mb(). */
_urb->queue = q; smp_mb(); list_add_tail(&_urb->list, &q->head);
spin_unlock_irqrestore(&q->lock, flags);
}
@ -89,7 +89,7 @@ static inline void _urb_unlink(struct _urb *_urb)
struct _urb_queue *q;
unsigned long flags;
mb();
smp_mb();
q = _urb->queue;
/* If q is NULL, it will die at easy-to-debug NULL pointer dereference.
No need to BUG(). */
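mb() always emits a hardware memory barrier; smp_mb() emits one only on SMP builds and degrades to a compiler barrier on UP, which is sufficient here because the ordering only matters against another CPU. The pairing the comments describe, condensed as a sketch:

	/* enqueue side: publish ->queue before the list link becomes visible */
	_urb->queue = q;
	smp_mb();
	list_add(&_urb->list, &q->head);

	/* unlink side: order the read of ->queue the same way */
	smp_mb();
	q = _urb->queue;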


@ -828,15 +828,18 @@ static int old_capi_manufacturer(unsigned int cmd, void __user *data)
return -ESRCH;
if (card->load_firmware == NULL) {
printk(KERN_DEBUG "kcapi: load: no load function\n");
capi_ctr_put(card);
return -ESRCH;
}
if (ldef.t4file.len <= 0) {
printk(KERN_DEBUG "kcapi: load: invalid parameter: length of t4file is %d ?\n", ldef.t4file.len);
capi_ctr_put(card);
return -EINVAL;
}
if (ldef.t4file.data == NULL) {
printk(KERN_DEBUG "kcapi: load: invalid parameter: dataptr is 0\n");
capi_ctr_put(card);
return -EINVAL;
}
@ -849,6 +852,7 @@ static int old_capi_manufacturer(unsigned int cmd, void __user *data)
if (card->cardstate != CARD_DETECTED) {
printk(KERN_INFO "kcapi: load: contr=%d not in detect state\n", ldef.contr);
capi_ctr_put(card);
return -EBUSY;
}
card->cardstate = CARD_LOADING;
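The added capi_ctr_put() calls plug reference leaks on the early error returns between acquiring the controller and starting the load. A sketch of the alternative single-exit shape (illustrative only; the patch itself adds the put to each return instead):

	if (card->load_firmware == NULL) {
		retval = -ESRCH;
		goto put;
	}
	if (ldef.t4file.len <= 0 || ldef.t4file.data == NULL) {
		retval = -EINVAL;
		goto put;
	}
	/* ... perform the load; the reference is dropped when it completes ... */
	return 0;
put:
	capi_ctr_put(card);
	return retval;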


@ -183,8 +183,8 @@
#define D_FREG_MASK 0xF
struct zt {
unsigned short z1; /* Z1 pointer 16 Bit */
unsigned short z2; /* Z2 pointer 16 Bit */
__le16 z1; /* Z1 pointer 16 Bit */
__le16 z2; /* Z2 pointer 16 Bit */
};
struct dfifo {


@ -43,7 +43,7 @@ MODULE_LICENSE("GPL");
module_param(debug, uint, 0);
static LIST_HEAD(HFClist);
DEFINE_RWLOCK(HFClock);
static DEFINE_RWLOCK(HFClock);
enum {
HFC_CCD_2BD0,
@ -88,7 +88,7 @@ struct hfcPCI_hw {
unsigned char bswapped;
unsigned char protocol;
int nt_timer;
unsigned char *pci_io; /* start of PCI IO memory */
unsigned char __iomem *pci_io; /* start of PCI IO memory */
dma_addr_t dmahandle;
void *fifos; /* FIFO memory */
int last_bfifo_cnt[2];
@ -153,7 +153,7 @@ release_io_hfcpci(struct hfc_pci *hc)
pci_write_config_word(hc->pdev, PCI_COMMAND, 0);
del_timer(&hc->hw.timer);
pci_free_consistent(hc->pdev, 0x8000, hc->hw.fifos, hc->hw.dmahandle);
iounmap((void *)hc->hw.pci_io);
iounmap(hc->hw.pci_io);
}
/*
@ -366,8 +366,7 @@ static void hfcpci_clear_fifo_tx(struct hfc_pci *hc, int fifo)
bzt->f2 = MAX_B_FRAMES;
bzt->f1 = bzt->f2; /* init F pointers to remain constant */
bzt->za[MAX_B_FRAMES].z1 = cpu_to_le16(B_FIFO_SIZE + B_SUB_VAL - 1);
bzt->za[MAX_B_FRAMES].z2 = cpu_to_le16(
le16_to_cpu(bzt->za[MAX_B_FRAMES].z1 - 1));
bzt->za[MAX_B_FRAMES].z2 = cpu_to_le16(B_FIFO_SIZE + B_SUB_VAL - 2);
if (fifo_state)
hc->hw.fifo_en |= fifo_state;
Write_hfc(hc, HFCPCI_FIFO_EN, hc->hw.fifo_en);
@ -482,7 +481,7 @@ receive_dmsg(struct hfc_pci *hc)
df->f2 = ((df->f2 + 1) & MAX_D_FRAMES) |
(MAX_D_FRAMES + 1); /* next buffer */
df->za[df->f2 & D_FREG_MASK].z2 =
cpu_to_le16((zp->z2 + rcnt) & (D_FIFO_SIZE - 1));
cpu_to_le16((le16_to_cpu(zp->z2) + rcnt) & (D_FIFO_SIZE - 1));
} else {
dch->rx_skb = mI_alloc_skb(rcnt - 3, GFP_ATOMIC);
if (!dch->rx_skb) {
@ -523,10 +522,10 @@ receive_dmsg(struct hfc_pci *hc)
/*
* check for transparent receive data and read max one threshold size if avail
*/
int
static int
hfcpci_empty_fifo_trans(struct bchannel *bch, struct bzfifo *bz, u_char *bdata)
{
unsigned short *z1r, *z2r;
__le16 *z1r, *z2r;
int new_z2, fcnt, maxlen;
u_char *ptr, *ptr1;
@ -576,7 +575,7 @@ hfcpci_empty_fifo_trans(struct bchannel *bch, struct bzfifo *bz, u_char *bdata)
/*
* B-channel main receive routine
*/
void
static void
main_rec_hfcpci(struct bchannel *bch)
{
struct hfc_pci *hc = bch->hw;
@ -724,7 +723,7 @@ hfcpci_fill_fifo(struct bchannel *bch)
struct bzfifo *bz;
u_char *bdata;
u_char new_f1, *src, *dst;
unsigned short *z1t, *z2t;
__le16 *z1t, *z2t;
if ((bch->debug & DEBUG_HW_BCHANNEL) && !(bch->debug & DEBUG_HW_BFIFO))
printk(KERN_DEBUG "%s\n", __func__);
@ -1679,7 +1678,7 @@ hfcpci_l2l1B(struct mISDNchannel *ch, struct sk_buff *skb)
* called for card init message
*/
void
static void
inithfcpci(struct hfc_pci *hc)
{
printk(KERN_DEBUG "inithfcpci: entered\n");
@ -1966,7 +1965,7 @@ setup_hw(struct hfc_pci *hc)
printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n");
return 1;
}
hc->hw.pci_io = (char *)(ulong)hc->pdev->resource[1].start;
hc->hw.pci_io = (char __iomem *)(unsigned long)hc->pdev->resource[1].start;
if (!hc->hw.pci_io) {
printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n");


@ -1533,8 +1533,10 @@ static int isdn_ppp_mp_bundle_array_init(void)
int sz = ISDN_MAX_CHANNELS*sizeof(ippp_bundle);
if( (isdn_ppp_bundle_arr = kzalloc(sz, GFP_KERNEL)) == NULL )
return -ENOMEM;
for( i = 0; i < ISDN_MAX_CHANNELS; i++ )
for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
spin_lock_init(&isdn_ppp_bundle_arr[i].lock);
skb_queue_head_init(&isdn_ppp_bundle_arr[i].frags);
}
return 0;
}
@ -1567,7 +1569,7 @@ static int isdn_ppp_mp_init( isdn_net_local * lp, ippp_bundle * add_to )
if ((lp->netdev->pb = isdn_ppp_mp_bundle_alloc()) == NULL)
return -ENOMEM;
lp->next = lp->last = lp; /* nobody else in a queue */
lp->netdev->pb->frags = NULL;
skb_queue_head_init(&lp->netdev->pb->frags);
lp->netdev->pb->frames = 0;
lp->netdev->pb->seq = UINT_MAX;
}
@ -1579,28 +1581,29 @@ static int isdn_ppp_mp_init( isdn_net_local * lp, ippp_bundle * add_to )
static u32 isdn_ppp_mp_get_seq( int short_seq,
struct sk_buff * skb, u32 last_seq );
static struct sk_buff * isdn_ppp_mp_discard( ippp_bundle * mp,
struct sk_buff * from, struct sk_buff * to );
static void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp,
struct sk_buff * from, struct sk_buff * to );
static void isdn_ppp_mp_free_skb( ippp_bundle * mp, struct sk_buff * skb );
static void isdn_ppp_mp_discard(ippp_bundle *mp, struct sk_buff *from,
struct sk_buff *to);
static void isdn_ppp_mp_reassembly(isdn_net_dev *net_dev, isdn_net_local *lp,
struct sk_buff *from, struct sk_buff *to,
u32 lastseq);
static void isdn_ppp_mp_free_skb(ippp_bundle *mp, struct sk_buff *skb);
static void isdn_ppp_mp_print_recv_pkt( int slot, struct sk_buff * skb );
static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
struct sk_buff *skb)
struct sk_buff *skb)
{
struct ippp_struct *is;
isdn_net_local * lpq;
ippp_bundle * mp;
isdn_mppp_stats * stats;
struct sk_buff * newfrag, * frag, * start, *nextf;
struct sk_buff *newfrag, *frag, *start, *nextf;
u32 newseq, minseq, thisseq;
isdn_mppp_stats *stats;
struct ippp_struct *is;
unsigned long flags;
isdn_net_local *lpq;
ippp_bundle *mp;
int slot;
spin_lock_irqsave(&net_dev->pb->lock, flags);
mp = net_dev->pb;
stats = &mp->stats;
mp = net_dev->pb;
stats = &mp->stats;
slot = lp->ppp_slot;
if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: lp->ppp_slot(%d)\n",
@ -1611,20 +1614,19 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
return;
}
is = ippp_table[slot];
if( ++mp->frames > stats->max_queue_len )
if (++mp->frames > stats->max_queue_len)
stats->max_queue_len = mp->frames;
if (is->debug & 0x8)
isdn_ppp_mp_print_recv_pkt(lp->ppp_slot, skb);
newseq = isdn_ppp_mp_get_seq(is->mpppcfg & SC_IN_SHORT_SEQ,
skb, is->last_link_seqno);
newseq = isdn_ppp_mp_get_seq(is->mpppcfg & SC_IN_SHORT_SEQ,
skb, is->last_link_seqno);
/* if this packet seq # is less than last already processed one,
* toss it right away, but check for sequence start case first
*/
if( mp->seq > MP_LONGSEQ_MAX && (newseq & MP_LONGSEQ_MAXBIT) ) {
if (mp->seq > MP_LONGSEQ_MAX && (newseq & MP_LONGSEQ_MAXBIT)) {
mp->seq = newseq; /* the first packet: required for
* rfc1990 non-compliant clients --
* prevents constant packet toss */
@ -1634,7 +1636,7 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
spin_unlock_irqrestore(&mp->lock, flags);
return;
}
/* find the minimum received sequence number over all links */
is->last_link_seqno = minseq = newseq;
for (lpq = net_dev->queue;;) {
@ -1655,22 +1657,31 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* packets */
newfrag = skb;
/* if this new fragment is before the first one, then enqueue it now. */
if ((frag = mp->frags) == NULL || MP_LT(newseq, MP_SEQ(frag))) {
newfrag->next = frag;
mp->frags = frag = newfrag;
newfrag = NULL;
}
/* Insert new fragment into the proper sequence slot. */
skb_queue_walk(&mp->frags, frag) {
if (MP_SEQ(frag) == newseq) {
isdn_ppp_mp_free_skb(mp, newfrag);
newfrag = NULL;
break;
}
if (MP_LT(newseq, MP_SEQ(frag))) {
__skb_queue_before(&mp->frags, frag, newfrag);
newfrag = NULL;
break;
}
}
if (newfrag)
__skb_queue_tail(&mp->frags, newfrag);
start = MP_FLAGS(frag) & MP_BEGIN_FRAG &&
MP_SEQ(frag) == mp->seq ? frag : NULL;
frag = skb_peek(&mp->frags);
start = ((MP_FLAGS(frag) & MP_BEGIN_FRAG) &&
(MP_SEQ(frag) == mp->seq)) ? frag : NULL;
if (!start)
goto check_overflow;
/*
* main fragment traversing loop
/* main fragment traversing loop
*
* try to accomplish several tasks:
* - insert new fragment into the proper sequence slot (once that's done
* newfrag will be set to NULL)
* - reassemble any complete fragment sequence (non-null 'start'
* indicates there is a contiguous sequence present)
* - discard any incomplete sequences that are below minseq -- due
@ -1679,71 +1690,46 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* come to complete such sequence and it should be discarded
*
* loop completes when we accomplished the following tasks:
* - new fragment is inserted in the proper sequence ('newfrag' is
* set to NULL)
* - we hit a gap in the sequence, so no reassembly/processing is
* possible ('start' would be set to NULL)
*
* algorithm for this code is derived from code in the book
* 'PPP Design And Debugging' by James Carlson (Addison-Wesley)
*/
while (start != NULL || newfrag != NULL) {
skb_queue_walk_safe(&mp->frags, frag, nextf) {
thisseq = MP_SEQ(frag);
thisseq = MP_SEQ(frag);
nextf = frag->next;
/* drop any duplicate fragments */
if (newfrag != NULL && thisseq == newseq) {
isdn_ppp_mp_free_skb(mp, newfrag);
newfrag = NULL;
}
/* insert new fragment before next element if possible. */
if (newfrag != NULL && (nextf == NULL ||
MP_LT(newseq, MP_SEQ(nextf)))) {
newfrag->next = nextf;
frag->next = nextf = newfrag;
newfrag = NULL;
}
if (start != NULL) {
/* check for misplaced start */
if (start != frag && (MP_FLAGS(frag) & MP_BEGIN_FRAG)) {
printk(KERN_WARNING"isdn_mppp(seq %d): new "
"BEGIN flag with no prior END", thisseq);
stats->seqerrs++;
stats->frame_drops++;
start = isdn_ppp_mp_discard(mp, start,frag);
nextf = frag->next;
}
} else if (MP_LE(thisseq, minseq)) {
if (MP_FLAGS(frag) & MP_BEGIN_FRAG)
/* check for misplaced start */
if (start != frag && (MP_FLAGS(frag) & MP_BEGIN_FRAG)) {
printk(KERN_WARNING"isdn_mppp(seq %d): new "
"BEGIN flag with no prior END", thisseq);
stats->seqerrs++;
stats->frame_drops++;
isdn_ppp_mp_discard(mp, start, frag);
start = frag;
} else if (MP_LE(thisseq, minseq)) {
if (MP_FLAGS(frag) & MP_BEGIN_FRAG)
start = frag;
else {
else {
if (MP_FLAGS(frag) & MP_END_FRAG)
stats->frame_drops++;
if( mp->frags == frag )
mp->frags = nextf;
stats->frame_drops++;
__skb_unlink(frag, &mp->frags);
isdn_ppp_mp_free_skb(mp, frag);
frag = nextf;
continue;
}
}
}
/* if start is non-null and we have end fragment, then
* we have full reassembly sequence -- reassemble
* and process packet now
*/
if (start != NULL && (MP_FLAGS(frag) & MP_END_FRAG)) {
minseq = mp->seq = (thisseq+1) & MP_LONGSEQ_MASK;
/* Reassemble the packet then dispatch it */
isdn_ppp_mp_reassembly(net_dev, lp, start, nextf);
start = NULL;
frag = NULL;
mp->frags = nextf;
}
/* if we have end fragment, then we have full reassembly
* sequence -- reassemble and process packet now
*/
if (MP_FLAGS(frag) & MP_END_FRAG) {
minseq = mp->seq = (thisseq+1) & MP_LONGSEQ_MASK;
/* Reassemble the packet then dispatch it */
isdn_ppp_mp_reassembly(net_dev, lp, start, frag, thisseq);
start = NULL;
frag = NULL;
}
/* check if need to update start pointer: if we just
* reassembled the packet and sequence is contiguous
@ -1754,26 +1740,25 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* below low watermark and set start to the next frag or
* clear start ptr.
*/
if (nextf != NULL &&
if (nextf != (struct sk_buff *)&mp->frags &&
((thisseq+1) & MP_LONGSEQ_MASK) == MP_SEQ(nextf)) {
/* if we just reassembled and the next one is here,
* then start another reassembly. */
if (frag == NULL) {
/* if we just reassembled and the next one is here,
* then start another reassembly.
*/
if (frag == NULL) {
if (MP_FLAGS(nextf) & MP_BEGIN_FRAG)
start = nextf;
else
{
printk(KERN_WARNING"isdn_mppp(seq %d):"
" END flag with no following "
"BEGIN", thisseq);
start = nextf;
else {
printk(KERN_WARNING"isdn_mppp(seq %d):"
" END flag with no following "
"BEGIN", thisseq);
stats->seqerrs++;
}
}
} else {
if ( nextf != NULL && frag != NULL &&
MP_LT(thisseq, minseq)) {
} else {
if (nextf != (struct sk_buff *)&mp->frags &&
frag != NULL &&
MP_LT(thisseq, minseq)) {
/* we've got a break in the sequence
* and we not at the end yet
* and we did not just reassembled
@ -1782,41 +1767,39 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* discard all the frames below low watermark
* and start over */
stats->frame_drops++;
mp->frags = isdn_ppp_mp_discard(mp,start,nextf);
isdn_ppp_mp_discard(mp, start, nextf);
}
/* break in the sequence, no reassembly */
start = NULL;
}
frag = nextf;
} /* while -- main loop */
if (mp->frags == NULL)
mp->frags = frag;
start = NULL;
}
if (!start)
break;
}
check_overflow:
/* rather straightforward way to deal with (not very) possible
* queue overflow */
* queue overflow
*/
if (mp->frames > MP_MAX_QUEUE_LEN) {
stats->overflows++;
while (mp->frames > MP_MAX_QUEUE_LEN) {
frag = mp->frags->next;
isdn_ppp_mp_free_skb(mp, mp->frags);
mp->frags = frag;
skb_queue_walk_safe(&mp->frags, frag, nextf) {
if (mp->frames <= MP_MAX_QUEUE_LEN)
break;
__skb_unlink(frag, &mp->frags);
isdn_ppp_mp_free_skb(mp, frag);
}
}
spin_unlock_irqrestore(&mp->lock, flags);
}
static void isdn_ppp_mp_cleanup( isdn_net_local * lp )
static void isdn_ppp_mp_cleanup(isdn_net_local *lp)
{
struct sk_buff * frag = lp->netdev->pb->frags;
struct sk_buff * nextfrag;
while( frag ) {
nextfrag = frag->next;
isdn_ppp_mp_free_skb(lp->netdev->pb, frag);
frag = nextfrag;
struct sk_buff *skb, *tmp;
skb_queue_walk_safe(&lp->netdev->pb->frags, skb, tmp) {
__skb_unlink(skb, &lp->netdev->pb->frags);
isdn_ppp_mp_free_skb(lp->netdev->pb, skb);
}
lp->netdev->pb->frags = NULL;
}
static u32 isdn_ppp_mp_get_seq( int short_seq,
@ -1853,72 +1836,115 @@ static u32 isdn_ppp_mp_get_seq( int short_seq,
return seq;
}
struct sk_buff * isdn_ppp_mp_discard( ippp_bundle * mp,
struct sk_buff * from, struct sk_buff * to )
static void isdn_ppp_mp_discard(ippp_bundle *mp, struct sk_buff *from,
struct sk_buff *to)
{
if( from )
while (from != to) {
struct sk_buff * next = from->next;
isdn_ppp_mp_free_skb(mp, from);
from = next;
if (from) {
struct sk_buff *skb, *tmp;
int freeing = 0;
skb_queue_walk_safe(&mp->frags, skb, tmp) {
if (skb == to)
break;
if (skb == from)
freeing = 1;
if (!freeing)
continue;
__skb_unlink(skb, &mp->frags);
isdn_ppp_mp_free_skb(mp, skb);
}
return from;
}
}
void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp,
struct sk_buff * from, struct sk_buff * to )
static unsigned int calc_tot_len(struct sk_buff_head *queue,
struct sk_buff *from, struct sk_buff *to)
{
ippp_bundle * mp = net_dev->pb;
int proto;
struct sk_buff * skb;
unsigned int tot_len = 0;
struct sk_buff *skb;
int found_start = 0;
skb_queue_walk(queue, skb) {
if (skb == from)
found_start = 1;
if (!found_start)
continue;
tot_len += skb->len - MP_HEADER_LEN;
if (skb == to)
break;
}
return tot_len;
}
/* Reassemble packet using fragments in the reassembly queue from
* 'from' until 'to', inclusive.
*/
static void isdn_ppp_mp_reassembly(isdn_net_dev *net_dev, isdn_net_local *lp,
struct sk_buff *from, struct sk_buff *to,
u32 lastseq)
{
ippp_bundle *mp = net_dev->pb;
unsigned int tot_len;
struct sk_buff *skb;
int proto;
if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n",
__func__, lp->ppp_slot);
return;
}
if( MP_FLAGS(from) == (MP_BEGIN_FRAG | MP_END_FRAG) ) {
if( ippp_table[lp->ppp_slot]->debug & 0x40 )
tot_len = calc_tot_len(&mp->frags, from, to);
if (MP_FLAGS(from) == (MP_BEGIN_FRAG | MP_END_FRAG)) {
if (ippp_table[lp->ppp_slot]->debug & 0x40)
printk(KERN_DEBUG "isdn_mppp: reassembly: frame %d, "
"len %d\n", MP_SEQ(from), from->len );
"len %d\n", MP_SEQ(from), from->len);
skb = from;
skb_pull(skb, MP_HEADER_LEN);
__skb_unlink(skb, &mp->frags);
mp->frames--;
} else {
struct sk_buff * frag;
int n;
struct sk_buff *walk, *tmp;
int found_start = 0;
for(tot_len=n=0, frag=from; frag != to; frag=frag->next, n++)
tot_len += frag->len - MP_HEADER_LEN;
if( ippp_table[lp->ppp_slot]->debug & 0x40 )
if (ippp_table[lp->ppp_slot]->debug & 0x40)
printk(KERN_DEBUG"isdn_mppp: reassembling frames %d "
"to %d, len %d\n", MP_SEQ(from),
(MP_SEQ(from)+n-1) & MP_LONGSEQ_MASK, tot_len );
if( (skb = dev_alloc_skb(tot_len)) == NULL ) {
"to %d, len %d\n", MP_SEQ(from), lastseq,
tot_len);
skb = dev_alloc_skb(tot_len);
if (!skb)
printk(KERN_ERR "isdn_mppp: cannot allocate sk buff "
"of size %d\n", tot_len);
isdn_ppp_mp_discard(mp, from, to);
return;
}
"of size %d\n", tot_len);
while( from != to ) {
unsigned int len = from->len - MP_HEADER_LEN;
found_start = 0;
skb_queue_walk_safe(&mp->frags, walk, tmp) {
if (walk == from)
found_start = 1;
if (!found_start)
continue;
skb_copy_from_linear_data_offset(from, MP_HEADER_LEN,
skb_put(skb,len),
len);
frag = from->next;
isdn_ppp_mp_free_skb(mp, from);
from = frag;
if (skb) {
unsigned int len = walk->len - MP_HEADER_LEN;
skb_copy_from_linear_data_offset(walk, MP_HEADER_LEN,
skb_put(skb, len),
len);
}
__skb_unlink(walk, &mp->frags);
isdn_ppp_mp_free_skb(mp, walk);
if (walk == to)
break;
}
}
if (!skb)
return;
proto = isdn_ppp_strip_proto(skb);
isdn_ppp_push_higher(net_dev, lp, skb, proto);
}
static void isdn_ppp_mp_free_skb(ippp_bundle * mp, struct sk_buff * skb)
static void isdn_ppp_mp_free_skb(ippp_bundle *mp, struct sk_buff *skb)
{
dev_kfree_skb(skb);
mp->frames--;
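The rewrite above turns the open-coded next-pointer fragment list into a real sk_buff_head (mp->frags): new fragments are inserted in sequence order with __skb_queue_before(), duplicates are freed on sight, and every traversal uses the queue walkers. The insertion step, distilled (SEQ() is an illustrative stand-in for the driver's MP_SEQ(); real MP sequence numbers wrap, so the driver compares with MP_LT() rather than plain '<'):

#define SEQ(skb) (*(u32 *)((skb)->cb))	/* illustrative: seq stashed in skb->cb[] */

static void insert_ordered(struct sk_buff_head *q, struct sk_buff *newskb)
{
	struct sk_buff *p;

	skb_queue_walk(q, p) {
		if (SEQ(p) == SEQ(newskb)) {	/* duplicate fragment: drop it */
			kfree_skb(newskb);
			return;
		}
		if (SEQ(newskb) < SEQ(p)) {
			__skb_queue_before(q, p, newskb);
			return;
		}
	}
	__skb_queue_tail(q, newskb);		/* largest sequence seen so far */
}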


@ -124,18 +124,6 @@ mISDN_read(struct file *filep, char *buf, size_t count, loff_t *off)
return ret;
}
static loff_t
mISDN_llseek(struct file *filep, loff_t offset, int orig)
{
return -ESPIPE;
}
static ssize_t
mISDN_write(struct file *filep, const char *buf, size_t count, loff_t *off)
{
return -EOPNOTSUPP;
}
static unsigned int
mISDN_poll(struct file *filep, poll_table *wait)
{
@ -157,8 +145,9 @@ mISDN_poll(struct file *filep, poll_table *wait)
}
static void
dev_expire_timer(struct mISDNtimer *timer)
dev_expire_timer(unsigned long data)
{
struct mISDNtimer *timer = (void *)data;
u_long flags;
spin_lock_irqsave(&timer->dev->lock, flags);
@ -191,7 +180,7 @@ misdn_add_timer(struct mISDNtimerdev *dev, int timeout)
spin_unlock_irqrestore(&dev->lock, flags);
timer->dev = dev;
timer->tl.data = (long)timer;
timer->tl.function = (void *) dev_expire_timer;
timer->tl.function = dev_expire_timer;
init_timer(&timer->tl);
timer->tl.expires = jiffies + ((HZ * (u_long)timeout) / 1000);
add_timer(&timer->tl);
@ -211,6 +200,9 @@ misdn_del_timer(struct mISDNtimerdev *dev, int id)
list_for_each_entry(timer, &dev->pending, list) {
if (timer->id == id) {
list_del_init(&timer->list);
/* RED-PEN AK: race -- timer can be still running on
* other CPU. Needs reference count I think
*/
del_timer(&timer->tl);
ret = timer->id;
kfree(timer);
@ -268,9 +260,7 @@ mISDN_ioctl(struct inode *inode, struct file *filep, unsigned int cmd,
}
static struct file_operations mISDN_fops = {
.llseek = mISDN_llseek,
.read = mISDN_read,
.write = mISDN_write,
.poll = mISDN_poll,
.ioctl = mISDN_ioctl,
.open = mISDN_open,


@ -130,12 +130,12 @@ static const char filename[] = __FILE__;
static const char timeout_msg[] = "*** timeout at %s:%s (line %d) ***\n";
#define TIMEOUT_MSG(lineno) \
printk(timeout_msg, filename,__FUNCTION__,(lineno))
printk(timeout_msg, filename,__func__,(lineno))
static const char invalid_pcb_msg[] =
"*** invalid pcb length %d at %s:%s (line %d) ***\n";
#define INVALID_PCB_MSG(len) \
printk(invalid_pcb_msg, (len),filename,__FUNCTION__,__LINE__)
printk(invalid_pcb_msg, (len),filename,__func__,__LINE__)
static char search_msg[] __initdata = KERN_INFO "%s: Looking for 3c505 adapter at address %#x...";


@ -127,7 +127,6 @@ MODULE_PARM_DESC (multicast_filter_limit, "8139cp: maximum number of filtered mu
(CP)->tx_tail - (CP)->tx_head - 1)
#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
#define RX_OFFSET 2
#define CP_INTERNAL_PHY 32
/* The following settings are log_2(bytes)-4: 0 == 16 bytes .. 6==1024, 7==end of packet. */
@ -552,14 +551,14 @@ rx_status_loop:
printk(KERN_DEBUG "%s: rx slot %d status 0x%x len %d\n",
dev->name, rx_tail, status, len);
buflen = cp->rx_buf_sz + RX_OFFSET;
new_skb = dev_alloc_skb (buflen);
buflen = cp->rx_buf_sz + NET_IP_ALIGN;
new_skb = netdev_alloc_skb(dev, buflen);
if (!new_skb) {
dev->stats.rx_dropped++;
goto rx_next;
}
skb_reserve(new_skb, RX_OFFSET);
skb_reserve(new_skb, NET_IP_ALIGN);
dma_unmap_single(&cp->pdev->dev, mapping,
buflen, PCI_DMA_FROMDEVICE);
@ -1051,19 +1050,20 @@ static void cp_init_hw (struct cp_private *cp)
cpw8_f(Cfg9346, Cfg9346_Lock);
}
static int cp_refill_rx (struct cp_private *cp)
static int cp_refill_rx(struct cp_private *cp)
{
struct net_device *dev = cp->dev;
unsigned i;
for (i = 0; i < CP_RX_RING_SIZE; i++) {
struct sk_buff *skb;
dma_addr_t mapping;
skb = dev_alloc_skb(cp->rx_buf_sz + RX_OFFSET);
skb = netdev_alloc_skb(dev, cp->rx_buf_sz + NET_IP_ALIGN);
if (!skb)
goto err_out;
skb_reserve(skb, RX_OFFSET);
skb_reserve(skb, NET_IP_ALIGN);
mapping = dma_map_single(&cp->pdev->dev, skb->data,
cp->rx_buf_sz, PCI_DMA_FROMDEVICE);
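RX_OFFSET and NET_IP_ALIGN are both 2 on x86, so this is not a behavioral change there; the point is to use the canonical constant (architectures where unaligned loads are cheap, or where receive DMA must be word-aligned, define NET_IP_ALIGN differently) and to let netdev_alloc_skb() tie the skb to the receiving device. The refill idiom after the change, restated:

	struct sk_buff *skb = netdev_alloc_skb(dev, cp->rx_buf_sz + NET_IP_ALIGN);

	if (skb)
		/* 14-byte Ethernet header + 2 puts the IP header on a 4-byte boundary */
		skb_reserve(skb, NET_IP_ALIGN);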


@ -309,7 +309,7 @@ enum RTL8139_registers {
Cfg9346 = 0x50,
Config0 = 0x51,
Config1 = 0x52,
FlashReg = 0x54,
TimerInt = 0x54,
MediaStatus = 0x58,
Config3 = 0x59,
Config4 = 0x5A, /* absent on RTL-8139A */
@ -325,6 +325,7 @@ enum RTL8139_registers {
FIFOTMS = 0x70, /* FIFO Control and test. */
CSCR = 0x74, /* Chip Status and Configuration Register. */
PARA78 = 0x78,
FlashReg = 0xD4, /* Communication with Flash ROM, four bytes. */
PARA7c = 0x7c, /* Magic transceiver parameter register. */
Config5 = 0xD8, /* absent on RTL-8139A */
};
@ -1722,13 +1723,18 @@ static int rtl8139_start_xmit (struct sk_buff *skb, struct net_device *dev)
}
spin_lock_irqsave(&tp->lock, flags);
/*
* Writing to TxStatus triggers a DMA transfer of the data
* copied to tp->tx_buf[entry] above. Use a memory barrier
* to make sure that the device sees the updated data.
*/
wmb();
RTL_W32_F (TxStatus0 + (entry * sizeof (u32)),
tp->tx_flag | max(len, (unsigned int)ETH_ZLEN));
dev->trans_start = jiffies;
tp->cur_tx++;
wmb();
if ((tp->cur_tx - NUM_TX_DESC) == tp->dirty_tx)
netif_stop_queue (dev);
@ -2009,9 +2015,9 @@ no_early_rx:
/* Malloc up new buffer, compatible with net-2e. */
/* Omit the four octet CRC from the length. */
skb = dev_alloc_skb (pkt_size + 2);
skb = netdev_alloc_skb(dev, pkt_size + NET_IP_ALIGN);
if (likely(skb)) {
skb_reserve (skb, 2); /* 16 byte align the IP fields. */
skb_reserve (skb, NET_IP_ALIGN); /* 16 byte align the IP fields. */
#if RX_BUF_IDX == 3
wrap_copy(skb, rx_ring, ring_offset+4, pkt_size);
#else


@ -1813,7 +1813,7 @@ config FEC2
config FEC_MPC52xx
tristate "MPC52xx FEC driver"
depends on PPC_MERGE && PPC_MPC52xx && PPC_BESTCOMM_FEC
depends on PPC_MPC52xx && PPC_BESTCOMM_FEC
select CRC32
select PHYLIB
---help---
@ -1840,6 +1840,17 @@ config NE_H8300
Say Y here if you want to use the NE2000 compatible
controller on the Renesas H8/300 processor.
config ATL2
tristate "Atheros L2 Fast Ethernet support"
depends on PCI
select CRC32
select MII
help
This driver supports the Atheros L2 fast ethernet adapter.
To compile this driver as a module, choose M here. The module
will be called atl2.
source "drivers/net/fs_enet/Kconfig"
endif # NET_ETHERNET
@ -1927,15 +1938,6 @@ config E1000
To compile this driver as a module, choose M here. The module
will be called e1000.
config E1000_DISABLE_PACKET_SPLIT
bool "Disable Packet Split for PCI express adapters"
depends on E1000
help
Say Y here if you want to use the legacy receive path for PCI express
hardware.
If in doubt, say N.
config E1000E
tristate "Intel(R) PRO/1000 PCI-Express Gigabit Ethernet support"
depends on PCI && (!SPARC32 || BROKEN)
@ -2046,6 +2048,7 @@ config R8169
tristate "Realtek 8169 gigabit ethernet support"
depends on PCI
select CRC32
select MII
---help---
Say Y here if you have a Realtek 8169 PCI Gigabit Ethernet adapter.
@ -2262,7 +2265,7 @@ config UGETH_TX_ON_DEMAND
config MV643XX_ETH
tristate "Marvell Discovery (643XX) and Orion ethernet support"
depends on MV64360 || MV64X60 || (PPC_MULTIPLATFORM && PPC32) || PLAT_ORION
select MII
select PHYLIB
help
This driver supports the gigabit ethernet MACs in the
Marvell Discovery PPC/MIPS chipset family (MV643XX) and
@ -2281,12 +2284,13 @@ config QLA3XXX
will be called qla3xxx.
config ATL1
tristate "Attansic L1 Gigabit Ethernet support (EXPERIMENTAL)"
depends on PCI && EXPERIMENTAL
tristate "Atheros/Attansic L1 Gigabit Ethernet support"
depends on PCI
select CRC32
select MII
help
This driver supports the Attansic L1 gigabit ethernet adapter.
This driver supports the Atheros/Attansic L1 gigabit ethernet
adapter.
To compile this driver as a module, choose M here. The module
will be called atl1.
@ -2302,6 +2306,18 @@ config ATL1E
To compile this driver as a module, choose M here. The module
will be called atl1e.
config JME
tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
depends on PCI
select CRC32
select MII
---help---
This driver supports the PCI-Express gigabit ethernet adapters
based on JMicron JMC250 chipset.
To compile this driver as a module, choose M here. The module
will be called jme.
endif # NETDEV_1000
#
@ -2377,10 +2393,18 @@ config EHEA
To compile the driver as a module, choose M here. The module
will be called ehea.
config ENIC
tristate "E, the Cisco 10G Ethernet NIC"
depends on PCI && INET
select INET_LRO
help
This enables the support for the Cisco 10G Ethernet card.
config IXGBE
tristate "Intel(R) 10GbE PCI Express adapters support"
depends on PCI && INET
select INET_LRO
select INTEL_IOATDMA
---help---
This driver supports Intel(R) 10GbE PCI Express family of
adapters. For more information on how to identify your adapter, go
@ -2432,6 +2456,7 @@ config MYRI10GE
select FW_LOADER
select CRC32
select INET_LRO
select INTEL_IOATDMA
---help---
This driver supports Myricom Myri-10G Dual Protocol interface in
Ethernet mode. If the eeprom on your board is not recent enough,
@ -2496,6 +2521,15 @@ config BNX2X
To compile this driver as a module, choose M here: the module
will be called bnx2x. This is recommended.
config QLGE
tristate "QLogic QLGE 10Gb Ethernet Driver Support"
depends on PCI
help
This driver supports QLogic ISP8XXX 10Gb Ethernet cards.
To compile this driver as a module, choose M here: the module
will be called qlge.
source "drivers/net/sfc/Kconfig"
endif # NETDEV_10000


@ -15,9 +15,12 @@ obj-$(CONFIG_EHEA) += ehea/
obj-$(CONFIG_CAN) += can/
obj-$(CONFIG_BONDING) += bonding/
obj-$(CONFIG_ATL1) += atlx/
obj-$(CONFIG_ATL2) += atlx/
obj-$(CONFIG_ATL1E) += atl1e/
obj-$(CONFIG_GIANFAR) += gianfar_driver.o
obj-$(CONFIG_TEHUTI) += tehuti.o
obj-$(CONFIG_ENIC) += enic/
obj-$(CONFIG_JME) += jme.o
gianfar_driver-objs := gianfar.o \
gianfar_ethtool.o \
@ -111,7 +114,7 @@ obj-$(CONFIG_EL2) += 3c503.o 8390p.o
obj-$(CONFIG_NE2000) += ne.o 8390p.o
obj-$(CONFIG_NE2_MCA) += ne2.o 8390p.o
obj-$(CONFIG_HPLAN) += hp.o 8390p.o
obj-$(CONFIG_HPLAN_PLUS) += hp-plus.o 8390p.o
obj-$(CONFIG_HPLAN_PLUS) += hp-plus.o 8390.o
obj-$(CONFIG_ULTRA) += smc-ultra.o 8390.o
obj-$(CONFIG_ULTRAMCA) += smc-mca.o 8390.o
obj-$(CONFIG_ULTRA32) += smc-ultra32.o 8390.o
@ -128,6 +131,7 @@ obj-$(CONFIG_AX88796) += ax88796.o
obj-$(CONFIG_TSI108_ETH) += tsi108_eth.o
obj-$(CONFIG_MV643XX_ETH) += mv643xx_eth.o
obj-$(CONFIG_QLA3XXX) += qla3xxx.o
obj-$(CONFIG_QLGE) += qlge/
obj-$(CONFIG_PPP) += ppp_generic.o
obj-$(CONFIG_PPP_ASYNC) += ppp_async.o


@ -442,24 +442,24 @@ static int arcnet_open(struct net_device *dev)
BUGMSG(D_NORMAL, "WARNING! Station address FF may confuse "
"DOS networking programs!\n");
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
if (ASTATUS() & RESETflag) {
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
ACOMMAND(CFLAGScmd | RESETclear);
}
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
/* make sure we're ready to receive IRQ's. */
AINTMASK(0);
udelay(1); /* give it time to set the mask before
* we reset it again. (may not even be
* necessary)
*/
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
lp->intmask = NORXflag | RECONflag;
AINTMASK(lp->intmask);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
netif_start_queue(dev);
@ -670,14 +670,14 @@ static int arcnet_send_packet(struct sk_buff *skb, struct net_device *dev)
freeskb = 0;
}
BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__FUNCTION__,ASTATUS());
BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__func__,ASTATUS());
/* make sure we didn't ignore a TX IRQ while we were in here */
AINTMASK(0);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
lp->intmask |= TXFREEflag|EXCNAKflag;
AINTMASK(lp->intmask);
BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__FUNCTION__,ASTATUS());
BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__func__,ASTATUS());
spin_unlock_irqrestore(&lp->lock, flags);
if (freeskb) {
@ -798,7 +798,7 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
diagstatus = (status >> 8) & 0xFF;
BUGMSG(D_DEBUG, "%s: %d: %s: status=%x\n",
__FILE__,__LINE__,__FUNCTION__,status);
__FILE__,__LINE__,__func__,status);
didsomething = 0;
/*


@ -238,15 +238,15 @@ static int com20020_reset(struct net_device *dev, int really_reset)
u_char inbyte;
BUGMSG(D_DEBUG, "%s: %d: %s: dev: %p, lp: %p, dev->name: %s\n",
__FILE__,__LINE__,__FUNCTION__,dev,lp,dev->name);
__FILE__,__LINE__,__func__,dev,lp,dev->name);
BUGMSG(D_INIT, "Resetting %s (status=%02Xh)\n",
dev->name, ASTATUS());
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
lp->config = TXENcfg | (lp->timeout << 3) | (lp->backplane << 2);
/* power-up defaults */
SETCONF;
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
if (really_reset) {
/* reset the card */
@ -254,22 +254,22 @@ static int com20020_reset(struct net_device *dev, int really_reset)
mdelay(RESETtime * 2); /* COM20020 seems to be slower sometimes */
}
/* clear flags & end reset */
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
ACOMMAND(CFLAGScmd | RESETclear | CONFIGclear);
/* verify that the ARCnet signature byte is present */
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
com20020_copy_from_card(dev, 0, 0, &inbyte, 1);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
if (inbyte != TESTvalue) {
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
BUGMSG(D_NORMAL, "reset failed: TESTvalue not present.\n");
return 1;
}
/* enable extended (512-byte) packets */
ACOMMAND(CONFIGcmd | EXTconf);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
/* done! return success. */
return 0;


@ -397,7 +397,7 @@ static int atl1e_phy_setup_autoneg_adv(struct atl1e_hw *hw)
*/
int atl1e_phy_commit(struct atl1e_hw *hw)
{
struct atl1e_adapter *adapter = (struct atl1e_adapter *)hw->adapter;
struct atl1e_adapter *adapter = hw->adapter;
struct pci_dev *pdev = adapter->pdev;
int ret_val;
u16 phy_data;
@ -431,7 +431,7 @@ int atl1e_phy_commit(struct atl1e_hw *hw)
int atl1e_phy_init(struct atl1e_hw *hw)
{
struct atl1e_adapter *adapter = (struct atl1e_adapter *)hw->adapter;
struct atl1e_adapter *adapter = hw->adapter;
struct pci_dev *pdev = adapter->pdev;
s32 ret_val;
u16 phy_val;
@ -525,7 +525,7 @@ int atl1e_phy_init(struct atl1e_hw *hw)
*/
int atl1e_reset_hw(struct atl1e_hw *hw)
{
struct atl1e_adapter *adapter = (struct atl1e_adapter *)hw->adapter;
struct atl1e_adapter *adapter = hw->adapter;
struct pci_dev *pdev = adapter->pdev;
u32 idle_status_data = 0;


@ -2390,9 +2390,7 @@ static int __devinit atl1e_probe(struct pci_dev *pdev,
}
/* Init GPHY as early as possible due to power saving issue */
spin_lock(&adapter->mdio_lock);
atl1e_phy_init(&adapter->hw);
spin_unlock(&adapter->mdio_lock);
/* reset the controller to
* put the device in a known good starting state */
err = atl1e_reset_hw(&adapter->hw);


@ -1 +1,3 @@
obj-$(CONFIG_ATL1) += atl1.o
obj-$(CONFIG_ATL2) += atl2.o


@ -24,16 +24,12 @@
* file called COPYING.
*
* Contact Information:
* Xiong Huang <xiong_huang@attansic.com>
* Attansic Technology Corp. 3F 147, Xianzheng 9th Road, Zhubei,
* Xinzhu 302, TAIWAN, REPUBLIC OF CHINA
*
* Xiong Huang <xiong.huang@atheros.com>
* Jie Yang <jie.yang@atheros.com>
* Chris Snook <csnook@redhat.com>
* Jay Cliburn <jcliburn@gmail.com>
*
* This version is adapted from the Attansic reference driver for
* inclusion in the Linux kernel. It is currently under heavy development.
* A very incomplete list of things that need to be dealt with:
* This version is adapted from the Attansic reference driver.
*
* TODO:
* Add more ethtool functions.
@ -2109,7 +2105,6 @@ static u16 atl1_tpd_avail(struct atl1_tpd_ring *tpd_ring)
static int atl1_tso(struct atl1_adapter *adapter, struct sk_buff *skb,
struct tx_packet_desc *ptpd)
{
/* spinlock held */
u8 hdr_len, ip_off;
u32 real_len;
int err;
@ -2196,7 +2191,6 @@ static int atl1_tx_csum(struct atl1_adapter *adapter, struct sk_buff *skb,
static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
struct tx_packet_desc *ptpd)
{
/* spinlock held */
struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
struct atl1_buffer *buffer_info;
u16 buf_len = skb->len;
@ -2303,7 +2297,6 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count,
struct tx_packet_desc *ptpd)
{
/* spinlock held */
struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
struct atl1_buffer *buffer_info;
struct tx_packet_desc *tpd;
@ -2361,7 +2354,6 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
struct tx_packet_desc *ptpd;
u16 frag_size;
u16 vlan_tag;
unsigned long flags;
unsigned int nr_frags = 0;
unsigned int mss = 0;
unsigned int f;
@ -2399,18 +2391,9 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
}
}
if (!spin_trylock_irqsave(&adapter->lock, flags)) {
/* Can't get lock - tell upper layer to requeue */
if (netif_msg_tx_queued(adapter))
dev_printk(KERN_DEBUG, &adapter->pdev->dev,
"tx locked\n");
return NETDEV_TX_LOCKED;
}
if (atl1_tpd_avail(&adapter->tpd_ring) < count) {
/* not enough descriptors */
netif_stop_queue(netdev);
spin_unlock_irqrestore(&adapter->lock, flags);
if (netif_msg_tx_queued(adapter))
dev_printk(KERN_DEBUG, &adapter->pdev->dev,
"tx busy\n");
@ -2432,7 +2415,6 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
tso = atl1_tso(adapter, skb, ptpd);
if (tso < 0) {
spin_unlock_irqrestore(&adapter->lock, flags);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
@ -2440,7 +2422,6 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
if (!tso) {
ret_val = atl1_tx_csum(adapter, skb, ptpd);
if (ret_val < 0) {
spin_unlock_irqrestore(&adapter->lock, flags);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
@ -2449,7 +2430,7 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
atl1_tx_map(adapter, skb, ptpd);
atl1_tx_queue(adapter, count, ptpd);
atl1_update_mailbox(adapter);
spin_unlock_irqrestore(&adapter->lock, flags);
mmiowb();
netdev->trans_start = jiffies;
return NETDEV_TX_OK;
}
@ -2642,6 +2623,7 @@ static void atl1_down(struct atl1_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
netif_stop_queue(netdev);
del_timer_sync(&adapter->watchdog_timer);
del_timer_sync(&adapter->phy_config_timer);
adapter->phy_timer_pending = false;
@ -2655,7 +2637,6 @@ static void atl1_down(struct atl1_adapter *adapter)
adapter->link_speed = SPEED_0;
adapter->link_duplex = -1;
netif_carrier_off(netdev);
netif_stop_queue(netdev);
atl1_clean_tx_ring(adapter);
atl1_clean_rx_ring(adapter);
@ -2724,6 +2705,8 @@ static int atl1_open(struct net_device *netdev)
struct atl1_adapter *adapter = netdev_priv(netdev);
int err;
netif_carrier_off(netdev);
/* allocate transmit descriptors */
err = atl1_setup_ring_resources(adapter);
if (err)
@ -3022,7 +3005,6 @@ static int __devinit atl1_probe(struct pci_dev *pdev,
netdev->features = NETIF_F_HW_CSUM;
netdev->features |= NETIF_F_SG;
netdev->features |= (NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX);
netdev->features |= NETIF_F_LLTX;
/*
* patch for some L1 of old version,

File diff suppressed because it is too large


@ -0,0 +1,529 @@
/* atl2.h -- atl2 driver definitions
*
* Copyright(c) 2007 Atheros Corporation. All rights reserved.
* Copyright(c) 2006 xiong huang <xiong.huang@atheros.com>
* Copyright(c) 2007 Chris Snook <csnook@redhat.com>
*
* Derived from Intel e1000 driver
* Copyright(c) 1999 - 2005 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#ifndef _ATL2_H_
#define _ATL2_H_
#include <asm/atomic.h>
#include <linux/netdevice.h>
#ifndef _ATL2_HW_H_
#define _ATL2_HW_H_
#ifndef _ATL2_OSDEP_H_
#define _ATL2_OSDEP_H_
#include <linux/pci.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/if_ether.h>
#include "atlx.h"
#ifdef ETHTOOL_OPS_COMPAT
extern int ethtool_ioctl(struct ifreq *ifr);
#endif
#define PCI_COMMAND_REGISTER PCI_COMMAND
#define CMD_MEM_WRT_INVALIDATE PCI_COMMAND_INVALIDATE
#define ETH_ADDR_LEN ETH_ALEN
#define ATL2_WRITE_REG(a, reg, value) (iowrite32((value), \
((a)->hw_addr + (reg))))
#define ATL2_WRITE_FLUSH(a) (ioread32((a)->hw_addr))
#define ATL2_READ_REG(a, reg) (ioread32((a)->hw_addr + (reg)))
#define ATL2_WRITE_REGB(a, reg, value) (iowrite8((value), \
((a)->hw_addr + (reg))))
#define ATL2_READ_REGB(a, reg) (ioread8((a)->hw_addr + (reg)))
#define ATL2_WRITE_REGW(a, reg, value) (iowrite16((value), \
((a)->hw_addr + (reg))))
#define ATL2_READ_REGW(a, reg) (ioread16((a)->hw_addr + (reg)))
#define ATL2_WRITE_REG_ARRAY(a, reg, offset, value) \
(iowrite32((value), (((a)->hw_addr + (reg)) + ((offset) << 2))))
#define ATL2_READ_REG_ARRAY(a, reg, offset) \
(ioread32(((a)->hw_addr + (reg)) + ((offset) << 2)))
#endif /* _ATL2_OSDEP_H_ */
struct atl2_adapter;
struct atl2_hw;
/* function prototype */
static s32 atl2_reset_hw(struct atl2_hw *hw);
static s32 atl2_read_mac_addr(struct atl2_hw *hw);
static s32 atl2_init_hw(struct atl2_hw *hw);
static s32 atl2_get_speed_and_duplex(struct atl2_hw *hw, u16 *speed,
u16 *duplex);
static u32 atl2_hash_mc_addr(struct atl2_hw *hw, u8 *mc_addr);
static void atl2_hash_set(struct atl2_hw *hw, u32 hash_value);
static s32 atl2_read_phy_reg(struct atl2_hw *hw, u16 reg_addr, u16 *phy_data);
static s32 atl2_write_phy_reg(struct atl2_hw *hw, u32 reg_addr, u16 phy_data);
static void atl2_read_pci_cfg(struct atl2_hw *hw, u32 reg, u16 *value);
static void atl2_write_pci_cfg(struct atl2_hw *hw, u32 reg, u16 *value);
static void atl2_set_mac_addr(struct atl2_hw *hw);
static bool atl2_read_eeprom(struct atl2_hw *hw, u32 Offset, u32 *pValue);
static bool atl2_write_eeprom(struct atl2_hw *hw, u32 offset, u32 value);
static s32 atl2_phy_init(struct atl2_hw *hw);
static int atl2_check_eeprom_exist(struct atl2_hw *hw);
static void atl2_force_ps(struct atl2_hw *hw);
/* register definition */
/* Block IDLE Status Register */
#define IDLE_STATUS_RXMAC 1 /* 1: RXMAC is non-IDLE */
#define IDLE_STATUS_TXMAC 2 /* 1: TXMAC is non-IDLE */
#define IDLE_STATUS_DMAR 8 /* 1: DMAR is non-IDLE */
#define IDLE_STATUS_DMAW 4 /* 1: DMAW is non-IDLE */
/* MDIO Control Register */
#define MDIO_WAIT_TIMES 10
/* MAC Control Register */
#define MAC_CTRL_DBG_TX_BKPRESURE 0x100000 /* 1: TX max backoff */
#define MAC_CTRL_MACLP_CLK_PHY 0x8000000 /* 1: 25MHz from phy */
#define MAC_CTRL_HALF_LEFT_BUF_SHIFT 28
#define MAC_CTRL_HALF_LEFT_BUF_MASK 0xF /* MAC retry buf x32B */
/* Internal SRAM Partition Register */
#define REG_SRAM_TXRAM_END 0x1500 /* Internal tail address of TXRAM
* default: 2byte*1024 */
#define REG_SRAM_RXRAM_END 0x1502 /* Internal tail address of RXRAM
* default: 2byte*1024 */
/* Descriptor Control register */
#define REG_TXD_BASE_ADDR_LO 0x1544 /* The base address of the Transmit
* Data Mem low 32-bit(dword align) */
#define REG_TXD_MEM_SIZE 0x1548 /* Transmit Data Memory size(by
* double word , max 256KB) */
#define REG_TXS_BASE_ADDR_LO 0x154C /* The base address of the Transmit
* Status Memory low 32-bit(dword word
* align) */
#define REG_TXS_MEM_SIZE 0x1550 /* double word unit, max 4*2047
* bytes. */
#define REG_RXD_BASE_ADDR_LO 0x1554 /* The base address of the Transmit
* Status Memory low 32-bit(unit 8
* bytes) */
#define REG_RXD_BUF_NUM 0x1558 /* Receive Data & Status Memory buffer
* number (unit 1536bytes, max
* 1536*2047) */
/* DMAR Control Register */
#define REG_DMAR 0x1580
#define DMAR_EN 0x1 /* 1: Enable DMAR */
/* TX Cur-Through (early tx threshold) Control Register */
#define REG_TX_CUT_THRESH 0x1590 /* TxMac begin transmit packet
* threshold(unit word) */
/* DMAW Control Register */
#define REG_DMAW 0x15A0
#define DMAW_EN 0x1
/* Flow control register */
#define REG_PAUSE_ON_TH 0x15A8 /* RXD high watermark of overflow
* threshold configuration register */
#define REG_PAUSE_OFF_TH 0x15AA /* RXD lower watermark of overflow
* threshold configuration register */
/* Mailbox Register */
#define REG_MB_TXD_WR_IDX 0x15f0 /* double word align */
#define REG_MB_RXD_RD_IDX 0x15F4 /* RXD Read index (unit: 1536byets) */
/* Interrupt Status Register */
#define ISR_TIMER 1 /* Interrupt when Timer counts down to zero */
#define ISR_MANUAL 2 /* Software manual interrupt, for debug. Set
* when SW_MAN_INT_EN is set in Table 51
* Selene Master Control Register
* (Offset 0x1400). */
#define ISR_RXF_OV 4 /* RXF overflow interrupt */
#define ISR_TXF_UR 8 /* TXF underrun interrupt */
#define ISR_TXS_OV 0x10 /* Internal transmit status buffer full
* interrupt */
#define ISR_RXS_OV 0x20 /* Internal receive status buffer full
* interrupt */
#define ISR_LINK_CHG 0x40 /* Link Status Change Interrupt */
#define ISR_HOST_TXD_UR 0x80
#define ISR_HOST_RXD_OV 0x100 /* Host rx data memory full , one pulse */
#define ISR_DMAR_TO_RST 0x200 /* DMAR op timeout interrupt. SW should
* do Reset */
#define ISR_DMAW_TO_RST 0x400
#define ISR_PHY 0x800 /* phy interrupt */
#define ISR_TS_UPDATE 0x10000 /* interrupt after new tx pkt status written
* to host */
#define ISR_RS_UPDATE 0x20000 /* interrupt after new rx pkt status written
* to host. */
#define ISR_TX_EARLY 0x40000 /* interrupt when txmac begin transmit one
* packet */
#define ISR_TX_EVENT (ISR_TXF_UR | ISR_TXS_OV | ISR_HOST_TXD_UR |\
ISR_TS_UPDATE | ISR_TX_EARLY)
#define ISR_RX_EVENT (ISR_RXF_OV | ISR_RXS_OV | ISR_HOST_RXD_OV |\
ISR_RS_UPDATE)
#define IMR_NORMAL_MASK (\
/*ISR_LINK_CHG |*/\
ISR_MANUAL |\
ISR_DMAR_TO_RST |\
ISR_DMAW_TO_RST |\
ISR_PHY |\
ISR_PHY_LINKDOWN |\
ISR_TS_UPDATE |\
ISR_RS_UPDATE)
/* Receive MAC Statistics Registers */
#define REG_STS_RX_PAUSE 0x1700 /* Num pause packets received */
#define REG_STS_RXD_OV 0x1704 /* Num frames dropped due to RX
* FIFO overflow */
#define REG_STS_RXS_OV 0x1708 /* Num frames dropped due to RX
* Status Buffer Overflow */
#define REG_STS_RX_FILTER 0x170C /* Num packets dropped due to
* address filtering */
/* MII definitions */
/* PHY Common Register */
#define MII_SMARTSPEED 0x14
#define MII_DBG_ADDR 0x1D
#define MII_DBG_DATA 0x1E
/* PCI Command Register Bit Definitions */
#define PCI_REG_COMMAND 0x04
#define CMD_IO_SPACE 0x0001
#define CMD_MEMORY_SPACE 0x0002
#define CMD_BUS_MASTER 0x0004
#define MEDIA_TYPE_100M_FULL 1
#define MEDIA_TYPE_100M_HALF 2
#define MEDIA_TYPE_10M_FULL 3
#define MEDIA_TYPE_10M_HALF 4
#define AUTONEG_ADVERTISE_SPEED_DEFAULT 0x000F /* Everything */
/* The size (in bytes) of an ethernet packet */
#define ENET_HEADER_SIZE 14
#define MAXIMUM_ETHERNET_FRAME_SIZE 1518 /* with FCS */
#define MINIMUM_ETHERNET_FRAME_SIZE 64 /* with FCS */
#define ETHERNET_FCS_SIZE 4
#define MAX_JUMBO_FRAME_SIZE 0x2000
#define VLAN_SIZE 4
struct tx_pkt_header {
unsigned pkt_size:11;
unsigned:4; /* reserved */
unsigned ins_vlan:1; /* txmac should insert vlan */
unsigned short vlan; /* vlan tag */
};
/* FIXME: replace above bitfields with MASK/SHIFT defines below */
#define TX_PKT_HEADER_SIZE_MASK 0x7FF
#define TX_PKT_HEADER_SIZE_SHIFT 0
#define TX_PKT_HEADER_INS_VLAN_MASK 0x1
#define TX_PKT_HEADER_INS_VLAN_SHIFT 15
#define TX_PKT_HEADER_VLAN_TAG_MASK 0xFFFF
#define TX_PKT_HEADER_VLAN_TAG_SHIFT 16
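The FIXME is about portability: C bitfield layout is implementation-defined, so the structs above can pack differently across compilers and endiannesses, whereas explicit mask/shift accessors are unambiguous. A sketch of what the replacement could look like using the defines above (hypothetical helper, not part of the driver):

static inline u32 tx_pkt_header_build(u16 pkt_size, unsigned int ins_vlan, u16 vlan)
{
	u32 v = 0;

	v |= (pkt_size & TX_PKT_HEADER_SIZE_MASK) << TX_PKT_HEADER_SIZE_SHIFT;
	v |= (ins_vlan & TX_PKT_HEADER_INS_VLAN_MASK) << TX_PKT_HEADER_INS_VLAN_SHIFT;
	v |= ((u32)vlan & TX_PKT_HEADER_VLAN_TAG_MASK) << TX_PKT_HEADER_VLAN_TAG_SHIFT;
	return v;
}

The same treatment applies to the tx_pkt_status and rx_pkt_status structs below, whose FIXMEs reference their own MASK/SHIFT sets.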
struct tx_pkt_status {
unsigned pkt_size:11;
unsigned:5; /* reserved */
unsigned ok:1; /* current packet transmitted without error */
unsigned bcast:1; /* broadcast packet */
unsigned mcast:1; /* multicast packet */
unsigned pause:1; /* transmitted a pause frame */
unsigned ctrl:1;
unsigned defer:1; /* current packet is xmitted with defer */
unsigned exc_defer:1;
unsigned single_col:1;
unsigned multi_col:1;
unsigned late_col:1;
unsigned abort_col:1;
unsigned underun:1; /* current packet is aborted
* due to txram underrun */
unsigned:3; /* reserved */
unsigned update:1; /* always 1'b1 in tx_status_buf */
};
/* FIXME: replace above bitfields with MASK/SHIFT defines below */
#define TX_PKT_STATUS_SIZE_MASK 0x7FF
#define TX_PKT_STATUS_SIZE_SHIFT 0
#define TX_PKT_STATUS_OK_MASK 0x1
#define TX_PKT_STATUS_OK_SHIFT 16
#define TX_PKT_STATUS_BCAST_MASK 0x1
#define TX_PKT_STATUS_BCAST_SHIFT 17
#define TX_PKT_STATUS_MCAST_MASK 0x1
#define TX_PKT_STATUS_MCAST_SHIFT 18
#define TX_PKT_STATUS_PAUSE_MASK 0x1
#define TX_PKT_STATUS_PAUSE_SHIFT 19
#define TX_PKT_STATUS_CTRL_MASK 0x1
#define TX_PKT_STATUS_CTRL_SHIFT 20
#define TX_PKT_STATUS_DEFER_MASK 0x1
#define TX_PKT_STATUS_DEFER_SHIFT 21
#define TX_PKT_STATUS_EXC_DEFER_MASK 0x1
#define TX_PKT_STATUS_EXC_DEFER_SHIFT 22
#define TX_PKT_STATUS_SINGLE_COL_MASK 0x1
#define TX_PKT_STATUS_SINGLE_COL_SHIFT 23
#define TX_PKT_STATUS_MULTI_COL_MASK 0x1
#define TX_PKT_STATUS_MULTI_COL_SHIFT 24
#define TX_PKT_STATUS_LATE_COL_MASK 0x1
#define TX_PKT_STATUS_LATE_COL_SHIFT 25
#define TX_PKT_STATUS_ABORT_COL_MASK 0x1
#define TX_PKT_STATUS_ABORT_COL_SHIFT 26
#define TX_PKT_STATUS_UNDERRUN_MASK 0x1
#define TX_PKT_STATUS_UNDERRUN_SHIFT 27
#define TX_PKT_STATUS_UPDATE_MASK 0x1
#define TX_PKT_STATUS_UPDATE_SHIFT 31
struct rx_pkt_status {
unsigned pkt_size:11; /* packet size, max 2047 bytes */
unsigned:5; /* reserved */
unsigned ok:1; /* current packet received ok without error */
unsigned bcast:1; /* current packet is broadcast */
unsigned mcast:1; /* current packet is multicast */
unsigned pause:1;
unsigned ctrl:1;
unsigned crc:1; /* received a packet with crc error */
unsigned code:1; /* received a packet with code error */
unsigned runt:1; /* received a packet less than 64 bytes
* with good crc */
unsigned frag:1; /* received a packet less than 64 bytes
* with bad crc */
unsigned trunc:1; /* current frame truncated due to rxram full */
unsigned align:1; /* this packet is alignment error */
unsigned vlan:1; /* this packet has vlan */
unsigned:3; /* reserved */
unsigned update:1;
unsigned short vtag; /* vlan tag */
unsigned:16;
};
/* FIXME: replace above bitfields with MASK/SHIFT defines below */
#define RX_PKT_STATUS_SIZE_MASK 0x7FF
#define RX_PKT_STATUS_SIZE_SHIFT 0
#define RX_PKT_STATUS_OK_MASK 0x1
#define RX_PKT_STATUS_OK_SHIFT 16
#define RX_PKT_STATUS_BCAST_MASK 0x1
#define RX_PKT_STATUS_BCAST_SHIFT 17
#define RX_PKT_STATUS_MCAST_MASK 0x1
#define RX_PKT_STATUS_MCAST_SHIFT 18
#define RX_PKT_STATUS_PAUSE_MASK 0x1
#define RX_PKT_STATUS_PAUSE_SHIFT 19
#define RX_PKT_STATUS_CTRL_MASK 0x1
#define RX_PKT_STATUS_CTRL_SHIFT 20
#define RX_PKT_STATUS_CRC_MASK 0x1
#define RX_PKT_STATUS_CRC_SHIFT 21
#define RX_PKT_STATUS_CODE_MASK 0x1
#define RX_PKT_STATUS_CODE_SHIFT 22
#define RX_PKT_STATUS_RUNT_MASK 0x1
#define RX_PKT_STATUS_RUNT_SHIFT 23
#define RX_PKT_STATUS_FRAG_MASK 0x1
#define RX_PKT_STATUS_FRAG_SHIFT 24
#define RX_PKT_STATUS_TRUNK_MASK 0x1
#define RX_PKT_STATUS_TRUNK_SHIFT 25
#define RX_PKT_STATUS_ALIGN_MASK 0x1
#define RX_PKT_STATUS_ALIGN_SHIFT 26
#define RX_PKT_STATUS_VLAN_MASK 0x1
#define RX_PKT_STATUS_VLAN_SHIFT 27
#define RX_PKT_STATUS_UPDATE_MASK 0x1
#define RX_PKT_STATUS_UPDATE_SHIFT 31
#define RX_PKT_STATUS_VLAN_TAG_MASK 0xFFFF
#define RX_PKT_STATUS_VLAN_TAG_SHIFT 32
struct rx_desc {
struct rx_pkt_status status;
unsigned char packet[1536-sizeof(struct rx_pkt_status)];
};
enum atl2_speed_duplex {
atl2_10_half = 0,
atl2_10_full = 1,
atl2_100_half = 2,
atl2_100_full = 3
};
struct atl2_spi_flash_dev {
const char *manu_name; /* manufacturer id */
/* op-code */
u8 cmdWRSR;
u8 cmdREAD;
u8 cmdPROGRAM;
u8 cmdWREN;
u8 cmdWRDI;
u8 cmdRDSR;
u8 cmdRDID;
u8 cmdSECTOR_ERASE;
u8 cmdCHIP_ERASE;
};
/* Structure containing variables used by the shared code (atl2_hw.c) */
struct atl2_hw {
u8 __iomem *hw_addr;
void *back;
u8 preamble_len;
u8 max_retry; /* Retransmission maximum, afterwards the
* packet will be discarded. */
u8 jam_ipg; /* IPG to start JAM for collision based flow
* control in half-duplex mode. In unit of
* 8-bit time. */
u8 ipgt; /* Desired back to back inter-packet gap. The
* default is 96-bit time. */
u8 min_ifg; /* Minimum number of IFG to enforce in between
* RX frames. Frame gap below such IFP is
* dropped. */
u8 ipgr1; /* 64bit Carrier-Sense window */
u8 ipgr2; /* 96-bit IPG window */
u8 retry_buf; /* When half-duplex mode, should hold some
* bytes for mac retry . (8*4bytes unit) */
u16 fc_rxd_hi;
u16 fc_rxd_lo;
u16 lcol; /* Collision Window */
u16 max_frame_size;
u16 MediaType;
u16 autoneg_advertised;
u16 pci_cmd_word;
u16 mii_autoneg_adv_reg;
u32 mem_rang;
u32 txcw;
u32 mc_filter_type;
u32 num_mc_addrs;
u32 collision_delta;
u32 tx_packet_delta;
u16 phy_spd_default;
u16 device_id;
u16 vendor_id;
u16 subsystem_id;
u16 subsystem_vendor_id;
u8 revision_id;
/* spi flash */
u8 flash_vendor;
u8 dma_fairness;
u8 mac_addr[NODE_ADDRESS_SIZE];
u8 perm_mac_addr[NODE_ADDRESS_SIZE];
/* FIXME */
/* bool phy_preamble_sup; */
bool phy_configured;
};
#endif /* _ATL2_HW_H_ */
struct atl2_ring_header {
/* pointer to the descriptor ring memory */
void *desc;
/* physical address of the descriptor ring */
dma_addr_t dma;
/* length of descriptor ring in bytes */
unsigned int size;
};
/* board specific private data structure */
struct atl2_adapter {
/* OS defined structs */
struct net_device *netdev;
struct pci_dev *pdev;
struct net_device_stats net_stats;
#ifdef NETIF_F_HW_VLAN_TX
struct vlan_group *vlgrp;
#endif
u32 wol;
u16 link_speed;
u16 link_duplex;
spinlock_t stats_lock;
struct work_struct reset_task;
struct work_struct link_chg_task;
struct timer_list watchdog_timer;
struct timer_list phy_config_timer;
unsigned long cfg_phy;
bool mac_disabled;
/* All Descriptor memory */
dma_addr_t ring_dma;
void *ring_vir_addr;
int ring_size;
struct tx_pkt_header *txd_ring;
dma_addr_t txd_dma;
struct tx_pkt_status *txs_ring;
dma_addr_t txs_dma;
struct rx_desc *rxd_ring;
dma_addr_t rxd_dma;
u32 txd_ring_size; /* bytes per unit */
u32 txs_ring_size; /* dwords per unit */
u32 rxd_ring_size; /* 1536 bytes per unit */
/* read /write ptr: */
/* host */
u32 txd_write_ptr;
u32 txs_next_clear;
u32 rxd_read_ptr;
/* nic */
atomic_t txd_read_ptr;
atomic_t txs_write_ptr;
u32 rxd_write_ptr;
/* Interrupt Moderator timer (2 us resolution) */
u16 imt;
/* Interrupt Clear timer (2 us resolution) */
u16 ict;
unsigned long flags;
/* structs defined in atl2_hw.h */
u32 bd_number; /* board number */
bool pci_using_64;
bool have_msi;
struct atl2_hw hw;
u32 usr_cmd;
/* FIXME */
/* u32 regs_buff[ATL2_REGS_LEN]; */
u32 pci_state[16];
u32 *config_space;
};
enum atl2_state_t {
__ATL2_TESTING,
__ATL2_RESETTING,
__ATL2_DOWN
};
#endif /* _ATL2_H_ */
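The adapter above keeps all three descriptor rings in one coherent DMA block (ring_vir_addr/ring_dma), with host- and NIC-side read/write pointers on each ring. A hypothetical carve-up of that block, back to back; ordering and unit interpretation are assumptions for illustration, following the per-field comments:

u8 *p = adapter->ring_vir_addr;			/* one coherent allocation */

adapter->txd_ring = (struct tx_pkt_header *)p;
adapter->txd_dma  = adapter->ring_dma;
p += adapter->txd_ring_size;			/* bytes per unit */

adapter->txs_ring = (struct tx_pkt_status *)p;
adapter->txs_dma  = adapter->txd_dma + adapter->txd_ring_size;
p += adapter->txs_ring_size * 4;		/* dwords -> bytes */

adapter->rxd_ring = (struct rx_desc *)p;	/* 1536 bytes per unit */
adapter->rxd_dma  = adapter->txs_dma + adapter->txs_ring_size * 4;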

View File

@ -105,7 +105,6 @@ static void atlx_check_for_link(struct atlx_adapter *adapter)
netdev->name);
adapter->link_speed = SPEED_0;
netif_carrier_off(netdev);
netif_stop_queue(netdev);
}
}
schedule_work(&adapter->link_chg_task);

View File

@ -290,7 +290,7 @@ static int mii_probe (struct net_device *dev)
if(aup->mac_id == 0) { /* get PHY0 */
# if defined(AU1XXX_PHY0_ADDR)
phydev = au_macs[AU1XXX_PHY0_BUSID]->mii_bus.phy_map[AU1XXX_PHY0_ADDR];
phydev = au_macs[AU1XXX_PHY0_BUSID]->mii_bus->phy_map[AU1XXX_PHY0_ADDR];
# else
printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
dev->name);
@ -298,7 +298,7 @@ static int mii_probe (struct net_device *dev)
# endif /* defined(AU1XXX_PHY0_ADDR) */
} else if (aup->mac_id == 1) { /* get PHY1 */
# if defined(AU1XXX_PHY1_ADDR)
phydev = au_macs[AU1XXX_PHY1_BUSID]->mii_bus.phy_map[AU1XXX_PHY1_ADDR];
phydev = au_macs[AU1XXX_PHY1_BUSID]->mii_bus->phy_map[AU1XXX_PHY1_ADDR];
# else
printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
dev->name);
@ -311,8 +311,8 @@ static int mii_probe (struct net_device *dev)
/* find the first (lowest address) PHY on the current MAC's MII bus */
for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++)
if (aup->mii_bus.phy_map[phy_addr]) {
phydev = aup->mii_bus.phy_map[phy_addr];
if (aup->mii_bus->phy_map[phy_addr]) {
phydev = aup->mii_bus->phy_map[phy_addr];
# if !defined(AU1XXX_PHY_SEARCH_HIGHEST_ADDR)
break; /* break out with first one found */
# endif
@ -331,7 +331,7 @@ static int mii_probe (struct net_device *dev)
* the MAC0 MII bus */
for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++) {
struct phy_device *const tmp_phydev =
au_macs[0]->mii_bus.phy_map[phy_addr];
au_macs[0]->mii_bus->phy_map[phy_addr];
if (!tmp_phydev)
continue; /* no PHY here... */
@ -653,6 +653,8 @@ static struct net_device * au1000_probe(int port_num)
aup = dev->priv;
spin_lock_init(&aup->lock);
/* Allocate the data buffers */
/* Snooping works fine with eth on all au1xxx */
aup->vaddr = (u32)dma_alloc_noncoherent(NULL, MAX_BUF_SIZE *
@ -696,28 +698,32 @@ static struct net_device * au1000_probe(int port_num)
*aup->enable = 0;
aup->mac_enabled = 0;
aup->mii_bus.priv = dev;
aup->mii_bus.read = mdiobus_read;
aup->mii_bus.write = mdiobus_write;
aup->mii_bus.reset = mdiobus_reset;
aup->mii_bus.name = "au1000_eth_mii";
snprintf(aup->mii_bus.id, MII_BUS_ID_SIZE, "%x", aup->mac_id);
aup->mii_bus.irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL);
aup->mii_bus = mdiobus_alloc();
if (aup->mii_bus == NULL)
goto err_out;
aup->mii_bus->priv = dev;
aup->mii_bus->read = mdiobus_read;
aup->mii_bus->write = mdiobus_write;
aup->mii_bus->reset = mdiobus_reset;
aup->mii_bus->name = "au1000_eth_mii";
snprintf(aup->mii_bus->id, MII_BUS_ID_SIZE, "%x", aup->mac_id);
aup->mii_bus->irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL);
for(i = 0; i < PHY_MAX_ADDR; ++i)
aup->mii_bus.irq[i] = PHY_POLL;
aup->mii_bus->irq[i] = PHY_POLL;
/* if known, set corresponding PHY IRQs */
#if defined(AU1XXX_PHY_STATIC_CONFIG)
# if defined(AU1XXX_PHY0_IRQ)
if (AU1XXX_PHY0_BUSID == aup->mac_id)
aup->mii_bus.irq[AU1XXX_PHY0_ADDR] = AU1XXX_PHY0_IRQ;
aup->mii_bus->irq[AU1XXX_PHY0_ADDR] = AU1XXX_PHY0_IRQ;
# endif
# if defined(AU1XXX_PHY1_IRQ)
if (AU1XXX_PHY1_BUSID == aup->mac_id)
aup->mii_bus.irq[AU1XXX_PHY1_ADDR] = AU1XXX_PHY1_IRQ;
aup->mii_bus->irq[AU1XXX_PHY1_ADDR] = AU1XXX_PHY1_IRQ;
# endif
#endif
mdiobus_register(&aup->mii_bus);
mdiobus_register(aup->mii_bus);
if (mii_probe(dev) != 0) {
goto err_out;
@ -753,7 +759,6 @@ static struct net_device * au1000_probe(int port_num)
aup->tx_db_inuse[i] = pDB;
}
spin_lock_init(&aup->lock);
dev->base_addr = base;
dev->irq = irq;
dev->open = au1000_open;
@ -774,6 +779,11 @@ static struct net_device * au1000_probe(int port_num)
return dev;
err_out:
if (aup->mii_bus != NULL) {
mdiobus_unregister(aup->mii_bus);
mdiobus_free(aup->mii_bus);
}
/* here we should have a valid dev plus aup-> register addresses
* so we can reset the mac properly.*/
reset_mac(dev);
@ -1004,6 +1014,8 @@ static void __exit au1000_cleanup_module(void)
if (dev) {
aup = (struct au1000_private *) dev->priv;
unregister_netdev(dev);
mdiobus_unregister(aup->mii_bus);
mdiobus_free(aup->mii_bus);
for (j = 0; j < NUM_RX_DMA; j++)
if (aup->rx_db_inuse[j])
ReleaseDB(aup, aup->rx_db_inuse[j]);

View File

@ -106,7 +106,7 @@ struct au1000_private {
int old_duplex;
struct phy_device *phy_dev;
struct mii_bus mii_bus;
struct mii_bus *mii_bus;
/* These variables are just for quick access to certain regs addresses. */
volatile mac_reg_t *mac; /* mac registers */
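A recurring change in this merge (au1000, bfin_mac, cpmac below) is converting the embedded struct mii_bus into a pointer owned by the MDIO core. A minimal sketch of the new lifecycle, using only the calls shown in these diffs; the bus name is illustrative:

static int example_mdio_attach(struct net_device *dev)
{
	struct mii_bus *bus = mdiobus_alloc();	/* replaces the embedded struct */

	if (bus == NULL)
		return -ENOMEM;

	bus->name  = "example-mii";		/* illustrative name */
	bus->priv  = dev;
	bus->read  = mdiobus_read;
	bus->write = mdiobus_write;
	bus->reset = mdiobus_reset;
	snprintf(bus->id, MII_BUS_ID_SIZE, "0");

	return mdiobus_register(bus);		/* teardown: mdiobus_unregister(),
						 * then mdiobus_free(), never kfree() */
}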

View File

@ -153,7 +153,7 @@ static void ax_reset_8390(struct net_device *dev)
while ((ei_inb(addr + EN0_ISR) & ENISR_RESET) == 0) {
if (jiffies - reset_start_time > 2*HZ/100) {
dev_warn(&ax->dev->dev, "%s: %s did not complete.\n",
__FUNCTION__, dev->name);
__func__, dev->name);
break;
}
}
@ -173,7 +173,7 @@ static void ax_get_8390_hdr(struct net_device *dev, struct e8390_pkt_hdr *hdr,
if (ei_status.dmaing) {
dev_err(&ax->dev->dev, "%s: DMAing conflict in %s "
"[DMAstat:%d][irqlock:%d].\n",
dev->name, __FUNCTION__,
dev->name, __func__,
ei_status.dmaing, ei_status.irqlock);
return;
}
@ -215,7 +215,7 @@ static void ax_block_input(struct net_device *dev, int count,
dev_err(&ax->dev->dev,
"%s: DMAing conflict in %s "
"[DMAstat:%d][irqlock:%d].\n",
dev->name, __FUNCTION__,
dev->name, __func__,
ei_status.dmaing, ei_status.irqlock);
return;
}
@ -260,7 +260,7 @@ static void ax_block_output(struct net_device *dev, int count,
if (ei_status.dmaing) {
dev_err(&ax->dev->dev, "%s: DMAing conflict in %s."
"[DMAstat:%d][irqlock:%d]\n",
dev->name, __FUNCTION__,
dev->name, __func__,
ei_status.dmaing, ei_status.irqlock);
return;
}
@ -396,7 +396,7 @@ ax_phy_issueaddr(struct net_device *dev, int phy_addr, int reg, int opc)
{
if (phy_debug)
pr_debug("%s: dev %p, %04x, %04x, %d\n",
__FUNCTION__, dev, phy_addr, reg, opc);
__func__, dev, phy_addr, reg, opc);
ax_mii_ei_outbits(dev, 0x3f, 6); /* pre-amble */
ax_mii_ei_outbits(dev, 1, 2); /* frame-start */
@ -422,7 +422,7 @@ ax_phy_read(struct net_device *dev, int phy_addr, int reg)
spin_unlock_irqrestore(&ei_local->page_lock, flags);
if (phy_debug)
pr_debug("%s: %04x.%04x => read %04x\n", __FUNCTION__,
pr_debug("%s: %04x.%04x => read %04x\n", __func__,
phy_addr, reg, result);
return result;
@ -436,7 +436,7 @@ ax_phy_write(struct net_device *dev, int phy_addr, int reg, int value)
unsigned long flags;
dev_dbg(&ax->dev->dev, "%s: %p, %04x, %04x %04x\n",
__FUNCTION__, dev, phy_addr, reg, value);
__func__, dev, phy_addr, reg, value);
spin_lock_irqsave(&ei->page_lock, flags);
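These hunks swap the GCC-specific __FUNCTION__ for the C99 predefined identifier __func__; both expand to the enclosing function's name, so the conversion is mechanical:

pr_debug("%s: reset complete\n", __func__);	/* prints the function name */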

View File

@ -398,7 +398,7 @@ static int mii_probe(struct net_device *dev)
/* search for connect PHY device */
for (i = 0; i < PHY_MAX_ADDR; i++) {
struct phy_device *const tmp_phydev = lp->mii_bus.phy_map[i];
struct phy_device *const tmp_phydev = lp->mii_bus->phy_map[i];
if (!tmp_phydev)
continue; /* no PHY here... */
@ -811,7 +811,7 @@ static void bfin_mac_enable(void)
{
u32 opmode;
pr_debug("%s: %s\n", DRV_NAME, __FUNCTION__);
pr_debug("%s: %s\n", DRV_NAME, __func__);
/* Set RX DMA */
bfin_write_DMA1_NEXT_DESC_PTR(&(rx_list_head->desc_a));
@ -847,7 +847,7 @@ static void bfin_mac_enable(void)
/* Our watchdog timed out. Called by the networking layer */
static void bfin_mac_timeout(struct net_device *dev)
{
pr_debug("%s: %s\n", dev->name, __FUNCTION__);
pr_debug("%s: %s\n", dev->name, __func__);
bfin_mac_disable();
@ -949,7 +949,7 @@ static int bfin_mac_open(struct net_device *dev)
{
struct bfin_mac_local *lp = netdev_priv(dev);
int retval;
pr_debug("%s: %s\n", dev->name, __FUNCTION__);
pr_debug("%s: %s\n", dev->name, __func__);
/*
* Check that the address is valid. If its not, refuse
@ -989,7 +989,7 @@ static int bfin_mac_open(struct net_device *dev)
static int bfin_mac_close(struct net_device *dev)
{
struct bfin_mac_local *lp = netdev_priv(dev);
pr_debug("%s: %s\n", dev->name, __FUNCTION__);
pr_debug("%s: %s\n", dev->name, __func__);
netif_stop_queue(dev);
netif_carrier_off(dev);
@ -1058,17 +1058,21 @@ static int __devinit bfin_mac_probe(struct platform_device *pdev)
setup_mac_addr(ndev->dev_addr);
/* MDIO bus initial */
lp->mii_bus.priv = ndev;
lp->mii_bus.read = mdiobus_read;
lp->mii_bus.write = mdiobus_write;
lp->mii_bus.reset = mdiobus_reset;
lp->mii_bus.name = "bfin_mac_mdio";
snprintf(lp->mii_bus.id, MII_BUS_ID_SIZE, "0");
lp->mii_bus.irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL);
for (i = 0; i < PHY_MAX_ADDR; ++i)
lp->mii_bus.irq[i] = PHY_POLL;
lp->mii_bus = mdiobus_alloc();
if (lp->mii_bus == NULL)
goto out_err_mdiobus_alloc;
rc = mdiobus_register(&lp->mii_bus);
lp->mii_bus->priv = ndev;
lp->mii_bus->read = mdiobus_read;
lp->mii_bus->write = mdiobus_write;
lp->mii_bus->reset = mdiobus_reset;
lp->mii_bus->name = "bfin_mac_mdio";
snprintf(lp->mii_bus->id, MII_BUS_ID_SIZE, "0");
lp->mii_bus->irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL);
for (i = 0; i < PHY_MAX_ADDR; ++i)
lp->mii_bus->irq[i] = PHY_POLL;
rc = mdiobus_register(lp->mii_bus);
if (rc) {
dev_err(&pdev->dev, "Cannot register MDIO bus!\n");
goto out_err_mdiobus_register;
@ -1121,8 +1125,10 @@ out_err_reg_ndev:
free_irq(IRQ_MAC_RX, ndev);
out_err_request_irq:
out_err_mii_probe:
mdiobus_unregister(&lp->mii_bus);
mdiobus_unregister(lp->mii_bus);
out_err_mdiobus_register:
mdiobus_free(lp->mii_bus);
out_err_mdiobus_alloc:
peripheral_free_list(pin_req);
out_err_setup_pin_mux:
out_err_probe_mac:
@ -1139,7 +1145,8 @@ static int __devexit bfin_mac_remove(struct platform_device *pdev)
platform_set_drvdata(pdev, NULL);
mdiobus_unregister(&lp->mii_bus);
mdiobus_unregister(lp->mii_bus);
mdiobus_free(lp->mii_bus);
unregister_netdev(ndev);

View File

@ -66,7 +66,7 @@ struct bfin_mac_local {
int old_duplex;
struct phy_device *phydev;
struct mii_bus mii_bus;
struct mii_bus *mii_bus;
};
extern void bfin_get_ether_addr(char *addr);

View File

@ -57,8 +57,8 @@
#define DRV_MODULE_NAME "bnx2"
#define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "1.8.0"
#define DRV_MODULE_RELDATE "Aug 14, 2008"
#define DRV_MODULE_VERSION "1.8.1"
#define DRV_MODULE_RELDATE "Oct 7, 2008"
#define RUN_AT(x) (jiffies + (x))
@ -69,7 +69,7 @@ static char version[] __devinitdata =
"Broadcom NetXtreme II Gigabit Ethernet Driver " DRV_MODULE_NAME " v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
MODULE_AUTHOR("Michael Chan <mchan@broadcom.com>");
MODULE_DESCRIPTION("Broadcom NetXtreme II BCM5706/5708/5709 Driver");
MODULE_DESCRIPTION("Broadcom NetXtreme II BCM5706/5708/5709/5716 Driver");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_MODULE_VERSION);
@ -1127,7 +1127,7 @@ bnx2_init_all_rx_contexts(struct bnx2 *bp)
}
}
static int
static void
bnx2_set_mac_link(struct bnx2 *bp)
{
u32 val;
@ -1193,8 +1193,6 @@ bnx2_set_mac_link(struct bnx2 *bp)
if (CHIP_NUM(bp) == CHIP_NUM_5709)
bnx2_init_all_rx_contexts(bp);
return 0;
}
static void
@ -2478,6 +2476,11 @@ bnx2_alloc_rx_page(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
return -ENOMEM;
mapping = pci_map_page(bp->pdev, page, 0, PAGE_SIZE,
PCI_DMA_FROMDEVICE);
if (pci_dma_mapping_error(bp->pdev, mapping)) {
__free_page(page);
return -EIO;
}
rx_pg->page = page;
pci_unmap_addr_set(rx_pg, mapping, mapping);
rxbd->rx_bd_haddr_hi = (u64) mapping >> 32;
@ -2520,6 +2523,10 @@ bnx2_alloc_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
mapping = pci_map_single(bp->pdev, skb->data, bp->rx_buf_use_size,
PCI_DMA_FROMDEVICE);
if (pci_dma_mapping_error(bp->pdev, mapping)) {
dev_kfree_skb(skb);
return -EIO;
}
rx_buf->skb = skb;
pci_unmap_addr_set(rx_buf, mapping, mapping);
@ -2594,7 +2601,7 @@ bnx2_tx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
sw_cons = txr->tx_cons;
while (sw_cons != hw_cons) {
struct sw_bd *tx_buf;
struct sw_tx_bd *tx_buf;
struct sk_buff *skb;
int i, last;
@ -2619,21 +2626,13 @@ bnx2_tx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
}
}
pci_unmap_single(bp->pdev, pci_unmap_addr(tx_buf, mapping),
skb_headlen(skb), PCI_DMA_TODEVICE);
skb_dma_unmap(&bp->pdev->dev, skb, DMA_TO_DEVICE);
tx_buf->skb = NULL;
last = skb_shinfo(skb)->nr_frags;
for (i = 0; i < last; i++) {
sw_cons = NEXT_TX_BD(sw_cons);
pci_unmap_page(bp->pdev,
pci_unmap_addr(
&txr->tx_buf_ring[TX_RING_IDX(sw_cons)],
mapping),
skb_shinfo(skb)->frags[i].size,
PCI_DMA_TODEVICE);
}
sw_cons = NEXT_TX_BD(sw_cons);
@ -2674,11 +2673,31 @@ bnx2_reuse_rx_skb_pages(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr,
{
struct sw_pg *cons_rx_pg, *prod_rx_pg;
struct rx_bd *cons_bd, *prod_bd;
dma_addr_t mapping;
int i;
u16 hw_prod = rxr->rx_pg_prod, prod;
u16 hw_prod, prod;
u16 cons = rxr->rx_pg_cons;
cons_rx_pg = &rxr->rx_pg_ring[cons];
/* The caller was unable to allocate a new page to replace the
* last one in the frags array, so we need to recycle that page
* and then free the skb.
*/
if (skb) {
struct page *page;
struct skb_shared_info *shinfo;
shinfo = skb_shinfo(skb);
shinfo->nr_frags--;
page = shinfo->frags[shinfo->nr_frags].page;
shinfo->frags[shinfo->nr_frags].page = NULL;
cons_rx_pg->page = page;
dev_kfree_skb(skb);
}
hw_prod = rxr->rx_pg_prod;
for (i = 0; i < count; i++) {
prod = RX_PG_RING_IDX(hw_prod);
@ -2687,20 +2706,6 @@ bnx2_reuse_rx_skb_pages(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr,
cons_bd = &rxr->rx_pg_desc_ring[RX_RING(cons)][RX_IDX(cons)];
prod_bd = &rxr->rx_pg_desc_ring[RX_RING(prod)][RX_IDX(prod)];
if (i == 0 && skb) {
struct page *page;
struct skb_shared_info *shinfo;
shinfo = skb_shinfo(skb);
shinfo->nr_frags--;
page = shinfo->frags[shinfo->nr_frags].page;
shinfo->frags[shinfo->nr_frags].page = NULL;
mapping = pci_map_page(bp->pdev, page, 0, PAGE_SIZE,
PCI_DMA_FROMDEVICE);
cons_rx_pg->page = page;
pci_unmap_addr_set(cons_rx_pg, mapping, mapping);
dev_kfree_skb(skb);
}
if (prod != cons) {
prod_rx_pg->page = cons_rx_pg->page;
cons_rx_pg->page = NULL;
@ -2786,6 +2791,8 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
skb_put(skb, hdr_len);
for (i = 0; i < pages; i++) {
dma_addr_t mapping_old;
frag_len = min(frag_size, (unsigned int) PAGE_SIZE);
if (unlikely(frag_len <= 4)) {
unsigned int tail = 4 - frag_len;
@ -2808,9 +2815,10 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
}
rx_pg = &rxr->rx_pg_ring[pg_cons];
pci_unmap_page(bp->pdev, pci_unmap_addr(rx_pg, mapping),
PAGE_SIZE, PCI_DMA_FROMDEVICE);
/* Don't unmap yet. If we're unable to allocate a new
* page, we need to recycle the page and the DMA addr.
*/
mapping_old = pci_unmap_addr(rx_pg, mapping);
if (i == pages - 1)
frag_len -= 4;
@ -2827,6 +2835,9 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
return err;
}
pci_unmap_page(bp->pdev, mapping_old,
PAGE_SIZE, PCI_DMA_FROMDEVICE);
frag_size -= frag_len;
skb->data_len += frag_len;
skb->truesize += frag_len;
@ -3250,6 +3261,9 @@ bnx2_set_rx_mode(struct net_device *dev)
struct dev_addr_list *uc_ptr;
int i;
if (!netif_running(dev))
return;
spin_lock_bh(&bp->phy_lock);
rx_mode = bp->rx_mode & ~(BNX2_EMAC_RX_MODE_PROMISCUOUS |
@ -4970,31 +4984,20 @@ bnx2_free_tx_skbs(struct bnx2 *bp)
continue;
for (j = 0; j < TX_DESC_CNT; ) {
struct sw_bd *tx_buf = &txr->tx_buf_ring[j];
struct sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
struct sk_buff *skb = tx_buf->skb;
int k, last;
if (skb == NULL) {
j++;
continue;
}
pci_unmap_single(bp->pdev,
pci_unmap_addr(tx_buf, mapping),
skb_headlen(skb), PCI_DMA_TODEVICE);
skb_dma_unmap(&bp->pdev->dev, skb, DMA_TO_DEVICE);
tx_buf->skb = NULL;
last = skb_shinfo(skb)->nr_frags;
for (k = 0; k < last; k++) {
tx_buf = &txr->tx_buf_ring[j + k + 1];
pci_unmap_page(bp->pdev,
pci_unmap_addr(tx_buf, mapping),
skb_shinfo(skb)->frags[j].size,
PCI_DMA_TODEVICE);
}
j += skb_shinfo(skb)->nr_frags + 1;
dev_kfree_skb(skb);
j += k + 1;
}
}
}
@ -5074,6 +5077,21 @@ bnx2_init_nic(struct bnx2 *bp, int reset_phy)
return 0;
}
static int
bnx2_shutdown_chip(struct bnx2 *bp)
{
u32 reset_code;
if (bp->flags & BNX2_FLAG_NO_WOL)
reset_code = BNX2_DRV_MSG_CODE_UNLOAD_LNK_DN;
else if (bp->wol)
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_WOL;
else
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL;
return bnx2_reset_chip(bp, reset_code);
}
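/* Note: this new helper centralizes the WOL-dependent reset-code
 * selection; the open-coded copies in bnx2_close(), bnx2_self_test()
 * and bnx2_suspend() later in this diff are replaced by calls to it. */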
static int
bnx2_test_registers(struct bnx2 *bp)
{
@ -5357,8 +5375,11 @@ bnx2_run_loopback(struct bnx2 *bp, int loopback_mode)
for (i = 14; i < pkt_size; i++)
packet[i] = (unsigned char) (i & 0xff);
map = pci_map_single(bp->pdev, skb->data, pkt_size,
PCI_DMA_TODEVICE);
if (skb_dma_map(&bp->pdev->dev, skb, DMA_TO_DEVICE)) {
dev_kfree_skb(skb);
return -EIO;
}
map = skb_shinfo(skb)->dma_maps[0];
REG_WR(bp, BNX2_HC_COMMAND,
bp->hc_cmd | BNX2_HC_COMMAND_COAL_NOW_WO_INT);
@ -5393,7 +5414,7 @@ bnx2_run_loopback(struct bnx2 *bp, int loopback_mode)
udelay(5);
pci_unmap_single(bp->pdev, map, pkt_size, PCI_DMA_TODEVICE);
skb_dma_unmap(&bp->pdev->dev, skb, DMA_TO_DEVICE);
dev_kfree_skb(skb);
if (bnx2_get_hw_tx_cons(tx_napi) != txr->tx_prod)
@ -5508,6 +5529,9 @@ bnx2_test_link(struct bnx2 *bp)
{
u32 bmsr;
if (!netif_running(bp->dev))
return -ENODEV;
if (bp->phy_flags & BNX2_PHY_FLAG_REMOTE_PHY_CAP) {
if (bp->link_up)
return 0;
@ -5600,7 +5624,7 @@ bnx2_5706_serdes_timer(struct bnx2 *bp)
} else if ((bp->link_up == 0) && (bp->autoneg & AUTONEG_SPEED)) {
u32 bmcr;
bp->current_interval = bp->timer_interval;
bp->current_interval = BNX2_TIMER_INTERVAL;
bnx2_read_phy(bp, bp->mii_bmcr, &bmcr);
@ -5629,7 +5653,7 @@ bnx2_5706_serdes_timer(struct bnx2 *bp)
bp->phy_flags &= ~BNX2_PHY_FLAG_PARALLEL_DETECT;
}
} else
bp->current_interval = bp->timer_interval;
bp->current_interval = BNX2_TIMER_INTERVAL;
if (check_link) {
u32 val;
@ -5674,11 +5698,11 @@ bnx2_5708_serdes_timer(struct bnx2 *bp)
} else {
bnx2_disable_forced_2g5(bp);
bp->serdes_an_pending = 2;
bp->current_interval = bp->timer_interval;
bp->current_interval = BNX2_TIMER_INTERVAL;
}
} else
bp->current_interval = bp->timer_interval;
bp->current_interval = BNX2_TIMER_INTERVAL;
spin_unlock(&bp->phy_lock);
}
@ -5951,13 +5975,14 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct bnx2 *bp = netdev_priv(dev);
dma_addr_t mapping;
struct tx_bd *txbd;
struct sw_bd *tx_buf;
struct sw_tx_bd *tx_buf;
u32 len, vlan_tag_flags, last_frag, mss;
u16 prod, ring_prod;
int i;
struct bnx2_napi *bnapi;
struct bnx2_tx_ring_info *txr;
struct netdev_queue *txq;
struct skb_shared_info *sp;
/* Determine which tx ring we will be placed on */
i = skb_get_queue_mapping(skb);
@ -5989,7 +6014,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
#endif
if ((mss = skb_shinfo(skb)->gso_size)) {
u32 tcp_opt_len, ip_tcp_len;
u32 tcp_opt_len;
struct iphdr *iph;
vlan_tag_flags |= TX_BD_FLAGS_SW_LSO;
@ -6013,21 +6038,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
mss |= (tcp_off & 0xc) << TX_BD_TCP6_OFF2_SHL;
}
} else {
if (skb_header_cloned(skb) &&
pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) {
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
ip_tcp_len = ip_hdrlen(skb) + sizeof(struct tcphdr);
iph = ip_hdr(skb);
iph->check = 0;
iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len);
tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
iph->daddr, 0,
IPPROTO_TCP,
0);
if (tcp_opt_len || (iph->ihl > 5)) {
vlan_tag_flags |= ((iph->ihl - 5) +
(tcp_opt_len >> 2)) << 8;
@ -6036,11 +6047,16 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
} else
mss = 0;
mapping = pci_map_single(bp->pdev, skb->data, len, PCI_DMA_TODEVICE);
if (skb_dma_map(&bp->pdev->dev, skb, DMA_TO_DEVICE)) {
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
sp = skb_shinfo(skb);
mapping = sp->dma_maps[0];
tx_buf = &txr->tx_buf_ring[ring_prod];
tx_buf->skb = skb;
pci_unmap_addr_set(tx_buf, mapping, mapping);
txbd = &txr->tx_desc_ring[ring_prod];
@ -6059,10 +6075,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
txbd = &txr->tx_desc_ring[ring_prod];
len = frag->size;
mapping = pci_map_page(bp->pdev, frag->page, frag->page_offset,
len, PCI_DMA_TODEVICE);
pci_unmap_addr_set(&txr->tx_buf_ring[ring_prod],
mapping, mapping);
mapping = sp->dma_maps[i + 1];
txbd->tx_bd_haddr_hi = (u64) mapping >> 32;
txbd->tx_bd_haddr_lo = (u64) mapping & 0xffffffff;
@ -6097,20 +6110,13 @@ static int
bnx2_close(struct net_device *dev)
{
struct bnx2 *bp = netdev_priv(dev);
u32 reset_code;
cancel_work_sync(&bp->reset_task);
bnx2_disable_int_sync(bp);
bnx2_napi_disable(bp);
del_timer_sync(&bp->timer);
if (bp->flags & BNX2_FLAG_NO_WOL)
reset_code = BNX2_DRV_MSG_CODE_UNLOAD_LNK_DN;
else if (bp->wol)
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_WOL;
else
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL;
bnx2_reset_chip(bp, reset_code);
bnx2_shutdown_chip(bp);
bnx2_free_irq(bp);
bnx2_free_skbs(bp);
bnx2_free_mem(bp);
@ -6479,6 +6485,9 @@ bnx2_nway_reset(struct net_device *dev)
struct bnx2 *bp = netdev_priv(dev);
u32 bmcr;
if (!netif_running(dev))
return -EAGAIN;
if (!(bp->autoneg & AUTONEG_SPEED)) {
return -EINVAL;
}
@ -6534,6 +6543,9 @@ bnx2_get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
struct bnx2 *bp = netdev_priv(dev);
int rc;
if (!netif_running(dev))
return -EAGAIN;
/* parameters already validated in ethtool_get_eeprom */
rc = bnx2_nvram_read(bp, eeprom->offset, eebuf, eeprom->len);
@ -6548,6 +6560,9 @@ bnx2_set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
struct bnx2 *bp = netdev_priv(dev);
int rc;
if (!netif_running(dev))
return -EAGAIN;
/* parameters already validated in ethtool_set_eeprom */
rc = bnx2_nvram_write(bp, eeprom->offset, eebuf, eeprom->len);
@ -6712,11 +6727,11 @@ bnx2_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam *epause)
bp->autoneg &= ~AUTONEG_FLOW_CTRL;
}
spin_lock_bh(&bp->phy_lock);
bnx2_setup_phy(bp, bp->phy_port);
spin_unlock_bh(&bp->phy_lock);
if (netif_running(dev)) {
spin_lock_bh(&bp->phy_lock);
bnx2_setup_phy(bp, bp->phy_port);
spin_unlock_bh(&bp->phy_lock);
}
return 0;
}
@ -6907,6 +6922,8 @@ bnx2_self_test(struct net_device *dev, struct ethtool_test *etest, u64 *buf)
{
struct bnx2 *bp = netdev_priv(dev);
bnx2_set_power_state(bp, PCI_D0);
memset(buf, 0, sizeof(u64) * BNX2_NUM_TESTS);
if (etest->flags & ETH_TEST_FL_OFFLINE) {
int i;
@ -6926,9 +6943,8 @@ bnx2_self_test(struct net_device *dev, struct ethtool_test *etest, u64 *buf)
if ((buf[2] = bnx2_test_loopback(bp)) != 0)
etest->flags |= ETH_TEST_FL_FAILED;
if (!netif_running(bp->dev)) {
bnx2_reset_chip(bp, BNX2_DRV_MSG_CODE_RESET);
}
if (!netif_running(bp->dev))
bnx2_shutdown_chip(bp);
else {
bnx2_init_nic(bp, 1);
bnx2_netif_start(bp);
@ -6956,6 +6972,8 @@ bnx2_self_test(struct net_device *dev, struct ethtool_test *etest, u64 *buf)
etest->flags |= ETH_TEST_FL_FAILED;
}
if (!netif_running(bp->dev))
bnx2_set_power_state(bp, PCI_D3hot);
}
static void
@ -7021,6 +7039,8 @@ bnx2_phys_id(struct net_device *dev, u32 data)
int i;
u32 save;
bnx2_set_power_state(bp, PCI_D0);
if (data == 0)
data = 2;
@ -7045,6 +7065,10 @@ bnx2_phys_id(struct net_device *dev, u32 data)
}
REG_WR(bp, BNX2_EMAC_LED, 0);
REG_WR(bp, BNX2_MISC_CFG, save);
if (!netif_running(dev))
bnx2_set_power_state(bp, PCI_D3hot);
return 0;
}
@ -7516,8 +7540,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
bp->stats_ticks = USEC_PER_SEC & BNX2_HC_STATS_TICKS_HC_STAT_TICKS;
bp->timer_interval = HZ;
bp->current_interval = HZ;
bp->current_interval = BNX2_TIMER_INTERVAL;
bp->phy_addr = 1;
@ -7607,7 +7630,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
bp->req_flow_ctrl = FLOW_CTRL_RX | FLOW_CTRL_TX;
init_timer(&bp->timer);
bp->timer.expires = RUN_AT(bp->timer_interval);
bp->timer.expires = RUN_AT(BNX2_TIMER_INTERVAL);
bp->timer.data = (unsigned long) bp;
bp->timer.function = bnx2_timer;
@ -7720,7 +7743,6 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
memcpy(dev->dev_addr, bp->mac_addr, 6);
memcpy(dev->perm_addr, bp->mac_addr, 6);
bp->name = board_info[ent->driver_data].name;
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
if (CHIP_NUM(bp) == CHIP_NUM_5709)
@ -7747,7 +7769,7 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
printk(KERN_INFO "%s: %s (%c%d) %s found at mem %lx, "
"IRQ %d, node addr %s\n",
dev->name,
bp->name,
board_info[ent->driver_data].name,
((CHIP_ID(bp) & 0xf000) >> 12) + 'A',
((CHIP_ID(bp) & 0x0ff0) >> 4),
bnx2_bus_string(bp, str),
@ -7781,7 +7803,6 @@ bnx2_suspend(struct pci_dev *pdev, pm_message_t state)
{
struct net_device *dev = pci_get_drvdata(pdev);
struct bnx2 *bp = netdev_priv(dev);
u32 reset_code;
/* PCI register 4 needs to be saved whether netif_running() or not.
* MSI address and data need to be saved if using MSI and
@ -7795,13 +7816,7 @@ bnx2_suspend(struct pci_dev *pdev, pm_message_t state)
bnx2_netif_stop(bp);
netif_device_detach(dev);
del_timer_sync(&bp->timer);
if (bp->flags & BNX2_FLAG_NO_WOL)
reset_code = BNX2_DRV_MSG_CODE_UNLOAD_LNK_DN;
else if (bp->wol)
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_WOL;
else
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL;
bnx2_reset_chip(bp, reset_code);
bnx2_shutdown_chip(bp);
bnx2_free_skbs(bp);
bnx2_set_power_state(bp, pci_choose_state(pdev, state));
return 0;
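The bnx2 changes above drop the per-fragment pci_map_page()/pci_map_single() bookkeeping in favor of skb_dma_map(), which maps the head and every fragment in one call and parks the addresses in the skb itself. The resulting pattern, as a sketch (i is the fragment index; error handling abbreviated):

struct skb_shared_info *sp;
dma_addr_t head_mapping, frag_mapping;

if (skb_dma_map(&pdev->dev, skb, DMA_TO_DEVICE)) {	/* maps head + all frags */
	dev_kfree_skb(skb);				/* mapping failed */
	return NETDEV_TX_OK;
}

sp = skb_shinfo(skb);
head_mapping = sp->dma_maps[0];				/* linear data */
frag_mapping = sp->dma_maps[i + 1];			/* fragment i */

/* on tx completion: one call undoes everything, no per-frag loop */
skb_dma_unmap(&pdev->dev, skb, DMA_TO_DEVICE);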

View File

@ -6526,10 +6526,14 @@ struct sw_pg {
DECLARE_PCI_UNMAP_ADDR(mapping)
};
struct sw_tx_bd {
struct sk_buff *skb;
};
#define SW_RXBD_RING_SIZE (sizeof(struct sw_bd) * RX_DESC_CNT)
#define SW_RXPG_RING_SIZE (sizeof(struct sw_pg) * RX_DESC_CNT)
#define RXBD_RING_SIZE (sizeof(struct rx_bd) * RX_DESC_CNT)
#define SW_TXBD_RING_SIZE (sizeof(struct sw_bd) * TX_DESC_CNT)
#define SW_TXBD_RING_SIZE (sizeof(struct sw_tx_bd) * TX_DESC_CNT)
#define TXBD_RING_SIZE (sizeof(struct tx_bd) * TX_DESC_CNT)
/* Buffered flash (Atmel: AT45DB011B) specific information */
@ -6609,7 +6613,7 @@ struct bnx2_tx_ring_info {
u32 tx_bseq_addr;
struct tx_bd *tx_desc_ring;
struct sw_bd *tx_buf_ring;
struct sw_tx_bd *tx_buf_ring;
u16 tx_cons;
u16 hw_tx_cons;
@ -6654,6 +6658,8 @@ struct bnx2_napi {
struct bnx2_tx_ring_info tx_ring;
};
#define BNX2_TIMER_INTERVAL HZ
struct bnx2 {
/* Fields used in the tx and intr/napi performance paths are grouped */
/* together in the beginning of the structure. */
@ -6701,9 +6707,6 @@ struct bnx2 {
/* End of fields used in the performance code paths. */
char *name;
int timer_interval;
int current_interval;
struct timer_list timer;
struct work_struct reset_task;

File diff suppressed because it is too large

View File

@ -59,8 +59,8 @@
#include "bnx2x.h"
#include "bnx2x_init.h"
#define DRV_MODULE_VERSION "1.45.21"
#define DRV_MODULE_RELDATE "2008/09/03"
#define DRV_MODULE_VERSION "1.45.22"
#define DRV_MODULE_RELDATE "2008/09/09"
#define BNX2X_BC_VER 0x040200
/* Time in jiffies before concluding the transmitter is hung */
@ -649,15 +649,16 @@ static void bnx2x_int_disable(struct bnx2x *bp)
BNX2X_ERR("BUG! proper val not read from IGU!\n");
}
static void bnx2x_int_disable_sync(struct bnx2x *bp)
static void bnx2x_int_disable_sync(struct bnx2x *bp, int disable_hw)
{
int msix = (bp->flags & USING_MSIX_FLAG) ? 1 : 0;
int i;
/* disable interrupt handling */
atomic_inc(&bp->intr_sem);
/* prevent the HW from sending interrupts */
bnx2x_int_disable(bp);
if (disable_hw)
/* prevent the HW from sending interrupts */
bnx2x_int_disable(bp);
/* make sure all ISRs are done */
if (msix) {
@ -6086,9 +6087,9 @@ static void bnx2x_netif_start(struct bnx2x *bp)
}
}
static void bnx2x_netif_stop(struct bnx2x *bp)
static void bnx2x_netif_stop(struct bnx2x *bp, int disable_hw)
{
bnx2x_int_disable_sync(bp);
bnx2x_int_disable_sync(bp, disable_hw);
if (netif_running(bp->dev)) {
bnx2x_napi_disable(bp);
netif_tx_disable(bp->dev);
@ -6475,7 +6476,7 @@ load_rings_free:
for_each_queue(bp, i)
bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE);
load_int_disable:
bnx2x_int_disable_sync(bp);
bnx2x_int_disable_sync(bp, 1);
/* Release IRQs */
bnx2x_free_irq(bp);
load_error:
@ -6650,7 +6651,7 @@ static int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode)
bp->rx_mode = BNX2X_RX_MODE_NONE;
bnx2x_set_storm_rx_mode(bp);
bnx2x_netif_stop(bp);
bnx2x_netif_stop(bp, 1);
if (!netif_running(bp->dev))
bnx2x_napi_disable(bp);
del_timer_sync(&bp->timer);
@ -8791,7 +8792,7 @@ static int bnx2x_test_loopback(struct bnx2x *bp, u8 link_up)
if (!netif_running(bp->dev))
return BNX2X_LOOPBACK_FAILED;
bnx2x_netif_stop(bp);
bnx2x_netif_stop(bp, 1);
if (bnx2x_run_loopback(bp, BNX2X_MAC_LOOPBACK, link_up)) {
DP(NETIF_MSG_PROBE, "MAC loopback failed\n");
@ -10346,6 +10347,74 @@ static int bnx2x_resume(struct pci_dev *pdev)
return rc;
}
static int bnx2x_eeh_nic_unload(struct bnx2x *bp)
{
int i;
bp->state = BNX2X_STATE_ERROR;
bp->rx_mode = BNX2X_RX_MODE_NONE;
bnx2x_netif_stop(bp, 0);
del_timer_sync(&bp->timer);
bp->stats_state = STATS_STATE_DISABLED;
DP(BNX2X_MSG_STATS, "stats_state - DISABLED\n");
/* Release IRQs */
bnx2x_free_irq(bp);
if (CHIP_IS_E1(bp)) {
struct mac_configuration_cmd *config =
bnx2x_sp(bp, mcast_config);
for (i = 0; i < config->hdr.length_6b; i++)
CAM_INVALIDATE(config->config_table[i]);
}
/* Free SKBs, SGEs, TPA pool and driver internals */
bnx2x_free_skbs(bp);
for_each_queue(bp, i)
bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE);
bnx2x_free_mem(bp);
bp->state = BNX2X_STATE_CLOSED;
netif_carrier_off(bp->dev);
return 0;
}
static void bnx2x_eeh_recover(struct bnx2x *bp)
{
u32 val;
mutex_init(&bp->port.phy_mutex);
bp->common.shmem_base = REG_RD(bp, MISC_REG_SHARED_MEM_ADDR);
bp->link_params.shmem_base = bp->common.shmem_base;
BNX2X_DEV_INFO("shmem offset is 0x%x\n", bp->common.shmem_base);
if (!bp->common.shmem_base ||
(bp->common.shmem_base < 0xA0000) ||
(bp->common.shmem_base >= 0xC0000)) {
BNX2X_DEV_INFO("MCP not active\n");
bp->flags |= NO_MCP_FLAG;
return;
}
val = SHMEM_RD(bp, validity_map[BP_PORT(bp)]);
if ((val & (SHR_MEM_VALIDITY_DEV_INFO | SHR_MEM_VALIDITY_MB))
!= (SHR_MEM_VALIDITY_DEV_INFO | SHR_MEM_VALIDITY_MB))
BNX2X_ERR("BAD MCP validity signature\n");
if (!BP_NOMCP(bp)) {
bp->fw_seq = (SHMEM_RD(bp, func_mb[BP_FUNC(bp)].drv_mb_header)
& DRV_MSG_SEQ_NUMBER_MASK);
BNX2X_DEV_INFO("fw_seq 0x%08x\n", bp->fw_seq);
}
}
/**
* bnx2x_io_error_detected - called when PCI error is detected
* @pdev: Pointer to PCI device
@ -10365,7 +10434,7 @@ static pci_ers_result_t bnx2x_io_error_detected(struct pci_dev *pdev,
netif_device_detach(dev);
if (netif_running(dev))
bnx2x_nic_unload(bp, UNLOAD_CLOSE);
bnx2x_eeh_nic_unload(bp);
pci_disable_device(pdev);
@ -10420,8 +10489,10 @@ static void bnx2x_io_resume(struct pci_dev *pdev)
rtnl_lock();
bnx2x_eeh_recover(bp);
if (netif_running(dev))
bnx2x_nic_load(bp, LOAD_OPEN);
bnx2x_nic_load(bp, LOAD_NORMAL);
netif_device_attach(dev);
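The two EEH helpers above give bnx2x a recovery path that avoids touching hardware that may already be fenced off (note the disable_hw=0 passed to bnx2x_netif_stop() in bnx2x_eeh_nic_unload()). For context, a sketch of how such handlers are typically wired up; the slot_reset entry is an assumption, as that function is not shown in this hunk:

static struct pci_error_handlers bnx2x_err_handler = {
	.error_detected	= bnx2x_io_error_detected,
	.slot_reset	= bnx2x_io_slot_reset,	/* assumed name, not in this hunk */
	.resume		= bnx2x_io_resume,
};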

View File

@ -38,6 +38,7 @@
#include <linux/in.h>
#include <net/ipx.h>
#include <net/arp.h>
#include <net/ipv6.h>
#include <asm/byteorder.h>
#include "bonding.h"
#include "bond_alb.h"
@ -81,6 +82,7 @@
#define RLB_PROMISC_TIMEOUT 10*ALB_TIMER_TICKS_PER_SEC
static const u8 mac_bcast[ETH_ALEN] = {0xff,0xff,0xff,0xff,0xff,0xff};
static const u8 mac_v6_allmcast[ETH_ALEN] = {0x33,0x33,0x00,0x00,0x00,0x01};
static const int alb_delta_in_ticks = HZ / ALB_TIMER_TICKS_PER_SEC;
#pragma pack(1)
@ -710,7 +712,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
struct arp_pkt *arp = arp_pkt(skb);
struct slave *tx_slave = NULL;
if (arp->op_code == __constant_htons(ARPOP_REPLY)) {
if (arp->op_code == htons(ARPOP_REPLY)) {
/* the arp must be sent on the selected
* rx channel
*/
@ -719,7 +721,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
memcpy(arp->mac_src,tx_slave->dev->dev_addr, ETH_ALEN);
}
dprintk("Server sent ARP Reply packet\n");
} else if (arp->op_code == __constant_htons(ARPOP_REQUEST)) {
} else if (arp->op_code == htons(ARPOP_REQUEST)) {
/* Create an entry in the rx_hashtbl for this client as a
* place holder.
* When the arp reply is received the entry will be updated
@ -1290,6 +1292,7 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
u32 hash_index = 0;
const u8 *hash_start = NULL;
int res = 1;
struct ipv6hdr *ip6hdr;
skb_reset_mac_header(skb);
eth_data = eth_hdr(skb);
@ -1319,11 +1322,32 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
}
break;
case ETH_P_IPV6:
/* IPv6 doesn't really use broadcast mac address, but leave
* that here just in case.
*/
if (memcmp(eth_data->h_dest, mac_bcast, ETH_ALEN) == 0) {
do_tx_balance = 0;
break;
}
/* IPv6 uses all-nodes multicast as an equivalent to
* broadcasts in IPv4.
*/
if (memcmp(eth_data->h_dest, mac_v6_allmcast, ETH_ALEN) == 0) {
do_tx_balance = 0;
break;
}
/* Additionally, DAD probes should not be tx-balanced as that
* will lead to false positives for duplicate addresses and
* prevent address configuration from working.
*/
ip6hdr = ipv6_hdr(skb);
if (ipv6_addr_any(&ip6hdr->saddr)) {
do_tx_balance = 0;
break;
}
hash_start = (char *)&(ipv6_hdr(skb)->daddr);
hash_size = sizeof(ipv6_hdr(skb)->daddr);
break;

View File

@ -3702,7 +3702,7 @@ static int bond_xmit_hash_policy_l23(struct sk_buff *skb,
struct ethhdr *data = (struct ethhdr *)skb->data;
struct iphdr *iph = ip_hdr(skb);
if (skb->protocol == __constant_htons(ETH_P_IP)) {
if (skb->protocol == htons(ETH_P_IP)) {
return ((ntohl(iph->saddr ^ iph->daddr) & 0xffff) ^
(data->h_dest[5] ^ bond_dev->dev_addr[5])) % count;
}
@ -3723,8 +3723,8 @@ static int bond_xmit_hash_policy_l34(struct sk_buff *skb,
__be16 *layer4hdr = (__be16 *)((u32 *)iph + iph->ihl);
int layer4_xor = 0;
if (skb->protocol == __constant_htons(ETH_P_IP)) {
if (!(iph->frag_off & __constant_htons(IP_MF|IP_OFFSET)) &&
if (skb->protocol == htons(ETH_P_IP)) {
if (!(iph->frag_off & htons(IP_MF|IP_OFFSET)) &&
(iph->protocol == IPPROTO_TCP ||
iph->protocol == IPPROTO_UDP)) {
layer4_xor = ntohs((*layer4hdr ^ *(layer4hdr + 1)));
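The l23 policy above hashes the low 16 bits of the XORed IPv4 addresses against the last octets of the destination and bond MAC addresses, modulo the slave count. A stand-alone restatement of that arithmetic (user-space sketch, addresses in network byte order):

#include <stdint.h>
#include <arpa/inet.h>

/* same computation as bond_xmit_hash_policy_l23() for IPv4 */
static int l23_hash(uint32_t saddr, uint32_t daddr,
		    uint8_t dest_last, uint8_t self_last, int count)
{
	return ((ntohl(saddr ^ daddr) & 0xffff) ^ (dest_last ^ self_last)) % count;
}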
@ -4493,6 +4493,12 @@ static void bond_ethtool_get_drvinfo(struct net_device *bond_dev,
static const struct ethtool_ops bond_ethtool_ops = {
.get_drvinfo = bond_ethtool_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_tx_csum = ethtool_op_get_tx_csum,
.get_sg = ethtool_op_get_sg,
.get_tso = ethtool_op_get_tso,
.get_ufo = ethtool_op_get_ufo,
.get_flags = ethtool_op_get_flags,
};
/*

View File

@ -32,7 +32,7 @@
#ifdef BONDING_DEBUG
#define dprintk(fmt, args...) \
printk(KERN_DEBUG \
DRV_NAME ": %s() %d: " fmt, __FUNCTION__, __LINE__ , ## args )
DRV_NAME ": %s() %d: " fmt, __func__, __LINE__ , ## args )
#else
#define dprintk(fmt, args...)
#endif /* BONDING_DEBUG */
@ -333,5 +333,13 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active);
void bond_register_arp(struct bonding *);
void bond_unregister_arp(struct bonding *);
/* exported from bond_main.c */
extern struct list_head bond_dev_list;
extern struct bond_parm_tbl bond_lacp_tbl[];
extern struct bond_parm_tbl bond_mode_tbl[];
extern struct bond_parm_tbl xmit_hashtype_tbl[];
extern struct bond_parm_tbl arp_validate_tbl[];
extern struct bond_parm_tbl fail_over_mac_tbl[];
#endif /* _LINUX_BONDING_H */

View File

@ -74,6 +74,7 @@
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/vmalloc.h>
#include <linux/ioport.h>
#include <linux/pci.h>
#include <linux/mm.h>
@ -91,6 +92,7 @@
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/mutex.h>
#include <linux/firmware.h>
#include <net/checksum.h>
@ -197,6 +199,7 @@ static int link_mode;
MODULE_AUTHOR("Adrian Sun (asun@darksunrising.com)");
MODULE_DESCRIPTION("Sun Cassini(+) ethernet driver");
MODULE_LICENSE("GPL");
MODULE_FIRMWARE("sun/cassini.bin");
module_param(cassini_debug, int, 0);
MODULE_PARM_DESC(cassini_debug, "Cassini bitmapped debugging message enable value");
module_param(link_mode, int, 0);
@ -812,9 +815,44 @@ static int cas_reset_mii_phy(struct cas *cp)
return (limit <= 0);
}
static int cas_saturn_firmware_init(struct cas *cp)
{
const struct firmware *fw;
const char fw_name[] = "sun/cassini.bin";
int err;
if (PHY_NS_DP83065 != cp->phy_id)
return 0;
err = request_firmware(&fw, fw_name, &cp->pdev->dev);
if (err) {
printk(KERN_ERR "cassini: Failed to load firmware \"%s\"\n",
fw_name);
return err;
}
if (fw->size < 2) {
printk(KERN_ERR "cassini: bogus length %zu in \"%s\"\n",
fw->size, fw_name);
err = -EINVAL;
goto out;
}
cp->fw_load_addr = fw->data[1] << 8 | fw->data[0];
cp->fw_size = fw->size - 2;
cp->fw_data = vmalloc(cp->fw_size);
if (!cp->fw_data) {
err = -ENOMEM;
printk(KERN_ERR "cassini: \"%s\" Failed %d\n", fw_name, err);
goto out;
}
memcpy(cp->fw_data, &fw->data[2], cp->fw_size);
out:
release_firmware(fw);
return err;
}
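/* Blob layout assumed by the code above: fw->data[0] and fw->data[1]
 * form the little-endian load address for the DP83065 patch, and the
 * remaining fw->size - 2 bytes are the firmware data proper, written
 * one byte at a time via DP83065_MII_REGD during the load below. */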
static void cas_saturn_firmware_load(struct cas *cp)
{
cas_saturn_patch_t *patch = cas_saturn_patch;
int i;
cas_phy_powerdown(cp);
@ -833,11 +871,9 @@ static void cas_saturn_firmware_load(struct cas *cp)
/* download new firmware */
cas_phy_write(cp, DP83065_MII_MEM, 0x1);
cas_phy_write(cp, DP83065_MII_REGE, patch->addr);
while (patch->addr) {
cas_phy_write(cp, DP83065_MII_REGD, patch->val);
patch++;
}
cas_phy_write(cp, DP83065_MII_REGE, cp->fw_load_addr);
for (i = 0; i < cp->fw_size; i++)
cas_phy_write(cp, DP83065_MII_REGD, cp->fw_data[i]);
/* enable firmware */
cas_phy_write(cp, DP83065_MII_REGE, 0x8ff8);
@ -2182,7 +2218,7 @@ static inline void cas_rx_flow_pkt(struct cas *cp, const u64 *words,
* do any additional locking here. stick the buffer
* at the end.
*/
__skb_insert(skb, flow->prev, (struct sk_buff *) flow, flow);
__skb_queue_tail(flow, skb);
if (words[0] & RX_COMP1_RELEASE_FLOW) {
while ((skb = __skb_dequeue(flow))) {
cas_skb_release(skb);
@ -5108,6 +5144,9 @@ static int __devinit cas_init_one(struct pci_dev *pdev,
cas_reset(cp, 0);
if (cas_check_invariants(cp))
goto err_out_iounmap;
if (cp->cas_flags & CAS_FLAG_SATURN)
if (cas_saturn_firmware_init(cp))
goto err_out_iounmap;
cp->init_block = (struct cas_init_block *)
pci_alloc_consistent(pdev, sizeof(struct cas_init_block),
@ -5217,6 +5256,9 @@ static void __devexit cas_remove_one(struct pci_dev *pdev)
cp = netdev_priv(dev);
unregister_netdev(dev);
if (cp->fw_data)
vfree(cp->fw_data);
mutex_lock(&cp->pm_mutex);
flush_scheduled_work();
if (cp->hw_running)

File diff suppressed because it is too large

View File

@ -302,13 +302,7 @@ static int cpmac_mdio_reset(struct mii_bus *bus)
static int mii_irqs[PHY_MAX_ADDR] = { PHY_POLL, };
static struct mii_bus cpmac_mii = {
.name = "cpmac-mii",
.read = cpmac_mdio_read,
.write = cpmac_mdio_write,
.reset = cpmac_mdio_reset,
.irq = mii_irqs,
};
static struct mii_bus *cpmac_mii;
static int cpmac_config(struct net_device *dev, struct ifmap *map)
{
@ -1116,7 +1110,7 @@ static int __devinit cpmac_probe(struct platform_device *pdev)
for (phy_id = 0; phy_id < PHY_MAX_ADDR; phy_id++) {
if (!(pdata->phy_mask & (1 << phy_id)))
continue;
if (!cpmac_mii.phy_map[phy_id])
if (!cpmac_mii->phy_map[phy_id])
continue;
break;
}
@ -1168,7 +1162,7 @@ static int __devinit cpmac_probe(struct platform_device *pdev)
priv->msg_enable = netif_msg_init(debug_level, 0xff);
memcpy(dev->dev_addr, pdata->dev_addr, sizeof(dev->dev_addr));
priv->phy = phy_connect(dev, cpmac_mii.phy_map[phy_id]->dev.bus_id,
priv->phy = phy_connect(dev, cpmac_mii->phy_map[phy_id]->dev.bus_id,
&cpmac_adjust_link, 0, PHY_INTERFACE_MODE_MII);
if (IS_ERR(priv->phy)) {
if (netif_msg_drv(priv))
@ -1216,11 +1210,22 @@ int __devinit cpmac_init(void)
u32 mask;
int i, res;
cpmac_mii.priv = ioremap(AR7_REGS_MDIO, 256);
cpmac_mii = mdiobus_alloc();
if (cpmac_mii == NULL)
return -ENOMEM;
if (!cpmac_mii.priv) {
cpmac_mii->name = "cpmac-mii";
cpmac_mii->read = cpmac_mdio_read;
cpmac_mii->write = cpmac_mdio_write;
cpmac_mii->reset = cpmac_mdio_reset;
cpmac_mii->irq = mii_irqs;
cpmac_mii->priv = ioremap(AR7_REGS_MDIO, 256);
if (!cpmac_mii->priv) {
printk(KERN_ERR "Can't ioremap mdio registers\n");
return -ENXIO;
res = -ENXIO;
goto fail_alloc;
}
#warning FIXME: unhardcode gpio&reset bits
@ -1230,10 +1235,10 @@ int __devinit cpmac_init(void)
ar7_device_reset(AR7_RESET_BIT_CPMAC_HI);
ar7_device_reset(AR7_RESET_BIT_EPHY);
cpmac_mii.reset(&cpmac_mii);
cpmac_mii->reset(cpmac_mii);
for (i = 0; i < 300000; i++)
if ((mask = cpmac_read(cpmac_mii.priv, CPMAC_MDIO_ALIVE)))
if ((mask = cpmac_read(cpmac_mii->priv, CPMAC_MDIO_ALIVE)))
break;
else
cpu_relax();
@ -1244,10 +1249,10 @@ int __devinit cpmac_init(void)
mask = 0;
}
cpmac_mii.phy_mask = ~(mask | 0x80000000);
snprintf(cpmac_mii.id, MII_BUS_ID_SIZE, "0");
cpmac_mii->phy_mask = ~(mask | 0x80000000);
snprintf(cpmac_mii->id, MII_BUS_ID_SIZE, "0");
res = mdiobus_register(&cpmac_mii);
res = mdiobus_register(cpmac_mii);
if (res)
goto fail_mii;
@ -1258,10 +1263,13 @@ int __devinit cpmac_init(void)
return 0;
fail_cpmac:
mdiobus_unregister(&cpmac_mii);
mdiobus_unregister(cpmac_mii);
fail_mii:
iounmap(cpmac_mii.priv);
iounmap(cpmac_mii->priv);
fail_alloc:
mdiobus_free(cpmac_mii);
return res;
}
@ -1269,8 +1277,9 @@ fail_mii:
void __devexit cpmac_exit(void)
{
platform_driver_unregister(&cpmac_driver);
mdiobus_unregister(&cpmac_mii);
iounmap(cpmac_mii.priv);
mdiobus_unregister(cpmac_mii);
mdiobus_free(cpmac_mii);
iounmap(cpmac_mii->priv);
}
module_init(cpmac_init);

View File

@ -1397,9 +1397,7 @@ net_open(struct net_device *dev)
release_dma:
#if ALLOW_DMA
free_dma(dev->dma);
#endif
release_irq:
#if ALLOW_DMA
release_dma_buff(lp);
#endif
writereg(dev, PP_LineCTL, readreg(dev, PP_LineCTL) & ~(SERIAL_TX_ON | SERIAL_RX_ON));

View File

@ -54,7 +54,6 @@ struct port_info {
struct adapter *adapter;
struct vlan_group *vlan_grp;
struct sge_qset *qs;
const struct port_type_info *port_type;
u8 port_id;
u8 rx_csum_offload;
u8 nqsets;
@ -124,8 +123,7 @@ struct sge_rspq { /* state for an SGE response queue */
dma_addr_t phys_addr; /* physical address of the ring */
unsigned int cntxt_id; /* SGE context id for the response q */
spinlock_t lock; /* guards response processing */
struct sk_buff *rx_head; /* offload packet receive queue head */
struct sk_buff *rx_tail; /* offload packet receive queue tail */
struct sk_buff_head rx_queue; /* offload packet receive queue */
struct sk_buff *pg_skb; /* used to build frag list in napi handler */
unsigned long offload_pkts;
@ -241,6 +239,7 @@ struct adapter {
unsigned int check_task_cnt;
struct delayed_work adap_check_task;
struct work_struct ext_intr_handler_task;
struct work_struct fatal_error_handler_task;
struct dentry *debugfs_root;
@ -282,9 +281,11 @@ int t3_offload_tx(struct t3cdev *tdev, struct sk_buff *skb);
void t3_os_ext_intr_handler(struct adapter *adapter);
void t3_os_link_changed(struct adapter *adapter, int port_id, int link_status,
int speed, int duplex, int fc);
void t3_os_phymod_changed(struct adapter *adap, int port_id);
void t3_sge_start(struct adapter *adap);
void t3_sge_stop(struct adapter *adap);
void t3_stop_sge_timers(struct adapter *adap);
void t3_free_sge_resources(struct adapter *adap);
void t3_sge_err_intr_handler(struct adapter *adapter);
irq_handler_t t3_intr_handler(struct adapter *adap, int polling);
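The hand-rolled rx_head/rx_tail pair above becomes a struct sk_buff_head, so the offload receive queue can use the standard skb queue primitives. A minimal sketch of the replacement pattern (locking elided; the same calls appear in the cassini diff earlier):

skb_queue_head_init(&q->rx_queue);		/* once, at setup */
__skb_queue_tail(&q->rx_queue, skb);		/* producer side */
skb = __skb_dequeue(&q->rx_queue);		/* consumer; NULL when empty */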

File diff suppressed because it is too large

View File

@ -193,22 +193,13 @@ struct mdio_ops {
struct adapter_info {
unsigned char nports; /* # of ports */
unsigned char phy_base_addr; /* MDIO PHY base address */
unsigned char mdien;
unsigned char mdiinv;
unsigned int gpio_out; /* GPIO output settings */
unsigned int gpio_intr; /* GPIO IRQ enable mask */
unsigned char gpio_intr[MAX_NPORTS]; /* GPIO PHY IRQ pins */
unsigned long caps; /* adapter capabilities */
const struct mdio_ops *mdio_ops; /* MDIO operations */
const char *desc; /* product description */
};
struct port_type_info {
void (*phy_prep)(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *ops);
unsigned int caps;
const char *desc;
};
struct mc5_stats {
unsigned long parity_err;
unsigned long active_rgn_full;
@ -358,6 +349,7 @@ struct qset_params { /* SGE queue set parameters */
unsigned int jumbo_size; /* # of entries in jumbo free list */
unsigned int txq_size[SGE_TXQ_PER_SET]; /* Tx queue sizes */
unsigned int cong_thres; /* FL congestion threshold */
unsigned int vector; /* Interrupt (line or vector) number */
};
struct sge_params {
@ -525,12 +517,25 @@ enum {
MAC_RXFIFO_SIZE = 32768
};
/* IEEE 802.3ae specified MDIO devices */
/* IEEE 802.3 specified MDIO devices */
enum {
MDIO_DEV_PMA_PMD = 1,
MDIO_DEV_WIS = 2,
MDIO_DEV_PCS = 3,
MDIO_DEV_XGXS = 4
MDIO_DEV_XGXS = 4,
MDIO_DEV_ANEG = 7,
MDIO_DEV_VEND1 = 30,
MDIO_DEV_VEND2 = 31
};
/* LASI control and status registers */
enum {
RX_ALARM_CTRL = 0x9000,
TX_ALARM_CTRL = 0x9001,
LASI_CTRL = 0x9002,
RX_ALARM_STAT = 0x9003,
TX_ALARM_STAT = 0x9004,
LASI_STAT = 0x9005
};
/* PHY loopback direction */
@ -542,12 +547,23 @@ enum {
/* PHY interrupt types */
enum {
cphy_cause_link_change = 1,
cphy_cause_fifo_error = 2
cphy_cause_fifo_error = 2,
cphy_cause_module_change = 4,
};
/* PHY module types */
enum {
phy_modtype_none,
phy_modtype_sr,
phy_modtype_lr,
phy_modtype_lrm,
phy_modtype_twinax,
phy_modtype_twinax_long,
phy_modtype_unknown
};
/* PHY operations */
struct cphy_ops {
void (*destroy)(struct cphy *phy);
int (*reset)(struct cphy *phy, int wait);
int (*intr_enable)(struct cphy *phy);
@ -568,8 +584,12 @@ struct cphy_ops {
/* A PHY instance */
struct cphy {
int addr; /* PHY address */
u8 addr; /* PHY address */
u8 modtype; /* PHY module type */
short priv; /* scratch pad */
unsigned int caps; /* PHY capabilities */
struct adapter *adapter; /* associated adapter */
const char *desc; /* PHY description */
unsigned long fifo_errors; /* FIFO over/under-flows */
const struct cphy_ops *ops; /* PHY operations */
int (*mdio_read)(struct adapter *adapter, int phy_addr, int mmd_addr,
@ -594,10 +614,13 @@ static inline int mdio_write(struct cphy *phy, int mmd, int reg,
/* Convenience initializer */
static inline void cphy_init(struct cphy *phy, struct adapter *adapter,
int phy_addr, struct cphy_ops *phy_ops,
const struct mdio_ops *mdio_ops)
const struct mdio_ops *mdio_ops,
unsigned int caps, const char *desc)
{
phy->adapter = adapter;
phy->addr = phy_addr;
phy->caps = caps;
phy->adapter = adapter;
phy->desc = desc;
phy->ops = phy_ops;
if (mdio_ops) {
phy->mdio_read = mdio_ops->read;
@ -668,7 +691,12 @@ int t3_mdio_change_bits(struct cphy *phy, int mmd, int reg, unsigned int clear,
unsigned int set);
int t3_phy_reset(struct cphy *phy, int mmd, int wait);
int t3_phy_advertise(struct cphy *phy, unsigned int advert);
int t3_phy_advertise_fiber(struct cphy *phy, unsigned int advert);
int t3_set_phy_speed_duplex(struct cphy *phy, int speed, int duplex);
int t3_phy_lasi_intr_enable(struct cphy *phy);
int t3_phy_lasi_intr_disable(struct cphy *phy);
int t3_phy_lasi_intr_clear(struct cphy *phy);
int t3_phy_lasi_intr_handler(struct cphy *phy);
void t3_intr_enable(struct adapter *adapter);
void t3_intr_disable(struct adapter *adapter);
@ -698,6 +726,7 @@ int t3_check_fw_version(struct adapter *adapter, int *must_load);
int t3_init_hw(struct adapter *adapter, u32 fw_params);
void mac_prep(struct cmac *mac, struct adapter *adapter, int index);
void early_hw_init(struct adapter *adapter, const struct adapter_info *ai);
int t3_reset_adapter(struct adapter *adapter);
int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
int reset);
int t3_replay_prep_adapter(struct adapter *adapter);
@ -774,14 +803,16 @@ int t3_sge_read_rspq(struct adapter *adapter, unsigned int id, u32 data[4]);
int t3_sge_cqcntxt_op(struct adapter *adapter, unsigned int id, unsigned int op,
unsigned int credits);
void t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
void t3_ael1002_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
void t3_ael1006_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
void t3_qt2045_phy_prep(struct cphy *phy, struct adapter *adapter, int phy_addr,
const struct mdio_ops *mdio_ops);
void t3_xaui_direct_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
int t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
int t3_ael1002_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
int t3_ael1006_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
int t3_ael2005_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
int t3_qt2045_phy_prep(struct cphy *phy, struct adapter *adapter, int phy_addr,
const struct mdio_ops *mdio_ops);
int t3_xaui_direct_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
#endif /* __CHELSIO_COMMON_H */
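cphy_init() now also records the PHY's capabilities and a description string, and the phy_prep hooks return int. A hypothetical prep function in the new style; the function name, example_phy_ops, and the capability bits are illustrative, not taken from the driver:

int t3_example_phy_prep(struct cphy *phy, struct adapter *adapter,
			int phy_addr, const struct mdio_ops *mdio_ops)
{
	cphy_init(phy, adapter, phy_addr, &example_phy_ops, mdio_ops,
		  SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE,
		  "example 10GBASE-R");		/* caps + desc are the new args */
	return 0;
}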

View File

@ -92,6 +92,8 @@ struct ch_qset_params {
int32_t polling;
int32_t lro;
int32_t cong_thres;
int32_t vector;
int32_t qnum;
};
struct ch_pktsched_params {

View File

@ -208,6 +208,31 @@ void t3_os_link_changed(struct adapter *adapter, int port_id, int link_stat,
}
}
/**
* t3_os_phymod_changed - handle PHY module changes
* @phy: the PHY reporting the module change
* @mod_type: new module type
*
* This is the OS-dependent handler for PHY module changes. It is
* invoked when a PHY module is removed or inserted for any OS-specific
* processing.
*/
void t3_os_phymod_changed(struct adapter *adap, int port_id)
{
static const char *mod_str[] = {
NULL, "SR", "LR", "LRM", "TWINAX", "TWINAX", "unknown"
};
const struct net_device *dev = adap->port[port_id];
const struct port_info *pi = netdev_priv(dev);
if (pi->phy.modtype == phy_modtype_none)
printk(KERN_INFO "%s: PHY module unplugged\n", dev->name);
else
printk(KERN_INFO "%s: %s PHY module inserted\n", dev->name,
mod_str[pi->phy.modtype]);
}
static void cxgb_set_rxmode(struct net_device *dev)
{
struct t3_rx_mode rm;
@ -274,10 +299,10 @@ static void name_msix_vecs(struct adapter *adap)
for (i = 0; i < pi->nqsets; i++, msi_idx++) {
snprintf(adap->msix_info[msi_idx].desc, n,
"%s (queue %d)", d->name, i);
"%s-%d", d->name, pi->first_qset + i);
adap->msix_info[msi_idx].desc[n] = 0;
}
}
}
}
static int request_msix_data_irqs(struct adapter *adap)
@ -306,6 +331,22 @@ static int request_msix_data_irqs(struct adapter *adap)
return 0;
}
static void free_irq_resources(struct adapter *adapter)
{
if (adapter->flags & USING_MSIX) {
int i, n = 0;
free_irq(adapter->msix_info[0].vec, adapter);
for_each_port(adapter, i)
n += adap2pinfo(adapter, i)->nqsets;
for (i = 0; i < n; ++i)
free_irq(adapter->msix_info[i + 1].vec,
&adapter->sge.qs[i]);
} else
free_irq(adapter->pdev->irq, adapter);
}
static int await_mgmt_replies(struct adapter *adap, unsigned long init_cnt,
unsigned long n)
{
@ -473,12 +514,16 @@ static int setup_sge_qsets(struct adapter *adap)
struct port_info *pi = netdev_priv(dev);
pi->qs = &adap->sge.qs[pi->first_qset];
for (j = 0; j < pi->nqsets; ++j, ++qset_idx) {
for (j = pi->first_qset; j < pi->first_qset + pi->nqsets;
++j, ++qset_idx) {
if (!pi->rx_csum_offload)
adap->params.sge.qset[qset_idx].lro = 0;
err = t3_sge_alloc_qset(adap, qset_idx, 1,
(adap->flags & USING_MSIX) ? qset_idx + 1 :
irq_idx,
&adap->params.sge.qset[qset_idx], ntxq, dev);
if (err) {
t3_stop_sge_timers(adap);
t3_free_sge_resources(adap);
return err;
}
@ -739,11 +784,12 @@ static void init_port_mtus(struct adapter *adapter)
t3_write_reg(adapter, A_TP_MTU_PORT_TABLE, mtus);
}
static void send_pktsched_cmd(struct adapter *adap, int sched, int qidx, int lo,
static int send_pktsched_cmd(struct adapter *adap, int sched, int qidx, int lo,
int hi, int port)
{
struct sk_buff *skb;
struct mngt_pktsched_wr *req;
int ret;
skb = alloc_skb(sizeof(*req), GFP_KERNEL | __GFP_NOFAIL);
req = (struct mngt_pktsched_wr *)skb_put(skb, sizeof(*req));
@ -754,20 +800,28 @@ static void send_pktsched_cmd(struct adapter *adap, int sched, int qidx, int lo,
req->min = lo;
req->max = hi;
req->binding = port;
t3_mgmt_tx(adap, skb);
ret = t3_mgmt_tx(adap, skb);
return ret;
}
static void bind_qsets(struct adapter *adap)
static int bind_qsets(struct adapter *adap)
{
int i, j;
int i, j, err = 0;
for_each_port(adap, i) {
const struct port_info *pi = adap2pinfo(adap, i);
for (j = 0; j < pi->nqsets; ++j)
send_pktsched_cmd(adap, 1, pi->first_qset + j, -1,
-1, i);
for (j = 0; j < pi->nqsets; ++j) {
int ret = send_pktsched_cmd(adap, 1,
pi->first_qset + j, -1,
-1, i);
if (ret)
err = ret;
}
}
return err;
}
#define FW_FNAME "t3fw-%d.%d.%d.bin"
@ -891,6 +945,13 @@ static int cxgb_up(struct adapter *adap)
goto out;
}
/*
* Clear interrupts now to catch errors if t3_init_hw fails.
* We clear them again later as initialization may trigger
* conditions that can interrupt.
*/
t3_intr_clear(adap);
err = t3_init_hw(adap, 0);
if (err)
goto out;
@ -946,9 +1007,16 @@ static int cxgb_up(struct adapter *adap)
t3_write_reg(adap, A_TP_INT_ENABLE, 0x7fbfffff);
}
if ((adap->flags & (USING_MSIX | QUEUES_BOUND)) == USING_MSIX)
bind_qsets(adap);
adap->flags |= QUEUES_BOUND;
if (!(adap->flags & QUEUES_BOUND)) {
err = bind_qsets(adap);
if (err) {
CH_ERR(adap, "failed to bind qsets, err %d\n", err);
t3_intr_disable(adap);
free_irq_resources(adap);
goto out;
}
adap->flags |= QUEUES_BOUND;
}
out:
return err;
@ -967,19 +1035,7 @@ static void cxgb_down(struct adapter *adapter)
t3_intr_disable(adapter);
spin_unlock_irq(&adapter->work_lock);
if (adapter->flags & USING_MSIX) {
int i, n = 0;
free_irq(adapter->msix_info[0].vec, adapter);
for_each_port(adapter, i)
n += adap2pinfo(adapter, i)->nqsets;
for (i = 0; i < n; ++i)
free_irq(adapter->msix_info[i + 1].vec,
&adapter->sge.qs[i]);
} else
free_irq(adapter->pdev->irq, adapter);
free_irq_resources(adapter);
flush_workqueue(cxgb3_wq); /* wait for external IRQ handler */
quiesce_rx(adapter);
}
@ -1100,9 +1156,9 @@ static int cxgb_close(struct net_device *dev)
netif_carrier_off(dev);
t3_mac_disable(&pi->mac, MAC_DIRECTION_TX | MAC_DIRECTION_RX);
spin_lock(&adapter->work_lock); /* sync with update task */
spin_lock_irq(&adapter->work_lock); /* sync with update task */
clear_bit(pi->port_id, &adapter->open_device_map);
spin_unlock(&adapter->work_lock);
spin_unlock_irq(&adapter->work_lock);
if (!(adapter->open_device_map & PORT_MASK))
cancel_rearming_delayed_workqueue(cxgb3_wq,
@ -1284,8 +1340,8 @@ static unsigned long collect_sge_port_stats(struct adapter *adapter,
int i;
unsigned long tot = 0;
for (i = 0; i < p->nqsets; ++i)
tot += adapter->sge.qs[i + p->first_qset].port_stats[idx];
for (i = p->first_qset; i < p->first_qset + p->nqsets; ++i)
tot += adapter->sge.qs[i].port_stats[idx];
return tot;
}
@ -1485,11 +1541,22 @@ static int speed_duplex_to_caps(int speed, int duplex)
static int set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
int cap;
struct port_info *p = netdev_priv(dev);
struct link_config *lc = &p->link_config;
if (!(lc->supported & SUPPORTED_Autoneg))
return -EOPNOTSUPP; /* can't change speed/duplex */
if (!(lc->supported & SUPPORTED_Autoneg)) {
/*
* PHY offers a single speed/duplex. See if that's what's
* being requested.
*/
if (cmd->autoneg == AUTONEG_DISABLE) {
cap = speed_duplex_to_caps(cmd->speed, cmd->duplex);
if (lc->supported & cap)
return 0;
}
return -EINVAL;
}
if (cmd->autoneg == AUTONEG_DISABLE) {
int cap = speed_duplex_to_caps(cmd->speed, cmd->duplex);
@ -1568,8 +1635,10 @@ static int set_rx_csum(struct net_device *dev, u32 data)
struct adapter *adap = p->adapter;
int i;
for (i = p->first_qset; i < p->first_qset + p->nqsets; i++)
for (i = p->first_qset; i < p->first_qset + p->nqsets; i++) {
adap->params.sge.qset[i].lro = 0;
adap->sge.qs[i].lro_enabled = 0;
}
}
return 0;
}
@ -1775,6 +1844,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
int i;
struct qset_params *q;
struct ch_qset_params t;
int q1 = pi->first_qset;
int nqsets = pi->nqsets;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
@ -1797,6 +1868,16 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
|| !in_range(t.rspq_size, MIN_RSPQ_ENTRIES,
MAX_RSPQ_ENTRIES))
return -EINVAL;
if ((adapter->flags & FULL_INIT_DONE) && t.lro > 0)
for_each_port(adapter, i) {
pi = adap2pinfo(adapter, i);
if (t.qset_idx >= pi->first_qset &&
t.qset_idx < pi->first_qset + pi->nqsets &&
!pi->rx_csum_offload)
return -EINVAL;
}
if ((adapter->flags & FULL_INIT_DONE) &&
(t.rspq_size >= 0 || t.fl_size[0] >= 0 ||
t.fl_size[1] >= 0 || t.txq_size[0] >= 0 ||
@ -1804,6 +1885,20 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
t.polling >= 0 || t.cong_thres >= 0))
return -EBUSY;
/* Allow setting of any available qset when offload enabled */
if (test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map)) {
q1 = 0;
for_each_port(adapter, i) {
pi = adap2pinfo(adapter, i);
nqsets += pi->first_qset + pi->nqsets;
}
}
if (t.qset_idx < q1)
return -EINVAL;
if (t.qset_idx > q1 + nqsets - 1)
return -EINVAL;
q = &adapter->params.sge.qset[t.qset_idx];
if (t.rspq_size >= 0)
@ -1853,13 +1948,26 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
case CHELSIO_GET_QSET_PARAMS:{
struct qset_params *q;
struct ch_qset_params t;
int q1 = pi->first_qset;
int nqsets = pi->nqsets;
int i;
if (copy_from_user(&t, useraddr, sizeof(t)))
return -EFAULT;
if (t.qset_idx >= SGE_QSETS)
/* Display qsets for all ports when offload enabled */
if (test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map)) {
q1 = 0;
for_each_port(adapter, i) {
pi = adap2pinfo(adapter, i);
nqsets = pi->first_qset + pi->nqsets;
}
}
if (t.qset_idx >= nqsets)
return -EINVAL;
q = &adapter->params.sge.qset[t.qset_idx];
q = &adapter->params.sge.qset[q1 + t.qset_idx];
t.rspq_size = q->rspq_size;
t.txq_size[0] = q->txq_size[0];
t.txq_size[1] = q->txq_size[1];
@ -1870,6 +1978,12 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
t.lro = q->lro;
t.intr_lat = q->coalesce_usecs;
t.cong_thres = q->cong_thres;
t.qnum = q1;
if (adapter->flags & USING_MSIX)
t.vector = adapter->msix_info[q1 + t.qset_idx + 1].vec;
else
t.vector = adapter->pdev->irq;
if (copy_to_user(useraddr, &t, sizeof(t)))
return -EFAULT;
@ -2117,7 +2231,7 @@ static int cxgb_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
mmd = data->phy_id >> 8;
if (!mmd)
mmd = MDIO_DEV_PCS;
else if (mmd > MDIO_DEV_XGXS)
else if (mmd > MDIO_DEV_VEND2)
return -EINVAL;
ret =
@ -2143,7 +2257,7 @@ static int cxgb_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
mmd = data->phy_id >> 8;
if (!mmd)
mmd = MDIO_DEV_PCS;
else if (mmd > MDIO_DEV_XGXS)
else if (mmd > MDIO_DEV_VEND2)
return -EINVAL;
ret =
@ -2215,8 +2329,8 @@ static void t3_synchronize_rx(struct adapter *adap, const struct port_info *p)
{
int i;
for (i = 0; i < p->nqsets; i++) {
struct sge_rspq *q = &adap->sge.qs[i + p->first_qset].rspq;
for (i = p->first_qset; i < p->first_qset + p->nqsets; i++) {
struct sge_rspq *q = &adap->sge.qs[i].rspq;
spin_lock_irq(&q->lock);
spin_unlock_irq(&q->lock);
@ -2290,7 +2404,7 @@ static void check_link_status(struct adapter *adapter)
struct net_device *dev = adapter->port[i];
struct port_info *p = netdev_priv(dev);
if (!(p->port_type->caps & SUPPORTED_IRQ) && netif_running(dev))
if (!(p->phy.caps & SUPPORTED_IRQ) && netif_running(dev))
t3_link_changed(adapter, i);
}
}
@ -2355,10 +2469,10 @@ static void t3_adap_check_task(struct work_struct *work)
check_t3b2_mac(adapter);
/* Schedule the next check update if any port is active. */
spin_lock(&adapter->work_lock);
spin_lock_irq(&adapter->work_lock);
if (adapter->open_device_map & PORT_MASK)
schedule_chk_task(adapter);
spin_unlock(&adapter->work_lock);
spin_unlock_irq(&adapter->work_lock);
}
/*
@ -2403,6 +2517,96 @@ void t3_os_ext_intr_handler(struct adapter *adapter)
spin_unlock(&adapter->work_lock);
}
static int t3_adapter_error(struct adapter *adapter, int reset)
{
int i, ret = 0;
/* Stop all ports */
for_each_port(adapter, i) {
struct net_device *netdev = adapter->port[i];
if (netif_running(netdev))
cxgb_close(netdev);
}
if (is_offload(adapter) &&
test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map))
offload_close(&adapter->tdev);
/* Stop SGE timers */
t3_stop_sge_timers(adapter);
adapter->flags &= ~FULL_INIT_DONE;
if (reset)
ret = t3_reset_adapter(adapter);
pci_disable_device(adapter->pdev);
return ret;
}
static int t3_reenable_adapter(struct adapter *adapter)
{
if (pci_enable_device(adapter->pdev)) {
dev_err(&adapter->pdev->dev,
"Cannot re-enable PCI device after reset.\n");
goto err;
}
pci_set_master(adapter->pdev);
pci_restore_state(adapter->pdev);
/* Free sge resources */
t3_free_sge_resources(adapter);
if (t3_replay_prep_adapter(adapter))
goto err;
return 0;
err:
return -1;
}
static void t3_resume_ports(struct adapter *adapter)
{
int i;
/* Restart the ports */
for_each_port(adapter, i) {
struct net_device *netdev = adapter->port[i];
if (netif_running(netdev)) {
if (cxgb_open(netdev)) {
dev_err(&adapter->pdev->dev,
"can't bring device back up"
" after reset\n");
continue;
}
}
}
}
/*
* processes a fatal error.
* Bring the ports down, reset the chip, bring the ports back up.
*/
static void fatal_error_task(struct work_struct *work)
{
struct adapter *adapter = container_of(work, struct adapter,
fatal_error_handler_task);
int err = 0;
rtnl_lock();
err = t3_adapter_error(adapter, 1);
if (!err)
err = t3_reenable_adapter(adapter);
if (!err)
t3_resume_ports(adapter);
CH_ALERT(adapter, "adapter reset %s\n", err ? "failed" : "succeeded");
rtnl_unlock();
}
void t3_fatal_err(struct adapter *adapter)
{
unsigned int fw_status[4];
@ -2413,7 +2617,11 @@ void t3_fatal_err(struct adapter *adapter)
t3_write_reg(adapter, A_XGM_RX_CTRL, 0);
t3_write_reg(adapter, XGM_REG(A_XGM_TX_CTRL, 1), 0);
t3_write_reg(adapter, XGM_REG(A_XGM_RX_CTRL, 1), 0);
spin_lock(&adapter->work_lock);
t3_intr_disable(adapter);
queue_work(cxgb3_wq, &adapter->fatal_error_handler_task);
spin_unlock(&adapter->work_lock);
}
CH_ALERT(adapter, "encountered fatal error, operation suspended\n");
if (!t3_cim_ctl_blk_read(adapter, 0xa0, 4, fw_status))
@ -2435,23 +2643,9 @@ static pci_ers_result_t t3_io_error_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
struct adapter *adapter = pci_get_drvdata(pdev);
int i;
int ret;
/* Stop all ports */
for_each_port(adapter, i) {
struct net_device *netdev = adapter->port[i];
if (netif_running(netdev))
cxgb_close(netdev);
}
if (is_offload(adapter) &&
test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map))
offload_close(&adapter->tdev);
adapter->flags &= ~FULL_INIT_DONE;
pci_disable_device(pdev);
ret = t3_adapter_error(adapter, 0);
/* Request a slot reset. */
return PCI_ERS_RESULT_NEED_RESET;
@ -2467,22 +2661,9 @@ static pci_ers_result_t t3_io_slot_reset(struct pci_dev *pdev)
{
struct adapter *adapter = pci_get_drvdata(pdev);
if (pci_enable_device(pdev)) {
dev_err(&pdev->dev,
"Cannot re-enable PCI device after reset.\n");
goto err;
}
pci_set_master(pdev);
pci_restore_state(pdev);
if (!t3_reenable_adapter(adapter))
return PCI_ERS_RESULT_RECOVERED;
/* Free sge resources */
t3_free_sge_resources(adapter);
if (t3_replay_prep_adapter(adapter))
goto err;
return PCI_ERS_RESULT_RECOVERED;
err:
return PCI_ERS_RESULT_DISCONNECT;
}
@ -2496,22 +2677,8 @@ err:
static void t3_io_resume(struct pci_dev *pdev)
{
struct adapter *adapter = pci_get_drvdata(pdev);
int i;
/* Restart the ports */
for_each_port(adapter, i) {
struct net_device *netdev = adapter->port[i];
if (netif_running(netdev)) {
if (cxgb_open(netdev)) {
dev_err(&pdev->dev,
"can't bring device back up"
" after reset\n");
continue;
}
netif_device_attach(netdev);
}
}
t3_resume_ports(adapter);
}
static struct pci_error_handlers t3_err_handler = {
@ -2520,6 +2687,42 @@ static struct pci_error_handlers t3_err_handler = {
.resume = t3_io_resume,
};
/*
* Set the number of qsets based on the number of CPUs and the number of ports,
* not to exceed the number of available qsets, assuming there are enough qsets
* per port in HW.
*/
static void set_nqsets(struct adapter *adap)
{
int i, j = 0;
int num_cpus = num_online_cpus();
int hwports = adap->params.nports;
int nqsets = SGE_QSETS;
if (adap->params.rev > 0) {
if (hwports == 2 &&
(hwports * nqsets > SGE_QSETS ||
num_cpus >= nqsets / hwports))
nqsets /= hwports;
if (nqsets > num_cpus)
nqsets = num_cpus;
if (nqsets < 1 || hwports == 4)
nqsets = 1;
} else
nqsets = 1;
for_each_port(adap, i) {
struct port_info *pi = adap2pinfo(adap, i);
pi->first_qset = j;
pi->nqsets = nqsets;
j = pi->first_qset + nqsets;
dev_info(&adap->pdev->dev,
"Port %d using %d queue sets.\n", i, nqsets);
}
}
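
For a concrete feel for set_nqsets(), assume SGE_QSETS is 8 (its value in this
driver) on a rev > 0 adapter with two ports and eight online CPUs: hwports * nqsets
is 16, which exceeds SGE_QSETS, so nqsets drops to 8 / 2 = 4; four does not exceed
the CPU count, so each port keeps four queue sets, the first port owning qsets 0-3
and the second qsets 4-7.
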
static int __devinit cxgb_enable_msix(struct adapter *adap)
{
struct msix_entry entries[SGE_QSETS + 1];
@ -2564,7 +2767,7 @@ static void __devinit print_port_info(struct adapter *adap,
if (!test_bit(i, &adap->registered_device_map))
continue;
printk(KERN_INFO "%s: %s %s %sNIC (rev %d) %s%s\n",
dev->name, ai->desc, pi->port_type->desc,
dev->name, ai->desc, pi->phy.desc,
is_offload(adap) ? "R" : "", adap->params.rev, buf,
(adap->flags & USING_MSIX) ? " MSI-X" :
(adap->flags & USING_MSI) ? " MSI" : "");
@ -2660,6 +2863,7 @@ static int __devinit init_one(struct pci_dev *pdev,
INIT_LIST_HEAD(&adapter->adapter_list);
INIT_WORK(&adapter->ext_intr_handler_task, ext_intr_task);
INIT_WORK(&adapter->fatal_error_handler_task, fatal_error_task);
INIT_DELAYED_WORK(&adapter->adap_check_task, t3_adap_check_task);
for (i = 0; i < ai->nports; ++i) {
@ -2677,9 +2881,6 @@ static int __devinit init_one(struct pci_dev *pdev,
pi = netdev_priv(netdev);
pi->adapter = adapter;
pi->rx_csum_offload = 1;
pi->nqsets = 1;
pi->first_qset = i;
pi->activity = 0;
pi->port_id = i;
netif_carrier_off(netdev);
netdev->irq = pdev->irq;
@ -2756,6 +2957,8 @@ static int __devinit init_one(struct pci_dev *pdev,
else if (msi > 0 && pci_enable_msi(pdev) == 0)
adapter->flags |= USING_MSI;
set_nqsets(adapter);
err = sysfs_create_group(&adapter->port[0]->dev.kobj,
&cxgb3_attr_group);
@ -2801,6 +3004,7 @@ static void __devexit remove_one(struct pci_dev *pdev)
if (test_bit(i, &adapter->registered_device_map))
unregister_netdev(adapter->port[i]);
t3_stop_sge_timers(adapter);
t3_free_sge_resources(adapter);
cxgb_disable_msi(adapter);

View File

@ -1018,7 +1018,7 @@ static void set_l2t_ix(struct t3cdev *tdev, u32 tid, struct l2t_entry *e)
skb = alloc_skb(sizeof(*req), GFP_ATOMIC);
if (!skb) {
printk(KERN_ERR "%s: cannot allocate skb!\n", __FUNCTION__);
printk(KERN_ERR "%s: cannot allocate skb!\n", __func__);
return;
}
skb->priority = CPL_PRIORITY_CONTROL;
@ -1049,14 +1049,14 @@ void cxgb_redirect(struct dst_entry *old, struct dst_entry *new)
return;
if (!is_offloading(newdev)) {
printk(KERN_WARNING "%s: Redirect to non-offload "
"device ignored.\n", __FUNCTION__);
"device ignored.\n", __func__);
return;
}
tdev = dev2t3cdev(olddev);
BUG_ON(!tdev);
if (tdev != dev2t3cdev(newdev)) {
printk(KERN_WARNING "%s: Redirect to different "
"offload device ignored.\n", __FUNCTION__);
"offload device ignored.\n", __func__);
return;
}
@ -1064,7 +1064,7 @@ void cxgb_redirect(struct dst_entry *old, struct dst_entry *new)
e = t3_l2t_get(tdev, new->neighbour, newdev);
if (!e) {
printk(KERN_ERR "%s: couldn't allocate new l2t entry!\n",
__FUNCTION__);
__func__);
return;
}

View File

@ -86,6 +86,7 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
struct l2t_entry *e)
{
struct cpl_l2t_write_req *req;
struct sk_buff *tmp;
if (!skb) {
skb = alloc_skb(sizeof(*req), GFP_ATOMIC);
@ -103,13 +104,11 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
skb->priority = CPL_PRIORITY_CONTROL;
cxgb3_ofld_send(dev, skb);
while (e->arpq_head) {
skb = e->arpq_head;
e->arpq_head = skb->next;
skb->next = NULL;
skb_queue_walk_safe(&e->arpq, skb, tmp) {
__skb_unlink(skb, &e->arpq);
cxgb3_ofld_send(dev, skb);
}
e->arpq_tail = NULL;
e->state = L2T_STATE_VALID;
return 0;
@ -121,12 +120,7 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
*/
static inline void arpq_enqueue(struct l2t_entry *e, struct sk_buff *skb)
{
skb->next = NULL;
if (e->arpq_head)
e->arpq_tail->next = skb;
else
e->arpq_head = skb;
e->arpq_tail = skb;
__skb_queue_tail(&e->arpq, skb);
}
int t3_l2t_send_slow(struct t3cdev *dev, struct sk_buff *skb,
@ -167,7 +161,7 @@ again:
break;
spin_lock_bh(&e->lock);
if (e->arpq_head)
if (!skb_queue_empty(&e->arpq))
setup_l2e_send_pending(dev, skb, e);
else /* we lost the race */
__kfree_skb(skb);
@ -357,14 +351,14 @@ EXPORT_SYMBOL(t3_l2t_get);
* XXX: maybe we should abandon the latter behavior and just require a failure
* handler.
*/
static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff *arpq)
static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff_head *arpq)
{
while (arpq) {
struct sk_buff *skb = arpq;
struct sk_buff *skb, *tmp;
skb_queue_walk_safe(arpq, skb, tmp) {
struct l2t_skb_cb *cb = L2T_SKB_CB(skb);
arpq = skb->next;
skb->next = NULL;
__skb_unlink(skb, arpq);
if (cb->arp_failure_handler)
cb->arp_failure_handler(dev, skb);
else
@ -378,8 +372,8 @@ static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff *arpq)
*/
void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
{
struct sk_buff_head arpq;
struct l2t_entry *e;
struct sk_buff *arpq = NULL;
struct l2t_data *d = L2DATA(dev);
u32 addr = *(u32 *) neigh->primary_key;
int ifidx = neigh->dev->ifindex;
@ -395,6 +389,8 @@ void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
return;
found:
__skb_queue_head_init(&arpq);
read_unlock(&d->lock);
if (atomic_read(&e->refcnt)) {
if (neigh != e->neigh)
@ -402,8 +398,7 @@ found:
if (e->state == L2T_STATE_RESOLVING) {
if (neigh->nud_state & NUD_FAILED) {
arpq = e->arpq_head;
e->arpq_head = e->arpq_tail = NULL;
skb_queue_splice_init(&e->arpq, &arpq);
} else if (neigh->nud_state & (NUD_CONNECTED|NUD_STALE))
setup_l2e_send_pending(dev, NULL, e);
} else {
@ -415,8 +410,8 @@ found:
}
spin_unlock_bh(&e->lock);
if (arpq)
handle_failed_resolution(dev, arpq);
if (!skb_queue_empty(&arpq))
handle_failed_resolution(dev, &arpq);
}
struct l2t_data *t3_init_l2t(unsigned int l2t_capacity)
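
These l2t changes retire the hand-rolled arpq_head/arpq_tail list in favour of a
standard sk_buff_head, so queueing and draining collapse into stock skbuff helpers.
A hedged sketch of the pattern, with pending standing in for e->arpq and deliver()
for cxgb3_ofld_send():

#include <linux/skbuff.h>

/* One-time setup, e.g. when the l2t entry is created. */
static void pending_init(struct sk_buff_head *pending)
{
	__skb_queue_head_init(pending);	/* lockless init; caller serializes */
}

/* Append under the caller's lock (what arpq_enqueue() now does). */
static void pending_add(struct sk_buff_head *pending, struct sk_buff *skb)
{
	__skb_queue_tail(pending, skb);
}

/*
 * Drain safely: each skb is unlinked while walking, mirroring the loops in
 * setup_l2e_send_pending() and handle_failed_resolution().
 */
static void pending_drain(struct sk_buff_head *pending,
			  void (*deliver)(struct sk_buff *skb))
{
	struct sk_buff *skb, *tmp;

	skb_queue_walk_safe(pending, skb, tmp) {
		__skb_unlink(skb, pending);
		deliver(skb);
	}
}
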

View File

@ -64,8 +64,7 @@ struct l2t_entry {
struct neighbour *neigh; /* associated neighbour */
struct l2t_entry *first; /* start of hash chain */
struct l2t_entry *next; /* next l2t_entry on chain */
struct sk_buff *arpq_head; /* queue of packets awaiting resolution */
struct sk_buff *arpq_tail;
struct sk_buff_head arpq; /* queue of packets awaiting resolution */
spinlock_t lock;
atomic_t refcnt; /* entry reference count */
u8 dmac[6]; /* neighbour's MAC address */

View File

@ -573,6 +573,10 @@
#define V_GPIO10(x) ((x) << S_GPIO10)
#define F_GPIO10 V_GPIO10(1U)
#define S_GPIO9 9
#define V_GPIO9(x) ((x) << S_GPIO9)
#define F_GPIO9 V_GPIO9(1U)
#define S_GPIO7 7
#define V_GPIO7(x) ((x) << S_GPIO7)
#define F_GPIO7 V_GPIO7(1U)

View File

@ -351,7 +351,8 @@ static void free_rx_bufs(struct pci_dev *pdev, struct sge_fl *q)
pci_unmap_single(pdev, pci_unmap_addr(d, dma_addr),
q->buf_size, PCI_DMA_FROMDEVICE);
if (q->use_pages) {
put_page(d->pg_chunk.page);
if (d->pg_chunk.page)
put_page(d->pg_chunk.page);
d->pg_chunk.page = NULL;
} else {
kfree_skb(d->skb);
@ -583,7 +584,7 @@ static void t3_reset_qset(struct sge_qset *q)
memset(q->fl, 0, sizeof(struct sge_fl) * SGE_RXQ_PER_SET);
memset(q->txq, 0, sizeof(struct sge_txq) * SGE_TXQ_PER_SET);
q->txq_stopped = 0;
memset(&q->tx_reclaim_timer, 0, sizeof(q->tx_reclaim_timer));
q->tx_reclaim_timer.function = NULL; /* for t3_stop_sge_timers() */
kfree(q->lro_frag_tbl);
q->lro_nfrags = q->lro_frag_len = 0;
}
@ -603,9 +604,6 @@ static void t3_free_qset(struct adapter *adapter, struct sge_qset *q)
int i;
struct pci_dev *pdev = adapter->pdev;
if (q->tx_reclaim_timer.function)
del_timer_sync(&q->tx_reclaim_timer);
for (i = 0; i < SGE_RXQ_PER_SET; ++i)
if (q->fl[i].desc) {
spin_lock_irq(&adapter->sge.reg_lock);
@ -1704,16 +1702,15 @@ int t3_offload_tx(struct t3cdev *tdev, struct sk_buff *skb)
*/
static inline void offload_enqueue(struct sge_rspq *q, struct sk_buff *skb)
{
skb->next = skb->prev = NULL;
if (q->rx_tail)
q->rx_tail->next = skb;
else {
int was_empty = skb_queue_empty(&q->rx_queue);
__skb_queue_tail(&q->rx_queue, skb);
if (was_empty) {
struct sge_qset *qs = rspq_to_qset(q);
napi_schedule(&qs->napi);
q->rx_head = skb;
}
q->rx_tail = skb;
}
/**
@ -1754,26 +1751,29 @@ static int ofld_poll(struct napi_struct *napi, int budget)
int work_done = 0;
while (work_done < budget) {
struct sk_buff *head, *tail, *skbs[RX_BUNDLE_SIZE];
struct sk_buff *skb, *tmp, *skbs[RX_BUNDLE_SIZE];
struct sk_buff_head queue;
int ngathered;
spin_lock_irq(&q->lock);
head = q->rx_head;
if (!head) {
__skb_queue_head_init(&queue);
skb_queue_splice_init(&q->rx_queue, &queue);
if (skb_queue_empty(&queue)) {
napi_complete(napi);
spin_unlock_irq(&q->lock);
return work_done;
}
tail = q->rx_tail;
q->rx_head = q->rx_tail = NULL;
spin_unlock_irq(&q->lock);
for (ngathered = 0; work_done < budget && head; work_done++) {
prefetch(head->data);
skbs[ngathered] = head;
head = head->next;
skbs[ngathered]->next = NULL;
ngathered = 0;
skb_queue_walk_safe(&queue, skb, tmp) {
if (work_done >= budget)
break;
work_done++;
__skb_unlink(skb, &queue);
prefetch(skb->data);
skbs[ngathered] = skb;
if (++ngathered == RX_BUNDLE_SIZE) {
q->offload_bundles++;
adapter->tdev.recv(&adapter->tdev, skbs,
@ -1781,12 +1781,10 @@ static int ofld_poll(struct napi_struct *napi, int budget)
ngathered = 0;
}
}
if (head) { /* splice remaining packets back onto Rx queue */
if (!skb_queue_empty(&queue)) {
/* splice remaining packets back onto Rx queue */
spin_lock_irq(&q->lock);
tail->next = q->rx_head;
if (!q->rx_head)
q->rx_tail = tail;
q->rx_head = head;
skb_queue_splice(&queue, &q->rx_queue);
spin_unlock_irq(&q->lock);
}
deliver_partial_bundle(&adapter->tdev, q, skbs, ngathered);
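
ofld_poll() now splices the shared rx_queue onto a private list under the lock,
processes it lock-free, and splices any leftovers back onto the head. A sketch of
that splice-out/process/splice-back pattern (the names here are illustrative, not
the driver's):

#include <linux/skbuff.h>
#include <linux/spinlock.h>

static int process_some(struct sk_buff_head *shared, spinlock_t *lock,
			int budget, void (*handle)(struct sk_buff *skb))
{
	struct sk_buff_head local;
	struct sk_buff *skb, *tmp;
	int done = 0;

	__skb_queue_head_init(&local);

	spin_lock_irq(lock);
	skb_queue_splice_init(shared, &local);	/* take everything, briefly */
	spin_unlock_irq(lock);

	skb_queue_walk_safe(&local, skb, tmp) {
		if (done >= budget)
			break;
		__skb_unlink(skb, &local);
		handle(skb);
		done++;
	}

	if (!skb_queue_empty(&local)) {
		/* Put what we did not get to back at the head of the queue. */
		spin_lock_irq(lock);
		skb_queue_splice(&local, shared);
		spin_unlock_irq(lock);
	}
	return done;
}
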
@ -1937,38 +1935,6 @@ static inline int lro_frame_ok(const struct cpl_rx_pkt *p)
eh->h_proto == htons(ETH_P_IP) && ih->ihl == (sizeof(*ih) >> 2);
}
#define TCP_FLAG_MASK (TCP_FLAG_CWR | TCP_FLAG_ECE | TCP_FLAG_URG |\
TCP_FLAG_ACK | TCP_FLAG_PSH | TCP_FLAG_RST |\
TCP_FLAG_SYN | TCP_FLAG_FIN)
#define TSTAMP_WORD ((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) |\
(TCPOPT_TIMESTAMP << 8) | TCPOLEN_TIMESTAMP)
/**
* lro_segment_ok - check if a TCP segment is eligible for LRO
* @tcph: the TCP header of the packet
*
* Returns true if a TCP packet is eligible for LRO. This requires that
* the packet have only the ACK flag set and no TCP options besides
* time stamps.
*/
static inline int lro_segment_ok(const struct tcphdr *tcph)
{
int optlen;
if (unlikely((tcp_flag_word(tcph) & TCP_FLAG_MASK) != TCP_FLAG_ACK))
return 0;
optlen = (tcph->doff << 2) - sizeof(*tcph);
if (optlen) {
const u32 *opt = (const u32 *)(tcph + 1);
if (optlen != TCPOLEN_TSTAMP_ALIGNED ||
*opt != htonl(TSTAMP_WORD) || !opt[2])
return 0;
}
return 1;
}
static int t3_get_lro_header(void **eh, void **iph, void **tcph,
u64 *hdr_flags, void *priv)
{
@ -1981,9 +1947,6 @@ static int t3_get_lro_header(void **eh, void **iph, void **tcph,
*iph = (struct iphdr *)((struct ethhdr *)*eh + 1);
*tcph = (struct tcphdr *)((struct iphdr *)*iph + 1);
if (!lro_segment_ok(*tcph))
return -1;
*hdr_flags = LRO_IPV4 | LRO_TCP;
return 0;
}
@ -2878,9 +2841,7 @@ int t3_sge_alloc_qset(struct adapter *adapter, unsigned int id, int nports,
struct net_lro_mgr *lro_mgr = &q->lro_mgr;
init_qset_cntxt(q, id);
init_timer(&q->tx_reclaim_timer);
q->tx_reclaim_timer.data = (unsigned long)q;
q->tx_reclaim_timer.function = sge_timer_cb;
setup_timer(&q->tx_reclaim_timer, sge_timer_cb, (unsigned long)q);
q->fl[0].desc = alloc_ring(adapter->pdev, p->fl_size,
sizeof(struct rx_desc),
@ -2934,6 +2895,7 @@ int t3_sge_alloc_qset(struct adapter *adapter, unsigned int id, int nports,
q->rspq.gen = 1;
q->rspq.size = p->rspq_size;
spin_lock_init(&q->rspq.lock);
skb_queue_head_init(&q->rspq.rx_queue);
q->txq[TXQ_ETH].stop_thres = nports *
flits_to_desc(sgl_len(MAX_SKB_FRAGS + 1) + 3);
@ -3042,6 +3004,24 @@ err:
return ret;
}
/**
* t3_stop_sge_timers - stop SGE timer callbacks
* @adap: the adapter
*
* Stops each SGE queue set's timer callback.
*/
void t3_stop_sge_timers(struct adapter *adap)
{
int i;
for (i = 0; i < SGE_QSETS; ++i) {
struct sge_qset *q = &adap->sge.qs[i];
if (q->tx_reclaim_timer.function)
del_timer_sync(&q->tx_reclaim_timer);
}
}
/**
* t3_free_sge_resources - free SGE resources
* @adap: the adapter

View File

@ -194,21 +194,18 @@ int t3_mc7_bd_read(struct mc7 *mc7, unsigned int start, unsigned int n,
static void mi1_init(struct adapter *adap, const struct adapter_info *ai)
{
u32 clkdiv = adap->params.vpd.cclk / (2 * adap->params.vpd.mdc) - 1;
u32 val = F_PREEN | V_MDIINV(ai->mdiinv) | V_MDIEN(ai->mdien) |
V_CLKDIV(clkdiv);
u32 val = F_PREEN | V_CLKDIV(clkdiv);
if (!(ai->caps & SUPPORTED_10000baseT_Full))
val |= V_ST(1);
t3_write_reg(adap, A_MI1_CFG, val);
}
#define MDIO_ATTEMPTS 10
#define MDIO_ATTEMPTS 20
/*
* MI1 read/write operations for direct-addressed PHYs.
* MI1 read/write operations for clause 22 PHYs.
*/
static int mi1_read(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int *valp)
static int t3_mi1_read(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int *valp)
{
int ret;
u32 addr = V_REGADDR(reg_addr) | V_PHYADDR(phy_addr);
@ -217,16 +214,17 @@ static int mi1_read(struct adapter *adapter, int phy_addr, int mmd_addr,
return -EINVAL;
mutex_lock(&adapter->mdio_lock);
t3_set_reg_field(adapter, A_MI1_CFG, V_ST(M_ST), V_ST(1));
t3_write_reg(adapter, A_MI1_ADDR, addr);
t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(2));
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 10);
if (!ret)
*valp = t3_read_reg(adapter, A_MI1_DATA);
mutex_unlock(&adapter->mdio_lock);
return ret;
}
static int mi1_write(struct adapter *adapter, int phy_addr, int mmd_addr,
static int t3_mi1_write(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int val)
{
int ret;
@ -236,19 +234,37 @@ static int mi1_write(struct adapter *adapter, int phy_addr, int mmd_addr,
return -EINVAL;
mutex_lock(&adapter->mdio_lock);
t3_set_reg_field(adapter, A_MI1_CFG, V_ST(M_ST), V_ST(1));
t3_write_reg(adapter, A_MI1_ADDR, addr);
t3_write_reg(adapter, A_MI1_DATA, val);
t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(1));
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 10);
mutex_unlock(&adapter->mdio_lock);
return ret;
}
static const struct mdio_ops mi1_mdio_ops = {
mi1_read,
mi1_write
t3_mi1_read,
t3_mi1_write
};
/*
* Performs the address cycle for clause 45 PHYs.
* Must be called with the MDIO_LOCK held.
*/
static int mi1_wr_addr(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr)
{
u32 addr = V_REGADDR(mmd_addr) | V_PHYADDR(phy_addr);
t3_set_reg_field(adapter, A_MI1_CFG, V_ST(M_ST), 0);
t3_write_reg(adapter, A_MI1_ADDR, addr);
t3_write_reg(adapter, A_MI1_DATA, reg_addr);
t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(0));
return t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0,
MDIO_ATTEMPTS, 10);
}
/*
* MI1 read/write operations for indirect-addressed PHYs.
*/
@ -256,17 +272,13 @@ static int mi1_ext_read(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int *valp)
{
int ret;
u32 addr = V_REGADDR(mmd_addr) | V_PHYADDR(phy_addr);
mutex_lock(&adapter->mdio_lock);
t3_write_reg(adapter, A_MI1_ADDR, addr);
t3_write_reg(adapter, A_MI1_DATA, reg_addr);
t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(0));
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
ret = mi1_wr_addr(adapter, phy_addr, mmd_addr, reg_addr);
if (!ret) {
t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(3));
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0,
MDIO_ATTEMPTS, 20);
MDIO_ATTEMPTS, 10);
if (!ret)
*valp = t3_read_reg(adapter, A_MI1_DATA);
}
@ -278,18 +290,14 @@ static int mi1_ext_write(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int val)
{
int ret;
u32 addr = V_REGADDR(mmd_addr) | V_PHYADDR(phy_addr);
mutex_lock(&adapter->mdio_lock);
t3_write_reg(adapter, A_MI1_ADDR, addr);
t3_write_reg(adapter, A_MI1_DATA, reg_addr);
t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(0));
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
ret = mi1_wr_addr(adapter, phy_addr, mmd_addr, reg_addr);
if (!ret) {
t3_write_reg(adapter, A_MI1_DATA, val);
t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(1));
ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0,
MDIO_ATTEMPTS, 20);
MDIO_ATTEMPTS, 10);
}
mutex_unlock(&adapter->mdio_lock);
return ret;
@ -399,6 +407,29 @@ int t3_phy_advertise(struct cphy *phy, unsigned int advert)
return mdio_write(phy, 0, MII_ADVERTISE, val);
}
/**
* t3_phy_advertise_fiber - set fiber PHY advertisement register
* @phy: the PHY to operate on
* @advert: bitmap of capabilities the PHY should advertise
*
* Sets a fiber PHY's advertisement register to advertise the
* requested capabilities.
*/
int t3_phy_advertise_fiber(struct cphy *phy, unsigned int advert)
{
unsigned int val = 0;
if (advert & ADVERTISED_1000baseT_Half)
val |= ADVERTISE_1000XHALF;
if (advert & ADVERTISED_1000baseT_Full)
val |= ADVERTISE_1000XFULL;
if (advert & ADVERTISED_Pause)
val |= ADVERTISE_1000XPAUSE;
if (advert & ADVERTISED_Asym_Pause)
val |= ADVERTISE_1000XPSE_ASYM;
return mdio_write(phy, 0, MII_ADVERTISE, val);
}
/**
* t3_set_phy_speed_duplex - force PHY speed and duplex
* @phy: the PHY to operate on
@ -434,27 +465,52 @@ int t3_set_phy_speed_duplex(struct cphy *phy, int speed, int duplex)
return mdio_write(phy, 0, MII_BMCR, ctl);
}
int t3_phy_lasi_intr_enable(struct cphy *phy)
{
return mdio_write(phy, MDIO_DEV_PMA_PMD, LASI_CTRL, 1);
}
int t3_phy_lasi_intr_disable(struct cphy *phy)
{
return mdio_write(phy, MDIO_DEV_PMA_PMD, LASI_CTRL, 0);
}
int t3_phy_lasi_intr_clear(struct cphy *phy)
{
u32 val;
return mdio_read(phy, MDIO_DEV_PMA_PMD, LASI_STAT, &val);
}
int t3_phy_lasi_intr_handler(struct cphy *phy)
{
unsigned int status;
int err = mdio_read(phy, MDIO_DEV_PMA_PMD, LASI_STAT, &status);
if (err)
return err;
return (status & 1) ? cphy_cause_link_change : 0;
}
static const struct adapter_info t3_adap_info[] = {
{2, 0, 0, 0,
{2, 0,
F_GPIO2_OEN | F_GPIO4_OEN |
F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, F_GPIO3 | F_GPIO5,
0,
F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, { S_GPIO3, S_GPIO5 }, 0,
&mi1_mdio_ops, "Chelsio PE9000"},
{2, 0, 0, 0,
{2, 0,
F_GPIO2_OEN | F_GPIO4_OEN |
F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, F_GPIO3 | F_GPIO5,
0,
F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, { S_GPIO3, S_GPIO5 }, 0,
&mi1_mdio_ops, "Chelsio T302"},
{1, 0, 0, 0,
{1, 0,
F_GPIO1_OEN | F_GPIO6_OEN | F_GPIO7_OEN | F_GPIO10_OEN |
F_GPIO11_OEN | F_GPIO1_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL,
0, SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
{ 0 }, SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
&mi1_mdio_ext_ops, "Chelsio T310"},
{2, 0, 0, 0,
{2, 0,
F_GPIO1_OEN | F_GPIO2_OEN | F_GPIO4_OEN | F_GPIO5_OEN | F_GPIO6_OEN |
F_GPIO7_OEN | F_GPIO10_OEN | F_GPIO11_OEN | F_GPIO1_OUT_VAL |
F_GPIO5_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL, 0,
SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
F_GPIO5_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL,
{ S_GPIO9, S_GPIO3 }, SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
&mi1_mdio_ext_ops, "Chelsio T320"},
};
@ -467,28 +523,22 @@ const struct adapter_info *t3_get_adapter_info(unsigned int id)
return id < ARRAY_SIZE(t3_adap_info) ? &t3_adap_info[id] : NULL;
}
#define CAPS_1G (SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Full | \
SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_MII)
#define CAPS_10G (SUPPORTED_10000baseT_Full | SUPPORTED_AUI)
static const struct port_type_info port_types[] = {
{NULL},
{t3_ael1002_phy_prep, CAPS_10G | SUPPORTED_FIBRE,
"10GBASE-XR"},
{t3_vsc8211_phy_prep, CAPS_1G | SUPPORTED_TP | SUPPORTED_IRQ,
"10/100/1000BASE-T"},
{NULL, CAPS_1G | SUPPORTED_TP | SUPPORTED_IRQ,
"10/100/1000BASE-T"},
{t3_xaui_direct_phy_prep, CAPS_10G | SUPPORTED_TP, "10GBASE-CX4"},
{NULL, CAPS_10G, "10GBASE-KX4"},
{t3_qt2045_phy_prep, CAPS_10G | SUPPORTED_TP, "10GBASE-CX4"},
{t3_ael1006_phy_prep, CAPS_10G | SUPPORTED_FIBRE,
"10GBASE-SR"},
{NULL, CAPS_10G | SUPPORTED_TP, "10GBASE-CX4"},
struct port_type_info {
int (*phy_prep)(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *ops);
};
#undef CAPS_1G
#undef CAPS_10G
static const struct port_type_info port_types[] = {
{ NULL },
{ t3_ael1002_phy_prep },
{ t3_vsc8211_phy_prep },
{ NULL },
{ t3_xaui_direct_phy_prep },
{ t3_ael2005_phy_prep },
{ t3_qt2045_phy_prep },
{ t3_ael1006_phy_prep },
{ NULL },
};
#define VPD_ENTRY(name, len) \
u8 name##_kword[2]; u8 name##_len; u8 name##_data[len]
@ -1132,6 +1182,15 @@ void t3_link_changed(struct adapter *adapter, int port_id)
phy->ops->get_link_status(phy, &link_ok, &speed, &duplex, &fc);
if (lc->requested_fc & PAUSE_AUTONEG)
fc &= lc->requested_fc;
else
fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);
if (link_ok == lc->link_ok && speed == lc->speed &&
duplex == lc->duplex && fc == lc->fc)
return; /* nothing changed */
if (link_ok != lc->link_ok && adapter->params.rev > 0 &&
uses_xaui(adapter)) {
if (link_ok)
@ -1142,10 +1201,6 @@ void t3_link_changed(struct adapter *adapter, int port_id)
lc->link_ok = link_ok;
lc->speed = speed < 0 ? SPEED_INVALID : speed;
lc->duplex = duplex < 0 ? DUPLEX_INVALID : duplex;
if (lc->requested_fc & PAUSE_AUTONEG)
fc &= lc->requested_fc;
else
fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);
if (link_ok && speed >= 0 && lc->autoneg == AUTONEG_ENABLE) {
/* Set MAC speed, duplex, and flow control to match PHY. */
@ -1191,7 +1246,6 @@ int t3_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc)
fc);
/* Also disables autoneg */
phy->ops->set_speed_duplex(phy, lc->speed, lc->duplex);
phy->ops->reset(phy, 0);
} else
phy->ops->autoneg_enable(phy);
} else {
@ -1221,7 +1275,7 @@ struct intr_info {
unsigned int mask; /* bits to check in interrupt status */
const char *msg; /* message to print or NULL */
short stat_idx; /* stat counter to increment or -1 */
unsigned short fatal:1; /* whether the condition reported is fatal */
unsigned short fatal; /* whether the condition reported is fatal */
};
/**
@ -1682,25 +1736,23 @@ static int mac_intr_handler(struct adapter *adap, unsigned int idx)
*/
int t3_phy_intr_handler(struct adapter *adapter)
{
u32 mask, gpi = adapter_info(adapter)->gpio_intr;
u32 i, cause = t3_read_reg(adapter, A_T3DBG_INT_CAUSE);
for_each_port(adapter, i) {
struct port_info *p = adap2pinfo(adapter, i);
mask = gpi - (gpi & (gpi - 1));
gpi -= mask;
if (!(p->port_type->caps & SUPPORTED_IRQ))
if (!(p->phy.caps & SUPPORTED_IRQ))
continue;
if (cause & mask) {
if (cause & (1 << adapter_info(adapter)->gpio_intr[i])) {
int phy_cause = p->phy.ops->intr_handler(&p->phy);
if (phy_cause & cphy_cause_link_change)
t3_link_changed(adapter, i);
if (phy_cause & cphy_cause_fifo_error)
p->phy.fifo_errors++;
if (phy_cause & cphy_cause_module_change)
t3_os_phymod_changed(adapter, i);
}
}
@ -1763,6 +1815,17 @@ int t3_slow_intr_handler(struct adapter *adapter)
return 1;
}
static unsigned int calc_gpio_intr(struct adapter *adap)
{
unsigned int i, gpi_intr = 0;
for_each_port(adap, i)
if ((adap2pinfo(adap, i)->phy.caps & SUPPORTED_IRQ) &&
adapter_info(adap)->gpio_intr[i])
gpi_intr |= 1 << adapter_info(adap)->gpio_intr[i];
return gpi_intr;
}
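
As a worked example from the adapter table above: a T320 lists gpio_intr as
{ S_GPIO9, S_GPIO3 }, and S_GPIO9 is bit position 9 (S_GPIO3 presumably 3, by the
same naming convention). If both ports' PHYs advertise SUPPORTED_IRQ,
calc_gpio_intr() returns (1 << 9) | (1 << 3) = 0x208.
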
/**
* t3_intr_enable - enable interrupts
* @adapter: the adapter whose interrupts should be enabled
@ -1805,10 +1868,8 @@ void t3_intr_enable(struct adapter *adapter)
t3_write_reg(adapter, A_ULPTX_INT_ENABLE, ULPTX_INTR_MASK);
}
t3_write_reg(adapter, A_T3DBG_GPIO_ACT_LOW,
adapter_info(adapter)->gpio_intr);
t3_write_reg(adapter, A_T3DBG_INT_ENABLE,
adapter_info(adapter)->gpio_intr);
t3_write_reg(adapter, A_T3DBG_INT_ENABLE, calc_gpio_intr(adapter));
if (is_pcie(adapter))
t3_write_reg(adapter, A_PCIE_INT_ENABLE, PCIE_INTR_MASK);
else
@ -3329,6 +3390,8 @@ int t3_init_hw(struct adapter *adapter, u32 fw_params)
init_hw_for_avail_ports(adapter, adapter->params.nports);
t3_sge_init(adapter, &adapter->params.sge);
t3_write_reg(adapter, A_T3DBG_GPIO_ACT_LOW, calc_gpio_intr(adapter));
t3_write_reg(adapter, A_CIM_HOST_ACC_DATA, vpd->uclk | fw_params);
t3_write_reg(adapter, A_CIM_BOOT_CFG,
V_BOOTADDR(FW_FLASH_BOOT_ADDR >> 2));
@ -3488,7 +3551,7 @@ void early_hw_init(struct adapter *adapter, const struct adapter_info *ai)
* Older PCIe cards lose their config space during reset, PCI-X
* ones don't.
*/
static int t3_reset_adapter(struct adapter *adapter)
int t3_reset_adapter(struct adapter *adapter)
{
int i, save_and_restore_pcie =
adapter->params.rev < T3_REV_B2 && is_pcie(adapter);
@ -3556,7 +3619,7 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
int reset)
{
int ret;
unsigned int i, j = 0;
unsigned int i, j = -1;
get_pci_mode(adapter, &adapter->params.pci);
@ -3620,16 +3683,18 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
for_each_port(adapter, i) {
u8 hw_addr[6];
const struct port_type_info *pti;
struct port_info *p = adap2pinfo(adapter, i);
while (!adapter->params.vpd.port_type[j])
++j;
while (!adapter->params.vpd.port_type[++j])
;
p->port_type = &port_types[adapter->params.vpd.port_type[j]];
p->port_type->phy_prep(&p->phy, adapter, ai->phy_base_addr + j,
ai->mdio_ops);
pti = &port_types[adapter->params.vpd.port_type[j]];
ret = pti->phy_prep(&p->phy, adapter, ai->phy_base_addr + j,
ai->mdio_ops);
if (ret)
return ret;
mac_prep(&p->mac, adapter, j);
++j;
/*
* The VPD EEPROM stores the base Ethernet address for the
@ -3643,9 +3708,9 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
ETH_ALEN);
memcpy(adapter->port[i]->perm_addr, hw_addr,
ETH_ALEN);
init_link_config(&p->link_config, p->port_type->caps);
init_link_config(&p->link_config, p->phy.caps);
p->phy.ops->power_down(&p->phy, 1);
if (!(p->port_type->caps & SUPPORTED_IRQ))
if (!(p->phy.caps & SUPPORTED_IRQ))
adapter->params.linkpoll_period = 10;
}
@ -3661,7 +3726,7 @@ void t3_led_ready(struct adapter *adapter)
int t3_replay_prep_adapter(struct adapter *adapter)
{
const struct adapter_info *ai = adapter->params.info;
unsigned int i, j = 0;
unsigned int i, j = -1;
int ret;
early_hw_init(adapter, ai);
@ -3670,15 +3735,17 @@ int t3_replay_prep_adapter(struct adapter *adapter)
return ret;
for_each_port(adapter, i) {
const struct port_type_info *pti;
struct port_info *p = adap2pinfo(adapter, i);
while (!adapter->params.vpd.port_type[j])
++j;
p->port_type->phy_prep(&p->phy, adapter, ai->phy_base_addr + j,
ai->mdio_ops);
while (!adapter->params.vpd.port_type[++j])
;
pti = &port_types[adapter->params.vpd.port_type[j]];
ret = pti->phy_prep(&p->phy, adapter, p->phy.addr, NULL);
if (ret)
return ret;
p->phy.ops->power_down(&p->phy, 1);
++j;
}
return 0;

View File

@ -33,28 +33,40 @@
/* VSC8211 PHY specific registers. */
enum {
VSC8211_SIGDET_CTRL = 19,
VSC8211_EXT_CTRL = 23,
VSC8211_INTR_ENABLE = 25,
VSC8211_INTR_STATUS = 26,
VSC8211_LED_CTRL = 27,
VSC8211_AUX_CTRL_STAT = 28,
VSC8211_EXT_PAGE_AXS = 31,
};
enum {
VSC_INTR_RX_ERR = 1 << 0,
VSC_INTR_MS_ERR = 1 << 1, /* master/slave resolution error */
VSC_INTR_CABLE = 1 << 2, /* cable impairment */
VSC_INTR_FALSE_CARR = 1 << 3, /* false carrier */
VSC_INTR_MEDIA_CHG = 1 << 4, /* AMS media change */
VSC_INTR_RX_FIFO = 1 << 5, /* Rx FIFO over/underflow */
VSC_INTR_TX_FIFO = 1 << 6, /* Tx FIFO over/underflow */
VSC_INTR_DESCRAMBL = 1 << 7, /* descrambler lock-lost */
VSC_INTR_SYMBOL_ERR = 1 << 8, /* symbol error */
VSC_INTR_NEG_DONE = 1 << 10, /* autoneg done */
VSC_INTR_NEG_ERR = 1 << 11, /* autoneg error */
VSC_INTR_LINK_CHG = 1 << 13, /* link change */
VSC_INTR_ENABLE = 1 << 15, /* interrupt enable */
VSC_INTR_MS_ERR = 1 << 1, /* master/slave resolution error */
VSC_INTR_CABLE = 1 << 2, /* cable impairment */
VSC_INTR_FALSE_CARR = 1 << 3, /* false carrier */
VSC_INTR_MEDIA_CHG = 1 << 4, /* AMS media change */
VSC_INTR_RX_FIFO = 1 << 5, /* Rx FIFO over/underflow */
VSC_INTR_TX_FIFO = 1 << 6, /* Tx FIFO over/underflow */
VSC_INTR_DESCRAMBL = 1 << 7, /* descrambler lock-lost */
VSC_INTR_SYMBOL_ERR = 1 << 8, /* symbol error */
VSC_INTR_NEG_DONE = 1 << 10, /* autoneg done */
VSC_INTR_NEG_ERR = 1 << 11, /* autoneg error */
VSC_INTR_DPLX_CHG = 1 << 12, /* duplex change */
VSC_INTR_LINK_CHG = 1 << 13, /* link change */
VSC_INTR_SPD_CHG = 1 << 14, /* speed change */
VSC_INTR_ENABLE = 1 << 15, /* interrupt enable */
};
enum {
VSC_CTRL_CLAUSE37_VIEW = 1 << 4, /* Switch to Clause 37 view */
VSC_CTRL_MEDIA_MODE_HI = 0xf000 /* High part of media mode select */
};
#define CFG_CHG_INTR_MASK (VSC_INTR_LINK_CHG | VSC_INTR_NEG_ERR | \
VSC_INTR_DPLX_CHG | VSC_INTR_SPD_CHG | \
VSC_INTR_NEG_DONE)
#define INTR_MASK (CFG_CHG_INTR_MASK | VSC_INTR_TX_FIFO | VSC_INTR_RX_FIFO | \
VSC_INTR_ENABLE)
@ -184,6 +196,112 @@ static int vsc8211_get_link_status(struct cphy *cphy, int *link_ok,
return 0;
}
static int vsc8211_get_link_status_fiber(struct cphy *cphy, int *link_ok,
int *speed, int *duplex, int *fc)
{
unsigned int bmcr, status, lpa, adv;
int err, sp = -1, dplx = -1, pause = 0;
err = mdio_read(cphy, 0, MII_BMCR, &bmcr);
if (!err)
err = mdio_read(cphy, 0, MII_BMSR, &status);
if (err)
return err;
if (link_ok) {
/*
* BMSR_LSTATUS is latch-low, so if it is 0 we need to read it
* once more to get the current link state.
*/
if (!(status & BMSR_LSTATUS))
err = mdio_read(cphy, 0, MII_BMSR, &status);
if (err)
return err;
*link_ok = (status & BMSR_LSTATUS) != 0;
}
if (!(bmcr & BMCR_ANENABLE)) {
dplx = (bmcr & BMCR_FULLDPLX) ? DUPLEX_FULL : DUPLEX_HALF;
if (bmcr & BMCR_SPEED1000)
sp = SPEED_1000;
else if (bmcr & BMCR_SPEED100)
sp = SPEED_100;
else
sp = SPEED_10;
} else if (status & BMSR_ANEGCOMPLETE) {
err = mdio_read(cphy, 0, MII_LPA, &lpa);
if (!err)
err = mdio_read(cphy, 0, MII_ADVERTISE, &adv);
if (err)
return err;
if (adv & lpa & ADVERTISE_1000XFULL) {
dplx = DUPLEX_FULL;
sp = SPEED_1000;
} else if (adv & lpa & ADVERTISE_1000XHALF) {
dplx = DUPLEX_HALF;
sp = SPEED_1000;
}
if (fc && dplx == DUPLEX_FULL) {
if (lpa & adv & ADVERTISE_1000XPAUSE)
pause = PAUSE_RX | PAUSE_TX;
else if ((lpa & ADVERTISE_1000XPAUSE) &&
(adv & lpa & ADVERTISE_1000XPSE_ASYM))
pause = PAUSE_TX;
else if ((lpa & ADVERTISE_1000XPSE_ASYM) &&
(adv & ADVERTISE_1000XPAUSE))
pause = PAUSE_RX;
}
}
if (speed)
*speed = sp;
if (duplex)
*duplex = dplx;
if (fc)
*fc = pause;
return 0;
}
/*
* Enable/disable auto MDI/MDI-X in forced link speed mode.
*/
static int vsc8211_set_automdi(struct cphy *phy, int enable)
{
int err;
err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 0x52b5);
if (err)
return err;
err = mdio_write(phy, 0, 18, 0x12);
if (err)
return err;
err = mdio_write(phy, 0, 17, enable ? 0x2803 : 0x3003);
if (err)
return err;
err = mdio_write(phy, 0, 16, 0x87fa);
if (err)
return err;
err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 0);
if (err)
return err;
return 0;
}
int vsc8211_set_speed_duplex(struct cphy *phy, int speed, int duplex)
{
int err;
err = t3_set_phy_speed_duplex(phy, speed, duplex);
if (!err)
err = vsc8211_set_automdi(phy, 1);
return err;
}
static int vsc8211_power_down(struct cphy *cphy, int enable)
{
return t3_mdio_change_bits(cphy, 0, MII_BMCR, BMCR_PDOWN,
@ -221,8 +339,66 @@ static struct cphy_ops vsc8211_ops = {
.power_down = vsc8211_power_down,
};
void t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
static struct cphy_ops vsc8211_fiber_ops = {
.reset = vsc8211_reset,
.intr_enable = vsc8211_intr_enable,
.intr_disable = vsc8211_intr_disable,
.intr_clear = vsc8211_intr_clear,
.intr_handler = vsc8211_intr_handler,
.autoneg_enable = vsc8211_autoneg_enable,
.autoneg_restart = vsc8211_autoneg_restart,
.advertise = t3_phy_advertise_fiber,
.set_speed_duplex = t3_set_phy_speed_duplex,
.get_link_status = vsc8211_get_link_status_fiber,
.power_down = vsc8211_power_down,
};
int t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
{
cphy_init(phy, adapter, phy_addr, &vsc8211_ops, mdio_ops);
int err;
unsigned int val;
cphy_init(phy, adapter, phy_addr, &vsc8211_ops, mdio_ops,
SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Full |
SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_MII |
SUPPORTED_TP | SUPPORTED_IRQ, "10/100/1000BASE-T");
msleep(20); /* PHY needs ~10ms to start responding to MDIO */
err = mdio_read(phy, 0, VSC8211_EXT_CTRL, &val);
if (err)
return err;
if (val & VSC_CTRL_MEDIA_MODE_HI) {
/* copper interface, just need to configure the LEDs */
return mdio_write(phy, 0, VSC8211_LED_CTRL, 0x100);
}
phy->caps = SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg |
SUPPORTED_MII | SUPPORTED_FIBRE | SUPPORTED_IRQ;
phy->desc = "1000BASE-X";
phy->ops = &vsc8211_fiber_ops;
err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 1);
if (err)
return err;
err = mdio_write(phy, 0, VSC8211_SIGDET_CTRL, 1);
if (err)
return err;
err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 0);
if (err)
return err;
err = mdio_write(phy, 0, VSC8211_EXT_CTRL,
val | VSC_CTRL_CLAUSE37_VIEW);
if (err)
return err;
err = vsc8211_reset(phy, 0);
if (err)
return err;
udelay(5); /* delay after reset before next SMI */
return 0;
}

View File

@ -191,7 +191,7 @@ MODULE_PARM_DESC(use_io, "Force use of i/o access mode");
#define DPRINTK(nlevel, klevel, fmt, args...) \
(void)((NETIF_MSG_##nlevel & nic->msg_enable) && \
printk(KERN_##klevel PFX "%s: %s: " fmt, nic->netdev->name, \
__FUNCTION__ , ## args))
__func__ , ## args))
#define INTEL_8255X_ETHERNET_DEVICE(device_id, ich) {\
PCI_VENDOR_ID_INTEL, device_id, PCI_ANY_ID, PCI_ANY_ID, \

View File

@ -155,8 +155,6 @@ do { \
#endif
#define E1000_MNG_VLAN_NONE (-1)
/* Number of packet split data buffers (not including the header buffer) */
#define PS_PAGE_BUFFERS (MAX_PS_BUFFERS - 1)
/* wrapper around a pointer to a socket buffer,
* so a DMA handle can be stored along with the buffer */
@ -168,14 +166,6 @@ struct e1000_buffer {
u16 next_to_watch;
};
struct e1000_ps_page {
struct page *ps_page[PS_PAGE_BUFFERS];
};
struct e1000_ps_page_dma {
u64 ps_page_dma[PS_PAGE_BUFFERS];
};
struct e1000_tx_ring {
/* pointer to the descriptor ring memory */
void *desc;
@ -213,9 +203,6 @@ struct e1000_rx_ring {
unsigned int next_to_clean;
/* array of buffer information structs */
struct e1000_buffer *buffer_info;
/* arrays of page information for packet split */
struct e1000_ps_page *ps_page;
struct e1000_ps_page_dma *ps_page_dma;
/* cpu for rx queue */
int cpu;
@ -228,8 +215,6 @@ struct e1000_rx_ring {
((((R)->next_to_clean > (R)->next_to_use) \
? 0 : (R)->count) + (R)->next_to_clean - (R)->next_to_use - 1)
#define E1000_RX_DESC_PS(R, i) \
(&(((union e1000_rx_desc_packet_split *)((R).desc))[i]))
#define E1000_RX_DESC_EXT(R, i) \
(&(((union e1000_rx_desc_extended *)((R).desc))[i]))
#define E1000_GET_DESC(R, i, type) (&(((struct type *)((R).desc))[i]))
@ -311,10 +296,8 @@ struct e1000_adapter {
u32 rx_int_delay;
u32 rx_abs_int_delay;
bool rx_csum;
unsigned int rx_ps_pages;
u32 gorcl;
u64 gorcl_old;
u16 rx_ps_bsize0;
/* OS defined structs */
struct net_device *netdev;

View File

@ -137,15 +137,9 @@ static int e1000_clean(struct napi_struct *napi, int budget);
static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do);
static bool e1000_clean_rx_irq_ps(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do);
static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int cleaned_count);
static void e1000_alloc_rx_buffers_ps(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int cleaned_count);
static int e1000_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd);
static int e1000_mii_ioctl(struct net_device *netdev, struct ifreq *ifr,
int cmd);
@ -1331,7 +1325,6 @@ static int __devinit e1000_sw_init(struct e1000_adapter *adapter)
pci_read_config_word(pdev, PCI_COMMAND, &hw->pci_cmd_word);
adapter->rx_buffer_len = MAXIMUM_ETHERNET_VLAN_SIZE;
adapter->rx_ps_bsize0 = E1000_RXBUFFER_128;
hw->max_frame_size = netdev->mtu +
ENET_HEADER_SIZE + ETHERNET_FCS_SIZE;
hw->min_frame_size = MINIMUM_ETHERNET_FRAME_SIZE;
@ -1815,26 +1808,6 @@ static int e1000_setup_rx_resources(struct e1000_adapter *adapter,
}
memset(rxdr->buffer_info, 0, size);
rxdr->ps_page = kcalloc(rxdr->count, sizeof(struct e1000_ps_page),
GFP_KERNEL);
if (!rxdr->ps_page) {
vfree(rxdr->buffer_info);
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the receive descriptor ring\n");
return -ENOMEM;
}
rxdr->ps_page_dma = kcalloc(rxdr->count,
sizeof(struct e1000_ps_page_dma),
GFP_KERNEL);
if (!rxdr->ps_page_dma) {
vfree(rxdr->buffer_info);
kfree(rxdr->ps_page);
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the receive descriptor ring\n");
return -ENOMEM;
}
if (hw->mac_type <= e1000_82547_rev_2)
desc_len = sizeof(struct e1000_rx_desc);
else
@ -1852,8 +1825,6 @@ static int e1000_setup_rx_resources(struct e1000_adapter *adapter,
"Unable to allocate memory for the receive descriptor ring\n");
setup_rx_desc_die:
vfree(rxdr->buffer_info);
kfree(rxdr->ps_page);
kfree(rxdr->ps_page_dma);
return -ENOMEM;
}
@ -1932,11 +1903,7 @@ int e1000_setup_all_rx_resources(struct e1000_adapter *adapter)
static void e1000_setup_rctl(struct e1000_adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
u32 rctl, rfctl;
u32 psrctl = 0;
#ifndef CONFIG_E1000_DISABLE_PACKET_SPLIT
u32 pages = 0;
#endif
u32 rctl;
rctl = er32(RCTL);
@ -1988,55 +1955,6 @@ static void e1000_setup_rctl(struct e1000_adapter *adapter)
break;
}
#ifndef CONFIG_E1000_DISABLE_PACKET_SPLIT
/* 82571 and greater support packet-split where the protocol
* header is placed in skb->data and the packet data is
* placed in pages hanging off of skb_shinfo(skb)->nr_frags.
* In the case of a non-split, skb->data is linearly filled,
* followed by the page buffers. Therefore, skb->data is
* sized to hold the largest protocol header.
*/
/* allocations using alloc_page take too long for regular MTU
* so only enable packet split for jumbo frames */
pages = PAGE_USE_COUNT(adapter->netdev->mtu);
if ((hw->mac_type >= e1000_82571) && (pages <= 3) &&
PAGE_SIZE <= 16384 && (rctl & E1000_RCTL_LPE))
adapter->rx_ps_pages = pages;
else
adapter->rx_ps_pages = 0;
#endif
if (adapter->rx_ps_pages) {
/* Configure extra packet-split registers */
rfctl = er32(RFCTL);
rfctl |= E1000_RFCTL_EXTEN;
/* disable packet split support for IPv6 extension headers,
* because some malformed IPv6 headers can hang the RX */
rfctl |= (E1000_RFCTL_IPV6_EX_DIS |
E1000_RFCTL_NEW_IPV6_EXT_DIS);
ew32(RFCTL, rfctl);
rctl |= E1000_RCTL_DTYP_PS;
psrctl |= adapter->rx_ps_bsize0 >>
E1000_PSRCTL_BSIZE0_SHIFT;
switch (adapter->rx_ps_pages) {
case 3:
psrctl |= PAGE_SIZE <<
E1000_PSRCTL_BSIZE3_SHIFT;
case 2:
psrctl |= PAGE_SIZE <<
E1000_PSRCTL_BSIZE2_SHIFT;
case 1:
psrctl |= PAGE_SIZE >>
E1000_PSRCTL_BSIZE1_SHIFT;
break;
}
ew32(PSRCTL, psrctl);
}
ew32(RCTL, rctl);
}
@ -2053,18 +1971,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
struct e1000_hw *hw = &adapter->hw;
u32 rdlen, rctl, rxcsum, ctrl_ext;
if (adapter->rx_ps_pages) {
/* this is a 32 byte descriptor */
rdlen = adapter->rx_ring[0].count *
sizeof(union e1000_rx_desc_packet_split);
adapter->clean_rx = e1000_clean_rx_irq_ps;
adapter->alloc_rx_buf = e1000_alloc_rx_buffers_ps;
} else {
rdlen = adapter->rx_ring[0].count *
sizeof(struct e1000_rx_desc);
adapter->clean_rx = e1000_clean_rx_irq;
adapter->alloc_rx_buf = e1000_alloc_rx_buffers;
}
rdlen = adapter->rx_ring[0].count *
sizeof(struct e1000_rx_desc);
adapter->clean_rx = e1000_clean_rx_irq;
adapter->alloc_rx_buf = e1000_alloc_rx_buffers;
/* disable receives while setting up the descriptors */
rctl = er32(RCTL);
@ -2109,28 +2019,14 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
/* Enable 82543 Receive Checksum Offload for TCP and UDP */
if (hw->mac_type >= e1000_82543) {
rxcsum = er32(RXCSUM);
if (adapter->rx_csum) {
if (adapter->rx_csum)
rxcsum |= E1000_RXCSUM_TUOFL;
/* Enable 82571 IPv4 payload checksum for UDP fragments
* Must be used in conjunction with packet-split. */
if ((hw->mac_type >= e1000_82571) &&
(adapter->rx_ps_pages)) {
rxcsum |= E1000_RXCSUM_IPPCSE;
}
} else {
rxcsum &= ~E1000_RXCSUM_TUOFL;
else
/* don't need to clear IPPCSE as it defaults to 0 */
}
rxcsum &= ~E1000_RXCSUM_TUOFL;
ew32(RXCSUM, rxcsum);
}
/* enable early receives on 82573, only takes effect if using > 2048
* byte total frame size. for example only for jumbo frames */
#define E1000_ERT_2048 0x100
if (hw->mac_type == e1000_82573)
ew32(ERT, E1000_ERT_2048);
/* Enable Receives */
ew32(RCTL, rctl);
}
@ -2256,10 +2152,6 @@ static void e1000_free_rx_resources(struct e1000_adapter *adapter,
vfree(rx_ring->buffer_info);
rx_ring->buffer_info = NULL;
kfree(rx_ring->ps_page);
rx_ring->ps_page = NULL;
kfree(rx_ring->ps_page_dma);
rx_ring->ps_page_dma = NULL;
pci_free_consistent(pdev, rx_ring->size, rx_ring->desc, rx_ring->dma);
@ -2292,11 +2184,9 @@ static void e1000_clean_rx_ring(struct e1000_adapter *adapter,
{
struct e1000_hw *hw = &adapter->hw;
struct e1000_buffer *buffer_info;
struct e1000_ps_page *ps_page;
struct e1000_ps_page_dma *ps_page_dma;
struct pci_dev *pdev = adapter->pdev;
unsigned long size;
unsigned int i, j;
unsigned int i;
/* Free all the Rx ring sk_buffs */
for (i = 0; i < rx_ring->count; i++) {
@ -2310,25 +2200,10 @@ static void e1000_clean_rx_ring(struct e1000_adapter *adapter,
dev_kfree_skb(buffer_info->skb);
buffer_info->skb = NULL;
}
ps_page = &rx_ring->ps_page[i];
ps_page_dma = &rx_ring->ps_page_dma[i];
for (j = 0; j < adapter->rx_ps_pages; j++) {
if (!ps_page->ps_page[j]) break;
pci_unmap_page(pdev,
ps_page_dma->ps_page_dma[j],
PAGE_SIZE, PCI_DMA_FROMDEVICE);
ps_page_dma->ps_page_dma[j] = 0;
put_page(ps_page->ps_page[j]);
ps_page->ps_page[j] = NULL;
}
}
size = sizeof(struct e1000_buffer) * rx_ring->count;
memset(rx_ring->buffer_info, 0, size);
size = sizeof(struct e1000_ps_page) * rx_ring->count;
memset(rx_ring->ps_page, 0, size);
size = sizeof(struct e1000_ps_page_dma) * rx_ring->count;
memset(rx_ring->ps_page_dma, 0, size);
/* Zero out the descriptor ring */
@ -2998,32 +2873,49 @@ static bool e1000_tx_csum(struct e1000_adapter *adapter,
struct e1000_buffer *buffer_info;
unsigned int i;
u8 css;
u32 cmd_len = E1000_TXD_CMD_DEXT;
if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
css = skb_transport_offset(skb);
if (skb->ip_summed != CHECKSUM_PARTIAL)
return false;
i = tx_ring->next_to_use;
buffer_info = &tx_ring->buffer_info[i];
context_desc = E1000_CONTEXT_DESC(*tx_ring, i);
context_desc->lower_setup.ip_config = 0;
context_desc->upper_setup.tcp_fields.tucss = css;
context_desc->upper_setup.tcp_fields.tucso =
css + skb->csum_offset;
context_desc->upper_setup.tcp_fields.tucse = 0;
context_desc->tcp_seg_setup.data = 0;
context_desc->cmd_and_length = cpu_to_le32(E1000_TXD_CMD_DEXT);
buffer_info->time_stamp = jiffies;
buffer_info->next_to_watch = i;
if (unlikely(++i == tx_ring->count)) i = 0;
tx_ring->next_to_use = i;
return true;
switch (skb->protocol) {
case __constant_htons(ETH_P_IP):
if (ip_hdr(skb)->protocol == IPPROTO_TCP)
cmd_len |= E1000_TXD_CMD_TCP;
break;
case __constant_htons(ETH_P_IPV6):
/* XXX not handling all IPV6 headers */
if (ipv6_hdr(skb)->nexthdr == IPPROTO_TCP)
cmd_len |= E1000_TXD_CMD_TCP;
break;
default:
if (unlikely(net_ratelimit()))
DPRINTK(DRV, WARNING,
"checksum_partial proto=%x!\n", skb->protocol);
break;
}
return false;
css = skb_transport_offset(skb);
i = tx_ring->next_to_use;
buffer_info = &tx_ring->buffer_info[i];
context_desc = E1000_CONTEXT_DESC(*tx_ring, i);
context_desc->lower_setup.ip_config = 0;
context_desc->upper_setup.tcp_fields.tucss = css;
context_desc->upper_setup.tcp_fields.tucso =
css + skb->csum_offset;
context_desc->upper_setup.tcp_fields.tucse = 0;
context_desc->tcp_seg_setup.data = 0;
context_desc->cmd_and_length = cpu_to_le32(cmd_len);
buffer_info->time_stamp = jiffies;
buffer_info->next_to_watch = i;
if (unlikely(++i == tx_ring->count)) i = 0;
tx_ring->next_to_use = i;
return true;
}
#define E1000_MAX_TXD_PWR 12
@ -4234,181 +4126,6 @@ next_desc:
return cleaned;
}
/**
* e1000_clean_rx_irq_ps - Send received data up the network stack; packet split
* @adapter: board private structure
**/
static bool e1000_clean_rx_irq_ps(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do)
{
union e1000_rx_desc_packet_split *rx_desc, *next_rxd;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_buffer *buffer_info, *next_buffer;
struct e1000_ps_page *ps_page;
struct e1000_ps_page_dma *ps_page_dma;
struct sk_buff *skb;
unsigned int i, j;
u32 length, staterr;
int cleaned_count = 0;
bool cleaned = false;
unsigned int total_rx_bytes=0, total_rx_packets=0;
i = rx_ring->next_to_clean;
rx_desc = E1000_RX_DESC_PS(*rx_ring, i);
staterr = le32_to_cpu(rx_desc->wb.middle.status_error);
buffer_info = &rx_ring->buffer_info[i];
while (staterr & E1000_RXD_STAT_DD) {
ps_page = &rx_ring->ps_page[i];
ps_page_dma = &rx_ring->ps_page_dma[i];
if (unlikely(*work_done >= work_to_do))
break;
(*work_done)++;
skb = buffer_info->skb;
/* in the packet split case this is header only */
prefetch(skb->data - NET_IP_ALIGN);
if (++i == rx_ring->count) i = 0;
next_rxd = E1000_RX_DESC_PS(*rx_ring, i);
prefetch(next_rxd);
next_buffer = &rx_ring->buffer_info[i];
cleaned = true;
cleaned_count++;
pci_unmap_single(pdev, buffer_info->dma,
buffer_info->length,
PCI_DMA_FROMDEVICE);
if (unlikely(!(staterr & E1000_RXD_STAT_EOP))) {
E1000_DBG("%s: Packet Split buffers didn't pick up"
" the full packet\n", netdev->name);
dev_kfree_skb_irq(skb);
goto next_desc;
}
if (unlikely(staterr & E1000_RXDEXT_ERR_FRAME_ERR_MASK)) {
dev_kfree_skb_irq(skb);
goto next_desc;
}
length = le16_to_cpu(rx_desc->wb.middle.length0);
if (unlikely(!length)) {
E1000_DBG("%s: Last part of the packet spanning"
" multiple descriptors\n", netdev->name);
dev_kfree_skb_irq(skb);
goto next_desc;
}
/* Good Receive */
skb_put(skb, length);
{
/* this looks ugly, but it seems compiler issues make it
more efficient than reusing j */
int l1 = le16_to_cpu(rx_desc->wb.upper.length[0]);
/* page alloc/put takes too long and effects small packet
* throughput, so unsplit small packets and save the alloc/put*/
if (l1 && (l1 <= copybreak) && ((length + l1) <= adapter->rx_ps_bsize0)) {
u8 *vaddr;
/* there is no documentation about how to call
* kmap_atomic, so we can't hold the mapping
* very long */
pci_dma_sync_single_for_cpu(pdev,
ps_page_dma->ps_page_dma[0],
PAGE_SIZE,
PCI_DMA_FROMDEVICE);
vaddr = kmap_atomic(ps_page->ps_page[0],
KM_SKB_DATA_SOFTIRQ);
memcpy(skb_tail_pointer(skb), vaddr, l1);
kunmap_atomic(vaddr, KM_SKB_DATA_SOFTIRQ);
pci_dma_sync_single_for_device(pdev,
ps_page_dma->ps_page_dma[0],
PAGE_SIZE, PCI_DMA_FROMDEVICE);
/* remove the CRC */
l1 -= 4;
skb_put(skb, l1);
goto copydone;
} /* if */
}
for (j = 0; j < adapter->rx_ps_pages; j++) {
length = le16_to_cpu(rx_desc->wb.upper.length[j]);
if (!length)
break;
pci_unmap_page(pdev, ps_page_dma->ps_page_dma[j],
PAGE_SIZE, PCI_DMA_FROMDEVICE);
ps_page_dma->ps_page_dma[j] = 0;
skb_fill_page_desc(skb, j, ps_page->ps_page[j], 0,
length);
ps_page->ps_page[j] = NULL;
skb->len += length;
skb->data_len += length;
skb->truesize += length;
}
/* strip the ethernet crc, problem is we're using pages now so
* this whole operation can get a little cpu intensive */
pskb_trim(skb, skb->len - 4);
copydone:
total_rx_bytes += skb->len;
total_rx_packets++;
e1000_rx_checksum(adapter, staterr,
le16_to_cpu(rx_desc->wb.lower.hi_dword.csum_ip.csum), skb);
skb->protocol = eth_type_trans(skb, netdev);
if (likely(rx_desc->wb.upper.header_status &
cpu_to_le16(E1000_RXDPS_HDRSTAT_HDRSP)))
adapter->rx_hdr_split++;
if (unlikely(adapter->vlgrp && (staterr & E1000_RXD_STAT_VP))) {
vlan_hwaccel_receive_skb(skb, adapter->vlgrp,
le16_to_cpu(rx_desc->wb.middle.vlan));
} else {
netif_receive_skb(skb);
}
netdev->last_rx = jiffies;
next_desc:
rx_desc->wb.middle.status_error &= cpu_to_le32(~0xFF);
buffer_info->skb = NULL;
/* return some buffers to hardware, one at a time is too slow */
if (unlikely(cleaned_count >= E1000_RX_BUFFER_WRITE)) {
adapter->alloc_rx_buf(adapter, rx_ring, cleaned_count);
cleaned_count = 0;
}
/* use prefetched values */
rx_desc = next_rxd;
buffer_info = next_buffer;
staterr = le32_to_cpu(rx_desc->wb.middle.status_error);
}
rx_ring->next_to_clean = i;
cleaned_count = E1000_DESC_UNUSED(rx_ring);
if (cleaned_count)
adapter->alloc_rx_buf(adapter, rx_ring, cleaned_count);
adapter->total_rx_packets += total_rx_packets;
adapter->total_rx_bytes += total_rx_bytes;
adapter->net_stats.rx_bytes += total_rx_bytes;
adapter->net_stats.rx_packets += total_rx_packets;
return cleaned;
}
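[Editor's note: the small-packet "copybreak" path above is worth isolating. When the first page fragment is small and header plus fragment still fit in the base buffer, the fragment is copied into the header skb and the page is recycled, avoiding a page alloc/put per packet; the 4-byte CRC is then trimmed before skb_put(). A minimal userspace sketch of just that decision follows; COPYBREAK, RX_PS_BSIZE0 and copybreak_eligible() are illustrative stand-ins, not driver symbols.]
#include <stdbool.h>
#include <stdio.h>
/* Illustrative stand-ins for the driver's tunables (assumed values). */
#define COPYBREAK     256   /* copy packets with fragments below this size */
#define RX_PS_BSIZE0  128   /* size of the header buffer */
/*
 * Mirror of the unsplit test in the loop above: copy the first page
 * fragment into the header skb only when it is small and the header
 * plus fragment still fit in the base buffer.
 */
static bool copybreak_eligible(int hdr_len, int frag0_len)
{
	return frag0_len && frag0_len <= COPYBREAK &&
	       hdr_len + frag0_len <= RX_PS_BSIZE0;
}
int main(void)
{
	printf("%d\n", copybreak_eligible(60, 40));  /* 1: copied, page recycled */
	printf("%d\n", copybreak_eligible(60, 300)); /* 0: left as a page frag */
	return 0;
}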
/**
* e1000_alloc_rx_buffers - Replace used receive buffers; legacy & extended
* @adapter: address of board private structure
@ -4520,104 +4237,6 @@ map_skb:
}
}
/**
* e1000_alloc_rx_buffers_ps - Replace used receive buffers; packet split
* @adapter: address of board private structure
**/
static void e1000_alloc_rx_buffers_ps(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int cleaned_count)
{
struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
union e1000_rx_desc_packet_split *rx_desc;
struct e1000_buffer *buffer_info;
struct e1000_ps_page *ps_page;
struct e1000_ps_page_dma *ps_page_dma;
struct sk_buff *skb;
unsigned int i, j;
i = rx_ring->next_to_use;
buffer_info = &rx_ring->buffer_info[i];
ps_page = &rx_ring->ps_page[i];
ps_page_dma = &rx_ring->ps_page_dma[i];
while (cleaned_count--) {
rx_desc = E1000_RX_DESC_PS(*rx_ring, i);
for (j = 0; j < PS_PAGE_BUFFERS; j++) {
if (j < adapter->rx_ps_pages) {
if (likely(!ps_page->ps_page[j])) {
ps_page->ps_page[j] =
alloc_page(GFP_ATOMIC);
if (unlikely(!ps_page->ps_page[j])) {
adapter->alloc_rx_buff_failed++;
goto no_buffers;
}
ps_page_dma->ps_page_dma[j] =
pci_map_page(pdev,
ps_page->ps_page[j],
0, PAGE_SIZE,
PCI_DMA_FROMDEVICE);
}
/* Refresh the desc even if buffer_addrs didn't
* change because each write-back erases
* this info.
*/
rx_desc->read.buffer_addr[j+1] =
cpu_to_le64(ps_page_dma->ps_page_dma[j]);
} else
rx_desc->read.buffer_addr[j+1] = ~cpu_to_le64(0);
}
skb = netdev_alloc_skb(netdev,
adapter->rx_ps_bsize0 + NET_IP_ALIGN);
if (unlikely(!skb)) {
adapter->alloc_rx_buff_failed++;
break;
}
/* Make buffer alignment 2 beyond a 16 byte boundary;
 * this will result in a 16 byte aligned IP header after
 * the 14 byte MAC header is removed
 */
skb_reserve(skb, NET_IP_ALIGN);
buffer_info->skb = skb;
buffer_info->length = adapter->rx_ps_bsize0;
buffer_info->dma = pci_map_single(pdev, skb->data,
adapter->rx_ps_bsize0,
PCI_DMA_FROMDEVICE);
rx_desc->read.buffer_addr[0] = cpu_to_le64(buffer_info->dma);
if (unlikely(++i == rx_ring->count))
	i = 0;
buffer_info = &rx_ring->buffer_info[i];
ps_page = &rx_ring->ps_page[i];
ps_page_dma = &rx_ring->ps_page_dma[i];
}
no_buffers:
if (likely(rx_ring->next_to_use != i)) {
rx_ring->next_to_use = i;
if (unlikely(i-- == 0))
	i = (rx_ring->count - 1);
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs,
* such as IA-64). */
wmb();
/* Hardware increments by 16 bytes, but packet split
* descriptors are 32 bytes...so we increment tail
* twice as much.
*/
writel(i<<1, hw->hw_addr + rx_ring->rdt);
}
}
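[Editor's note: two details in the refill tail-update above are easy to miss: wmb() orders the descriptor writes ahead of the doorbell on weakly ordered archs, and the index is shifted left once because packet-split descriptors are 32 bytes while RDT counts 16-byte descriptor units. A standalone sketch of the arithmetic, with rdt_value() as an illustrative helper rather than a driver function:]
#include <assert.h>
#include <stdint.h>
/*
 * Value written to the receive descriptor tail after a refill: tail
 * points at the last initialized descriptor, and the index is doubled
 * because a 32-byte packet-split descriptor spans two of the 16-byte
 * units the hardware counts in.
 */
static uint32_t rdt_value(uint32_t next_to_use, uint32_t count)
{
	uint32_t last = (next_to_use == 0) ? count - 1 : next_to_use - 1;

	return last << 1;
}
int main(void)
{
	assert(rdt_value(1, 256) == 0);         /* last filled slot is 0 */
	assert(rdt_value(0, 256) == 255 << 1);  /* wrapped: slot count - 1 */
	return 0;
}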
/**
* e1000_smartspeed - Workaround for SmartSpeed on 82541 and 82547 controllers.
* @adapter:

View File

@ -38,6 +38,7 @@
* 82573V Gigabit Ethernet Controller (Copper)
* 82573E Gigabit Ethernet Controller (Copper)
* 82573L Gigabit Ethernet Controller
* 82574L Gigabit Network Connection
*/
#include <linux/netdevice.h>
@ -54,6 +55,8 @@
#define E1000_GCR_L1_ACT_WITHOUT_L0S_RX 0x08000000
#define E1000_NVM_INIT_CTRL2_MNGM 0x6000 /* Manageability Operation Mode mask */
static s32 e1000_get_phy_id_82571(struct e1000_hw *hw);
static s32 e1000_setup_copper_link_82571(struct e1000_hw *hw);
static s32 e1000_setup_fiber_serdes_link_82571(struct e1000_hw *hw);
@ -63,6 +66,8 @@ static s32 e1000_fix_nvm_checksum_82571(struct e1000_hw *hw);
static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw);
static s32 e1000_setup_link_82571(struct e1000_hw *hw);
static void e1000_clear_hw_cntrs_82571(struct e1000_hw *hw);
static bool e1000_check_mng_mode_82574(struct e1000_hw *hw);
static s32 e1000_led_on_82574(struct e1000_hw *hw);
/**
* e1000_init_phy_params_82571 - Init PHY func ptrs.
@ -92,6 +97,9 @@ static s32 e1000_init_phy_params_82571(struct e1000_hw *hw)
case e1000_82573:
phy->type = e1000_phy_m88;
break;
case e1000_82574:
phy->type = e1000_phy_bm;
break;
default:
return -E1000_ERR_PHY;
break;
@ -111,6 +119,10 @@ static s32 e1000_init_phy_params_82571(struct e1000_hw *hw)
if (phy->id != M88E1111_I_PHY_ID)
return -E1000_ERR_PHY;
break;
case e1000_82574:
if (phy->id != BME1000_E_PHY_ID_R2)
return -E1000_ERR_PHY;
break;
default:
return -E1000_ERR_PHY;
break;
@ -150,6 +162,7 @@ static s32 e1000_init_nvm_params_82571(struct e1000_hw *hw)
switch (hw->mac.type) {
case e1000_82573:
case e1000_82574:
if (((eecd >> 15) & 0x3) == 0x3) {
nvm->type = e1000_nvm_flash_hw;
nvm->word_size = 2048;
@ -245,6 +258,17 @@ static s32 e1000_init_mac_params_82571(struct e1000_adapter *adapter)
break;
}
switch (hw->mac.type) {
case e1000_82574:
func->check_mng_mode = e1000_check_mng_mode_82574;
func->led_on = e1000_led_on_82574;
break;
default:
func->check_mng_mode = e1000e_check_mng_mode_generic;
func->led_on = e1000e_led_on_generic;
break;
}
return 0;
}
@ -330,6 +354,8 @@ static s32 e1000_get_variants_82571(struct e1000_adapter *adapter)
static s32 e1000_get_phy_id_82571(struct e1000_hw *hw)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val;
u16 phy_id = 0;
switch (hw->mac.type) {
case e1000_82571:
@ -345,6 +371,20 @@ static s32 e1000_get_phy_id_82571(struct e1000_hw *hw)
case e1000_82573:
return e1000e_get_phy_id(hw);
break;
case e1000_82574:
ret_val = e1e_rphy(hw, PHY_ID1, &phy_id);
if (ret_val)
return ret_val;
phy->id = (u32)(phy_id << 16);
udelay(20);
ret_val = e1e_rphy(hw, PHY_ID2, &phy_id);
if (ret_val)
return ret_val;
phy->id |= (u32)(phy_id);
phy->revision = (u32)(phy_id & ~PHY_REVISION_MASK);
break;
default:
return -E1000_ERR_PHY;
break;
@ -421,7 +461,7 @@ static s32 e1000_acquire_nvm_82571(struct e1000_hw *hw)
if (ret_val)
return ret_val;
if (hw->mac.type != e1000_82573)
if (hw->mac.type != e1000_82573 && hw->mac.type != e1000_82574)
ret_val = e1000e_acquire_nvm(hw);
if (ret_val)
@ -461,6 +501,7 @@ static s32 e1000_write_nvm_82571(struct e1000_hw *hw, u16 offset, u16 words,
switch (hw->mac.type) {
case e1000_82573:
case e1000_82574:
ret_val = e1000_write_nvm_eewr_82571(hw, offset, words, data);
break;
case e1000_82571:
@ -735,7 +776,7 @@ static s32 e1000_reset_hw_82571(struct e1000_hw *hw)
* Must acquire the MDIO ownership before MAC reset.
* Ownership defaults to firmware after a reset.
*/
if (hw->mac.type == e1000_82573) {
if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
extcnf_ctrl = er32(EXTCNF_CTRL);
extcnf_ctrl |= E1000_EXTCNF_CTRL_MDIO_SW_OWNERSHIP;
@ -776,7 +817,7 @@ static s32 e1000_reset_hw_82571(struct e1000_hw *hw)
* Need to wait for Phy configuration completion before accessing
* NVM and Phy.
*/
if (hw->mac.type == e1000_82573)
if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574)
msleep(25);
/* Clear any pending interrupt events. */
@ -843,7 +884,7 @@ static s32 e1000_init_hw_82571(struct e1000_hw *hw)
ew32(TXDCTL(0), reg_data);
/* ...for both queues. */
if (mac->type != e1000_82573) {
if (mac->type != e1000_82573 && mac->type != e1000_82574) {
reg_data = er32(TXDCTL(1));
reg_data = (reg_data & ~E1000_TXDCTL_WTHRESH) |
E1000_TXDCTL_FULL_TX_DESC_WB |
@ -918,19 +959,28 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
}
/* Device Control */
if (hw->mac.type == e1000_82573) {
if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
reg = er32(CTRL);
reg &= ~(1 << 29);
ew32(CTRL, reg);
}
/* Extended Device Control */
if (hw->mac.type == e1000_82573) {
if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
reg = er32(CTRL_EXT);
reg &= ~(1 << 23);
reg |= (1 << 22);
ew32(CTRL_EXT, reg);
}
/* PCI-Ex Control Register */
if (hw->mac.type == e1000_82574) {
reg = er32(GCR);
reg |= (1 << 22);
ew32(GCR, reg);
}
return;
}
/**
@ -947,7 +997,7 @@ void e1000e_clear_vfta(struct e1000_hw *hw)
u32 vfta_offset = 0;
u32 vfta_bit_in_reg = 0;
if (hw->mac.type == e1000_82573) {
if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
if (hw->mng_cookie.vlan_id != 0) {
/*
* The VFTA is a 4096b bit-field, each identifying
@ -975,6 +1025,48 @@ void e1000e_clear_vfta(struct e1000_hw *hw)
}
}
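[Editor's note: the VFTA comment above is cut off by the hunk boundary. In the usual e1000 layout the VLAN Filter Table Array is a 4096-bit field spread across 128 32-bit registers, so a 12-bit VLAN id picks a register with its upper bits and a bit within that register with its lower five. A small sketch of that indexing; vfta_locate() and the constants are illustrative, and the exact geometry is stated here as an assumption:]
#include <assert.h>
#include <stdint.h>
#define VLAN_FILTER_TBL_SIZE 128  /* 128 registers * 32 bits = 4096 VLAN ids */
/* Map a 12-bit VLAN id to its VFTA register offset and bit mask. */
static void vfta_locate(uint16_t vlan_id, uint32_t *offset, uint32_t *mask)
{
	*offset = (vlan_id >> 5) & 0x7F;  /* which of the 128 registers */
	*mask = 1u << (vlan_id & 0x1F);   /* which bit inside it */
}
int main(void)
{
	uint32_t off, mask;

	vfta_locate(100, &off, &mask);  /* VLAN 100 -> register 3, bit 4 */
	assert(off == 3 && mask == (1u << 4));
	return 0;
}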
/**
* e1000_check_mng_mode_82574 - Check manageability is enabled
* @hw: pointer to the HW structure
*
* Reads the NVM Initialization Control Word 2 and returns true
* (>0) if any manageability is enabled, else false (0).
**/
static bool e1000_check_mng_mode_82574(struct e1000_hw *hw)
{
u16 data;
e1000_read_nvm(hw, NVM_INIT_CONTROL2_REG, 1, &data);
return (data & E1000_NVM_INIT_CTRL2_MNGM) != 0;
}
/**
* e1000_led_on_82574 - Turn LED on
* @hw: pointer to the HW structure
*
* Turn LED on.
**/
static s32 e1000_led_on_82574(struct e1000_hw *hw)
{
u32 ctrl;
u32 i;
ctrl = hw->mac.ledctl_mode2;
if (!(E1000_STATUS_LU & er32(STATUS))) {
/*
* If no link, then turn LED on by setting the invert bit
* for each LED that's "on" (0x0E) in ledctl_mode2.
*/
for (i = 0; i < 4; i++)
if (((hw->mac.ledctl_mode2 >> (i * 8)) & 0xFF) ==
E1000_LEDCTL_MODE_LED_ON)
ctrl |= (E1000_LEDCTL_LED0_IVRT << (i * 8));
}
ew32(LEDCTL, ctrl);
return 0;
}
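[Editor's note: the loop above walks the four per-LED mode bytes of LEDCTL and, for every LED whose mode is "on" (0x0E), sets the matching invert bit so the LED lights even without link. An isolated sketch of the bit manipulation; the invert-bit value 0x40 mirrors the driver's E1000_LEDCTL_LED0_IVRT define and should be treated as an assumption here:]
#include <assert.h>
#include <stdint.h>
#define LEDCTL_MODE_LED_ON 0x0E        /* per-LED "on" mode value */
#define LEDCTL_LED0_IVRT   0x00000040  /* assumed invert bit for LED0 */
/* Set the invert bit for every LED whose 8-bit mode field is "on". */
static uint32_t ledctl_invert_on_leds(uint32_t ledctl)
{
	uint32_t ctrl = ledctl;
	int i;

	for (i = 0; i < 4; i++)
		if (((ledctl >> (i * 8)) & 0xFF) == LEDCTL_MODE_LED_ON)
			ctrl |= LEDCTL_LED0_IVRT << (i * 8);
	return ctrl;
}
int main(void)
{
	/* LED1 set to "on": only its invert bit (bit 6 of byte 1) is added. */
	uint32_t in = LEDCTL_MODE_LED_ON << 8;

	assert(ledctl_invert_on_leds(in) == (in | (LEDCTL_LED0_IVRT << 8)));
	return 0;
}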
/**
* e1000_update_mc_addr_list_82571 - Update Multicast addresses
* @hw: pointer to the HW structure
@ -1018,7 +1110,8 @@ static s32 e1000_setup_link_82571(struct e1000_hw *hw)
* the default flow control setting, so we explicitly
* set it to full.
*/
if (hw->mac.type == e1000_82573)
if ((hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) &&
hw->fc.type == e1000_fc_default)
hw->fc.type = e1000_fc_full;
return e1000e_setup_link(hw);
@ -1045,6 +1138,7 @@ static s32 e1000_setup_copper_link_82571(struct e1000_hw *hw)
switch (hw->phy.type) {
case e1000_phy_m88:
case e1000_phy_bm:
ret_val = e1000e_copper_link_setup_m88(hw);
break;
case e1000_phy_igp_2:
@ -1114,11 +1208,10 @@ static s32 e1000_valid_led_default_82571(struct e1000_hw *hw, u16 *data)
return ret_val;
}
if (hw->mac.type == e1000_82573 &&
if ((hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) &&
*data == ID_LED_RESERVED_F746)
*data = ID_LED_DEFAULT_82573;
else if (*data == ID_LED_RESERVED_0000 ||
*data == ID_LED_RESERVED_FFFF)
else if (*data == ID_LED_RESERVED_0000 || *data == ID_LED_RESERVED_FFFF)
*data = ID_LED_DEFAULT;
return 0;
@ -1265,13 +1358,13 @@ static void e1000_clear_hw_cntrs_82571(struct e1000_hw *hw)
}
static struct e1000_mac_operations e82571_mac_ops = {
.mng_mode_enab = E1000_MNG_IAMT_MODE << E1000_FWSM_MODE_SHIFT,
/* .check_mng_mode: mac type dependent */
/* .check_for_link: media type dependent */
.cleanup_led = e1000e_cleanup_led_generic,
.clear_hw_cntrs = e1000_clear_hw_cntrs_82571,
.get_bus_info = e1000e_get_bus_info_pcie,
/* .get_link_up_info: media type dependent */
.led_on = e1000e_led_on_generic,
/* .led_on: mac type dependent */
.led_off = e1000e_led_off_generic,
.update_mc_addr_list = e1000_update_mc_addr_list_82571,
.reset_hw = e1000_reset_hw_82571,
@ -1312,6 +1405,22 @@ static struct e1000_phy_operations e82_phy_ops_m88 = {
.write_phy_reg = e1000e_write_phy_reg_m88,
};
static struct e1000_phy_operations e82_phy_ops_bm = {
.acquire_phy = e1000_get_hw_semaphore_82571,
.check_reset_block = e1000e_check_reset_block_generic,
.commit_phy = e1000e_phy_sw_reset,
.force_speed_duplex = e1000e_phy_force_speed_duplex_m88,
.get_cfg_done = e1000e_get_cfg_done,
.get_cable_length = e1000e_get_cable_length_m88,
.get_phy_info = e1000e_get_phy_info_m88,
.read_phy_reg = e1000e_read_phy_reg_bm2,
.release_phy = e1000_put_hw_semaphore_82571,
.reset_phy = e1000e_phy_hw_reset_generic,
.set_d0_lplu_state = e1000_set_d0_lplu_state_82571,
.set_d3_lplu_state = e1000e_set_d3_lplu_state,
.write_phy_reg = e1000e_write_phy_reg_bm2,
};
static struct e1000_nvm_operations e82571_nvm_ops = {
.acquire_nvm = e1000_acquire_nvm_82571,
.read_nvm = e1000e_read_nvm_eerd,
@ -1375,3 +1484,21 @@ struct e1000_info e1000_82573_info = {
.nvm_ops = &e82571_nvm_ops,
};
struct e1000_info e1000_82574_info = {
.mac = e1000_82574,
.flags = FLAG_HAS_HW_VLAN_FILTER
| FLAG_HAS_MSIX
| FLAG_HAS_JUMBO_FRAMES
| FLAG_HAS_WOL
| FLAG_APME_IN_CTRL3
| FLAG_RX_CSUM_ENABLED
| FLAG_HAS_SMART_POWER_DOWN
| FLAG_HAS_AMT
| FLAG_HAS_CTRLEXT_ON_LOAD,
.pba = 20,
.get_variants = e1000_get_variants_82571,
.mac_ops = &e82571_mac_ops,
.phy_ops = &e82_phy_ops_bm,
.nvm_ops = &e82571_nvm_ops,
};
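[Editor's note: the .flags field above is a plain OR of feature bits that the driver later tests individually. A trivial illustration of the pattern, with made-up bit positions rather than the real FLAG_* values:]
#include <assert.h>
#include <stdint.h>
/* Illustrative feature bits in the style of the FLAG_* defines. */
#define FLAG_HAS_MSIX         (1u << 0)
#define FLAG_HAS_JUMBO_FRAMES (1u << 1)
#define FLAG_HAS_WOL          (1u << 2)
int main(void)
{
	uint32_t flags = FLAG_HAS_MSIX | FLAG_HAS_JUMBO_FRAMES;

	assert(flags & FLAG_HAS_MSIX);    /* feature present */
	assert(!(flags & FLAG_HAS_WOL));  /* feature absent */
	return 0;
}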

View File

@ -71,9 +71,11 @@
#define E1000_CTRL_EXT_RO_DIS 0x00020000 /* Relaxed Ordering disable */
#define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000
#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000
#define E1000_CTRL_EXT_EIAME 0x01000000
#define E1000_CTRL_EXT_DRV_LOAD 0x10000000 /* Driver loaded bit for FW */
#define E1000_CTRL_EXT_IAME 0x08000000 /* Interrupt acknowledge Auto-mask */
#define E1000_CTRL_EXT_INT_TIMER_CLR 0x20000000 /* Clear Interrupt timers after IMS clear */
#define E1000_CTRL_EXT_PBA_CLR 0x80000000 /* PBA Clear */
/* Receive Descriptor bit definitions */
#define E1000_RXD_STAT_DD 0x01 /* Descriptor Done */
@ -299,6 +301,7 @@
#define E1000_RXCSUM_IPPCSE 0x00001000 /* IP payload checksum enable */
/* Header split receive */
#define E1000_RFCTL_ACK_DIS 0x00001000
#define E1000_RFCTL_EXTEN 0x00008000
#define E1000_RFCTL_IPV6_EX_DIS 0x00010000
#define E1000_RFCTL_NEW_IPV6_EXT_DIS 0x00020000
@ -363,6 +366,11 @@
#define E1000_ICR_RXDMT0 0x00000010 /* Rx desc min. threshold (0) */
#define E1000_ICR_RXT0 0x00000080 /* Rx timer intr (ring 0) */
#define E1000_ICR_INT_ASSERTED 0x80000000 /* If this bit asserted, the driver should claim the interrupt */
#define E1000_ICR_RXQ0 0x00100000 /* Rx Queue 0 Interrupt */
#define E1000_ICR_RXQ1 0x00200000 /* Rx Queue 1 Interrupt */
#define E1000_ICR_TXQ0 0x00400000 /* Tx Queue 0 Interrupt */
#define E1000_ICR_TXQ1 0x00800000 /* Tx Queue 1 Interrupt */
#define E1000_ICR_OTHER 0x01000000 /* Other Interrupts */
/*
* This defines the bits that are set in the Interrupt Mask
@ -386,6 +394,11 @@
#define E1000_IMS_RXSEQ E1000_ICR_RXSEQ /* Rx sequence error */
#define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* Rx desc min. threshold */
#define E1000_IMS_RXT0 E1000_ICR_RXT0 /* Rx timer intr */
#define E1000_IMS_RXQ0 E1000_ICR_RXQ0 /* Rx Queue 0 Interrupt */
#define E1000_IMS_RXQ1 E1000_ICR_RXQ1 /* Rx Queue 1 Interrupt */
#define E1000_IMS_TXQ0 E1000_ICR_TXQ0 /* Tx Queue 0 Interrupt */
#define E1000_IMS_TXQ1 E1000_ICR_TXQ1 /* Tx Queue 1 Interrupt */
#define E1000_IMS_OTHER E1000_ICR_OTHER /* Other Interrupts */
/* Interrupt Cause Set */
#define E1000_ICS_LSC E1000_ICR_LSC /* Link Status Change */
@ -505,6 +518,7 @@
#define NWAY_LPAR_ASM_DIR 0x0800 /* LP Asymmetric Pause Direction bit */
/* Autoneg Expansion Register */
#define NWAY_ER_LP_NWAY_CAPS 0x0001 /* LP has Auto Neg Capability */
/* 1000BASE-T Control Register */
#define CR_1000T_HD_CAPS 0x0100 /* Advertise 1000T HD capability */
@ -540,6 +554,7 @@
#define E1000_EECD_DO 0x00000008 /* NVM Data Out */
#define E1000_EECD_REQ 0x00000040 /* NVM Access Request */
#define E1000_EECD_GNT 0x00000080 /* NVM Access Grant */
#define E1000_EECD_PRES 0x00000100 /* NVM Present */
#define E1000_EECD_SIZE 0x00000200 /* NVM Size (0=64 word 1=256 word) */
/* NVM Addressing bits based on type (0-small, 1-large) */
#define E1000_EECD_ADDR_BITS 0x00000400

Some files were not shown because too many files have changed in this diff.