
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next

Pull networking updates from David Miller:
 "Highlights (1721 non-merge commits, this has to be a record of some
  sort):

   1) Add 'random' mode to team driver, from Jiri Pirko and Eric
      Dumazet.

   2) Make it so that any driver that supports configuration of multiple
      MAC addresses can provide the forwarding database add and del
      calls by providing a default implementation and hooking that up if
      the driver doesn't have an explicit set of handlers.  From Vlad
      Yasevich.

   3) Support GSO segmentation over tunnels and other encapsulating
      devices such as VXLAN, from Pravin B Shelar.

   4) Support L2 GRE tunnels in the flow dissector, from Michael Dalton.

   5) Implement Tail Loss Probe (TLP) detection in TCP, from Nandita
      Dukkipati.

   6) In the PHY layer, allow supporting wake-on-lan in situations where
      the PHY registers have to be written for it to be configured.

      Use it to support wake-on-lan in mv643xx_eth.

      From Michael Stapelberg.

   7) Significantly improve firewire IPV6 support, from YOSHIFUJI
      Hideaki.

   8) Allow multiple packets to be sent in a single transmission using
      network coding in batman-adv, from Martin Hundebøll.

   9) Add support for T5 cxgb4 chips, from Santosh Rastapur.

  10) Generalize the VXLAN forwarding tables so that there is more
      flexibility in configuring various aspects of the endpoints.
      From David Stevens.

  11) Support RSS and TSO in hardware over GRE tunnels in bnx2x driver,
      from Dmitry Kravkov.

  12) Zero copy support in nfnetlink_queue, from Eric Dumazet and Pablo
      Neira Ayuso.

  13) Start adding networking selftests.

  14) In situations of overload on the same AF_PACKET fanout socket, or
      per-cpu packet receive queue, minimize drop by distributing the
      load to other cpus/fanouts.  From Willem de Bruijn and Eric
      Dumazet.

  15) Add support for new payload offset BPF instruction, from Daniel
      Borkmann.

  16) Convert several drivers over to module_platform_driver(), from
      Sachin Kamat.

  17) Provide a minimal BPF JIT image disassembler userspace tool, from
      Daniel Borkmann.

  18) Rewrite F-RTO implementation in TCP to match the final
      specification of it in RFC4138 and RFC5682.  From Yuchung Cheng.

  19) Provide netlink socket diag of netlink sockets ("Yo dawg, I hear
      you like netlink, so I implemented netlink dumping of netlink
      sockets.") From Andrey Vagin.

  20) Remove ugly passing of rtnetlink attributes into rtnl_doit
      functions, from Thomas Graf.

  21) Allow userspace to be able to see if a configuration change occurs
      in the middle of an address or device list dump, from Nicolas
      Dichtel.

  22) Support RFC3168 ECN protection for ipv6 fragments, from Hannes
      Frederic Sowa.

  23) Increase accuracy of packet length used by packet scheduler, from
      Jason Wang.

  24) Beginning set of changes to make ipv4/ipv6 fragment handling more
      scalable and less susceptible to overload and locking contention,
      from Jesper Dangaard Brouer.

  25) Get rid of using non-type-safe NLMSG_* macros and use nlmsg_*()
      instead.  From Hong Zhiguo.

  26) Optimize route usage in IPVS by avoiding reference counting where
      possible, from Julian Anastasov.

  27) Convert IPVS schedulers to RCU, also from Julian Anastasov.

  28) Support cpu fanouts in xt_NFQUEUE netfilter target, from Holger
      Eitzenberger.

  29) Network namespace support for nf_log, ebt_log, xt_LOG, ipt_ULOG,
      nfnetlink_log, and nfnetlink_queue.  From Gao feng.

  30) Implement RFC3168 ECN protection, from Hannes Frederic Sowa.

  31) Support several new r8169 chips, from Hayes Wang.

  32) Support tokenized interface identifiers in ipv6, from Daniel
      Borkmann.

  33) Use usbnet_link_change() helper in USB net driver, from Ming Lei.

  34) Add 802.1ad vlan offload support, from Patrick McHardy.

  35) Support mmap() based netlink communication, also from Patrick
      McHardy.

  36) Support HW timestamping in mlx4 driver, from Amir Vadai.

  37) Rationalize AF_PACKET packet timestamping when transmitting, from
      Willem de Bruijn and Daniel Borkmann.

  38) Bring parity to what's provided by /proc/net/packet socket dumping
      and the info provided by netlink socket dumping of AF_PACKET
      sockets.  From Nicolas Dichtel.

  39) Fix peeking beyond zero sized SKBs in AF_UNIX, from Benjamin
      Poirier"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1722 commits)
  filter: fix va_list build error
  af_unix: fix a fatal race with bit fields
  bnx2x: Prevent memory leak when cnic is absent
  bnx2x: correct reading of speed capabilities
  net: sctp: attribute printl with __printf for gcc fmt checks
  netlink: kconfig: move mmap i/o into netlink kconfig
  netpoll: convert mutex into a semaphore
  netlink: Fix skb ref counting.
  net_sched: act_ipt forward compat with xtables
  mlx4_en: fix a build error on 32bit arches
  Revert "bnx2x: allow nvram test to run when device is down"
  bridge: avoid OOPS if root port not found
  drivers: net: cpsw: fix kernel warn on cpsw irq enable
  sh_eth: use random MAC address if no valid one supplied
  3c509.c: call SET_NETDEV_DEV for all device types (ISA/ISAPnP/EISA)
  tg3: fix to append hardware time stamping flags
  unix/stream: fix peeking with an offset larger than data in queue
  unix/dgram: fix peeking with an offset larger than data in queue
  unix/dgram: peek beyond 0-sized skbs
  openvswitch: Remove unneeded ovs_netdev_get_ifindex()
  ...
Linus Torvalds 2013-05-01 14:08:52 -07:00
commit 73287a43cc
1506 changed files with 86582 additions and 37282 deletions


@ -67,6 +67,14 @@ Description:
Defines the penalty which will be applied to an
originator message's tq-field on every hop.
What: /sys/class/net/<mesh_iface>/mesh/network_coding
Date: Nov 2012
Contact: Martin Hundeboll <martin@hundeboll.net>
Description:
Controls whether Network Coding (using some magic
to send fewer wifi packets but still the same
content) is enabled or not.
What: /sys/class/net/<mesh_iface>/mesh/orig_interval
Date: May 2010
Contact: Marek Lindner <lindner_marek@yahoo.de>


@ -437,7 +437,7 @@
</section>
!Finclude/net/mac80211.h ieee80211_get_buffered_bc
!Finclude/net/mac80211.h ieee80211_beacon_get
!Finclude/net/mac80211.h ieee80211_sta_eosp_irqsafe
!Finclude/net/mac80211.h ieee80211_sta_eosp
!Finclude/net/mac80211.h ieee80211_frame_release_type
!Finclude/net/mac80211.h ieee80211_sta_ps_transition
!Finclude/net/mac80211.h ieee80211_sta_ps_transition_ni


@ -18,6 +18,8 @@ memcg_test.txt
- Memory Resource Controller; implementation details.
memory.txt
- Memory Resource Controller; design, accounting, interface, testing.
net_cls.txt
- Network classifier cgroups details and usages.
net_prio.txt
- Network priority cgroups details and usages.
resource_counter.txt


@ -0,0 +1,34 @@
Network classifier cgroup
-------------------------
The Network classifier cgroup provides an interface to
tag network packets with a class identifier (classid).
The Traffic Controller (tc) can be used to assign
different priorities to packets from different cgroups.
Creating a net_cls cgroup instance creates a net_cls.classid file.
This net_cls.classid value is initialized to 0.
You can write hexadecimal values to net_cls.classid; the format for these
values is 0xAAAABBBB; AAAA is the major handle number and BBBB
is the minor handle number.
Reading net_cls.classid yields a decimal result.
Example:
mkdir /sys/fs/cgroup/net_cls
mount -t cgroup -onet_cls net_cls /sys/fs/cgroup/net_cls
mkdir /sys/fs/cgroup/net_cls/0
echo 0x100001 > /sys/fs/cgroup/net_cls/0/net_cls.classid
- setting a 10:1 handle.
cat /sys/fs/cgroup/net_cls/0/net_cls.classid
1048577
configuring tc:
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 40mbit
- creating traffic class 10:1
tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup
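To make the classid arithmetic concrete, a small stand-alone C sketch
(illustrative only, not part of the kernel sources) that decodes the value
written above:
	#include <stdio.h>

	int main(void)
	{
		unsigned int classid = 0x100001;		/* value written above */
		unsigned int major = (classid >> 16) & 0xffff;	/* AAAA */
		unsigned int minor = classid & 0xffff;		/* BBBB */

		/* tc handles are hexadecimal, so 0x0010:0x0001 reads as "10:1" */
		printf("%u == %x:%x\n", classid, major, minor);
		/* prints: 1048577 == 10:1 */
		return 0;
	}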


@ -115,6 +115,9 @@ prefixed with the string "marvell,", for Marvell Technology Group Ltd.
- compatible : "marvell,mv64360-eth-block"
- reg : Offset and length of the register set for this block
Optional properties:
- clocks : Phandle to the clock control device and gate bit
Example Discovery Ethernet block node:
ethernet-block@2000 {
#address-cells = <1>;


@ -0,0 +1,14 @@
* AT91 CAN *
Required properties:
- compatible: Should be "atmel,at91sam9263-can" or "atmel,at91sam9x5-can"
- reg: Should contain CAN controller registers location and length
- interrupts: Should contain IRQ line for the CAN controller
Example:
can0: can@f000c000 {
compatible = "atmel,at91sam9x5-can";
reg = <0xf000c000 0x300>;
interrupts = <40 4 5>;
};


@ -15,16 +15,22 @@ Required properties:
- mac_control : Specifies Default MAC control register content
for the specific platform
- slaves : Specifies number for slaves
- cpts_active_slave : Specifies the slave to use for time stamping
- active_slave : Specifies the slave to use for time stamping,
ethtool and SIOCGMIIPHY
- cpts_clock_mult : Numerator to convert input clock ticks into nanoseconds
- cpts_clock_shift : Denominator to convert input clock ticks into nanoseconds
- phy_id : Specifies slave phy id
- mac-address : Specifies slave MAC address
Optional properties:
- ti,hwmods : Must be "cpgmac0"
- no_bd_ram : Must be 0 or 1
- dual_emac : Specifies Switch to act as Dual EMAC
Slave Properties:
Required properties:
- phy_id : Specifies slave phy id
- mac-address : Specifies slave MAC address
Optional properties:
- dual_emac_res_vlan : Specifies VID to be used to segregate the ports
Note: "ti,hwmods" field is used to fetch the base address and irq
@ -47,7 +53,7 @@ Examples:
rx_descs = <64>;
mac_control = <0x20>;
slaves = <2>;
cpts_active_slave = <0>;
active_slave = <0>;
cpts_clock_mult = <0x80000000>;
cpts_clock_shift = <29>;
cpsw_emac0: slave@0 {
@ -73,7 +79,7 @@ Examples:
rx_descs = <64>;
mac_control = <0x20>;
slaves = <2>;
cpts_active_slave = <0>;
active_slave = <0>;
cpts_clock_mult = <0x80000000>;
cpts_clock_shift = <29>;
cpsw_emac0: slave@0 {


@ -0,0 +1,91 @@
Marvell Distributed Switch Architecture Device Tree Bindings
------------------------------------------------------------
Required properties:
- compatible : Should be "marvell,dsa"
- #address-cells : Must be 2, first cell is the address on the MDIO bus
and second cell is the address in the switch tree.
Second cell is used only when cascading/chaining.
- #size-cells : Must be 0
- dsa,ethernet : Should be a phandle to a valid Ethernet device node
- dsa,mii-bus : Should be a phandle to a valid MDIO bus device node
Optional properties:
- interrupts : property with a value describing the switch
interrupt number (not supported by the driver)
A DSA node can contain multiple switch chips which are therefore child nodes of
the parent DSA node. The maximum number of allowed child nodes is 4
(DSA_MAX_SWITCHES).
Each of these switch child nodes should have the following required properties:
- reg : Describes the switch address on the MII bus
- #address-cells : Must be 1
- #size-cells : Must be 0
A switch may have multiple "port" child nodes
Each port child node must have the following mandatory properties:
- reg : Describes the port address in the switch
- label : Describes the label associated with this port, special
labels are "cpu" to indicate a CPU port and "dsa" to
indicate an uplink/downlink port.
Note that a port labelled "dsa" will imply checking for the uplink phandle
described below.
Optional property:
- link : Should be a phandle to another switch's DSA port.
This property is only used when switches are being
chained/cascaded together.
Example:
dsa@0 {
compatible = "marvell,dsa";
#address-cells = <2>;
#size-cells = <0>;
interrupts = <10>;
dsa,ethernet = <&ethernet0>;
dsa,mii-bus = <&mii_bus0>;
switch@0 {
#address-cells = <1>;
#size-cells = <0>;
reg = <16 0>; /* MDIO address 16, switch 0 in tree */
port@0 {
reg = <0>;
label = "lan1";
};
port@1 {
reg = <1>;
label = "lan2";
};
port@5 {
reg = <5>;
label = "cpu";
};
switch0uplink: port@6 {
reg = <6>;
label = "dsa";
link = <&switch1uplink>;
};
};
switch@1 {
#address-cells = <1>;
#size-cells = <0>;
reg = <17 1>; /* MDIO address 17, switch 1 in tree */
switch1uplink: port@0 {
reg = <0>;
label = "dsa";
link = <&switch0uplink>;
};
};
};


@ -9,6 +9,10 @@ Required properties:
- compatible: "marvell,orion-mdio"
- reg: address and length of the SMI register
Optional properties:
- interrupts: interrupt line number for the SMI error/done interrupt
- clocks: Phandle to the clock control device and gate bit
The child nodes of the MDIO driver are the individual PHY devices
connected to this MDIO bus. They must have a "reg" property giving the
PHY address on the MDIO bus.


@ -71,8 +71,9 @@ submits skb to qdisc), so if you need something from that cb later, you should
store info in the skb->data on your own.
To hook the MLME interface you have to populate the ml_priv field of your
net_device with a pointer to struct ieee802154_mlme_ops instance. All fields are
required.
net_device with a pointer to struct ieee802154_mlme_ops instance. The fields
assoc_req, assoc_resp, disassoc_req, start_req, and scan_req are optional.
All other fields are required.
We provide an example of simple HardMAC driver at drivers/ieee802154/fakehard.c


@ -29,7 +29,7 @@ route/max_size - INTEGER
neigh/default/gc_thresh1 - INTEGER
Minimum number of entries to keep. Garbage collector will not
purge entries if there are fewer than this number.
Default: 256
Default: 128
neigh/default/gc_thresh3 - INTEGER
Maximum number of neighbor entries allowed. Increase this
@ -175,14 +175,6 @@ tcp_congestion_control - STRING
is inherited.
[see setsockopt(listenfd, SOL_TCP, TCP_CONGESTION, "name" ...) ]
tcp_cookie_size - INTEGER
Default size of TCP Cookie Transactions (TCPCT) option, that may be
overridden on a per socket basis by the TCPCT socket option.
Values greater than the maximum (16) are interpreted as the maximum.
Values greater than zero and less than the minimum (8) are interpreted
as the minimum. Odd values are interpreted as the next even value.
Default: 0 (off).
tcp_dsack - BOOLEAN
Allows TCP to send "duplicate" SACKs.
@ -190,7 +182,9 @@ tcp_early_retrans - INTEGER
Enable Early Retransmit (ER), per RFC 5827. ER lowers the threshold
for triggering fast retransmit when the amount of outstanding data is
small and when no previously unsent data can be transmitted (such
that limited transmit could be used).
that limited transmit could be used). Also controls the use of
Tail loss probe (TLP) that converts RTOs occurring due to tail
losses into fast recovery (draft-dukkipati-tcpm-tcp-loss-probe-01).
Possible values:
0 disables ER
1 enables ER
@ -198,7 +192,9 @@ tcp_early_retrans - INTEGER
by a fourth of RTT. This mitigates a connection falsely
recovering when the network has a small degree of reordering
(less than 3 packets).
Default: 2
3 enables delayed ER and TLP.
4 enables TLP only.
Default: 3
tcp_ecn - INTEGER
Control use of Explicit Congestion Notification (ECN) by TCP.
@ -229,36 +225,13 @@ tcp_fin_timeout - INTEGER
Default: 60 seconds
tcp_frto - INTEGER
Enables Forward RTO-Recovery (F-RTO) defined in RFC4138.
Enables Forward RTO-Recovery (F-RTO) defined in RFC5682.
F-RTO is an enhanced recovery algorithm for TCP retransmission
timeouts. It is particularly beneficial in wireless environments
where packet loss is typically due to random radio interference
rather than intermediate router congestion. F-RTO is sender-side
only modification. Therefore it does not require any support from
the peer.
timeouts. It is particularly beneficial in networks where the
RTT fluctuates (e.g., wireless). F-RTO is a sender-side-only
modification; it does not require any support from the peer.
If set to 1, basic version is enabled. 2 enables SACK enhanced
F-RTO if flow uses SACK. The basic version can be used also when
SACK is in use though scenario(s) with it exists where F-RTO
interacts badly with the packet counting of the SACK enabled TCP
flow.
tcp_frto_response - INTEGER
When F-RTO has detected that a TCP retransmission timeout was
spurious (i.e, the timeout would have been avoided had TCP set a
longer retransmission timeout), TCP has several options what to do
next. Possible values are:
0 Rate halving based; a smooth and conservative response,
results in halved cwnd and ssthresh after one RTT
1 Very conservative response; not recommended because even
though being valid, it interacts poorly with the rest of
Linux TCP, halves cwnd and ssthresh immediately
2 Aggressive response; undoes congestion control measures
that are now known to be unnecessary (ignoring the
possibility of a lost retransmission that would require
TCP to be more cautious), cwnd and ssthresh are restored
to the values prior timeout
Default: 0 (rate halving based)
By default it's enabled with a non-zero value. 0 disables F-RTO.
tcp_keepalive_time - INTEGER
How often TCP sends out keepalive messages when keepalive is enabled.


@ -0,0 +1,339 @@
This file documents how to use memory mapped I/O with netlink.
Author: Patrick McHardy <kaber@trash.net>
Overview
--------
Memory mapped netlink I/O can be used to increase throughput and decrease
overhead of unicast receive and transmit operations. Some netlink subsystems
require high throughput; these are mainly the netfilter subsystems
nfnetlink_queue and nfnetlink_log, but it can also help speed up large
dump operations of e.g. the routing database.
Memory mapped netlink I/O uses two circular ring buffers for RX and TX which
are mapped into the process's address space.
The RX ring is used by the kernel to directly construct netlink messages into
user-space memory without copying them as is done with regular socket I/O.
Additionally, as long as the ring contains messages, no recvmsg() or poll()
syscalls have to be issued by user-space to get more messages.
The TX ring is used to process messages directly from user-space memory; the
kernel processes all messages contained in the ring using a single sendmsg()
call.
Usage overview
--------------
In order to use memory mapped netlink I/O, user-space needs three main changes:
- ring setup
- conversion of the RX path to get messages from the ring instead of recvmsg()
- conversion of the TX path to construct messages into the ring
Ring setup is done using setsockopt() to provide the ring parameters to the
kernel, then a call to mmap() to map the ring into the process's address space:
- setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &params, sizeof(params));
- setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &params, sizeof(params));
- ring = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)
Usage of either ring is optional, but even if only the RX ring is used the
mapping still needs to be writable in order to update the frame status after
processing.
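Putting these steps together, a minimal user-space sketch of the setup
sequence (hedged: NETLINK_NETFILTER is just an example protocol, the parameter
values are illustrative and assume 4 KiB pages; see the constraints in the
ring section below):
	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <sys/mman.h>
	#include <linux/netlink.h>

	int main(void)
	{
		struct nl_mmap_req req = {
			.nm_block_size	= 16 * 4096,	/* multiple of the page size */
			.nm_block_nr	= 64,
			.nm_frame_size	= 2048,		/* divides nm_block_size */
			.nm_frame_nr	= 64 * (16 * 4096 / 2048),
		};
		unsigned int ring_size = req.nm_block_nr * req.nm_block_size;
		void *rx_ring;
		int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_NETFILTER);

		if (fd < 0)
			exit(1);
		/* Set up both rings; using only one of them is also valid */
		if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof(req)) < 0 ||
		    setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof(req)) < 0)
			exit(1);
		/* One writable mapping covers both rings: RX first, then TX */
		rx_ring = mmap(NULL, 2 * ring_size, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (rx_ring == MAP_FAILED)
			exit(1);
		/* ... RX/TX processing as described below ... */
		return 0;
	}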
Conversion of the reception path involves calling poll() on the file
descriptor; once the socket is readable, the frames from the ring are
processed in order until no more messages are available, as indicated by
a status word in the frame header.
On the kernel side, in order to make use of memory mapped I/O on receive, the
originating netlink subsystem needs to support memory mapped I/O, otherwise
it will use an allocated socket buffer as usual and the contents will be
copied to the ring on transmission, nullifying most of the performance gains.
Dumps of kernel databases automatically support memory mapped I/O.
Conversion of the transmit path involves changing message construction to
use memory from the TX ring instead of (usually) a buffer declared on the
stack and setting up the frame header appropriately. Optionally poll() can
be used to wait for free frames in the TX ring.
Structures and definitions for using memory mapped I/O are contained in
<linux/netlink.h>.
RX and TX rings
----------------
Each ring contains a number of contiguous memory blocks, containing frames of
fixed size dependent on the parameters used for ring setup.
Ring: [ block 0 ]
[ frame 0 ]
[ frame 1 ]
[ block 1 ]
[ frame 2 ]
[ frame 3 ]
...
[ block n ]
[ frame 2 * n ]
[ frame 2 * n + 1 ]
The blocks are only visible to the kernel; from the point of view of user-space,
the ring just contains the frames in a contiguous memory zone.
The ring parameters used for setting up the ring are defined as follows:
struct nl_mmap_req {
unsigned int nm_block_size;
unsigned int nm_block_nr;
unsigned int nm_frame_size;
unsigned int nm_frame_nr;
};
Frames are grouped into blocks, where each block is a contiguous region of memory
and holds nm_block_size / nm_frame_size frames. The total number of frames in
the ring is nm_frame_nr. The following invariants hold:
- frames_per_block = nm_block_size / nm_frame_size
- nm_frame_nr = frames_per_block * nm_block_nr
Some parameters are constrained, specifically:
- nm_block_size must be a multiple of the architecture's memory page size.
The getpagesize() function can be used to get the page size.
- nm_frame_size must be equal to or larger than NL_MMAP_HDRLEN, IOW a frame must
be able to hold at least the frame header
- nm_frame_size must be smaller than or equal to nm_block_size
- nm_frame_size must be a multiple of NL_MMAP_MSG_ALIGNMENT
- nm_frame_nr must equal the actual number of frames as specified above.
When the kernel can't allocate physically contiguous memory for a ring block,
it will fall back to using physically discontiguous memory. This might affect
performance negatively; in order to avoid this, the nm_frame_size parameter
should be chosen to be as small as possible for the required frame size and
the number of blocks should be increased instead.
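A small helper makes these invariants explicit; this is an illustrative
sketch (it assumes the caller passes a block size that is a multiple of the
page size and a frame size that already satisfies the alignment and
minimum-size rules):
	#include <linux/netlink.h>

	static void fill_ring_req(struct nl_mmap_req *req,
				  unsigned int block_size,	/* multiple of getpagesize() */
				  unsigned int block_nr,
				  unsigned int frame_size)	/* >= NL_MMAP_HDRLEN, aligned */
	{
		unsigned int frames_per_block = block_size / frame_size;

		req->nm_block_size = block_size;
		req->nm_block_nr   = block_nr;
		req->nm_frame_size = frame_size;
		/* nm_frame_nr must equal frames_per_block * nm_block_nr */
		req->nm_frame_nr   = frames_per_block * block_nr;
	}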
Ring frames
------------
Each frame contains a frame header, consisting of a synchronization word and some
meta-data, and the message itself.
Frame: [ header message ]
The frame header is defined as follows:
struct nl_mmap_hdr {
unsigned int nm_status;
unsigned int nm_len;
__u32 nm_group;
/* credentials */
__u32 nm_pid;
__u32 nm_uid;
__u32 nm_gid;
};
- nm_status is used for synchronizing processing between the kernel and user-
space and specifies ownership of the frame as well as the operation to perform
- nm_len contains the length of the message contained in the data area
- nm_group specifies the destination multicast group of the message
- nm_pid, nm_uid and nm_gid contain the netlink pid, UID and GID of the sending
process. These values correspond to the data available using SOCK_PASSCRED in
the SCM_CREDENTIALS cmsg.
The possible values in the status word are:
- NL_MMAP_STATUS_UNUSED:
RX ring: frame belongs to the kernel and contains no message
for user-space. Appropriate action is to invoke poll()
to wait for new messages.
TX ring: frame belongs to user-space and can be used for
message construction.
- NL_MMAP_STATUS_RESERVED:
RX ring only: frame is currently used by the kernel for message
construction and contains no valid message yet.
Appropriate action is to invoke poll() to wait for
new messages.
- NL_MMAP_STATUS_VALID:
RX ring: frame contains a valid message. Appropriate action is
to process the message and release the frame back to
the kernel by setting the status to
NL_MMAP_STATUS_UNUSED or queue the frame by setting the
status to NL_MMAP_STATUS_SKIP.
TX ring: the frame contains a valid message from user-space to
be processed by the kernel. After completing processing
the kernel will release the frame back to user-space by
setting the status to NL_MMAP_STATUS_UNUSED.
- NL_MMAP_STATUS_COPY:
RX ring only: a message is ready to be processed but could not be
stored in the ring, either because it exceeded the
frame size or because the originating subsystem does
not support memory mapped I/O. Appropriate action is
to invoke recvmsg() to receive the message and release
the frame back to the kernel by setting the status to
NL_MMAP_STATUS_UNUSED.
- NL_MMAP_STATUS_SKIP:
RX ring only: user-space queued the message for later processing, but
processed some messages following it in the ring. The
kernel should skip this frame when looking for unused
frames.
The data area of a frame begins at an offset of NL_MMAP_HDRLEN relative to the
frame header.
TX limitations
--------------
Kernel processing usually involves validation of the message received from
user-space, then processing its contents. The kernel must ensure that
userspace is not able to modify the message contents after they have been
validated. In order to do so, the message is copied from the ring frame
to an allocated buffer if either of these conditions is false:
- only a single mapping of the ring exists
- the file descriptor is not shared between processes
This means that for threaded programs, the kernel will fall back to copying.
Example
-------
Ring setup:
unsigned int block_size = 16 * getpagesize();
struct nl_mmap_req req = {
.nm_block_size = block_size,
.nm_block_nr = 64,
.nm_frame_size = 16384,
.nm_frame_nr = 64 * block_size / 16384,
};
unsigned int ring_size;
void *rx_ring, *tx_ring;
/* Configure ring parameters */
if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof(req)) < 0)
exit(1);
if (setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof(req)) < 0)
exit(1);
/* Calculate size of each individual ring */
ring_size = req.nm_block_nr * req.nm_block_size;
/* Map RX/TX rings. The TX ring is located after the RX ring */
rx_ring = mmap(NULL, 2 * ring_size, PROT_READ | PROT_WRITE,
MAP_SHARED, fd, 0);
if ((long)rx_ring == -1L)
exit(1);
tx_ring = rx_ring + ring_size;
Message reception:
This example assumes some ring parameters of the ring setup are available.
unsigned int frame_offset = 0;
struct nl_mmap_hdr *hdr;
struct nlmsghdr *nlh;
unsigned char buf[16384];
ssize_t len;
while (1) {
struct pollfd pfds[1];
pfds[0].fd = fd;
pfds[0].events = POLLIN | POLLERR;
pfds[0].revents = 0;
if (poll(pfds, 1, -1) < 0 && errno != EINTR)
exit(1);
/* Check for errors. Error handling omitted */
if (pfds[0].revents & POLLERR)
<handle error>
/* If no new messages, poll again */
if (!(pfds[0].revents & POLLIN))
continue;
/* Process all frames */
while (1) {
/* Get next frame header */
hdr = rx_ring + frame_offset;
if (hdr->nm_status == NL_MMAP_STATUS_VALID) {
/* Regular memory mapped frame */
nlh = (void *)hdr + NL_MMAP_HDRLEN;
len = hdr->nm_len;
/* Release empty message immediately. May happen
* on error during message construction.
*/
if (len == 0)
goto release;
} else if (hdr->nm_status == NL_MMAP_STATUS_COPY) {
/* Frame queued to socket receive queue */
len = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
if (len <= 0)
break;
nlh = buf;
} else
/* No more messages to process, continue polling */
break;
process_msg(nlh);
release:
/* Release frame back to the kernel */
hdr->nm_status = NL_MMAP_STATUS_UNUSED;
/* Advance frame offset to next frame */
frame_offset = (frame_offset + frame_size) % ring_size;
}
}
Message transmission:
This example assumes some ring parameters of the ring setup are available.
A single message is constructed and transmitted; to send multiple messages
at once, they would be constructed in consecutive frames before a final call
to sendto().
unsigned int frame_offset = 0;
struct nl_mmap_hdr *hdr;
struct nlmsghdr *nlh;
struct sockaddr_nl addr = {
.nl_family = AF_NETLINK,
};
hdr = tx_ring + frame_offset;
if (hdr->nm_status != NL_MMAP_STATUS_UNUSED)
/* No frame available. Use poll() to avoid. */
exit(1);
nlh = (void *)hdr + NL_MMAP_HDRLEN;
/* Build message */
build_message(nlh);
/* Fill frame header: length and status need to be set */
hdr->nm_len = nlh->nlmsg_len;
hdr->nm_status = NL_MMAP_STATUS_VALID;
if (sendto(fd, NULL, 0, 0, &addr, sizeof(addr)) < 0)
exit(1);
/* Advance frame offset to next frame */
frame_offset = (frame_offset + frame_size) % ring_size;


@ -684,15 +684,343 @@ int main(int argc, char **argp)
return 0;
}
-------------------------------------------------------------------------------
+ AF_PACKET TPACKET_V3 example
-------------------------------------------------------------------------------
AF_PACKET's TPACKET_V3 ring buffer can be configured to use non-static frame
sizes by doing its own memory management. It is based on blocks where polling
works on a per-block basis instead of per ring as in TPACKET_V2 and its predecessor.
It is said that TPACKET_V3 brings the following benefits:
*) ~15 - 20% reduction in CPU-usage
*) ~20% increase in packet capture rate
*) ~2x increase in packet density
*) Port aggregation analysis
*) Non static frame size to capture entire packet payload
So it seems to be a good candidate to be used with packet fanout.
Minimal example code by Daniel Borkmann based on Chetan Loke's lolpcap (compile
it with gcc -Wall -O2 blob.c, and try things like "./a.out eth0", etc.):
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <assert.h>
#include <net/if.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <poll.h>
#include <unistd.h>
#include <signal.h>
#include <inttypes.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#define BLOCK_SIZE (1 << 22)
#define FRAME_SIZE 2048
#define NUM_BLOCKS 64
#define NUM_FRAMES ((BLOCK_SIZE * NUM_BLOCKS) / FRAME_SIZE)
#define BLOCK_RETIRE_TOV_IN_MS 64
#define BLOCK_PRIV_AREA_SZ 13
#define ALIGN_8(x) (((x) + 8 - 1) & ~(8 - 1))
#define BLOCK_STATUS(x) ((x)->h1.block_status)
#define BLOCK_NUM_PKTS(x) ((x)->h1.num_pkts)
#define BLOCK_O2FP(x) ((x)->h1.offset_to_first_pkt)
#define BLOCK_LEN(x) ((x)->h1.blk_len)
#define BLOCK_SNUM(x) ((x)->h1.seq_num)
#define BLOCK_O2PRIV(x) ((x)->offset_to_priv)
#define BLOCK_PRIV(x) ((void *) ((uint8_t *) (x) + BLOCK_O2PRIV(x)))
#define BLOCK_HDR_LEN (ALIGN_8(sizeof(struct block_desc)))
#define BLOCK_PLUS_PRIV(sz_pri) (BLOCK_HDR_LEN + ALIGN_8((sz_pri)))
#ifndef likely
# define likely(x) __builtin_expect(!!(x), 1)
#endif
#ifndef unlikely
# define unlikely(x) __builtin_expect(!!(x), 0)
#endif
struct block_desc {
uint32_t version;
uint32_t offset_to_priv;
struct tpacket_hdr_v1 h1;
};
struct ring {
struct iovec *rd;
uint8_t *map;
struct tpacket_req3 req;
};
static unsigned long packets_total = 0, bytes_total = 0;
static sig_atomic_t sigint = 0;
void sighandler(int num)
{
sigint = 1;
}
static int setup_socket(struct ring *ring, char *netdev)
{
int err, i, fd, v = TPACKET_V3;
struct sockaddr_ll ll;
fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
if (fd < 0) {
perror("socket");
exit(1);
}
err = setsockopt(fd, SOL_PACKET, PACKET_VERSION, &v, sizeof(v));
if (err < 0) {
perror("setsockopt");
exit(1);
}
memset(&ring->req, 0, sizeof(ring->req));
ring->req.tp_block_size = BLOCK_SIZE;
ring->req.tp_frame_size = FRAME_SIZE;
ring->req.tp_block_nr = NUM_BLOCKS;
ring->req.tp_frame_nr = NUM_FRAMES;
ring->req.tp_retire_blk_tov = BLOCK_RETIRE_TOV_IN_MS;
ring->req.tp_sizeof_priv = BLOCK_PRIV_AREA_SZ;
ring->req.tp_feature_req_word |= TP_FT_REQ_FILL_RXHASH;
err = setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &ring->req,
sizeof(ring->req));
if (err < 0) {
perror("setsockopt");
exit(1);
}
ring->map = mmap(NULL, ring->req.tp_block_size * ring->req.tp_block_nr,
PROT_READ | PROT_WRITE, MAP_SHARED | MAP_LOCKED,
fd, 0);
if (ring->map == MAP_FAILED) {
perror("mmap");
exit(1);
}
ring->rd = malloc(ring->req.tp_block_nr * sizeof(*ring->rd));
assert(ring->rd);
for (i = 0; i < ring->req.tp_block_nr; ++i) {
ring->rd[i].iov_base = ring->map + (i * ring->req.tp_block_size);
ring->rd[i].iov_len = ring->req.tp_block_size;
}
memset(&ll, 0, sizeof(ll));
ll.sll_family = PF_PACKET;
ll.sll_protocol = htons(ETH_P_ALL);
ll.sll_ifindex = if_nametoindex(netdev);
ll.sll_hatype = 0;
ll.sll_pkttype = 0;
ll.sll_halen = 0;
err = bind(fd, (struct sockaddr *) &ll, sizeof(ll));
if (err < 0) {
perror("bind");
exit(1);
}
return fd;
}
#ifdef __checked
static uint64_t prev_block_seq_num = 0;
void assert_block_seq_num(struct block_desc *pbd)
{
if (unlikely(prev_block_seq_num + 1 != BLOCK_SNUM(pbd))) {
printf("prev_block_seq_num:%"PRIu64", expected seq:%"PRIu64" != "
"actual seq:%"PRIu64"\n", prev_block_seq_num,
prev_block_seq_num + 1, (uint64_t) BLOCK_SNUM(pbd));
exit(1);
}
prev_block_seq_num = BLOCK_SNUM(pbd);
}
static void assert_block_len(struct block_desc *pbd, uint32_t bytes, int block_num)
{
if (BLOCK_NUM_PKTS(pbd)) {
if (unlikely(bytes != BLOCK_LEN(pbd))) {
printf("block:%u with %upackets, expected len:%u != actual len:%u\n",
block_num, BLOCK_NUM_PKTS(pbd), bytes, BLOCK_LEN(pbd));
exit(1);
}
} else {
if (unlikely(BLOCK_LEN(pbd) != BLOCK_PLUS_PRIV(BLOCK_PRIV_AREA_SZ))) {
printf("block:%u, expected len:%lu != actual len:%u\n",
block_num, BLOCK_HDR_LEN, BLOCK_LEN(pbd));
exit(1);
}
}
}
static void assert_block_header(struct block_desc *pbd, const int block_num)
{
uint32_t block_status = BLOCK_STATUS(pbd);
if (unlikely((block_status & TP_STATUS_USER) == 0)) {
printf("block:%u, not in TP_STATUS_USER\n", block_num);
exit(1);
}
assert_block_seq_num(pbd);
}
#else
static inline void assert_block_header(struct block_desc *pbd, const int block_num)
{
}
static void assert_block_len(struct block_desc *pbd, uint32_t bytes, int block_num)
{
}
#endif
static void display(struct tpacket3_hdr *ppd)
{
struct ethhdr *eth = (struct ethhdr *) ((uint8_t *) ppd + ppd->tp_mac);
struct iphdr *ip = (struct iphdr *) ((uint8_t *) eth + ETH_HLEN);
if (eth->h_proto == htons(ETH_P_IP)) {
struct sockaddr_in ss, sd;
char sbuff[NI_MAXHOST], dbuff[NI_MAXHOST];
memset(&ss, 0, sizeof(ss));
ss.sin_family = PF_INET;
ss.sin_addr.s_addr = ip->saddr;
getnameinfo((struct sockaddr *) &ss, sizeof(ss),
sbuff, sizeof(sbuff), NULL, 0, NI_NUMERICHOST);
memset(&sd, 0, sizeof(sd));
sd.sin_family = PF_INET;
sd.sin_addr.s_addr = ip->daddr;
getnameinfo((struct sockaddr *) &sd, sizeof(sd),
dbuff, sizeof(dbuff), NULL, 0, NI_NUMERICHOST);
printf("%s -> %s, ", sbuff, dbuff);
}
printf("rxhash: 0x%x\n", ppd->hv1.tp_rxhash);
}
static void walk_block(struct block_desc *pbd, const int block_num)
{
int num_pkts = BLOCK_NUM_PKTS(pbd), i;
unsigned long bytes = 0;
unsigned long bytes_with_padding = BLOCK_PLUS_PRIV(BLOCK_PRIV_AREA_SZ);
struct tpacket3_hdr *ppd;
assert_block_header(pbd, block_num);
ppd = (struct tpacket3_hdr *) ((uint8_t *) pbd + BLOCK_O2FP(pbd));
for (i = 0; i < num_pkts; ++i) {
bytes += ppd->tp_snaplen;
if (ppd->tp_next_offset)
bytes_with_padding += ppd->tp_next_offset;
else
bytes_with_padding += ALIGN_8(ppd->tp_snaplen + ppd->tp_mac);
display(ppd);
ppd = (struct tpacket3_hdr *) ((uint8_t *) ppd + ppd->tp_next_offset);
__sync_synchronize();
}
assert_block_len(pbd, bytes_with_padding, block_num);
packets_total += num_pkts;
bytes_total += bytes;
}
void flush_block(struct block_desc *pbd)
{
BLOCK_STATUS(pbd) = TP_STATUS_KERNEL;
__sync_synchronize();
}
static void teardown_socket(struct ring *ring, int fd)
{
munmap(ring->map, ring->req.tp_block_size * ring->req.tp_block_nr);
free(ring->rd);
close(fd);
}
int main(int argc, char **argp)
{
int fd, err;
socklen_t len;
struct ring ring;
struct pollfd pfd;
unsigned int block_num = 0;
struct block_desc *pbd;
struct tpacket_stats_v3 stats;
if (argc != 2) {
fprintf(stderr, "Usage: %s INTERFACE\n", argp[0]);
return EXIT_FAILURE;
}
signal(SIGINT, sighandler);
memset(&ring, 0, sizeof(ring));
fd = setup_socket(&ring, argp[argc - 1]);
assert(fd > 0);
memset(&pfd, 0, sizeof(pfd));
pfd.fd = fd;
pfd.events = POLLIN | POLLERR;
pfd.revents = 0;
while (likely(!sigint)) {
pbd = (struct block_desc *) ring.rd[block_num].iov_base;
retry_block:
if ((BLOCK_STATUS(pbd) & TP_STATUS_USER) == 0) {
poll(&pfd, 1, -1);
goto retry_block;
}
walk_block(pbd, block_num);
flush_block(pbd);
block_num = (block_num + 1) % NUM_BLOCKS;
}
len = sizeof(stats);
err = getsockopt(fd, SOL_PACKET, PACKET_STATISTICS, &stats, &len);
if (err < 0) {
perror("getsockopt");
exit(1);
}
fflush(stdout);
printf("\nReceived %u packets, %lu bytes, %u dropped, freeze_q_cnt: %u\n",
stats.tp_packets, bytes_total, stats.tp_drops,
stats.tp_freeze_q_cnt);
teardown_socket(&ring, fd);
return 0;
}
-------------------------------------------------------------------------------
+ PACKET_TIMESTAMP
-------------------------------------------------------------------------------
The PACKET_TIMESTAMP setting determines the source of the timestamp in
the packet meta information. If your NIC is capable of timestamping
packets in hardware, you can request those hardware timestamps to used.
Note: you may need to enable the generation of hardware timestamps with
SIOCSHWTSTAMP.
the packet meta information for mmap(2)ed RX_RING and TX_RINGs. If your
NIC is capable of timestamping packets in hardware, you can request those
hardware timestamps to be used. Note: you may need to enable the generation
of hardware timestamps with SIOCSHWTSTAMP (see related information from
Documentation/networking/timestamping.txt).
PACKET_TIMESTAMP accepts the same integer bit field as
SO_TIMESTAMPING. However, only the SOF_TIMESTAMPING_SYS_HARDWARE
@ -704,8 +1032,36 @@ SOF_TIMESTAMPING_RAW_HARDWARE if both bits are set.
req |= SOF_TIMESTAMPING_SYS_HARDWARE;
setsockopt(fd, SOL_PACKET, PACKET_TIMESTAMP, (void *) &req, sizeof(req))
If PACKET_TIMESTAMP is not set, a software timestamp generated inside
the networking stack is used (the behavior before this setting was added).
For the mmap(2)ed ring buffers, such timestamps are stored in the
tpacket{,2,3}_hdr structure's tp_sec and tp_{n,u}sec members. To determine
what kind of timestamp has been reported, the tp_status field is binary |'ed
with the following possible bits ...
TP_STATUS_TS_SYS_HARDWARE
TP_STATUS_TS_RAW_HARDWARE
TP_STATUS_TS_SOFTWARE
... that are equivalent to its SOF_TIMESTAMPING_* counterparts. For the
RX_RING, if none of those 3 are set (i.e. PACKET_TIMESTAMP is not set),
then this means that a software fallback was invoked *within* PF_PACKET's
processing code (less precise).
Getting timestamps for the TX_RING works as follows: i) fill the ring frames,
ii) call sendto(), e.g. in blocking mode, iii) wait for the status of the
relevant frames to be updated, i.e. for the frames to be handed back to the
application, iv) walk through the frames to pick up the individual hw/sw
timestamps.
Only (!) if transmit timestamping is enabled are these bits combined with
binary | with TP_STATUS_AVAILABLE, so you must check for that in your
application (e.g. !(tp_status & (TP_STATUS_SEND_REQUEST | TP_STATUS_SENDING))
in a first step to see if the frame belongs to the application, and then
one can extract the type of timestamp in a second step from tp_status)!
If you don't care about them and have timestamping disabled, checking for
TP_STATUS_AVAILABLE or TP_STATUS_WRONG_FORMAT is sufficient. If in the
TX_RING part only TP_STATUS_AVAILABLE is set, then the tp_sec and tp_{n,u}sec
members do not contain a valid value. For TX_RINGs, by default no timestamp
is generated!
See include/linux/net_tstamp.h and Documentation/networking/timestamping
for more information on hardware timestamps.
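As a concrete illustration of the procedure above, a hedged user-space sketch
that classifies the timestamp source of a TX_RING frame (tpacket2_hdr is
shown; the TP_STATUS_TS_* test works the same way for RX_RING frames):
	#include <stddef.h>
	#include <linux/if_packet.h>

	static const char *tx_frame_ts_source(const struct tpacket2_hdr *hdr)
	{
		/* Step one: has the kernel handed the frame back to us? */
		if (hdr->tp_status & (TP_STATUS_SEND_REQUEST | TP_STATUS_SENDING))
			return NULL;		/* still owned by the kernel */
		/* Step two: which SOF_TIMESTAMPING_* counterpart was reported? */
		if (hdr->tp_status & TP_STATUS_TS_RAW_HARDWARE)
			return "raw hardware";	/* tp_sec/tp_nsec straight from the NIC */
		if (hdr->tp_status & TP_STATUS_TS_SYS_HARDWARE)
			return "sys hardware";	/* hw stamp converted to system time */
		if (hdr->tp_status & TP_STATUS_TS_SOFTWARE)
			return "software";
		/* only TP_STATUS_AVAILABLE set: timestamping is off and
		 * tp_sec/tp_{n,u}sec do not contain a valid value */
		return "none";
	}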


@ -1,6 +1,6 @@
STMicroelectronics 10/100/1000 Synopsys Ethernet driver
Copyright (C) 2007-2010 STMicroelectronics Ltd
Copyright (C) 2007-2013 STMicroelectronics Ltd
Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
This is the driver for the MAC 10/100/1000 on-chip Ethernet controllers
@ -10,7 +10,7 @@ Currently this network device driver is for all STM embedded MAC/GMAC
(i.e. 7xxx/5xxx SoCs), SPEAr (arm), Loongson1B (mips) and XLINX XC2V3000
FF1152AMT0221 D1215994A VIRTEX FPGA board.
DWC Ether MAC 10/100/1000 Universal version 3.60a (and older) and DWC Ether
DWC Ether MAC 10/100/1000 Universal version 3.70a (and older) and DWC Ether
MAC 10/100 Universal version 4.0 have been used for developing this driver.
This driver supports both the platform bus and PCI.
@ -32,6 +32,8 @@ The kernel configuration option is STMMAC_ETH:
watchdog: transmit timeout (in milliseconds);
flow_ctrl: Flow control ability [on/off];
pause: Flow Control Pause Time;
eee_timer: tx EEE timer;
chain_mode: select chain mode instead of ring.
3) Command line options
Driver parameters can be also passed in command line by using:
@ -164,12 +166,12 @@ Where:
o bus_setup: perform HW setup of the bus. For example, on some ST platforms
this field is used to configure the AMBA bridge to generate more
efficient STBus traffic.
o init/exit: callbacks used for calling a custom initialisation;
o init/exit: callbacks used for calling a custom initialization;
this is sometime necessary on some platforms (e.g. ST boxes)
where the HW needs to have set some PIO lines or system cfg
registers.
o custom_cfg/custom_data: this is a custom configuration that can be passed
while initialising the resources.
while initializing the resources.
o bsp_priv: another private pointer.
For the MDIO bus we have:
@ -273,6 +275,8 @@ reset procedure etc).
o norm_desc.c: functions for handling normal descriptors;
o chain_mode.c/ring_mode.c:: functions to manage RING/CHAINED modes;
o mmc_core.c/mmc.h: Management MAC Counters;
o stmmac_hwtstamp.c: HW timestamp support for PTP
o stmmac_ptp.c: PTP 1588 clock
5) Debug Information
@ -326,6 +330,35 @@ To enter in Tx LPI mode the driver needs to have a software timer
that enables and disables the LPI mode when there is nothing to be
transmitted.
7) TODO:
7) Extended descriptors
The extended descriptors give us information about the receive Ethernet payload
when it is carrying PTP packets or TCP/UDP/ICMP over IP.
These are not available on GMAC Synopsys chips older than the 3.50.
At probe time the driver will decide if these can be actually used.
This support is also mandatory for PTPv2 because the extra descriptors 6 and 7
are used for saving the hardware timestamps.
8) Precision Time Protocol (PTP)
The driver supports the IEEE 1588-2002, Precision Time Protocol (PTP),
which enables precise synchronization of clocks in measurement and
control systems implemented with technologies such as network
communication.
In addition to the basic timestamp features mentioned in IEEE 1588-2002,
new GMAC cores support the advanced timestamp features of IEEE 1588-2008,
which can be enabled when configuring the kernel.
9) SGMII/RGMII support
New GMAC devices provide their own way to manage RGMII/SGMII.
This information is available at run-time by looking at the
HW capability register. This means that the stmmac can manage
auto-negotiation and link status w/o using the PHYLIB stuff.
In fact, the HW provides a subset of extended registers to
restart the ANE, verify Full/Half duplex mode and Speed.
Thanks to these registers it is also possible to look at the
Auto-negotiated Link Partner Ability.
10) TODO:
o XGMAC is not supported.
o Add the PTP - precision time protocol
o Complete the TBI & RTBI support.
o extended VLAN support for 3.70a SYNP GMAC.


@ -1767,7 +1767,7 @@ F: arch/arm/configs/bcm2835_defconfig
F: drivers/*/*bcm2835*
BROADCOM TG3 GIGABIT ETHERNET DRIVER
M: Matt Carlson <mcarlson@broadcom.com>
M: Nithin Nayak Sujir <nsujir@broadcom.com>
M: Michael Chan <mchan@broadcom.com>
L: netdev@vger.kernel.org
S: Supported
@ -1889,7 +1889,7 @@ F: Documentation/video4linux/cafe_ccic
F: drivers/media/platform/marvell-ccic/
CAIF NETWORK LAYER
M: Sjur Braendeland <sjur.brandeland@stericsson.com>
M: Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/caif/
@ -6396,6 +6396,7 @@ F: drivers/acpi/apei/erst.c
PTP HARDWARE CLOCK SUPPORT
M: Richard Cochran <richardcochran@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
W: http://linuxptp.sourceforge.net/
F: Documentation/ABI/testing/sysfs-ptp
@ -6527,6 +6528,7 @@ S: Supported
F: drivers/net/ethernet/qlogic/qlcnic/
QLOGIC QLGE 10Gb ETHERNET DRIVER
M: Shahed Shaikh <shahed.shaikh@qlogic.com>
M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
M: Ron Mercer <ron.mercer@qlogic.com>
M: linux-driver@qlogic.com
@ -8627,7 +8629,7 @@ F: drivers/usb/gadget/*uvc*.c
F: drivers/usb/gadget/webcam.c
USB WIRELESS RNDIS DRIVER (rndis_wlan)
M: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
M: Jussi Kivilinna <jussi.kivilinna@iki.fi>
L: linux-wireless@vger.kernel.org
S: Maintained
F: drivers/net/wireless/rndis_wlan.c


@ -79,4 +79,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _UAPI_ASM_SOCKET_H */


@ -349,7 +349,7 @@
rx_descs = <64>;
mac_control = <0x20>;
slaves = <2>;
cpts_active_slave = <0>;
active_slave = <0>;
cpts_clock_mult = <0x80000000>;
cpts_clock_shift = <29>;
reg = <0x4a100000 0x800


@ -918,9 +918,8 @@ void bpf_jit_compile(struct sk_filter *fp)
#endif
if (bpf_jit_enable > 1)
print_hex_dump(KERN_INFO, "BPF JIT code: ",
DUMP_PREFIX_ADDRESS, 16, 4, ctx.target,
alloc_size, false);
/* there are 2 passes here */
bpf_jit_dump(fp->len, alloc_size, 2, ctx.target);
fp->bpf_func = (void *)ctx.target;
out:
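The hunk above (and the powerpc, sparc and x86 hunks further down) replaces
open-coded pr_err()/print_hex_dump() output with a common bpf_jit_dump()
helper. As a sketch, reconstructed from the per-arch code being replaced (the
in-tree definition lives in include/linux/filter.h and may differ in detail):
	static inline void bpf_jit_dump(unsigned int flen, unsigned int proglen,
					u32 pass, void *image)
	{
		pr_err("flen=%u proglen=%u pass=%u image=%p\n",
		       flen, proglen, pass, image);
		if (image)
			print_hex_dump(KERN_ERR, "JIT code: ", DUMP_PREFIX_ADDRESS,
				       16, 1, image, proglen, false);
	}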


@ -238,6 +238,7 @@ static __init void ge_complete(
struct mv643xx_eth_shared_platform_data *orion_ge_shared_data,
struct resource *orion_ge_resource, unsigned long irq,
struct platform_device *orion_ge_shared,
struct platform_device *orion_ge_mvmdio,
struct mv643xx_eth_platform_data *eth_data,
struct platform_device *orion_ge)
{
@ -247,6 +248,8 @@ static __init void ge_complete(
orion_ge->dev.platform_data = eth_data;
platform_device_register(orion_ge_shared);
if (orion_ge_mvmdio)
platform_device_register(orion_ge_mvmdio);
platform_device_register(orion_ge);
}
@ -258,8 +261,6 @@ struct mv643xx_eth_shared_platform_data orion_ge00_shared_data;
static struct resource orion_ge00_shared_resources[] = {
{
.name = "ge00 base",
}, {
.name = "ge00 err irq",
},
};
@ -271,6 +272,19 @@ static struct platform_device orion_ge00_shared = {
},
};
static struct resource orion_ge_mvmdio_resources[] = {
{
.name = "ge00 mvmdio base",
}, {
.name = "ge00 mvmdio err irq",
},
};
static struct platform_device orion_ge_mvmdio = {
.name = "orion-mdio",
.id = -1,
};
static struct resource orion_ge00_resources[] = {
{
.name = "ge00 irq",
@ -295,26 +309,25 @@ void __init orion_ge00_init(struct mv643xx_eth_platform_data *eth_data,
unsigned int tx_csum_limit)
{
fill_resources(&orion_ge00_shared, orion_ge00_shared_resources,
mapbase + 0x2000, SZ_16K - 1, irq_err);
mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
fill_resources(&orion_ge_mvmdio, orion_ge_mvmdio_resources,
mapbase + 0x2004, 0x84 - 1, irq_err);
orion_ge00_shared_data.tx_csum_limit = tx_csum_limit;
ge_complete(&orion_ge00_shared_data,
orion_ge00_resources, irq, &orion_ge00_shared,
&orion_ge_mvmdio,
eth_data, &orion_ge00);
}
/*****************************************************************************
* GE01
****************************************************************************/
struct mv643xx_eth_shared_platform_data orion_ge01_shared_data = {
.shared_smi = &orion_ge00_shared,
};
struct mv643xx_eth_shared_platform_data orion_ge01_shared_data;
static struct resource orion_ge01_shared_resources[] = {
{
.name = "ge01 base",
}, {
.name = "ge01 err irq",
},
}
};
static struct platform_device orion_ge01_shared = {
@ -349,26 +362,23 @@ void __init orion_ge01_init(struct mv643xx_eth_platform_data *eth_data,
unsigned int tx_csum_limit)
{
fill_resources(&orion_ge01_shared, orion_ge01_shared_resources,
mapbase + 0x2000, SZ_16K - 1, irq_err);
mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
orion_ge01_shared_data.tx_csum_limit = tx_csum_limit;
ge_complete(&orion_ge01_shared_data,
orion_ge01_resources, irq, &orion_ge01_shared,
NULL,
eth_data, &orion_ge01);
}
/*****************************************************************************
* GE10
****************************************************************************/
struct mv643xx_eth_shared_platform_data orion_ge10_shared_data = {
.shared_smi = &orion_ge00_shared,
};
struct mv643xx_eth_shared_platform_data orion_ge10_shared_data;
static struct resource orion_ge10_shared_resources[] = {
{
.name = "ge10 base",
}, {
.name = "ge10 err irq",
},
}
};
static struct platform_device orion_ge10_shared = {
@ -402,24 +412,21 @@ void __init orion_ge10_init(struct mv643xx_eth_platform_data *eth_data,
unsigned long irq_err)
{
fill_resources(&orion_ge10_shared, orion_ge10_shared_resources,
mapbase + 0x2000, SZ_16K - 1, irq_err);
mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
ge_complete(&orion_ge10_shared_data,
orion_ge10_resources, irq, &orion_ge10_shared,
NULL,
eth_data, &orion_ge10);
}
/*****************************************************************************
* GE11
****************************************************************************/
struct mv643xx_eth_shared_platform_data orion_ge11_shared_data = {
.shared_smi = &orion_ge00_shared,
};
struct mv643xx_eth_shared_platform_data orion_ge11_shared_data;
static struct resource orion_ge11_shared_resources[] = {
{
.name = "ge11 base",
}, {
.name = "ge11 err irq",
},
};
@ -454,9 +461,10 @@ void __init orion_ge11_init(struct mv643xx_eth_platform_data *eth_data,
unsigned long irq_err)
{
fill_resources(&orion_ge11_shared, orion_ge11_shared_resources,
mapbase + 0x2000, SZ_16K - 1, irq_err);
mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
ge_complete(&orion_ge11_shared_data,
orion_ge11_resources, irq, &orion_ge11_shared,
NULL,
eth_data, &orion_ge11);
}


@ -72,4 +72,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* __ASM_AVR32_SOCKET_H */


@ -74,6 +74,8 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_SOCKET_H */


@ -72,5 +72,7 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_SOCKET_H */


@ -72,4 +72,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_SOCKET_H */


@ -81,4 +81,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_IA64_SOCKET_H */


@ -72,4 +72,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_M32R_SOCKET_H */


@ -90,4 +90,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _UAPI_ASM_SOCKET_H */


@ -72,4 +72,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_SOCKET_H */


@ -71,6 +71,8 @@
#define SO_LOCK_FILTER 0x4025
#define SO_SELECT_ERR_QUEUE 0x4026
/* O_NONBLOCK clashes with the bits used for socket types. Therefore we
* have to define SOCK_NONBLOCK to a different value here.
*/


@ -79,4 +79,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_POWERPC_SOCKET_H */


@ -671,16 +671,12 @@ void bpf_jit_compile(struct sk_filter *fp)
}
if (bpf_jit_enable > 1)
pr_info("flen=%d proglen=%u pass=%d image=%p\n",
flen, proglen, pass, image);
/* Note that we output the base address of the code_base
* rather than image, since opcodes are in code_base.
*/
bpf_jit_dump(flen, proglen, pass, code_base);
if (image) {
if (bpf_jit_enable > 1)
print_hex_dump(KERN_ERR, "JIT code: ",
DUMP_PREFIX_ADDRESS,
16, 1, code_base,
proglen, false);
bpf_flush_icache(code_base, code_base + (proglen/4));
/* Function descriptor nastiness: Address + TOC */
((u64 *)image)[0] = (u64)code_base;


@ -47,6 +47,25 @@ static struct platform_device mv643xx_eth_shared_device = {
.resource = mv643xx_eth_shared_resources,
};
/*
* The orion mdio driver only covers shared + 0x4 up to shared + 0x84 - 1
*/
static struct resource mv643xx_eth_mvmdio_resources[] = {
[0] = {
.name = "ethernet mdio base",
.start = 0xf1000000 + MV643XX_ETH_SHARED_REGS + 0x4,
.end = 0xf1000000 + MV643XX_ETH_SHARED_REGS + 0x83,
.flags = IORESOURCE_MEM,
},
};
static struct platform_device mv643xx_eth_mvmdio_device = {
.name = "orion-mdio",
.id = -1,
.num_resources = ARRAY_SIZE(mv643xx_eth_mvmdio_resources),
.resource = mv643xx_eth_mvmdio_resources,
};
static struct resource mv643xx_eth_port1_resources[] = {
[0] = {
.name = "eth port1 irq",
@ -82,6 +101,7 @@ static struct platform_device eth_port1_device = {
static struct platform_device *mv643xx_eth_pd_devs[] __initdata = {
&mv643xx_eth_shared_device,
&mv643xx_eth_mvmdio_device,
&eth_port1_device,
};


@ -214,15 +214,27 @@ static struct platform_device * __init mv64x60_eth_register_shared_pdev(
struct device_node *np, int id)
{
struct platform_device *pdev;
struct resource r[1];
struct resource r[2];
int err;
err = of_address_to_resource(np, 0, &r[0]);
if (err)
return ERR_PTR(err);
/* register an orion mdio bus driver */
r[1].start = r[0].start + 0x4;
r[1].end = r[0].start + 0x84 - 1;
r[1].flags = IORESOURCE_MEM;
if (id == 0) {
pdev = platform_device_register_simple("orion-mdio", -1, &r[1], 1);
if (IS_ERR(pdev))
return pdev;
}
pdev = platform_device_register_simple(MV643XX_ETH_SHARED_NAME, id,
r, 1);
&r[0], 1);
return pdev;
}


@ -78,4 +78,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _ASM_SOCKET_H */


@ -68,6 +68,8 @@
#define SO_LOCK_FILTER 0x0028
#define SO_SELECT_ERR_QUEUE 0x0029
/* Security levels - as per NRL IPv6 - don't actually do anything */
#define SO_SECURITY_AUTHENTICATION 0x5001
#define SO_SECURITY_ENCRYPTION_TRANSPORT 0x5002


@ -795,13 +795,9 @@ cond_branch: f_offset = addrs[i + filter[i].jf];
}
if (bpf_jit_enable > 1)
pr_err("flen=%d proglen=%u pass=%d image=%p\n",
flen, proglen, pass, image);
bpf_jit_dump(flen, proglen, pass, image);
if (image) {
if (bpf_jit_enable > 1)
print_hex_dump(KERN_ERR, "JIT code: ", DUMP_PREFIX_ADDRESS,
16, 1, image, proglen, false);
bpf_flush_icache(image, image + proglen);
fp->bpf_func = (void *)image;
}


@ -725,17 +725,12 @@ cond_branch: f_offset = addrs[i + filter[i].jf] - addrs[i];
}
oldproglen = proglen;
}
if (bpf_jit_enable > 1)
pr_err("flen=%d proglen=%u pass=%d image=%p\n",
flen, proglen, pass, image);
bpf_jit_dump(flen, proglen, pass, image);
if (image) {
if (bpf_jit_enable > 1)
print_hex_dump(KERN_ERR, "JIT code: ", DUMP_PREFIX_ADDRESS,
16, 1, image, proglen, false);
bpf_flush_icache(image, image + proglen);
fp->bpf_func = (void *)image;
}
out:


@ -83,4 +83,6 @@
#define SO_LOCK_FILTER 44
#define SO_SELECT_ERR_QUEUE 45
#endif /* _XTENSA_SOCKET_H */


@ -1055,7 +1055,7 @@ static int he_start(struct atm_dev *dev)
he_writel(he_dev, 0x0, RESET_CNTL);
he_writel(he_dev, 0xff, RESET_CNTL);
udelay(16*1000); /* 16 ms */
msleep(16); /* 16 ms */
status = he_readl(he_dev, RESET_CNTL);
if ((status & BOARD_RST_STATUS) == 0) {
hprintk("reset failed\n");


@ -104,7 +104,13 @@ void bcma_core_pll_ctl(struct bcma_device *core, u32 req, u32 status, bool on)
if (i)
bcma_err(core->bus, "PLL enable timeout\n");
} else {
bcma_warn(core->bus, "Disabling PLL not supported yet!\n");
/*
* Mask the PLL but don't wait for it to be disabled. PLL may be
* shared between cores and will be still up if there is another
* core using it.
*/
bcma_mask32(core, BCMA_CLKCTLST, ~req);
bcma_read32(core, BCMA_CLKCTLST);
}
}
EXPORT_SYMBOL_GPL(bcma_core_pll_ctl);


@ -25,13 +25,14 @@ static inline u32 bcma_cc_write32_masked(struct bcma_drv_cc *cc, u16 offset,
return value;
}
static u32 bcma_chipco_get_alp_clock(struct bcma_drv_cc *cc)
u32 bcma_chipco_get_alp_clock(struct bcma_drv_cc *cc)
{
if (cc->capabilities & BCMA_CC_CAP_PMU)
return bcma_pmu_get_alp_clock(cc);
return 20000000;
}
EXPORT_SYMBOL_GPL(bcma_chipco_get_alp_clock);
static u32 bcma_chipco_watchdog_get_max_timer(struct bcma_drv_cc *cc)
{
@ -213,6 +214,7 @@ u32 bcma_chipco_gpio_out(struct bcma_drv_cc *cc, u32 mask, u32 value)
return res;
}
EXPORT_SYMBOL_GPL(bcma_chipco_gpio_out);
u32 bcma_chipco_gpio_outen(struct bcma_drv_cc *cc, u32 mask, u32 value)
{
@ -225,6 +227,7 @@ u32 bcma_chipco_gpio_outen(struct bcma_drv_cc *cc, u32 mask, u32 value)
return res;
}
EXPORT_SYMBOL_GPL(bcma_chipco_gpio_outen);
/*
* If the bit is set to 0, chipcommon controls this GPIO,


@ -174,19 +174,35 @@ u32 bcma_pmu_get_alp_clock(struct bcma_drv_cc *cc)
struct bcma_bus *bus = cc->core->bus;
switch (bus->chipinfo.id) {
case BCMA_CHIP_ID_BCM4716:
case BCMA_CHIP_ID_BCM4748:
case BCMA_CHIP_ID_BCM47162:
case BCMA_CHIP_ID_BCM4313:
case BCMA_CHIP_ID_BCM5357:
case BCMA_CHIP_ID_BCM43224:
case BCMA_CHIP_ID_BCM43225:
case BCMA_CHIP_ID_BCM43227:
case BCMA_CHIP_ID_BCM43228:
case BCMA_CHIP_ID_BCM4331:
case BCMA_CHIP_ID_BCM43421:
case BCMA_CHIP_ID_BCM43428:
case BCMA_CHIP_ID_BCM43431:
case BCMA_CHIP_ID_BCM4716:
case BCMA_CHIP_ID_BCM47162:
case BCMA_CHIP_ID_BCM4748:
case BCMA_CHIP_ID_BCM4749:
case BCMA_CHIP_ID_BCM5357:
case BCMA_CHIP_ID_BCM53572:
case BCMA_CHIP_ID_BCM6362:
/* always 20Mhz */
return 20000 * 1000;
case BCMA_CHIP_ID_BCM5356:
case BCMA_CHIP_ID_BCM4706:
case BCMA_CHIP_ID_BCM5356:
/* always 25Mhz */
return 25000 * 1000;
case BCMA_CHIP_ID_BCM43460:
case BCMA_CHIP_ID_BCM4352:
case BCMA_CHIP_ID_BCM4360:
if (cc->status & BCMA_CC_CHIPST_4360_XTAL_40MZ)
return 40000 * 1000;
else
return 20000 * 1000;
default:
bcma_warn(bus, "No ALP clock specified for %04X device, pmu rev. %d, using default %d Hz\n",
bus->chipinfo.id, cc->pmu.rev, BCMA_CC_PMU_ALP_CLOCK);
@ -373,7 +389,7 @@ void bcma_pmu_spuravoid_pllupdate(struct bcma_drv_cc *cc, int spuravoid)
tmp |= (bcm5357_bcm43236_ndiv[spuravoid]) << BCMA_CC_PMU1_PLL0_PC2_NDIV_INT_SHIFT;
bcma_cc_write32(cc, BCMA_CC_PLLCTL_DATA, tmp);
tmp = 1 << 10;
tmp = BCMA_CC_PMU_CTL_PLL_UPD;
break;
case BCMA_CHIP_ID_BCM4331:
@ -394,7 +410,7 @@ void bcma_pmu_spuravoid_pllupdate(struct bcma_drv_cc *cc, int spuravoid)
bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL2,
0x03000a08);
}
tmp = 1 << 10;
tmp = BCMA_CC_PMU_CTL_PLL_UPD;
break;
case BCMA_CHIP_ID_BCM43224:
@ -427,7 +443,7 @@ void bcma_pmu_spuravoid_pllupdate(struct bcma_drv_cc *cc, int spuravoid)
bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5,
0x88888815);
}
tmp = 1 << 10;
tmp = BCMA_CC_PMU_CTL_PLL_UPD;
break;
case BCMA_CHIP_ID_BCM4716:
@ -461,7 +477,7 @@ void bcma_pmu_spuravoid_pllupdate(struct bcma_drv_cc *cc, int spuravoid)
0x88888815);
}
tmp = 3 << 9;
tmp = BCMA_CC_PMU_CTL_PLL_UPD | BCMA_CC_PMU_CTL_NOILPONW;
break;
case BCMA_CHIP_ID_BCM43227:
@ -497,7 +513,7 @@ void bcma_pmu_spuravoid_pllupdate(struct bcma_drv_cc *cc, int spuravoid)
bcma_pmu_spuravoid_pll_write(cc, BCMA_CC_PMU_PLL_CTL5,
0x88888815);
}
tmp = 1 << 10;
tmp = BCMA_CC_PMU_CTL_PLL_UPD;
break;
default:
bcma_err(bus, "Unknown spuravoidance settings for chip 0x%04X, not changing PLL\n",

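The substitutions in this file pin down the new constant names against the old magic numbers: every `tmp = 1 << 10` becomes BCMA_CC_PMU_CTL_PLL_UPD and the single `tmp = 3 << 9` becomes BCMA_CC_PMU_CTL_PLL_UPD | BCMA_CC_PMU_CTL_NOILPONW. That implies definitions along these lines (inferred from the diff, not copied from the header):

#define BCMA_CC_PMU_CTL_PLL_UPD         0x00000400      /* 1 << 10 */
#define BCMA_CC_PMU_CTL_NOILPONW        0x00000200      /* 1 << 9; 3 << 9 == both bits */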

@ -120,6 +120,11 @@ static int bcma_register_cores(struct bcma_bus *bus)
continue;
}
/* Only first GMAC core on BCM4706 is connected and working */
if (core->id.id == BCMA_CORE_4706_MAC_GBIT &&
core->core_unit > 0)
continue;
core->dev.release = bcma_release_core_dev;
core->dev.bus = &bcma_bus_type;
dev_set_name(&core->dev, "bcma%d:%d", bus->num, dev_id);


@ -137,19 +137,19 @@ static void bcma_scan_switch_core(struct bcma_bus *bus, u32 addr)
addr);
}
static u32 bcma_erom_get_ent(struct bcma_bus *bus, u32 **eromptr)
static u32 bcma_erom_get_ent(struct bcma_bus *bus, u32 __iomem **eromptr)
{
u32 ent = readl(*eromptr);
(*eromptr)++;
return ent;
}
static void bcma_erom_push_ent(u32 **eromptr)
static void bcma_erom_push_ent(u32 __iomem **eromptr)
{
(*eromptr)--;
}
static s32 bcma_erom_get_ci(struct bcma_bus *bus, u32 **eromptr)
static s32 bcma_erom_get_ci(struct bcma_bus *bus, u32 __iomem **eromptr)
{
u32 ent = bcma_erom_get_ent(bus, eromptr);
if (!(ent & SCAN_ER_VALID))
@ -159,14 +159,14 @@ static s32 bcma_erom_get_ci(struct bcma_bus *bus, u32 **eromptr)
return ent;
}
static bool bcma_erom_is_end(struct bcma_bus *bus, u32 **eromptr)
static bool bcma_erom_is_end(struct bcma_bus *bus, u32 __iomem **eromptr)
{
u32 ent = bcma_erom_get_ent(bus, eromptr);
bcma_erom_push_ent(eromptr);
return (ent == (SCAN_ER_TAG_END | SCAN_ER_VALID));
}
static bool bcma_erom_is_bridge(struct bcma_bus *bus, u32 **eromptr)
static bool bcma_erom_is_bridge(struct bcma_bus *bus, u32 __iomem **eromptr)
{
u32 ent = bcma_erom_get_ent(bus, eromptr);
bcma_erom_push_ent(eromptr);
@ -175,7 +175,7 @@ static bool bcma_erom_is_bridge(struct bcma_bus *bus, u32 **eromptr)
((ent & SCAN_ADDR_TYPE) == SCAN_ADDR_TYPE_BRIDGE));
}
static void bcma_erom_skip_component(struct bcma_bus *bus, u32 **eromptr)
static void bcma_erom_skip_component(struct bcma_bus *bus, u32 __iomem **eromptr)
{
u32 ent;
while (1) {
@ -189,7 +189,7 @@ static void bcma_erom_skip_component(struct bcma_bus *bus, u32 **eromptr)
bcma_erom_push_ent(eromptr);
}
static s32 bcma_erom_get_mst_port(struct bcma_bus *bus, u32 **eromptr)
static s32 bcma_erom_get_mst_port(struct bcma_bus *bus, u32 __iomem **eromptr)
{
u32 ent = bcma_erom_get_ent(bus, eromptr);
if (!(ent & SCAN_ER_VALID))
@ -199,7 +199,7 @@ static s32 bcma_erom_get_mst_port(struct bcma_bus *bus, u32 **eromptr)
return ent;
}
static s32 bcma_erom_get_addr_desc(struct bcma_bus *bus, u32 **eromptr,
static s32 bcma_erom_get_addr_desc(struct bcma_bus *bus, u32 __iomem **eromptr,
u32 type, u8 port)
{
u32 addrl, addrh, sizel, sizeh = 0;


@ -217,6 +217,7 @@ static void bcma_sprom_extract_r8(struct bcma_bus *bus, const u16 *sprom)
}
SPEX(board_rev, SSB_SPROM8_BOARDREV, ~0, 0);
SPEX(board_type, SSB_SPROM1_SPID, ~0, 0);
SPEX(txpid2g[0], SSB_SPROM4_TXPID2G01, SSB_SPROM4_TXPID2G0,
SSB_SPROM4_TXPID2G0_SHIFT);


@ -90,6 +90,7 @@ static struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x13d3, 0x3393) },
{ USB_DEVICE(0x0489, 0xe04e) },
{ USB_DEVICE(0x0489, 0xe056) },
{ USB_DEVICE(0x0489, 0xe04d) },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE02C) },
@ -126,6 +127,7 @@ static struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU22 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE03C), .driver_info = BTUSB_ATH3012 },


@ -29,20 +29,6 @@
struct btmrvl_debugfs_data {
struct dentry *config_dir;
struct dentry *status_dir;
/* config */
struct dentry *psmode;
struct dentry *pscmd;
struct dentry *hsmode;
struct dentry *hscmd;
struct dentry *gpiogap;
struct dentry *hscfgcmd;
/* status */
struct dentry *curpsmode;
struct dentry *hsstate;
struct dentry *psstate;
struct dentry *txdnldready;
};
static ssize_t btmrvl_hscfgcmd_write(struct file *file,
@ -91,47 +77,6 @@ static const struct file_operations btmrvl_hscfgcmd_fops = {
.llseek = default_llseek,
};
static ssize_t btmrvl_psmode_write(struct file *file, const char __user *ubuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
long result, ret;
memset(buf, 0, sizeof(buf));
if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
ret = strict_strtol(buf, 10, &result);
if (ret)
return ret;
priv->btmrvl_dev.psmode = result;
return count;
}
static ssize_t btmrvl_psmode_read(struct file *file, char __user *userbuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
int ret;
ret = snprintf(buf, sizeof(buf) - 1, "%d\n",
priv->btmrvl_dev.psmode);
return simple_read_from_buffer(userbuf, count, ppos, buf, ret);
}
static const struct file_operations btmrvl_psmode_fops = {
.read = btmrvl_psmode_read,
.write = btmrvl_psmode_write,
.open = simple_open,
.llseek = default_llseek,
};
static ssize_t btmrvl_pscmd_write(struct file *file, const char __user *ubuf,
size_t count, loff_t *ppos)
{
@ -178,47 +123,6 @@ static const struct file_operations btmrvl_pscmd_fops = {
.llseek = default_llseek,
};
static ssize_t btmrvl_gpiogap_write(struct file *file, const char __user *ubuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
long result, ret;
memset(buf, 0, sizeof(buf));
if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
ret = strict_strtol(buf, 16, &result);
if (ret)
return ret;
priv->btmrvl_dev.gpio_gap = result;
return count;
}
static ssize_t btmrvl_gpiogap_read(struct file *file, char __user *userbuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
int ret;
ret = snprintf(buf, sizeof(buf) - 1, "0x%x\n",
priv->btmrvl_dev.gpio_gap);
return simple_read_from_buffer(userbuf, count, ppos, buf, ret);
}
static const struct file_operations btmrvl_gpiogap_fops = {
.read = btmrvl_gpiogap_read,
.write = btmrvl_gpiogap_write,
.open = simple_open,
.llseek = default_llseek,
};
static ssize_t btmrvl_hscmd_write(struct file *file, const char __user *ubuf,
size_t count, loff_t *ppos)
{
@ -263,119 +167,6 @@ static const struct file_operations btmrvl_hscmd_fops = {
.llseek = default_llseek,
};
static ssize_t btmrvl_hsmode_write(struct file *file, const char __user *ubuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
long result, ret;
memset(buf, 0, sizeof(buf));
if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
ret = strict_strtol(buf, 10, &result);
if (ret)
return ret;
priv->btmrvl_dev.hsmode = result;
return count;
}
static ssize_t btmrvl_hsmode_read(struct file *file, char __user * userbuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
int ret;
ret = snprintf(buf, sizeof(buf) - 1, "%d\n", priv->btmrvl_dev.hsmode);
return simple_read_from_buffer(userbuf, count, ppos, buf, ret);
}
static const struct file_operations btmrvl_hsmode_fops = {
.read = btmrvl_hsmode_read,
.write = btmrvl_hsmode_write,
.open = simple_open,
.llseek = default_llseek,
};
static ssize_t btmrvl_curpsmode_read(struct file *file, char __user *userbuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
int ret;
ret = snprintf(buf, sizeof(buf) - 1, "%d\n", priv->adapter->psmode);
return simple_read_from_buffer(userbuf, count, ppos, buf, ret);
}
static const struct file_operations btmrvl_curpsmode_fops = {
.read = btmrvl_curpsmode_read,
.open = simple_open,
.llseek = default_llseek,
};
static ssize_t btmrvl_psstate_read(struct file *file, char __user * userbuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
int ret;
ret = snprintf(buf, sizeof(buf) - 1, "%d\n", priv->adapter->ps_state);
return simple_read_from_buffer(userbuf, count, ppos, buf, ret);
}
static const struct file_operations btmrvl_psstate_fops = {
.read = btmrvl_psstate_read,
.open = simple_open,
.llseek = default_llseek,
};
static ssize_t btmrvl_hsstate_read(struct file *file, char __user *userbuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
int ret;
ret = snprintf(buf, sizeof(buf) - 1, "%d\n", priv->adapter->hs_state);
return simple_read_from_buffer(userbuf, count, ppos, buf, ret);
}
static const struct file_operations btmrvl_hsstate_fops = {
.read = btmrvl_hsstate_read,
.open = simple_open,
.llseek = default_llseek,
};
static ssize_t btmrvl_txdnldready_read(struct file *file, char __user *userbuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = file->private_data;
char buf[16];
int ret;
ret = snprintf(buf, sizeof(buf) - 1, "%d\n",
priv->btmrvl_dev.tx_dnld_rdy);
return simple_read_from_buffer(userbuf, count, ppos, buf, ret);
}
static const struct file_operations btmrvl_txdnldready_fops = {
.read = btmrvl_txdnldready_read,
.open = simple_open,
.llseek = default_llseek,
};
void btmrvl_debugfs_init(struct hci_dev *hdev)
{
struct btmrvl_private *priv = hci_get_drvdata(hdev);
@ -394,30 +185,28 @@ void btmrvl_debugfs_init(struct hci_dev *hdev)
dbg->config_dir = debugfs_create_dir("config", hdev->debugfs);
dbg->psmode = debugfs_create_file("psmode", 0644, dbg->config_dir,
priv, &btmrvl_psmode_fops);
dbg->pscmd = debugfs_create_file("pscmd", 0644, dbg->config_dir,
priv, &btmrvl_pscmd_fops);
dbg->gpiogap = debugfs_create_file("gpiogap", 0644, dbg->config_dir,
priv, &btmrvl_gpiogap_fops);
dbg->hsmode = debugfs_create_file("hsmode", 0644, dbg->config_dir,
priv, &btmrvl_hsmode_fops);
dbg->hscmd = debugfs_create_file("hscmd", 0644, dbg->config_dir,
priv, &btmrvl_hscmd_fops);
dbg->hscfgcmd = debugfs_create_file("hscfgcmd", 0644, dbg->config_dir,
priv, &btmrvl_hscfgcmd_fops);
debugfs_create_u8("psmode", 0644, dbg->config_dir,
&priv->btmrvl_dev.psmode);
debugfs_create_file("pscmd", 0644, dbg->config_dir,
priv, &btmrvl_pscmd_fops);
debugfs_create_x16("gpiogap", 0644, dbg->config_dir,
&priv->btmrvl_dev.gpio_gap);
debugfs_create_u8("hsmode", 0644, dbg->config_dir,
&priv->btmrvl_dev.hsmode);
debugfs_create_file("hscmd", 0644, dbg->config_dir,
priv, &btmrvl_hscmd_fops);
debugfs_create_file("hscfgcmd", 0644, dbg->config_dir,
priv, &btmrvl_hscfgcmd_fops);
dbg->status_dir = debugfs_create_dir("status", hdev->debugfs);
dbg->curpsmode = debugfs_create_file("curpsmode", 0444,
dbg->status_dir, priv,
&btmrvl_curpsmode_fops);
dbg->psstate = debugfs_create_file("psstate", 0444, dbg->status_dir,
priv, &btmrvl_psstate_fops);
dbg->hsstate = debugfs_create_file("hsstate", 0444, dbg->status_dir,
priv, &btmrvl_hsstate_fops);
dbg->txdnldready = debugfs_create_file("txdnldready", 0444,
dbg->status_dir, priv,
&btmrvl_txdnldready_fops);
debugfs_create_u8("curpsmode", 0444, dbg->status_dir,
&priv->adapter->psmode);
debugfs_create_u8("psstate", 0444, dbg->status_dir,
&priv->adapter->ps_state);
debugfs_create_u8("hsstate", 0444, dbg->status_dir,
&priv->adapter->hs_state);
debugfs_create_u8("txdnldready", 0444, dbg->status_dir,
&priv->btmrvl_dev.tx_dnld_rdy);
}
void btmrvl_debugfs_remove(struct hci_dev *hdev)
@ -428,19 +217,8 @@ void btmrvl_debugfs_remove(struct hci_dev *hdev)
if (!dbg)
return;
debugfs_remove(dbg->psmode);
debugfs_remove(dbg->pscmd);
debugfs_remove(dbg->gpiogap);
debugfs_remove(dbg->hsmode);
debugfs_remove(dbg->hscmd);
debugfs_remove(dbg->hscfgcmd);
debugfs_remove(dbg->config_dir);
debugfs_remove(dbg->curpsmode);
debugfs_remove(dbg->psstate);
debugfs_remove(dbg->hsstate);
debugfs_remove(dbg->txdnldready);
debugfs_remove(dbg->status_dir);
debugfs_remove_recursive(dbg->config_dir);
debugfs_remove_recursive(dbg->status_dir);
kfree(dbg);
}
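
The rewrite above drops all of the hand-rolled read/write file_operations in favor of the stock debugfs helpers, which bind a file directly to a variable of the matching type. A minimal sketch of the pattern (the directory and variables stand in for the driver's own objects):

static u8 psmode;
static u16 gpio_gap;

struct dentry *dir = debugfs_create_dir("config", NULL);

/* One call per exported variable; no fops boilerplate. */
debugfs_create_u8("psmode", 0644, dir, &psmode);        /* decimal u8 */
debugfs_create_x16("gpiogap", 0644, dir, &gpio_gap);    /* hex u16 */

/* Teardown: one recursive remove instead of one remove per dentry. */
debugfs_remove_recursive(dir);

This is also why struct btmrvl_debugfs_data shrinks to just the two directory dentries.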


@ -83,8 +83,8 @@ static const struct btmrvl_sdio_card_reg btmrvl_reg_87xx = {
};
static const struct btmrvl_sdio_device btmrvl_sdio_sd8688 = {
.helper = "sd8688_helper.bin",
.firmware = "sd8688.bin",
.helper = "mrvl/sd8688_helper.bin",
.firmware = "mrvl/sd8688.bin",
.reg = &btmrvl_reg_8688,
.sd_blksz_fw_dl = 64,
};
@ -228,24 +228,24 @@ failed:
static int btmrvl_sdio_verify_fw_download(struct btmrvl_sdio_card *card,
int pollnum)
{
int ret = -ETIMEDOUT;
u16 firmwarestat;
unsigned int tries;
int tries, ret;
/* Wait for firmware to become ready */
for (tries = 0; tries < pollnum; tries++) {
if (btmrvl_sdio_read_fw_status(card, &firmwarestat) < 0)
sdio_claim_host(card->func);
ret = btmrvl_sdio_read_fw_status(card, &firmwarestat);
sdio_release_host(card->func);
if (ret < 0)
continue;
if (firmwarestat == FIRMWARE_READY) {
ret = 0;
break;
} else {
msleep(10);
}
if (firmwarestat == FIRMWARE_READY)
return 0;
msleep(10);
}
return ret;
return -ETIMEDOUT;
}
static int btmrvl_sdio_download_helper(struct btmrvl_sdio_card *card)
@ -874,7 +874,7 @@ exit:
static int btmrvl_sdio_download_fw(struct btmrvl_sdio_card *card)
{
int ret = 0;
int ret;
u8 fws0;
int pollnum = MAX_POLL_TRIES;
@ -882,13 +882,14 @@ static int btmrvl_sdio_download_fw(struct btmrvl_sdio_card *card)
BT_ERR("card or function is NULL!");
return -EINVAL;
}
sdio_claim_host(card->func);
if (!btmrvl_sdio_verify_fw_download(card, 1)) {
BT_DBG("Firmware already downloaded!");
goto done;
return 0;
}
sdio_claim_host(card->func);
/* Check if other function driver is downloading the firmware */
fws0 = sdio_readb(card->func, card->reg->card_fw_status0, &ret);
if (ret) {
@ -918,15 +919,21 @@ static int btmrvl_sdio_download_fw(struct btmrvl_sdio_card *card)
}
}
sdio_release_host(card->func);
/*
* Winner or not, this check synchronizes with the firmware so the
* module knows when it can continue its initialization.
*/
if (btmrvl_sdio_verify_fw_download(card, pollnum)) {
BT_ERR("FW failed to be active in time!");
ret = -ETIMEDOUT;
goto done;
return -ETIMEDOUT;
}
return 0;
done:
sdio_release_host(card->func);
return ret;
}
@ -989,8 +996,6 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
goto unreg_dev;
}
msleep(100);
btmrvl_sdio_enable_host_int(card);
priv = btmrvl_add_card(card);
@ -1185,7 +1190,7 @@ MODULE_AUTHOR("Marvell International Ltd.");
MODULE_DESCRIPTION("Marvell BT-over-SDIO driver ver " VERSION);
MODULE_VERSION(VERSION);
MODULE_LICENSE("GPL v2");
MODULE_FIRMWARE("sd8688_helper.bin");
MODULE_FIRMWARE("sd8688.bin");
MODULE_FIRMWARE("mrvl/sd8688_helper.bin");
MODULE_FIRMWARE("mrvl/sd8688.bin");
MODULE_FIRMWARE("mrvl/sd8787_uapsta.bin");
MODULE_FIRMWARE("mrvl/sd8797_uapsta.bin");


@ -23,6 +23,7 @@
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/firmware.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@ -47,6 +48,7 @@ static struct usb_driver btusb_driver;
#define BTUSB_BROKEN_ISOC 0x20
#define BTUSB_WRONG_SCO_MTU 0x40
#define BTUSB_ATH3012 0x80
#define BTUSB_INTEL 0x100
static struct usb_device_id btusb_table[] = {
/* Generic Bluetooth USB device */
@ -148,6 +150,7 @@ static struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
@ -206,6 +209,9 @@ static struct usb_device_id blacklist_table[] = {
/* Frontline ComProbe Bluetooth Sniffer */
{ USB_DEVICE(0x16d3, 0x0002), .driver_info = BTUSB_SNIFFER },
/* Intel Bluetooth device */
{ USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL },
{ } /* Terminating entry */
};
@ -926,6 +932,391 @@ static void btusb_waker(struct work_struct *work)
usb_autopm_put_interface(data->intf);
}
static int btusb_setup_bcm92035(struct hci_dev *hdev)
{
struct sk_buff *skb;
u8 val = 0x00;
BT_DBG("%s", hdev->name);
skb = __hci_cmd_sync(hdev, 0xfc3b, 1, &val, HCI_INIT_TIMEOUT);
if (IS_ERR(skb))
BT_ERR("BCM92035 command failed (%ld)", -PTR_ERR(skb));
else
kfree_skb(skb);
return 0;
}
struct intel_version {
u8 status;
u8 hw_platform;
u8 hw_variant;
u8 hw_revision;
u8 fw_variant;
u8 fw_revision;
u8 fw_build_num;
u8 fw_build_ww;
u8 fw_build_yy;
u8 fw_patch_num;
} __packed;
static const struct firmware *btusb_setup_intel_get_fw(struct hci_dev *hdev,
struct intel_version *ver)
{
const struct firmware *fw;
char fwname[64];
int ret;
snprintf(fwname, sizeof(fwname),
"intel/ibt-hw-%x.%x.%x-fw-%x.%x.%x.%x.%x.bseq",
ver->hw_platform, ver->hw_variant, ver->hw_revision,
ver->fw_variant, ver->fw_revision, ver->fw_build_num,
ver->fw_build_ww, ver->fw_build_yy);
ret = request_firmware(&fw, fwname, &hdev->dev);
if (ret < 0) {
if (ret == -EINVAL) {
BT_ERR("%s Intel firmware file request failed (%d)",
hdev->name, ret);
return NULL;
}
BT_ERR("%s failed to open Intel firmware file: %s(%d)",
hdev->name, fwname, ret);
/* If the correct firmware patch file is not found, use the
* default firmware patch file instead
*/
snprintf(fwname, sizeof(fwname), "intel/ibt-hw-%x.%x.bseq",
ver->hw_platform, ver->hw_variant);
if (request_firmware(&fw, fwname, &hdev->dev) < 0) {
BT_ERR("%s failed to open default Intel fw file: %s",
hdev->name, fwname);
return NULL;
}
}
BT_INFO("%s: Intel Bluetooth firmware file: %s", hdev->name, fwname);
return fw;
}
static int btusb_setup_intel_patching(struct hci_dev *hdev,
const struct firmware *fw,
const u8 **fw_ptr, int *disable_patch)
{
struct sk_buff *skb;
struct hci_command_hdr *cmd;
const u8 *cmd_param;
struct hci_event_hdr *evt = NULL;
const u8 *evt_param = NULL;
int remain = fw->size - (*fw_ptr - fw->data);
/* The first byte indicates the type of the patch entry: 0x01 means
* HCI command and 0x02 means HCI event. If the current firmware
* buffer doesn't start with 0x01, or the remaining buffer is smaller
* than an HCI command header, the firmware file is corrupted and the
* patching process should stop.
*/
if (remain > HCI_COMMAND_HDR_SIZE && *fw_ptr[0] != 0x01) {
BT_ERR("%s Intel fw corrupted: invalid cmd read", hdev->name);
return -EINVAL;
}
(*fw_ptr)++;
remain--;
cmd = (struct hci_command_hdr *)(*fw_ptr);
*fw_ptr += sizeof(*cmd);
remain -= sizeof(*cmd);
/* Ensure that the remaining firmware data is at least as long as the
* command parameter length. If not, the firmware file is corrupted.
*/
if (remain < cmd->plen) {
BT_ERR("%s Intel fw corrupted: invalid cmd len", hdev->name);
return -EFAULT;
}
/* If there is a command that loads a patch in the firmware
* file, then enable the patch upon success, otherwise just
* disable the manufacturer mode, for example patch activation
* is not required when the default firmware patch file is used
* because there are no patch data to load.
*/
if (*disable_patch && le16_to_cpu(cmd->opcode) == 0xfc8e)
*disable_patch = 0;
cmd_param = *fw_ptr;
*fw_ptr += cmd->plen;
remain -= cmd->plen;
/* This reads the expected events for the command sent above. Some
* vendor commands expect more than one event, for example a command
* status event followed by a vendor specific event. In that case only
* the last expected event is kept, so the command can be sent with
* __hci_cmd_sync_ev(), which returns the sk_buff of the last
* expected event.
*/
while (remain > HCI_EVENT_HDR_SIZE && *fw_ptr[0] == 0x02) {
(*fw_ptr)++;
remain--;
evt = (struct hci_event_hdr *)(*fw_ptr);
*fw_ptr += sizeof(*evt);
remain -= sizeof(*evt);
if (remain < evt->plen) {
BT_ERR("%s Intel fw corrupted: invalid evt len",
hdev->name);
return -EFAULT;
}
evt_param = *fw_ptr;
*fw_ptr += evt->plen;
remain -= evt->plen;
}
/* Every HCI command in the firmware file has its corresponding event.
* If no event is found, or the remaining byte count drops below zero,
* the firmware file is corrupted.
*/
if (!evt || !evt_param || remain < 0) {
BT_ERR("%s Intel fw corrupted: invalid evt read", hdev->name);
return -EFAULT;
}
skb = __hci_cmd_sync_ev(hdev, le16_to_cpu(cmd->opcode), cmd->plen,
cmd_param, evt->evt, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s sending Intel patch command (0x%4.4x) failed (%ld)",
hdev->name, cmd->opcode, PTR_ERR(skb));
return -PTR_ERR(skb);
}
/* This ensures that the returned event matches the event data read
* from the firmware file: first the length is checked, then the
* contents of the event.
*/
if (skb->len != evt->plen) {
BT_ERR("%s mismatch event length (opcode 0x%4.4x)", hdev->name,
le16_to_cpu(cmd->opcode));
kfree_skb(skb);
return -EFAULT;
}
if (memcmp(skb->data, evt_param, evt->plen)) {
BT_ERR("%s mismatch event parameter (opcode 0x%4.4x)",
hdev->name, le16_to_cpu(cmd->opcode));
kfree_skb(skb);
return -EFAULT;
}
kfree_skb(skb);
return 0;
}
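
The parser above implies the framing of the Intel .bseq patch stream; as a sketch inferred from the code (not from any Intel documentation), each record looks like:

/*
 *  0x01  struct hci_command_hdr  <plen parameter bytes>
 *  0x02  struct hci_event_hdr    <plen expected-event bytes>
 *  0x02  ...                     more expected events may follow; only
 *                                the last one is matched against the reply
 *
 * repeated until the end of the firmware file.
 */
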
static int btusb_setup_intel(struct hci_dev *hdev)
{
struct sk_buff *skb;
const struct firmware *fw;
const u8 *fw_ptr;
int disable_patch;
struct intel_version *ver;
const u8 mfg_enable[] = { 0x01, 0x00 };
const u8 mfg_disable[] = { 0x00, 0x00 };
const u8 mfg_reset_deactivate[] = { 0x00, 0x01 };
const u8 mfg_reset_activate[] = { 0x00, 0x02 };
BT_DBG("%s", hdev->name);
/* The controller has a bug with the first HCI command sent to it
* returning number of completed commands as zero. This would stall the
* command processing in the Bluetooth core.
*
* As a workaround, send HCI Reset command first which will reset the
* number of completed commands and allow normal command processing
* from now on.
*/
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s sending initial HCI reset command failed (%ld)",
hdev->name, PTR_ERR(skb));
return -PTR_ERR(skb);
}
kfree_skb(skb);
/* Read Intel specific controller version first to allow selection of
* which firmware file to load.
*
* The returned information is the hardware variant and revision plus
* firmware variant, revision and build number.
*/
skb = __hci_cmd_sync(hdev, 0xfc05, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s reading Intel fw version command failed (%ld)",
hdev->name, PTR_ERR(skb));
return -PTR_ERR(skb);
}
if (skb->len != sizeof(*ver)) {
BT_ERR("%s Intel version event length mismatch", hdev->name);
kfree_skb(skb);
return -EIO;
}
ver = (struct intel_version *)skb->data;
if (ver->status) {
BT_ERR("%s Intel fw version event failed (%02x)", hdev->name,
ver->status);
kfree_skb(skb);
return -bt_to_errno(ver->status);
}
BT_INFO("%s: read Intel version: %02x%02x%02x%02x%02x%02x%02x%02x%02x",
hdev->name, ver->hw_platform, ver->hw_variant,
ver->hw_revision, ver->fw_variant, ver->fw_revision,
ver->fw_build_num, ver->fw_build_ww, ver->fw_build_yy,
ver->fw_patch_num);
/* fw_patch_num indicates the version of the patch the device
* currently has. If there is no patch data in the device, it is
* always 0x00, so any other value means the device does not need
* to be patched again.
*/
if (ver->fw_patch_num) {
BT_INFO("%s: Intel device is already patched. patch num: %02x",
hdev->name, ver->fw_patch_num);
kfree_skb(skb);
return 0;
}
/* Opens the firmware patch file based on the firmware version read
* from the controller. If it fails to open the matching firmware
* patch file, it tries to open the default firmware patch file.
* If no patch file is found, allow the device to operate without
* a patch.
*/
fw = btusb_setup_intel_get_fw(hdev, ver);
if (!fw) {
kfree_skb(skb);
return 0;
}
fw_ptr = fw->data;
/* This Intel specific command enables the manufacturer mode of the
* controller.
*
* Only while this mode is enabled, the driver can download the
* firmware patch data and configuration parameters.
*/
skb = __hci_cmd_sync(hdev, 0xfc11, 2, mfg_enable, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s entering Intel manufacturer mode failed (%ld)",
hdev->name, PTR_ERR(skb));
release_firmware(fw);
return -PTR_ERR(skb);
}
if (skb->data[0]) {
u8 evt_status = skb->data[0];
BT_ERR("%s enable Intel manufacturer mode event failed (%02x)",
hdev->name, evt_status);
kfree_skb(skb);
release_firmware(fw);
return -bt_to_errno(evt_status);
}
kfree_skb(skb);
disable_patch = 1;
/* The firmware data file consists of a list of Intel specific HCI
* commands and their expected events.
* type of the message, either HCI command or HCI event.
*
* It reads the command and its expected event from the firmware file,
* and sends them to the controller. Once __hci_cmd_sync_ev() returns,
* the returned event is compared with the event read from the firmware
* file and it will continue until all the messages are downloaded to
* the controller.
*
* Once the firmware patching is completed successfully,
* the manufacturer mode is disabled with a reset that activates the
* downloaded patch.
*
* If the firmware patching fails, the manufacturer mode is
* disabled with a reset that deactivates the patch.
*
* If the default patch file is used, no reset is done when disabling
* the manufacturer mode.
*/
while (fw->size > fw_ptr - fw->data) {
int ret;
ret = btusb_setup_intel_patching(hdev, fw, &fw_ptr,
&disable_patch);
if (ret < 0)
goto exit_mfg_deactivate;
}
release_firmware(fw);
if (disable_patch)
goto exit_mfg_disable;
/* Patching completed successfully; disable the manufacturer mode
* with a reset and activate the downloaded firmware patches.
*/
skb = __hci_cmd_sync(hdev, 0xfc11, sizeof(mfg_reset_activate),
mfg_reset_activate, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s exiting Intel manufacturer mode failed (%ld)",
hdev->name, PTR_ERR(skb));
return -PTR_ERR(skb);
}
kfree_skb(skb);
BT_INFO("%s: Intel Bluetooth firmware patch completed and activated",
hdev->name);
return 0;
exit_mfg_disable:
/* Disable the manufacturer mode without reset */
skb = __hci_cmd_sync(hdev, 0xfc11, sizeof(mfg_disable), mfg_disable,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s exiting Intel manufacturer mode failed (%ld)",
hdev->name, PTR_ERR(skb));
return -PTR_ERR(skb);
}
kfree_skb(skb);
BT_INFO("%s: Intel Bluetooth firmware patch completed", hdev->name);
return 0;
exit_mfg_deactivate:
release_firmware(fw);
/* Patching failed. Disable the manufacturer mode with reset and
* deactivate the downloaded firmware patches.
*/
skb = __hci_cmd_sync(hdev, 0xfc11, sizeof(mfg_reset_deactivate),
mfg_reset_deactivate, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s exiting Intel manufacturer mode failed (%ld)",
hdev->name, PTR_ERR(skb));
return -PTR_ERR(skb);
}
kfree_skb(skb);
BT_INFO("%s: Intel Bluetooth firmware patch completed and deactivated",
hdev->name);
return 0;
}
static int btusb_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
@ -1022,11 +1413,17 @@ static int btusb_probe(struct usb_interface *intf,
SET_HCIDEV_DEV(hdev, &intf->dev);
hdev->open = btusb_open;
hdev->close = btusb_close;
hdev->flush = btusb_flush;
hdev->send = btusb_send_frame;
hdev->notify = btusb_notify;
hdev->open = btusb_open;
hdev->close = btusb_close;
hdev->flush = btusb_flush;
hdev->send = btusb_send_frame;
hdev->notify = btusb_notify;
if (id->driver_info & BTUSB_BCM92035)
hdev->setup = btusb_setup_bcm92035;
if (id->driver_info & BTUSB_INTEL)
hdev->setup = btusb_setup_intel;
/* Interface numbers are hardcoded in the specification */
data->isoc = usb_ifnum_to_if(data->udev, 1);
@ -1065,17 +1462,6 @@ static int btusb_probe(struct usb_interface *intf,
data->isoc = NULL;
}
if (id->driver_info & BTUSB_BCM92035) {
unsigned char cmd[] = { 0x3b, 0xfc, 0x01, 0x00 };
struct sk_buff *skb;
skb = bt_skb_alloc(sizeof(cmd), GFP_KERNEL);
if (skb) {
memcpy(skb_put(skb, sizeof(cmd)), cmd, sizeof(cmd));
skb_queue_tail(&hdev->driver_init, skb);
}
}
if (data->isoc) {
err = usb_driver_claim_interface(&btusb_driver,
data->isoc, data);


@ -153,6 +153,9 @@ static int h4_recv(struct hci_uart *hu, void *data, int count)
{
int ret;
if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
return -EUNATCH;
ret = hci_recv_stream_fragment(hu->hdev, data, count);
if (ret < 0) {
BT_ERR("Frame Reassembly Failed");


@ -260,12 +260,12 @@ static int hci_uart_send_frame(struct sk_buff *skb)
/* ------ LDISC part ------ */
/* hci_uart_tty_open
*
*
* Called when line discipline changed to HCI_UART.
*
* Arguments:
* tty pointer to tty info structure
* Return Value:
* Return Value:
* 0 if success, otherwise error code
*/
static int hci_uart_tty_open(struct tty_struct *tty)
@ -365,15 +365,15 @@ static void hci_uart_tty_wakeup(struct tty_struct *tty)
}
/* hci_uart_tty_receive()
*
*
* Called by tty low level driver when receive data is
* available.
*
*
* Arguments: tty pointer to tty instance data
* data pointer to received data
* flags pointer to flags for data
* count count of received data in bytes
*
*
* Return Value: None
*/
static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data, char *flags, int count)
@ -388,7 +388,10 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data, char *f
spin_lock(&hu->rx_lock);
hu->proto->recv(hu, (void *) data, count);
hu->hdev->stat.byte_rx += count;
if (hu->hdev)
hu->hdev->stat.byte_rx += count;
spin_unlock(&hu->rx_lock);
tty_unthrottle(tty);


@ -232,6 +232,31 @@ void proc_comm_connector(struct task_struct *task)
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
void proc_coredump_connector(struct task_struct *task)
{
struct cn_msg *msg;
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE];
struct timespec ts;
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->what = PROC_EVENT_COREDUMP;
ev->event_data.coredump.process_pid = task->pid;
ev->event_data.coredump.process_tgid = task->tgid;
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
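
proc_coredump_connector() above mirrors the existing fork/exec/exit notifiers for a new PROC_EVENT_COREDUMP type. On the consumer side a cn_proc listener handles it like any other event; a userspace sketch, assuming the cn_msg has already been unwrapped from the netlink frame:

struct proc_event *ev = (struct proc_event *)cn->data;  /* cn: received cn_msg */

if (ev->what == PROC_EVENT_COREDUMP)
        printf("coredump: pid=%d tgid=%d\n",
               ev->event_data.coredump.process_pid,
               ev->event_data.coredump.process_tgid);
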
void proc_exit_connector(struct task_struct *task)
{
struct cn_msg *msg;


@ -23,7 +23,7 @@
#include <linux/module.h>
#include <linux/list.h>
#include <linux/skbuff.h>
#include <linux/netlink.h>
#include <net/netlink.h>
#include <linux/moduleparam.h>
#include <linux/connector.h>
#include <linux/slab.h>
@ -95,13 +95,13 @@ int cn_netlink_send(struct cn_msg *msg, u32 __group, gfp_t gfp_mask)
if (!netlink_has_listeners(dev->nls, group))
return -ESRCH;
size = NLMSG_SPACE(sizeof(*msg) + msg->len);
size = sizeof(*msg) + msg->len;
skb = alloc_skb(size, gfp_mask);
skb = nlmsg_new(size, gfp_mask);
if (!skb)
return -ENOMEM;
nlh = nlmsg_put(skb, 0, msg->seq, NLMSG_DONE, size - sizeof(*nlh), 0);
nlh = nlmsg_put(skb, 0, msg->seq, NLMSG_DONE, size, 0);
if (!nlh) {
kfree_skb(skb);
return -EMSGSIZE;
@ -124,7 +124,7 @@ static int cn_call_callback(struct sk_buff *skb)
{
struct cn_callback_entry *i, *cbq = NULL;
struct cn_dev *dev = &cdev;
struct cn_msg *msg = NLMSG_DATA(nlmsg_hdr(skb));
struct cn_msg *msg = nlmsg_data(nlmsg_hdr(skb));
struct netlink_skb_parms *nsp = &NETLINK_CB(skb);
int err = -ENODEV;
@ -162,7 +162,7 @@ static void cn_rx_skb(struct sk_buff *__skb)
skb = skb_get(__skb);
if (skb->len >= NLMSG_SPACE(0)) {
if (skb->len >= NLMSG_HDRLEN) {
nlh = nlmsg_hdr(skb);
if (nlh->nlmsg_len < sizeof(struct cn_msg) ||

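The connector conversion above is the general NLMSG_* to nlmsg_*() migration pattern: nlmsg_new() sizes the skb for a given payload, nlmsg_put() reserves and fills the header, and nlmsg_data() replaces the NLMSG_DATA() pointer arithmetic. Put together, roughly (variable names as in cn_netlink_send() above):

size = sizeof(*msg) + msg->len;         /* payload bytes; no NLMSG_SPACE() math */

skb = nlmsg_new(size, gfp_mask);        /* allocates header + payload room */
if (!skb)
        return -ENOMEM;

nlh = nlmsg_put(skb, 0, msg->seq, NLMSG_DONE, size, 0);
if (!nlh) {
        kfree_skb(skb);
        return -EMSGSIZE;
}
memcpy(nlmsg_data(nlh), msg, size);     /* nlmsg_data() skips the header */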

@ -470,8 +470,10 @@ struct dca_provider *ioat2_dca_init(struct pci_dev *pdev, void __iomem *iobase)
}
if (!dca2_tag_map_valid(ioatdca->tag_map)) {
dev_err(&pdev->dev, "APICID_TAG_MAP set incorrectly by BIOS, "
"disabling DCA\n");
WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND,
"%s %s: APICID_TAG_MAP set incorrectly by BIOS, disabling DCA\n",
dev_driver_string(&pdev->dev),
dev_name(&pdev->dev));
free_dca_provider(dca);
return NULL;
}
@ -689,7 +691,10 @@ struct dca_provider *ioat3_dca_init(struct pci_dev *pdev, void __iomem *iobase)
}
if (dca3_tag_map_invalid(ioatdca->tag_map)) {
dev_err(&pdev->dev, "APICID_TAG_MAP set incorrectly by BIOS, disabling DCA\n");
WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND,
"%s %s: APICID_TAG_MAP set incorrectly by BIOS, disabling DCA\n",
dev_driver_string(&pdev->dev),
dev_name(&pdev->dev));
free_dca_provider(dca);
return NULL;
}


@ -47,9 +47,9 @@ config FIREWIRE_NET
tristate "IP networking over 1394"
depends on FIREWIRE && INET
help
This enables IPv4 over IEEE 1394, providing IP connectivity with
other implementations of RFC 2734 as found on several operating
systems. Multicast support is currently limited.
This enables IPv4/IPv6 over IEEE 1394, providing IP connectivity
with other implementations of RFC 2734/3146 as found on several
operating systems. Multicast support is currently limited.
To compile this driver as a module, say M here: The module will be
called firewire-net.


@ -1,5 +1,6 @@
/*
* IPv4 over IEEE 1394, per RFC 2734
* IPv6 over IEEE 1394, per RFC 3146
*
* Copyright (C) 2009 Jay Fenlason <fenlason@redhat.com>
*
@ -28,6 +29,7 @@
#include <asm/unaligned.h>
#include <net/arp.h>
#include <net/firewire.h>
/* rx limits */
#define FWNET_MAX_FRAGMENTS 30 /* arbitrary, > TX queue depth */
@ -45,6 +47,7 @@
#define IANA_SPECIFIER_ID 0x00005eU
#define RFC2734_SW_VERSION 0x000001U
#define RFC3146_SW_VERSION 0x000002U
#define IEEE1394_GASP_HDR_SIZE 8
@ -57,32 +60,10 @@
#define RFC2374_HDR_LASTFRAG 2 /* last fragment */
#define RFC2374_HDR_INTFRAG 3 /* interior fragment */
#define RFC2734_HW_ADDR_LEN 16
struct rfc2734_arp {
__be16 hw_type; /* 0x0018 */
__be16 proto_type; /* 0x0806 */
u8 hw_addr_len; /* 16 */
u8 ip_addr_len; /* 4 */
__be16 opcode; /* ARP Opcode */
/* Above is exactly the same format as struct arphdr */
__be64 s_uniq_id; /* Sender's 64bit EUI */
u8 max_rec; /* Sender's max packet size */
u8 sspd; /* Sender's max speed */
__be16 fifo_hi; /* hi 16bits of sender's FIFO addr */
__be32 fifo_lo; /* lo 32bits of sender's FIFO addr */
__be32 sip; /* Sender's IP Address */
__be32 tip; /* IP Address of requested hw addr */
} __packed;
/* This header format is specific to this driver implementation. */
#define FWNET_ALEN 8
#define FWNET_HLEN 10
struct fwnet_header {
u8 h_dest[FWNET_ALEN]; /* destination address */
__be16 h_proto; /* packet type ID field */
} __packed;
static bool fwnet_hwaddr_is_multicast(u8 *ha)
{
return !!(*ha & 1);
}
/* IPv4 and IPv6 encapsulation header */
struct rfc2734_header {
@ -191,8 +172,6 @@ struct fwnet_peer {
struct list_head peer_link;
struct fwnet_device *dev;
u64 guid;
u64 fifo;
__be32 ip;
/* guarded by dev->lock */
struct list_head pd_list; /* received partial datagrams */
@ -221,6 +200,15 @@ struct fwnet_packet_task {
u8 enqueued;
};
/*
* Get fifo address embedded in hwaddr
*/
static __u64 fwnet_hwaddr_fifo(union fwnet_hwaddr *ha)
{
return (u64)get_unaligned_be16(&ha->uc.fifo_hi) << 32
| get_unaligned_be32(&ha->uc.fifo_lo);
}
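
fwnet_hwaddr_fifo() and the probe-time initialization later in this patch imply the layout of union fwnet_hwaddr, which replaces the removed per-peer fifo/ip bookkeeping (the real definition moved to <net/firewire.h>; this is a reconstruction from the field accesses):

union fwnet_hwaddr {
        u8 u[16];
        /* "hardware address" for the unicast case, per RFC 2734/3146 */
        struct {
                __be64 uniq_id;         /* EUI-64 */
                u8 max_rec;             /* max packet size */
                u8 sspd;                /* max speed */
                __be16 fifo_hi;         /* hi 16 bits of FIFO addr */
                __be32 fifo_lo;         /* lo 32 bits of FIFO addr */
        } __packed uc;
};
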
/*
* saddr == NULL means use device source address.
* daddr == NULL means leave destination address (eg unresolved arp).
@ -513,10 +501,20 @@ static int fwnet_finish_incoming_packet(struct net_device *net,
bool is_broadcast, u16 ether_type)
{
struct fwnet_device *dev;
static const __be64 broadcast_hw = cpu_to_be64(~0ULL);
int status;
__be64 guid;
switch (ether_type) {
case ETH_P_ARP:
case ETH_P_IP:
#if IS_ENABLED(CONFIG_IPV6)
case ETH_P_IPV6:
#endif
break;
default:
goto err;
}
dev = netdev_priv(net);
/* Write metadata, and then pass to the receive level */
skb->dev = net;
@ -524,92 +522,11 @@ static int fwnet_finish_incoming_packet(struct net_device *net,
/*
* Parse the encapsulation header. This actually does the job of
* converting to an ethernet frame header, as well as arp
* conversion if needed. ARP conversion is easier in this
* direction, since we are using ethernet as our backend.
* converting to an ethernet-like pseudo frame header.
*/
/*
* If this is an ARP packet, convert it. First, we want to make
* use of some of the fields, since they tell us a little bit
* about the sending machine.
*/
if (ether_type == ETH_P_ARP) {
struct rfc2734_arp *arp1394;
struct arphdr *arp;
unsigned char *arp_ptr;
u64 fifo_addr;
u64 peer_guid;
unsigned sspd;
u16 max_payload;
struct fwnet_peer *peer;
unsigned long flags;
arp1394 = (struct rfc2734_arp *)skb->data;
arp = (struct arphdr *)skb->data;
arp_ptr = (unsigned char *)(arp + 1);
peer_guid = get_unaligned_be64(&arp1394->s_uniq_id);
fifo_addr = (u64)get_unaligned_be16(&arp1394->fifo_hi) << 32
| get_unaligned_be32(&arp1394->fifo_lo);
sspd = arp1394->sspd;
/* Sanity check. OS X 10.3 PPC reportedly sends 131. */
if (sspd > SCODE_3200) {
dev_notice(&net->dev, "sspd %x out of range\n", sspd);
sspd = SCODE_3200;
}
max_payload = fwnet_max_payload(arp1394->max_rec, sspd);
spin_lock_irqsave(&dev->lock, flags);
peer = fwnet_peer_find_by_guid(dev, peer_guid);
if (peer) {
peer->fifo = fifo_addr;
if (peer->speed > sspd)
peer->speed = sspd;
if (peer->max_payload > max_payload)
peer->max_payload = max_payload;
peer->ip = arp1394->sip;
}
spin_unlock_irqrestore(&dev->lock, flags);
if (!peer) {
dev_notice(&net->dev,
"no peer for ARP packet from %016llx\n",
(unsigned long long)peer_guid);
goto no_peer;
}
/*
* Now that we're done with the 1394 specific stuff, we'll
* need to alter some of the data. Believe it or not, all
* that needs to be done is sender_IP_address needs to be
* moved, the destination hardware address get stuffed
* in and the hardware address length set to 8.
*
* IMPORTANT: The code below overwrites 1394 specific data
* needed above so keep the munging of the data for the
* higher level IP stack last.
*/
arp->ar_hln = 8;
/* skip over sender unique id */
arp_ptr += arp->ar_hln;
/* move sender IP addr */
put_unaligned(arp1394->sip, (u32 *)arp_ptr);
/* skip over sender IP addr */
arp_ptr += arp->ar_pln;
if (arp->ar_op == htons(ARPOP_REQUEST))
memset(arp_ptr, 0, sizeof(u64));
else
memcpy(arp_ptr, net->dev_addr, sizeof(u64));
}
/* Now add the ethernet header. */
guid = cpu_to_be64(dev->card->guid);
if (dev_hard_header(skb, net, ether_type,
is_broadcast ? &broadcast_hw : &guid,
is_broadcast ? net->broadcast : net->dev_addr,
NULL, skb->len) >= 0) {
struct fwnet_header *eth;
u16 *rawp;
@ -618,7 +535,7 @@ static int fwnet_finish_incoming_packet(struct net_device *net,
skb_reset_mac_header(skb);
skb_pull(skb, sizeof(*eth));
eth = (struct fwnet_header *)skb_mac_header(skb);
if (*eth->h_dest & 1) {
if (fwnet_hwaddr_is_multicast(eth->h_dest)) {
if (memcmp(eth->h_dest, net->broadcast,
net->addr_len) == 0)
skb->pkt_type = PACKET_BROADCAST;
@ -630,7 +547,7 @@ static int fwnet_finish_incoming_packet(struct net_device *net,
if (memcmp(eth->h_dest, net->dev_addr, net->addr_len))
skb->pkt_type = PACKET_OTHERHOST;
}
if (ntohs(eth->h_proto) >= 1536) {
if (ntohs(eth->h_proto) >= ETH_P_802_3_MIN) {
protocol = eth->h_proto;
} else {
rawp = (u16 *)skb->data;
@ -652,7 +569,7 @@ static int fwnet_finish_incoming_packet(struct net_device *net,
return 0;
no_peer:
err:
net->stats.rx_errors++;
net->stats.rx_dropped++;
@ -856,7 +773,12 @@ static void fwnet_receive_broadcast(struct fw_iso_context *context,
ver = be32_to_cpu(buf_ptr[1]) & 0xffffff;
source_node_id = be32_to_cpu(buf_ptr[0]) >> 16;
if (specifier_id == IANA_SPECIFIER_ID && ver == RFC2734_SW_VERSION) {
if (specifier_id == IANA_SPECIFIER_ID &&
(ver == RFC2734_SW_VERSION
#if IS_ENABLED(CONFIG_IPV6)
|| ver == RFC3146_SW_VERSION
#endif
)) {
buf_ptr += 2;
length -= IEEE1394_GASP_HDR_SIZE;
fwnet_incoming_packet(dev, buf_ptr, length, source_node_id,
@ -1059,16 +981,27 @@ static int fwnet_send_packet(struct fwnet_packet_task *ptask)
u8 *p;
int generation;
int node_id;
unsigned int sw_version;
/* ptask->generation may not have been set yet */
generation = dev->card->generation;
smp_rmb();
node_id = dev->card->node_id;
switch (ptask->skb->protocol) {
default:
sw_version = RFC2734_SW_VERSION;
break;
#if IS_ENABLED(CONFIG_IPV6)
case htons(ETH_P_IPV6):
sw_version = RFC3146_SW_VERSION;
#endif
}
p = skb_push(ptask->skb, IEEE1394_GASP_HDR_SIZE);
put_unaligned_be32(node_id << 16 | IANA_SPECIFIER_ID >> 8, p);
put_unaligned_be32((IANA_SPECIFIER_ID & 0xff) << 24
| RFC2734_SW_VERSION, &p[4]);
| sw_version, &p[4]);
/* We should not transmit if broadcast_channel.valid == 0. */
fw_send_request(dev->card, &ptask->transaction,
@ -1116,6 +1049,62 @@ static int fwnet_send_packet(struct fwnet_packet_task *ptask)
return 0;
}
static void fwnet_fifo_stop(struct fwnet_device *dev)
{
if (dev->local_fifo == FWNET_NO_FIFO_ADDR)
return;
fw_core_remove_address_handler(&dev->handler);
dev->local_fifo = FWNET_NO_FIFO_ADDR;
}
static int fwnet_fifo_start(struct fwnet_device *dev)
{
int retval;
if (dev->local_fifo != FWNET_NO_FIFO_ADDR)
return 0;
dev->handler.length = 4096;
dev->handler.address_callback = fwnet_receive_packet;
dev->handler.callback_data = dev;
retval = fw_core_add_address_handler(&dev->handler,
&fw_high_memory_region);
if (retval < 0)
return retval;
dev->local_fifo = dev->handler.offset;
return 0;
}
static void __fwnet_broadcast_stop(struct fwnet_device *dev)
{
unsigned u;
if (dev->broadcast_state != FWNET_BROADCAST_ERROR) {
for (u = 0; u < FWNET_ISO_PAGE_COUNT; u++)
kunmap(dev->broadcast_rcv_buffer.pages[u]);
fw_iso_buffer_destroy(&dev->broadcast_rcv_buffer, dev->card);
}
if (dev->broadcast_rcv_context) {
fw_iso_context_destroy(dev->broadcast_rcv_context);
dev->broadcast_rcv_context = NULL;
}
kfree(dev->broadcast_rcv_buffer_ptrs);
dev->broadcast_rcv_buffer_ptrs = NULL;
dev->broadcast_state = FWNET_BROADCAST_ERROR;
}
static void fwnet_broadcast_stop(struct fwnet_device *dev)
{
if (dev->broadcast_state == FWNET_BROADCAST_ERROR)
return;
fw_iso_context_stop(dev->broadcast_rcv_context);
__fwnet_broadcast_stop(dev);
}
static int fwnet_broadcast_start(struct fwnet_device *dev)
{
struct fw_iso_context *context;
@ -1124,60 +1113,47 @@ static int fwnet_broadcast_start(struct fwnet_device *dev)
unsigned max_receive;
struct fw_iso_packet packet;
unsigned long offset;
void **ptrptr;
unsigned u;
if (dev->local_fifo == FWNET_NO_FIFO_ADDR) {
dev->handler.length = 4096;
dev->handler.address_callback = fwnet_receive_packet;
dev->handler.callback_data = dev;
retval = fw_core_add_address_handler(&dev->handler,
&fw_high_memory_region);
if (retval < 0)
goto failed_initial;
dev->local_fifo = dev->handler.offset;
}
if (dev->broadcast_state != FWNET_BROADCAST_ERROR)
return 0;
max_receive = 1U << (dev->card->max_receive + 1);
num_packets = (FWNET_ISO_PAGE_COUNT * PAGE_SIZE) / max_receive;
if (!dev->broadcast_rcv_context) {
void **ptrptr;
context = fw_iso_context_create(dev->card,
FW_ISO_CONTEXT_RECEIVE, IEEE1394_BROADCAST_CHANNEL,
dev->card->link_speed, 8, fwnet_receive_broadcast, dev);
if (IS_ERR(context)) {
retval = PTR_ERR(context);
goto failed_context_create;
}
retval = fw_iso_buffer_init(&dev->broadcast_rcv_buffer,
dev->card, FWNET_ISO_PAGE_COUNT, DMA_FROM_DEVICE);
if (retval < 0)
goto failed_buffer_init;
ptrptr = kmalloc(sizeof(void *) * num_packets, GFP_KERNEL);
if (!ptrptr) {
retval = -ENOMEM;
goto failed_ptrs_alloc;
}
dev->broadcast_rcv_buffer_ptrs = ptrptr;
for (u = 0; u < FWNET_ISO_PAGE_COUNT; u++) {
void *ptr;
unsigned v;
ptr = kmap(dev->broadcast_rcv_buffer.pages[u]);
for (v = 0; v < num_packets / FWNET_ISO_PAGE_COUNT; v++)
*ptrptr++ = (void *)
((char *)ptr + v * max_receive);
}
dev->broadcast_rcv_context = context;
} else {
context = dev->broadcast_rcv_context;
ptrptr = kmalloc(sizeof(void *) * num_packets, GFP_KERNEL);
if (!ptrptr) {
retval = -ENOMEM;
goto failed;
}
dev->broadcast_rcv_buffer_ptrs = ptrptr;
context = fw_iso_context_create(dev->card, FW_ISO_CONTEXT_RECEIVE,
IEEE1394_BROADCAST_CHANNEL,
dev->card->link_speed, 8,
fwnet_receive_broadcast, dev);
if (IS_ERR(context)) {
retval = PTR_ERR(context);
goto failed;
}
retval = fw_iso_buffer_init(&dev->broadcast_rcv_buffer, dev->card,
FWNET_ISO_PAGE_COUNT, DMA_FROM_DEVICE);
if (retval < 0)
goto failed;
dev->broadcast_state = FWNET_BROADCAST_STOPPED;
for (u = 0; u < FWNET_ISO_PAGE_COUNT; u++) {
void *ptr;
unsigned v;
ptr = kmap(dev->broadcast_rcv_buffer.pages[u]);
for (v = 0; v < num_packets / FWNET_ISO_PAGE_COUNT; v++)
*ptrptr++ = (void *) ((char *)ptr + v * max_receive);
}
dev->broadcast_rcv_context = context;
packet.payload_length = max_receive;
packet.interrupt = 1;
@ -1191,7 +1167,7 @@ static int fwnet_broadcast_start(struct fwnet_device *dev)
retval = fw_iso_context_queue(context, &packet,
&dev->broadcast_rcv_buffer, offset);
if (retval < 0)
goto failed_rcv_queue;
goto failed;
offset += max_receive;
}
@ -1201,7 +1177,7 @@ static int fwnet_broadcast_start(struct fwnet_device *dev)
retval = fw_iso_context_start(context, -1, 0,
FW_ISO_CONTEXT_MATCH_ALL_TAGS); /* ??? sync */
if (retval < 0)
goto failed_rcv_queue;
goto failed;
/* FIXME: adjust it according to the min. speed of all known peers? */
dev->broadcast_xmt_max_payload = IEEE1394_MAX_PAYLOAD_S100
@ -1210,19 +1186,8 @@ static int fwnet_broadcast_start(struct fwnet_device *dev)
return 0;
failed_rcv_queue:
kfree(dev->broadcast_rcv_buffer_ptrs);
dev->broadcast_rcv_buffer_ptrs = NULL;
failed_ptrs_alloc:
fw_iso_buffer_destroy(&dev->broadcast_rcv_buffer, dev->card);
failed_buffer_init:
fw_iso_context_destroy(context);
dev->broadcast_rcv_context = NULL;
failed_context_create:
fw_core_remove_address_handler(&dev->handler);
failed_initial:
dev->local_fifo = FWNET_NO_FIFO_ADDR;
failed:
__fwnet_broadcast_stop(dev);
return retval;
}
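
The restructuring above collapses the old ladder of one-label-per-step error exits (failed_rcv_queue, failed_ptrs_alloc, failed_buffer_init, ...) into a single `failed:` label backed by __fwnet_broadcast_stop(), whose teardown steps tolerate partially initialized state. The idiom in the abstract, with placeholder names:

static int setup(struct ctx *c)
{
        int err;

        err = step_a(c);
        if (err < 0)
                goto failed;

        err = step_b(c);
        if (err < 0)
                goto failed;

        return 0;

failed:
        teardown(c);    /* each sub-teardown checks whether its step ran */
        return err;
}
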
@ -1240,11 +1205,10 @@ static int fwnet_open(struct net_device *net)
struct fwnet_device *dev = netdev_priv(net);
int ret;
if (dev->broadcast_state == FWNET_BROADCAST_ERROR) {
ret = fwnet_broadcast_start(dev);
if (ret)
return ret;
}
ret = fwnet_broadcast_start(dev);
if (ret)
return ret;
netif_start_queue(net);
spin_lock_irq(&dev->lock);
@ -1257,9 +1221,10 @@ static int fwnet_open(struct net_device *net)
/* ifdown */
static int fwnet_stop(struct net_device *net)
{
netif_stop_queue(net);
struct fwnet_device *dev = netdev_priv(net);
/* Deallocate iso context for use by other applications? */
netif_stop_queue(net);
fwnet_broadcast_stop(dev);
return 0;
}
@ -1299,19 +1264,27 @@ static netdev_tx_t fwnet_tx(struct sk_buff *skb, struct net_device *net)
* We might need to rebuild the header on tx failure.
*/
memcpy(&hdr_buf, skb->data, sizeof(hdr_buf));
skb_pull(skb, sizeof(hdr_buf));
proto = hdr_buf.h_proto;
switch (proto) {
case htons(ETH_P_ARP):
case htons(ETH_P_IP):
#if IS_ENABLED(CONFIG_IPV6)
case htons(ETH_P_IPV6):
#endif
break;
default:
goto fail;
}
skb_pull(skb, sizeof(hdr_buf));
dg_size = skb->len;
/*
* Set the transmission type for the packet. ARP packets and IP
* broadcast packets are sent via GASP.
*/
if (memcmp(hdr_buf.h_dest, net->broadcast, FWNET_ALEN) == 0
|| proto == htons(ETH_P_ARP)
|| (proto == htons(ETH_P_IP)
&& IN_MULTICAST(ntohl(ip_hdr(skb)->daddr)))) {
if (fwnet_hwaddr_is_multicast(hdr_buf.h_dest)) {
max_payload = dev->broadcast_xmt_max_payload;
datagram_label_ptr = &dev->broadcast_xmt_datagramlabel;
@ -1320,11 +1293,12 @@ static netdev_tx_t fwnet_tx(struct sk_buff *skb, struct net_device *net)
ptask->dest_node = IEEE1394_ALL_NODES;
ptask->speed = SCODE_100;
} else {
__be64 guid = get_unaligned((__be64 *)hdr_buf.h_dest);
union fwnet_hwaddr *ha = (union fwnet_hwaddr *)hdr_buf.h_dest;
__be64 guid = get_unaligned(&ha->uc.uniq_id);
u8 generation;
peer = fwnet_peer_find_by_guid(dev, be64_to_cpu(guid));
if (!peer || peer->fifo == FWNET_NO_FIFO_ADDR)
if (!peer)
goto fail;
generation = peer->generation;
@ -1332,32 +1306,12 @@ static netdev_tx_t fwnet_tx(struct sk_buff *skb, struct net_device *net)
max_payload = peer->max_payload;
datagram_label_ptr = &peer->datagram_label;
ptask->fifo_addr = peer->fifo;
ptask->fifo_addr = fwnet_hwaddr_fifo(ha);
ptask->generation = generation;
ptask->dest_node = dest_node;
ptask->speed = peer->speed;
}
/* If this is an ARP packet, convert it */
if (proto == htons(ETH_P_ARP)) {
struct arphdr *arp = (struct arphdr *)skb->data;
unsigned char *arp_ptr = (unsigned char *)(arp + 1);
struct rfc2734_arp *arp1394 = (struct rfc2734_arp *)skb->data;
__be32 ipaddr;
ipaddr = get_unaligned((__be32 *)(arp_ptr + FWNET_ALEN));
arp1394->hw_addr_len = RFC2734_HW_ADDR_LEN;
arp1394->max_rec = dev->card->max_receive;
arp1394->sspd = dev->card->link_speed;
put_unaligned_be16(dev->local_fifo >> 32,
&arp1394->fifo_hi);
put_unaligned_be32(dev->local_fifo & 0xffffffff,
&arp1394->fifo_lo);
put_unaligned(ipaddr, &arp1394->sip);
}
ptask->hdr.w0 = 0;
ptask->hdr.w1 = 0;
ptask->skb = skb;
@ -1472,8 +1426,6 @@ static int fwnet_add_peer(struct fwnet_device *dev,
peer->dev = dev;
peer->guid = (u64)device->config_rom[3] << 32 | device->config_rom[4];
peer->fifo = FWNET_NO_FIFO_ADDR;
peer->ip = 0;
INIT_LIST_HEAD(&peer->pd_list);
peer->pdg_size = 0;
peer->datagram_label = 0;
@ -1503,6 +1455,7 @@ static int fwnet_probe(struct device *_dev)
struct fwnet_device *dev;
unsigned max_mtu;
int ret;
union fwnet_hwaddr *ha;
mutex_lock(&fwnet_device_mutex);
@ -1533,6 +1486,11 @@ static int fwnet_probe(struct device *_dev)
dev->card = card;
dev->netdev = net;
ret = fwnet_fifo_start(dev);
if (ret < 0)
goto out;
dev->local_fifo = dev->handler.offset;
/*
* Use the RFC 2734 default 1500 octets or the maximum payload
* as initial MTU
@ -1542,24 +1500,31 @@ static int fwnet_probe(struct device *_dev)
net->mtu = min(1500U, max_mtu);
/* Set our hardware address while we're at it */
put_unaligned_be64(card->guid, net->dev_addr);
put_unaligned_be64(~0ULL, net->broadcast);
ha = (union fwnet_hwaddr *)net->dev_addr;
put_unaligned_be64(card->guid, &ha->uc.uniq_id);
ha->uc.max_rec = dev->card->max_receive;
ha->uc.sspd = dev->card->link_speed;
put_unaligned_be16(dev->local_fifo >> 32, &ha->uc.fifo_hi);
put_unaligned_be32(dev->local_fifo & 0xffffffff, &ha->uc.fifo_lo);
memset(net->broadcast, -1, net->addr_len);
ret = register_netdev(net);
if (ret)
goto out;
list_add_tail(&dev->dev_link, &fwnet_device_list);
dev_notice(&net->dev, "IPv4 over IEEE 1394 on card %s\n",
dev_notice(&net->dev, "IP over IEEE 1394 on card %s\n",
dev_name(card->device));
have_dev:
ret = fwnet_add_peer(dev, unit, device);
if (ret && allocated_netdev) {
unregister_netdev(net);
list_del(&dev->dev_link);
}
out:
if (ret && allocated_netdev)
fwnet_fifo_stop(dev);
free_netdev(net);
}
mutex_unlock(&fwnet_device_mutex);
@ -1592,22 +1557,14 @@ static int fwnet_remove(struct device *_dev)
mutex_lock(&fwnet_device_mutex);
net = dev->netdev;
if (net && peer->ip)
arp_invalidate(net, peer->ip);
fwnet_remove_peer(peer, dev);
if (list_empty(&dev->peer_list)) {
unregister_netdev(net);
if (dev->local_fifo != FWNET_NO_FIFO_ADDR)
fw_core_remove_address_handler(&dev->handler);
if (dev->broadcast_rcv_context) {
fw_iso_context_stop(dev->broadcast_rcv_context);
fw_iso_buffer_destroy(&dev->broadcast_rcv_buffer,
dev->card);
fw_iso_context_destroy(dev->broadcast_rcv_context);
}
fwnet_fifo_stop(dev);
for (i = 0; dev->queued_datagrams && i < 5; i++)
ssleep(1);
WARN_ON(dev->queued_datagrams);
@ -1646,6 +1603,14 @@ static const struct ieee1394_device_id fwnet_id_table[] = {
.specifier_id = IANA_SPECIFIER_ID,
.version = RFC2734_SW_VERSION,
},
#if IS_ENABLED(CONFIG_IPV6)
{
.match_flags = IEEE1394_MATCH_SPECIFIER_ID |
IEEE1394_MATCH_VERSION,
.specifier_id = IANA_SPECIFIER_ID,
.version = RFC3146_SW_VERSION,
},
#endif
{ }
};
@ -1683,6 +1648,30 @@ static struct fw_descriptor rfc2374_unit_directory = {
.data = rfc2374_unit_directory_data
};
#if IS_ENABLED(CONFIG_IPV6)
static const u32 rfc3146_unit_directory_data[] = {
0x00040000, /* directory_length */
0x1200005e, /* unit_specifier_id: IANA */
0x81000003, /* textual descriptor offset */
0x13000002, /* unit_sw_version: RFC 3146 */
0x81000005, /* textual descriptor offset */
0x00030000, /* descriptor_length */
0x00000000, /* text */
0x00000000, /* minimal ASCII, en */
0x49414e41, /* I A N A */
0x00030000, /* descriptor_length */
0x00000000, /* text */
0x00000000, /* minimal ASCII, en */
0x49507636, /* I P v 6 */
};
static struct fw_descriptor rfc3146_unit_directory = {
.length = ARRAY_SIZE(rfc3146_unit_directory_data),
.key = (CSR_DIRECTORY | CSR_UNIT) << 24,
.data = rfc3146_unit_directory_data
};
#endif
static int __init fwnet_init(void)
{
int err;
@ -1691,11 +1680,17 @@ static int __init fwnet_init(void)
if (err)
return err;
#if IS_ENABLED(CONFIG_IPV6)
err = fw_core_add_descriptor(&rfc3146_unit_directory);
if (err)
goto out;
#endif
fwnet_packet_task_cache = kmem_cache_create("packet_task",
sizeof(struct fwnet_packet_task), 0, 0, NULL);
if (!fwnet_packet_task_cache) {
err = -ENOMEM;
goto out;
goto out2;
}
err = driver_register(&fwnet_driver.driver);
@ -1703,7 +1698,11 @@ static int __init fwnet_init(void)
return 0;
kmem_cache_destroy(fwnet_packet_task_cache);
out2:
#if IS_ENABLED(CONFIG_IPV6)
fw_core_remove_descriptor(&rfc3146_unit_directory);
out:
#endif
fw_core_remove_descriptor(&rfc2374_unit_directory);
return err;
@ -1714,11 +1713,14 @@ static void __exit fwnet_cleanup(void)
{
driver_unregister(&fwnet_driver.driver);
kmem_cache_destroy(fwnet_packet_task_cache);
#if IS_ENABLED(CONFIG_IPV6)
fw_core_remove_descriptor(&rfc3146_unit_directory);
#endif
fw_core_remove_descriptor(&rfc2374_unit_directory);
}
module_exit(fwnet_cleanup);
MODULE_AUTHOR("Jay Fenlason <fenlason@redhat.com>");
MODULE_DESCRIPTION("IPv4 over IEEE1394 as per RFC 2734");
MODULE_DESCRIPTION("IP over IEEE1394 as per RFC 2734/3146");
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(ieee1394, fwnet_id_table);


@ -511,12 +511,16 @@ static unsigned int select_ntuple(struct c4iw_dev *dev, struct dst_entry *dst,
static int send_connect(struct c4iw_ep *ep)
{
struct cpl_act_open_req *req;
struct cpl_t5_act_open_req *t5_req;
struct sk_buff *skb;
u64 opt0;
u32 opt2;
unsigned int mtu_idx;
int wscale;
int wrlen = roundup(sizeof *req, 16);
int size = is_t4(ep->com.dev->rdev.lldi.adapter_type) ?
sizeof(struct cpl_act_open_req) :
sizeof(struct cpl_t5_act_open_req);
int wrlen = roundup(size, 16);
PDBG("%s ep %p atid %u\n", __func__, ep, ep->atid);
@ -552,17 +556,36 @@ static int send_connect(struct c4iw_ep *ep)
opt2 |= WND_SCALE_EN(1);
t4_set_arp_err_handler(skb, NULL, act_open_req_arp_failure);
req = (struct cpl_act_open_req *) skb_put(skb, wrlen);
INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(
MK_OPCODE_TID(CPL_ACT_OPEN_REQ, ((ep->rss_qid<<14)|ep->atid)));
req->local_port = ep->com.local_addr.sin_port;
req->peer_port = ep->com.remote_addr.sin_port;
req->local_ip = ep->com.local_addr.sin_addr.s_addr;
req->peer_ip = ep->com.remote_addr.sin_addr.s_addr;
req->opt0 = cpu_to_be64(opt0);
req->params = cpu_to_be32(select_ntuple(ep->com.dev, ep->dst, ep->l2t));
req->opt2 = cpu_to_be32(opt2);
if (is_t4(ep->com.dev->rdev.lldi.adapter_type)) {
req = (struct cpl_act_open_req *) skb_put(skb, wrlen);
INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(
MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
((ep->rss_qid << 14) | ep->atid)));
req->local_port = ep->com.local_addr.sin_port;
req->peer_port = ep->com.remote_addr.sin_port;
req->local_ip = ep->com.local_addr.sin_addr.s_addr;
req->peer_ip = ep->com.remote_addr.sin_addr.s_addr;
req->opt0 = cpu_to_be64(opt0);
req->params = cpu_to_be32(select_ntuple(ep->com.dev,
ep->dst, ep->l2t));
req->opt2 = cpu_to_be32(opt2);
} else {
t5_req = (struct cpl_t5_act_open_req *) skb_put(skb, wrlen);
INIT_TP_WR(t5_req, 0);
OPCODE_TID(t5_req) = cpu_to_be32(
MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
((ep->rss_qid << 14) | ep->atid)));
t5_req->local_port = ep->com.local_addr.sin_port;
t5_req->peer_port = ep->com.remote_addr.sin_port;
t5_req->local_ip = ep->com.local_addr.sin_addr.s_addr;
t5_req->peer_ip = ep->com.remote_addr.sin_addr.s_addr;
t5_req->opt0 = cpu_to_be64(opt0);
t5_req->params = cpu_to_be64(V_FILTER_TUPLE(
select_ntuple(ep->com.dev, ep->dst, ep->l2t)));
t5_req->opt2 = cpu_to_be32(opt2);
}
set_bit(ACT_OPEN_REQ, &ep->com.history);
return c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
}
@ -1676,9 +1699,9 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
case CPL_ERR_CONN_TIMEDOUT:
break;
case CPL_ERR_TCAM_FULL:
dev->rdev.stats.tcam_full++;
if (dev->rdev.lldi.enable_fw_ofld_conn) {
mutex_lock(&dev->rdev.stats.lock);
dev->rdev.stats.tcam_full++;
mutex_unlock(&dev->rdev.stats.lock);
send_fw_act_open_req(ep,
GET_TID_TID(GET_AOPEN_ATID(
@ -2875,12 +2898,14 @@ static int deferred_fw6_msg(struct c4iw_dev *dev, struct sk_buff *skb)
static void build_cpl_pass_accept_req(struct sk_buff *skb, int stid , u8 tos)
{
u32 l2info;
u16 vlantag, len, hdr_len;
u16 vlantag, len, hdr_len, eth_hdr_len;
u8 intf;
struct cpl_rx_pkt *cpl = cplhdr(skb);
struct cpl_pass_accept_req *req;
struct tcp_options_received tmp_opt;
struct c4iw_dev *dev;
dev = *((struct c4iw_dev **) (skb->cb + sizeof(void *)));
/* Store values from cpl_rx_pkt in temporary location. */
vlantag = (__force u16) cpl->vlan;
len = (__force u16) cpl->len;
@ -2896,7 +2921,7 @@ static void build_cpl_pass_accept_req(struct sk_buff *skb, int stid , u8 tos)
*/
memset(&tmp_opt, 0, sizeof(tmp_opt));
tcp_clear_options(&tmp_opt);
tcp_parse_options(skb, &tmp_opt, NULL, 0, NULL);
tcp_parse_options(skb, &tmp_opt, 0, NULL);
req = (struct cpl_pass_accept_req *)__skb_push(skb, sizeof(*req));
memset(req, 0, sizeof(*req));
@ -2904,14 +2929,16 @@ static void build_cpl_pass_accept_req(struct sk_buff *skb, int stid , u8 tos)
V_SYN_MAC_IDX(G_RX_MACIDX(
(__force int) htonl(l2info))) |
F_SYN_XACT_MATCH);
eth_hdr_len = is_t4(dev->rdev.lldi.adapter_type) ?
G_RX_ETHHDR_LEN((__force int) htonl(l2info)) :
G_RX_T5_ETHHDR_LEN((__force int) htonl(l2info));
req->hdr_len = cpu_to_be32(V_SYN_RX_CHAN(G_RX_CHAN(
(__force int) htonl(l2info))) |
V_TCP_HDR_LEN(G_RX_TCPHDR_LEN(
(__force int) htons(hdr_len))) |
V_IP_HDR_LEN(G_RX_IPHDR_LEN(
(__force int) htons(hdr_len))) |
V_ETH_HDR_LEN(G_RX_ETHHDR_LEN(
(__force int) htonl(l2info))));
V_ETH_HDR_LEN(G_RX_ETHHDR_LEN(eth_hdr_len)));
req->vlan = (__force __be16) vlantag;
req->len = (__force __be16) len;
req->tos_stid = cpu_to_be32(PASS_OPEN_TID(stid) |
@ -2999,7 +3026,7 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb)
u16 window;
struct port_info *pi;
struct net_device *pdev;
u16 rss_qid;
u16 rss_qid, eth_hdr_len;
int step;
u32 tx_chan;
struct neighbour *neigh;
@ -3028,7 +3055,10 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb)
goto reject;
}
if (G_RX_ETHHDR_LEN(ntohl(cpl->l2info)) == ETH_HLEN) {
eth_hdr_len = is_t4(dev->rdev.lldi.adapter_type) ?
G_RX_ETHHDR_LEN(htonl(cpl->l2info)) :
G_RX_T5_ETHHDR_LEN(htonl(cpl->l2info));
if (eth_hdr_len == ETH_HLEN) {
eh = (struct ethhdr *)(req + 1);
iph = (struct iphdr *)(eh + 1);
} else {

View File

@ -41,10 +41,20 @@
#define DRV_VERSION "0.1"
MODULE_AUTHOR("Steve Wise");
MODULE_DESCRIPTION("Chelsio T4 RDMA Driver");
MODULE_DESCRIPTION("Chelsio T4/T5 RDMA Driver");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(DRV_VERSION);
static int allow_db_fc_on_t5;
module_param(allow_db_fc_on_t5, int, 0644);
MODULE_PARM_DESC(allow_db_fc_on_t5,
"Allow DB Flow Control on T5 (default = 0)");
static int allow_db_coalescing_on_t5;
module_param(allow_db_coalescing_on_t5, int, 0644);
MODULE_PARM_DESC(allow_db_coalescing_on_t5,
"Allow DB Coalescing on T5 (default = 0)");
struct uld_ctx {
struct list_head entry;
struct cxgb4_lld_info lldi;
@ -614,7 +624,7 @@ static int rdma_supported(const struct cxgb4_lld_info *infop)
{
return infop->vr->stag.size > 0 && infop->vr->pbl.size > 0 &&
infop->vr->rq.size > 0 && infop->vr->qp.size > 0 &&
infop->vr->cq.size > 0 && infop->vr->ocq.size > 0;
infop->vr->cq.size > 0;
}
static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop)
@ -627,6 +637,22 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop)
pci_name(infop->pdev));
return ERR_PTR(-ENOSYS);
}
if (!ocqp_supported(infop))
pr_info("%s: On-Chip Queues not supported on this device.\n",
pci_name(infop->pdev));
if (!is_t4(infop->adapter_type)) {
if (!allow_db_fc_on_t5) {
db_fc_threshold = 100000;
pr_info("DB Flow Control Disabled.\n");
}
if (!allow_db_coalescing_on_t5) {
db_coalescing_threshold = -1;
pr_info("DB Coalescing Disabled.\n");
}
}
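
Unless the new module parameters say otherwise, T5 adapters get conservative defaults here: doorbell flow control is neutralized by an unreachable QP threshold and coalescing by a negative sentinel. A standalone sketch of that gating, with the threshold values taken from this series:

#include <stdio.h>

int main(void)
{
	/* module parameters, both defaulting to 0 as in the diff */
	int allow_db_fc_on_t5 = 0;
	int allow_db_coalescing_on_t5 = 0;
	/* defaults from qp.c before the T5 adjustment */
	int db_fc_threshold = 1000;
	int db_coalescing_threshold = 0;

	if (!allow_db_fc_on_t5)
		db_fc_threshold = 100000;	/* high enough to never trip */
	if (!allow_db_coalescing_on_t5)
		db_coalescing_threshold = -1;	/* negative disables the check */

	printf("db_fc_threshold=%d db_coalescing_threshold=%d\n",
	       db_fc_threshold, db_coalescing_threshold);
	return 0;
}
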
devp = (struct c4iw_dev *)ib_alloc_device(sizeof(*devp));
if (!devp) {
printk(KERN_ERR MOD "Cannot allocate ib device\n");
@ -678,8 +704,8 @@ static void *c4iw_uld_add(const struct cxgb4_lld_info *infop)
int i;
if (!vers_printed++)
printk(KERN_INFO MOD "Chelsio T4 RDMA Driver - version %s\n",
DRV_VERSION);
pr_info("Chelsio T4/T5 RDMA Driver - version %s\n",
DRV_VERSION);
ctx = kzalloc(sizeof *ctx, GFP_KERNEL);
if (!ctx) {

View File

@ -162,7 +162,7 @@ static inline int c4iw_num_stags(struct c4iw_rdev *rdev)
return min((int)T4_MAX_NUM_STAG, (int)(rdev->lldi.vr->stag.size >> 5));
}
#define C4IW_WR_TO (10*HZ)
#define C4IW_WR_TO (30*HZ)
struct c4iw_wr_wait {
struct completion completion;
@ -369,7 +369,6 @@ struct c4iw_fr_page_list {
DEFINE_DMA_UNMAP_ADDR(mapping);
dma_addr_t dma_addr;
struct c4iw_dev *dev;
int size;
};
static inline struct c4iw_fr_page_list *to_c4iw_fr_page_list(
@ -817,6 +816,15 @@ static inline int compute_wscale(int win)
return wscale;
}
static inline int ocqp_supported(const struct cxgb4_lld_info *infop)
{
#if defined(__i386__) || defined(__x86_64__) || defined(CONFIG_PPC64)
return infop->vr->ocq.size > 0;
#else
return 0;
#endif
}
u32 c4iw_id_alloc(struct c4iw_id_table *alloc);
void c4iw_id_free(struct c4iw_id_table *alloc, u32 obj);
int c4iw_id_table_alloc(struct c4iw_id_table *alloc, u32 start, u32 num,
@ -930,6 +938,8 @@ extern struct cxgb4_client t4c_client;
extern c4iw_handler_func c4iw_handlers[NUM_CPL_CMDS];
extern int c4iw_max_read_depth;
extern int db_fc_threshold;
extern int db_coalescing_threshold;
extern int use_dsgl;
#endif

View File

@ -30,16 +30,76 @@
* SOFTWARE.
*/
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <rdma/ib_umem.h>
#include <linux/atomic.h>
#include "iw_cxgb4.h"
int use_dsgl = 1;
module_param(use_dsgl, int, 0644);
MODULE_PARM_DESC(use_dsgl, "Use DSGL for PBL/FastReg (default=1)");
#define T4_ULPTX_MIN_IO 32
#define C4IW_MAX_INLINE_SIZE 96
#define T4_ULPTX_MAX_DMA 1024
#define C4IW_INLINE_THRESHOLD 128
static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
void *data)
static int inline_threshold = C4IW_INLINE_THRESHOLD;
module_param(inline_threshold, int, 0644);
MODULE_PARM_DESC(inline_threshold, "inline vs dsgl threshold (default=128)");
static int _c4iw_write_mem_dma_aligned(struct c4iw_rdev *rdev, u32 addr,
u32 len, dma_addr_t data, int wait)
{
struct sk_buff *skb;
struct ulp_mem_io *req;
struct ulptx_sgl *sgl;
u8 wr_len;
int ret = 0;
struct c4iw_wr_wait wr_wait;
addr &= 0x7FFFFFF;
if (wait)
c4iw_init_wr_wait(&wr_wait);
wr_len = roundup(sizeof(*req) + sizeof(*sgl), 16);
skb = alloc_skb(wr_len, GFP_KERNEL | __GFP_NOFAIL);
if (!skb)
return -ENOMEM;
set_wr_txq(skb, CPL_PRIORITY_CONTROL, 0);
req = (struct ulp_mem_io *)__skb_put(skb, wr_len);
memset(req, 0, wr_len);
INIT_ULPTX_WR(req, wr_len, 0, 0);
req->wr.wr_hi = cpu_to_be32(FW_WR_OP(FW_ULPTX_WR) |
(wait ? FW_WR_COMPL(1) : 0));
req->wr.wr_lo = wait ? (__force __be64)&wr_wait : 0;
req->wr.wr_mid = cpu_to_be32(FW_WR_LEN16(DIV_ROUND_UP(wr_len, 16)));
req->cmd = cpu_to_be32(ULPTX_CMD(ULP_TX_MEM_WRITE));
req->cmd |= cpu_to_be32(V_T5_ULP_MEMIO_ORDER(1));
req->dlen = cpu_to_be32(ULP_MEMIO_DATA_LEN(len>>5));
req->len16 = cpu_to_be32(DIV_ROUND_UP(wr_len-sizeof(req->wr), 16));
req->lock_addr = cpu_to_be32(ULP_MEMIO_ADDR(addr));
sgl = (struct ulptx_sgl *)(req + 1);
sgl->cmd_nsge = cpu_to_be32(ULPTX_CMD(ULP_TX_SC_DSGL) |
ULPTX_NSGE(1));
sgl->len0 = cpu_to_be32(len);
sgl->addr0 = cpu_to_be64(data);
ret = c4iw_ofld_send(rdev, skb);
if (ret)
return ret;
if (wait)
ret = c4iw_wait_for_reply(rdev, &wr_wait, 0, 0, __func__);
return ret;
}
static int _c4iw_write_mem_inline(struct c4iw_rdev *rdev, u32 addr, u32 len,
void *data)
{
struct sk_buff *skb;
struct ulp_mem_io *req;
@ -47,6 +107,12 @@ static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
u8 wr_len, *to_dp, *from_dp;
int copy_len, num_wqe, i, ret = 0;
struct c4iw_wr_wait wr_wait;
__be32 cmd = cpu_to_be32(ULPTX_CMD(ULP_TX_MEM_WRITE));
if (is_t4(rdev->lldi.adapter_type))
cmd |= cpu_to_be32(ULP_MEMIO_ORDER(1));
else
cmd |= cpu_to_be32(V_T5_ULP_MEMIO_IMM(1));
addr &= 0x7FFFFFF;
PDBG("%s addr 0x%x len %u\n", __func__, addr, len);
@ -77,7 +143,7 @@ static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
req->wr.wr_mid = cpu_to_be32(
FW_WR_LEN16(DIV_ROUND_UP(wr_len, 16)));
req->cmd = cpu_to_be32(ULPTX_CMD(ULP_TX_MEM_WRITE) | (1<<23));
req->cmd = cmd;
req->dlen = cpu_to_be32(ULP_MEMIO_DATA_LEN(
DIV_ROUND_UP(copy_len, T4_ULPTX_MIN_IO)));
req->len16 = cpu_to_be32(DIV_ROUND_UP(wr_len-sizeof(req->wr),
@ -107,6 +173,67 @@ static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
return ret;
}
int _c4iw_write_mem_dma(struct c4iw_rdev *rdev, u32 addr, u32 len, void *data)
{
u32 remain = len;
u32 dmalen;
int ret = 0;
dma_addr_t daddr;
dma_addr_t save;
daddr = dma_map_single(&rdev->lldi.pdev->dev, data, len, DMA_TO_DEVICE);
if (dma_mapping_error(&rdev->lldi.pdev->dev, daddr))
return -1;
save = daddr;
while (remain > inline_threshold) {
if (remain < T4_ULPTX_MAX_DMA) {
if (remain & ~T4_ULPTX_MIN_IO)
dmalen = remain & ~(T4_ULPTX_MIN_IO-1);
else
dmalen = remain;
} else
dmalen = T4_ULPTX_MAX_DMA;
remain -= dmalen;
ret = _c4iw_write_mem_dma_aligned(rdev, addr, dmalen, daddr,
!remain);
if (ret)
goto out;
addr += dmalen >> 5;
data += dmalen;
daddr += dmalen;
}
if (remain)
ret = _c4iw_write_mem_inline(rdev, addr, remain, data);
out:
dma_unmap_single(&rdev->lldi.pdev->dev, save, len, DMA_TO_DEVICE);
return ret;
}
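
_c4iw_write_mem_dma() splits a write into at most 1024-byte DSGL chunks, rounding intermediate chunks down to the 32-byte minimum I/O unit and pushing any tail at or below inline_threshold through the inline path. A standalone sketch of the arithmetic, with the thresholds copied from the constants above and the T4_ULPTX_MIN_IO special case simplified to a plain round-down:

#include <stdio.h>

#define INLINE_THRESHOLD 128	/* C4IW_INLINE_THRESHOLD */
#define MAX_DMA		1024	/* T4_ULPTX_MAX_DMA */
#define MIN_IO		32	/* T4_ULPTX_MIN_IO */

int main(void)
{
	unsigned int remain = 2500, dmalen;

	while (remain > INLINE_THRESHOLD) {
		if (remain < MAX_DMA)
			dmalen = remain & ~(MIN_IO - 1);	/* round down to 32B */
		else
			dmalen = MAX_DMA;
		printf("DMA chunk: %u bytes\n", dmalen);
		remain -= dmalen;
	}
	if (remain)
		printf("inline tail: %u bytes\n", remain);	/* 2500 -> 1024+1024+448, tail 4 */
	return 0;
}
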
/*
* write len bytes of data into addr (32B aligned address)
* If data is NULL, clear len byte of memory to zero.
*/
static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
void *data)
{
if (is_t5(rdev->lldi.adapter_type) && use_dsgl) {
if (len > inline_threshold) {
if (_c4iw_write_mem_dma(rdev, addr, len, data)) {
printk_ratelimited(KERN_WARNING
"%s: dma map"
" failure (non fatal)\n",
pci_name(rdev->lldi.pdev));
return _c4iw_write_mem_inline(rdev, addr, len,
data);
} else
return 0;
} else
return _c4iw_write_mem_inline(rdev, addr, len, data);
} else
return _c4iw_write_mem_inline(rdev, addr, len, data);
}
/*
* Build and write a TPT entry.
* IN: stag key, pdid, perm, bind_enabled, zbva, to, len, page_size,
@ -760,19 +887,23 @@ struct ib_fast_reg_page_list *c4iw_alloc_fastreg_pbl(struct ib_device *device,
struct c4iw_fr_page_list *c4pl;
struct c4iw_dev *dev = to_c4iw_dev(device);
dma_addr_t dma_addr;
int size = sizeof *c4pl + page_list_len * sizeof(u64);
int pll_len = roundup(page_list_len * sizeof(u64), 32);
c4pl = dma_alloc_coherent(&dev->rdev.lldi.pdev->dev, size,
&dma_addr, GFP_KERNEL);
c4pl = kmalloc(sizeof(*c4pl), GFP_KERNEL);
if (!c4pl)
return ERR_PTR(-ENOMEM);
c4pl->ibpl.page_list = dma_alloc_coherent(&dev->rdev.lldi.pdev->dev,
pll_len, &dma_addr,
GFP_KERNEL);
if (!c4pl->ibpl.page_list) {
kfree(c4pl);
return ERR_PTR(-ENOMEM);
}
dma_unmap_addr_set(c4pl, mapping, dma_addr);
c4pl->dma_addr = dma_addr;
c4pl->dev = dev;
c4pl->size = size;
c4pl->ibpl.page_list = (u64 *)(c4pl + 1);
c4pl->ibpl.max_page_list_len = page_list_len;
c4pl->ibpl.max_page_list_len = pll_len;
return &c4pl->ibpl;
}
@ -781,8 +912,10 @@ void c4iw_free_fastreg_pbl(struct ib_fast_reg_page_list *ibpl)
{
struct c4iw_fr_page_list *c4pl = to_c4iw_fr_page_list(ibpl);
dma_free_coherent(&c4pl->dev->rdev.lldi.pdev->dev, c4pl->size,
c4pl, dma_unmap_addr(c4pl, mapping));
dma_free_coherent(&c4pl->dev->rdev.lldi.pdev->dev,
c4pl->ibpl.max_page_list_len,
c4pl->ibpl.page_list, dma_unmap_addr(c4pl, mapping));
kfree(c4pl);
}
int c4iw_dereg_mr(struct ib_mr *ib_mr)

View File

@ -162,8 +162,14 @@ static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
*/
if (addr >= rdev->oc_mw_pa)
vma->vm_page_prot = t4_pgprot_wc(vma->vm_page_prot);
else
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
else {
if (is_t5(rdev->lldi.adapter_type))
vma->vm_page_prot =
t4_pgprot_wc(vma->vm_page_prot);
else
vma->vm_page_prot =
pgprot_noncached(vma->vm_page_prot);
}
ret = io_remap_pfn_range(vma, vma->vm_start,
addr >> PAGE_SHIFT,
len, vma->vm_page_prot);
@ -263,7 +269,7 @@ static int c4iw_query_device(struct ib_device *ibdev,
dev = to_c4iw_dev(ibdev);
memset(props, 0, sizeof *props);
memcpy(&props->sys_image_guid, dev->rdev.lldi.ports[0]->dev_addr, 6);
props->hw_ver = dev->rdev.lldi.adapter_type;
props->hw_ver = CHELSIO_CHIP_RELEASE(dev->rdev.lldi.adapter_type);
props->fw_ver = dev->rdev.lldi.fw_vers;
props->device_cap_flags = dev->device_cap_flags;
props->page_size_cap = T4_PAGESIZE_MASK;
@ -346,7 +352,8 @@ static ssize_t show_rev(struct device *dev, struct device_attribute *attr,
struct c4iw_dev *c4iw_dev = container_of(dev, struct c4iw_dev,
ibdev.dev);
PDBG("%s dev 0x%p\n", __func__, dev);
return sprintf(buf, "%d\n", c4iw_dev->rdev.lldi.adapter_type);
return sprintf(buf, "%d\n",
CHELSIO_CHIP_RELEASE(c4iw_dev->rdev.lldi.adapter_type));
}
static ssize_t show_fw_ver(struct device *dev, struct device_attribute *attr,

View File

@ -42,10 +42,21 @@ static int ocqp_support = 1;
module_param(ocqp_support, int, 0644);
MODULE_PARM_DESC(ocqp_support, "Support on-chip SQs (default=1)");
int db_fc_threshold = 2000;
int db_fc_threshold = 1000;
module_param(db_fc_threshold, int, 0644);
MODULE_PARM_DESC(db_fc_threshold, "QP count/threshold that triggers automatic "
"db flow control mode (default = 2000)");
MODULE_PARM_DESC(db_fc_threshold,
"QP count/threshold that triggers"
" automatic db flow control mode (default = 1000)");
int db_coalescing_threshold;
module_param(db_coalescing_threshold, int, 0644);
MODULE_PARM_DESC(db_coalescing_threshold,
"QP count/threshold that triggers"
" disabling db coalescing (default = 0)");
static int max_fr_immd = T4_MAX_FR_IMMD;
module_param(max_fr_immd, int, 0644);
MODULE_PARM_DESC(max_fr_immd, "fastreg threshold for using DSGL instead of immediate");
static void set_state(struct c4iw_qp *qhp, enum c4iw_qp_state state)
{
@ -76,7 +87,7 @@ static void dealloc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
static int alloc_oc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
{
if (!ocqp_support || !t4_ocqp_supported())
if (!ocqp_support || !ocqp_supported(&rdev->lldi))
return -ENOSYS;
sq->dma_addr = c4iw_ocqp_pool_alloc(rdev, sq->memsize);
if (!sq->dma_addr)
@ -129,7 +140,7 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
int wr_len;
struct c4iw_wr_wait wr_wait;
struct sk_buff *skb;
int ret;
int ret = 0;
int eqsize;
wq->sq.qid = c4iw_get_qpid(rdev, uctx);
@ -169,17 +180,14 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
}
if (user) {
ret = alloc_oc_sq(rdev, &wq->sq);
if (alloc_oc_sq(rdev, &wq->sq) && alloc_host_sq(rdev, &wq->sq))
goto free_hwaddr;
} else {
ret = alloc_host_sq(rdev, &wq->sq);
if (ret)
goto free_hwaddr;
}
ret = alloc_host_sq(rdev, &wq->sq);
if (ret)
goto free_sq;
} else
ret = alloc_host_sq(rdev, &wq->sq);
if (ret)
goto free_hwaddr;
memset(wq->sq.queue, 0, wq->sq.memsize);
dma_unmap_addr_set(&wq->sq, mapping, wq->sq.dma_addr);
@ -534,7 +542,7 @@ static int build_rdma_recv(struct c4iw_qp *qhp, union t4_recv_wr *wqe,
}
static int build_fastreg(struct t4_sq *sq, union t4_wr *wqe,
struct ib_send_wr *wr, u8 *len16)
struct ib_send_wr *wr, u8 *len16, u8 t5dev)
{
struct fw_ri_immd *imdp;
@ -556,28 +564,51 @@ static int build_fastreg(struct t4_sq *sq, union t4_wr *wqe,
wqe->fr.va_hi = cpu_to_be32(wr->wr.fast_reg.iova_start >> 32);
wqe->fr.va_lo_fbo = cpu_to_be32(wr->wr.fast_reg.iova_start &
0xffffffff);
WARN_ON(pbllen > T4_MAX_FR_IMMD);
imdp = (struct fw_ri_immd *)(&wqe->fr + 1);
imdp->op = FW_RI_DATA_IMMD;
imdp->r1 = 0;
imdp->r2 = 0;
imdp->immdlen = cpu_to_be32(pbllen);
p = (__be64 *)(imdp + 1);
rem = pbllen;
for (i = 0; i < wr->wr.fast_reg.page_list_len; i++) {
*p = cpu_to_be64((u64)wr->wr.fast_reg.page_list->page_list[i]);
rem -= sizeof *p;
if (++p == (__be64 *)&sq->queue[sq->size])
p = (__be64 *)sq->queue;
if (t5dev && use_dsgl && (pbllen > max_fr_immd)) {
struct c4iw_fr_page_list *c4pl =
to_c4iw_fr_page_list(wr->wr.fast_reg.page_list);
struct fw_ri_dsgl *sglp;
for (i = 0; i < wr->wr.fast_reg.page_list_len; i++) {
wr->wr.fast_reg.page_list->page_list[i] = (__force u64)
cpu_to_be64((u64)
wr->wr.fast_reg.page_list->page_list[i]);
}
sglp = (struct fw_ri_dsgl *)(&wqe->fr + 1);
sglp->op = FW_RI_DATA_DSGL;
sglp->r1 = 0;
sglp->nsge = cpu_to_be16(1);
sglp->addr0 = cpu_to_be64(c4pl->dma_addr);
sglp->len0 = cpu_to_be32(pbllen);
*len16 = DIV_ROUND_UP(sizeof(wqe->fr) + sizeof(*sglp), 16);
} else {
imdp = (struct fw_ri_immd *)(&wqe->fr + 1);
imdp->op = FW_RI_DATA_IMMD;
imdp->r1 = 0;
imdp->r2 = 0;
imdp->immdlen = cpu_to_be32(pbllen);
p = (__be64 *)(imdp + 1);
rem = pbllen;
for (i = 0; i < wr->wr.fast_reg.page_list_len; i++) {
*p = cpu_to_be64(
(u64)wr->wr.fast_reg.page_list->page_list[i]);
rem -= sizeof(*p);
if (++p == (__be64 *)&sq->queue[sq->size])
p = (__be64 *)sq->queue;
}
BUG_ON(rem < 0);
while (rem) {
*p = 0;
rem -= sizeof(*p);
if (++p == (__be64 *)&sq->queue[sq->size])
p = (__be64 *)sq->queue;
}
*len16 = DIV_ROUND_UP(sizeof(wqe->fr) + sizeof(*imdp)
+ pbllen, 16);
}
BUG_ON(rem < 0);
while (rem) {
*p = 0;
rem -= sizeof *p;
if (++p == (__be64 *)&sq->queue[sq->size])
p = (__be64 *)sq->queue;
}
*len16 = DIV_ROUND_UP(sizeof wqe->fr + sizeof *imdp + pbllen, 16);
return 0;
}
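
build_fastreg() now has two encodings: on T5 with use_dsgl set, a page list larger than max_fr_immd is handed to the hardware as a single DSGL gather entry instead of being copied into the work request. A standalone sketch of the dispatch, with an illustrative max_fr_immd value:

#include <stdio.h>

int main(void)
{
	int t5dev = 1, use_dsgl = 1;
	int max_fr_immd = 448;		/* illustrative; the real default is T4_MAX_FR_IMMD */
	int pbllen = 1024;		/* bytes of page-list data to register */

	if (t5dev && use_dsgl && pbllen > max_fr_immd)
		printf("FW_RI_DATA_DSGL: one SGE pointing at %d DMA-mapped bytes\n", pbllen);
	else
		printf("FW_RI_DATA_IMMD: %d bytes copied into the WR\n", pbllen);
	return 0;
}
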
@ -678,7 +709,10 @@ int c4iw_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
case IB_WR_FAST_REG_MR:
fw_opcode = FW_RI_FR_NSMR_WR;
swsqe->opcode = FW_RI_FAST_REGISTER;
err = build_fastreg(&qhp->wq.sq, wqe, wr, &len16);
err = build_fastreg(&qhp->wq.sq, wqe, wr, &len16,
is_t5(
qhp->rhp->rdev.lldi.adapter_type) ?
1 : 0);
break;
case IB_WR_LOCAL_INV:
if (wr->send_flags & IB_SEND_FENCE)
@ -1450,6 +1484,9 @@ int c4iw_destroy_qp(struct ib_qp *ib_qp)
rhp->db_state = NORMAL;
idr_for_each(&rhp->qpidr, enable_qp_db, NULL);
}
if (db_coalescing_threshold >= 0)
if (rhp->qpcnt <= db_coalescing_threshold)
cxgb4_enable_db_coalescing(rhp->rdev.lldi.ports[0]);
spin_unlock_irq(&rhp->lock);
atomic_dec(&qhp->refcnt);
wait_event(qhp->wait, !atomic_read(&qhp->refcnt));
@ -1561,11 +1598,15 @@ struct ib_qp *c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attrs,
spin_lock_irq(&rhp->lock);
if (rhp->db_state != NORMAL)
t4_disable_wq_db(&qhp->wq);
if (++rhp->qpcnt > db_fc_threshold && rhp->db_state == NORMAL) {
rhp->qpcnt++;
if (rhp->qpcnt > db_fc_threshold && rhp->db_state == NORMAL) {
rhp->rdev.stats.db_state_transitions++;
rhp->db_state = FLOW_CONTROL;
idr_for_each(&rhp->qpidr, disable_qp_db, NULL);
}
if (db_coalescing_threshold >= 0)
if (rhp->qpcnt > db_coalescing_threshold)
cxgb4_disable_db_coalescing(rhp->rdev.lldi.ports[0]);
ret = insert_handle_nolock(rhp, &rhp->qpidr, qhp, qhp->wq.sq.qid);
spin_unlock_irq(&rhp->lock);
if (ret)

View File

@ -84,7 +84,7 @@ struct t4_status_page {
sizeof(struct fw_ri_isgl)) / sizeof(struct fw_ri_sge))
#define T4_MAX_FR_IMMD ((T4_SQ_NUM_BYTES - sizeof(struct fw_ri_fr_nsmr_wr) - \
sizeof(struct fw_ri_immd)) & ~31UL)
#define T4_MAX_FR_DEPTH (T4_MAX_FR_IMMD / sizeof(u64))
#define T4_MAX_FR_DEPTH (1024 / sizeof(u64))
#define T4_RQ_NUM_SLOTS 2
#define T4_RQ_NUM_BYTES (T4_EQ_ENTRY_SIZE * T4_RQ_NUM_SLOTS)
@ -280,15 +280,6 @@ static inline pgprot_t t4_pgprot_wc(pgprot_t prot)
#endif
}
static inline int t4_ocqp_supported(void)
{
#if defined(__i386__) || defined(__x86_64__) || defined(CONFIG_PPC64)
return 1;
#else
return 0;
#endif
}
enum {
T4_SQ_ONCHIP = (1<<0),
};

View File

@ -228,7 +228,7 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, int entries, int vector
vector = dev->eq_table[vector % ibdev->num_comp_vectors];
err = mlx4_cq_alloc(dev->dev, entries, &cq->buf.mtt, uar,
cq->db.dma, &cq->mcq, vector, 0);
cq->db.dma, &cq->mcq, vector, 0, 0);
if (err)
goto err_dbmap;

View File

@ -2948,7 +2948,7 @@ void nes_nic_ce_handler(struct nes_device *nesdev, struct nes_hw_nic_cq *cq)
nes_debug(NES_DBG_CQ, "%s: Reporting stripped VLAN packet. Tag = 0x%04X\n",
nesvnic->netdev->name, vlan_tag);
__vlan_hwaccel_put_tag(rx_skb, vlan_tag);
__vlan_hwaccel_put_tag(rx_skb, htons(ETH_P_8021Q), vlan_tag);
}
if (nes_use_lro)
lro_receive_skb(&nesvnic->lro_mgr, rx_skb, NULL);
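
This is one instance of the change giving the VLAN acceleration helpers an explicit tag-protocol argument; callers that previously implied 802.1Q now pass htons(ETH_P_8021Q), and the same pattern shows up again in the bonding hunks below. A standalone sketch of what that argument carries — the TPID that precedes the PCP/DEI/VID word in an inserted tag:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
	uint16_t proto = htons(0x8100);		/* ETH_P_8021Q — the new argument */
	uint16_t vid = 100;
	uint16_t tci = htons(vid & 0x0fff);	/* PCP/DEI zero, 12-bit VID */

	printf("inserted tag: TPID %#06x TCI %#06x\n", ntohs(proto), ntohs(tci));
	return 0;
}
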

View File

@ -1599,7 +1599,7 @@ static void nes_vlan_mode(struct net_device *netdev, struct nes_device *nesdev,
/* Enable/Disable VLAN Stripping */
u32temp = nes_read_indexed(nesdev, NES_IDX_PCIX_DIAG);
if (features & NETIF_F_HW_VLAN_RX)
if (features & NETIF_F_HW_VLAN_CTAG_RX)
u32temp &= 0xfdffffff;
else
u32temp |= 0x02000000;
@ -1614,10 +1614,10 @@ static netdev_features_t nes_fix_features(struct net_device *netdev, netdev_feat
* Since there is no support for separate rx/tx vlan accel
* enable/disable make sure tx flag is always in same state as rx.
*/
if (features & NETIF_F_HW_VLAN_RX)
features |= NETIF_F_HW_VLAN_TX;
if (features & NETIF_F_HW_VLAN_CTAG_RX)
features |= NETIF_F_HW_VLAN_CTAG_TX;
else
features &= ~NETIF_F_HW_VLAN_TX;
features &= ~NETIF_F_HW_VLAN_CTAG_TX;
return features;
}
@ -1628,7 +1628,7 @@ static int nes_set_features(struct net_device *netdev, netdev_features_t feature
struct nes_device *nesdev = nesvnic->nesdev;
u32 changed = netdev->features ^ features;
if (changed & NETIF_F_HW_VLAN_RX)
if (changed & NETIF_F_HW_VLAN_CTAG_RX)
nes_vlan_mode(netdev, nesdev, features);
return 0;
@ -1706,11 +1706,11 @@ struct net_device *nes_netdev_init(struct nes_device *nesdev,
netdev->dev_addr[4] = (u8)(u64temp>>8);
netdev->dev_addr[5] = (u8)u64temp;
netdev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_RXCSUM | NETIF_F_HW_VLAN_RX;
netdev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_RX;
if ((nesvnic->logical_port < 2) || (nesdev->nesadapter->hw_rev != NE020_REV))
netdev->hw_features |= NETIF_F_TSO;
netdev->features = netdev->hw_features | NETIF_F_HIGHDMA | NETIF_F_HW_VLAN_TX;
netdev->features = netdev->hw_features | NETIF_F_HIGHDMA | NETIF_F_HW_VLAN_CTAG_TX;
netdev->hw_features |= NETIF_F_LRO;
nes_debug(NES_DBG_INIT, "nesvnic = %p, reported features = 0x%lX, QPid = %d,"

View File

@ -730,7 +730,8 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
if ((header->proto != htons(ETH_P_IP)) &&
(header->proto != htons(ETH_P_IPV6)) &&
(header->proto != htons(ETH_P_ARP)) &&
(header->proto != htons(ETH_P_RARP))) {
(header->proto != htons(ETH_P_RARP)) &&
(header->proto != htons(ETH_P_TIPC))) {
/* ethertype not supported by IPoIB */
++dev->stats.tx_dropped;
dev_kfree_skb_any(skb);
@ -751,6 +752,7 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
switch (header->proto) {
case htons(ETH_P_IP):
case htons(ETH_P_IPV6):
case htons(ETH_P_TIPC):
neigh = ipoib_neigh_get(dev, cb->hwaddr);
if (unlikely(!neigh)) {
neigh_add_path(skb, cb->hwaddr, dev);
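
With this hunk IPoIB forwards TIPC alongside IPv4, IPv6, ARP and RARP, and keeps dropping everything else. A standalone sketch of the resulting ethertype filter, using the numeric ETH_P_* values:

#include <stdio.h>

int main(void)
{
	/* ETH_P_IP, ETH_P_IPV6, ETH_P_ARP, ETH_P_RARP, ETH_P_TIPC */
	const unsigned short allowed[] = { 0x0800, 0x86dd, 0x0806, 0x8035, 0x88ca };
	const unsigned short frames[] = { 0x0800, 0x88ca, 0x8137 /* IPX */ };

	for (int i = 0; i < 3; i++) {
		int ok = 0;
		for (int j = 0; j < 5; j++)
			if (frames[i] == allowed[j])
				ok = 1;
		printf("%#06x: %s\n", frames[i],
		       ok ? "forwarded" : "dropped, tx_dropped++");
	}
	return 0;
}
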

View File

@ -469,8 +469,7 @@ static int capidrv_add_ack(struct capidrv_ncci *nccip,
{
struct ncci_datahandle_queue *n, **pp;
n = (struct ncci_datahandle_queue *)
kmalloc(sizeof(struct ncci_datahandle_queue), GFP_ATOMIC);
n = kmalloc(sizeof(struct ncci_datahandle_queue), GFP_ATOMIC);
if (!n) {
printk(KERN_ERR "capidrv: kmalloc ncci_datahandle failed\n");
return -1;

View File

@ -441,8 +441,7 @@ static int isdn_divert_icall(isdn_ctrl *ic)
switch (dv->rule.action) {
case DEFLECT_IGNORE:
return (0);
break;
return 0;
case DEFLECT_ALERT:
case DEFLECT_PROCEED:
@ -510,10 +509,9 @@ static int isdn_divert_icall(isdn_ctrl *ic)
break;
default:
return (0); /* ignore call */
break;
return 0; /* ignore call */
} /* switch action */
break;
break; /* will break the 'for' looping */
} /* scan_table */
if (cs) {

View File

@ -26,7 +26,7 @@ FsmNew(struct Fsm *fsm, struct FsmNode *fnlist, int fncount)
{
int i;
fsm->jumpmatrix = (FSMFNPTR *)
fsm->jumpmatrix =
kzalloc(sizeof(FSMFNPTR) * fsm->state_count * fsm->event_count, GFP_KERNEL);
if (!fsm->jumpmatrix)
return -ENOMEM;

View File

@ -1479,7 +1479,7 @@ int setup_hfcsx(struct IsdnCard *card)
release_region(cs->hw.hfcsx.base, 2);
return (0);
}
if (!(cs->hw.hfcsx.extra = (void *)
if (!(cs->hw.hfcsx.extra =
kmalloc(sizeof(struct hfcsx_extra), GFP_ATOMIC))) {
release_region(cs->hw.hfcsx.base, 2);
printk(KERN_WARNING "HFC-SX: unable to allocate memory\n");

View File

@ -1385,7 +1385,7 @@ isdn_net_type_trans(struct sk_buff *skb, struct net_device *dev)
if (memcmp(eth->h_dest, dev->dev_addr, ETH_ALEN))
skb->pkt_type = PACKET_OTHERHOST;
}
if (ntohs(eth->h_proto) >= 1536)
if (ntohs(eth->h_proto) >= ETH_P_802_3_MIN)
return eth->h_proto;
rawp = skb->data;
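
Here and in the dvb_net hunks below, the magic number 1536 becomes ETH_P_802_3_MIN: an h_proto value of 0x0600 or above is an EtherType, anything smaller is an 802.3 length field. A standalone sketch of the check:

#include <stdio.h>

#define ETH_P_802_3_MIN 0x0600	/* 1536 */

int main(void)
{
	const unsigned short h_proto[] = { 0x0800 /* IPv4 */, 0x0042 /* length 66 */ };

	for (int i = 0; i < 2; i++)
		printf("%#06x is %s\n", h_proto[i],
		       h_proto[i] >= ETH_P_802_3_MIN ? "an EtherType"
						     : "an 802.3 length field");
	return 0;
}
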

View File

@ -578,6 +578,7 @@ data_sock_getname(struct socket *sock, struct sockaddr *addr,
lock_sock(sk);
*addr_len = sizeof(*maddr);
maddr->family = AF_ISDN;
maddr->dev = _pms(sk)->dev->id;
maddr->channel = _pms(sk)->ch.nr;
maddr->sapi = _pms(sk)->ch.addr & 0xff;

View File

@ -33,8 +33,8 @@ static unsigned long ram[] = {0, 0, 0, 0};
static bool do_reset = 0;
module_param_array(io, int, NULL, 0);
module_param_array(irq, int, NULL, 0);
module_param_array(ram, int, NULL, 0);
module_param_array(irq, byte, NULL, 0);
module_param_array(ram, long, NULL, 0);
module_param(do_reset, bool, 0);
static int identify_board(unsigned long, unsigned int);

View File

@ -185,7 +185,7 @@ static __be16 dvb_net_eth_type_trans(struct sk_buff *skb,
skb->pkt_type=PACKET_MULTICAST;
}
if (ntohs(eth->h_proto) >= 1536)
if (ntohs(eth->h_proto) >= ETH_P_802_3_MIN)
return eth->h_proto;
rawp = skb->data;
@ -228,9 +228,9 @@ static int ule_test_sndu( struct dvb_net_priv *p )
static int ule_bridged_sndu( struct dvb_net_priv *p )
{
struct ethhdr *hdr = (struct ethhdr*) p->ule_next_hdr;
if(ntohs(hdr->h_proto) < 1536) {
if(ntohs(hdr->h_proto) < ETH_P_802_3_MIN) {
int framelen = p->ule_sndu_len - ((p->ule_next_hdr+sizeof(struct ethhdr)) - p->ule_skb->data);
/* A frame Type < 1536 for a bridged frame, introduces a LLC Length field. */
/* A frame Type < ETH_P_802_3_MIN for a bridged frame, introduces a LLC Length field. */
if(framelen != ntohs(hdr->h_proto)) {
return -1;
}
@ -320,7 +320,7 @@ static int handle_ule_extensions( struct dvb_net_priv *p )
(int) p->ule_sndu_type, l, total_ext_len);
#endif
} while (p->ule_sndu_type < 1536);
} while (p->ule_sndu_type < ETH_P_802_3_MIN);
return total_ext_len;
}
@ -712,7 +712,7 @@ static void dvb_net_ule( struct net_device *dev, const u8 *buf, size_t buf_len )
}
/* Handle ULE Extension Headers. */
if (priv->ule_sndu_type < 1536) {
if (priv->ule_sndu_type < ETH_P_802_3_MIN) {
/* There is an extension header. Handle it accordingly. */
int l = handle_ule_extensions(priv);
if (l < 0) {

View File

@ -151,6 +151,7 @@ config MACVTAP
config VXLAN
tristate "Virtual eXtensible Local Area Network (VXLAN)"
depends on INET
select NET_IP_TUNNEL
---help---
This allows one to create vxlan virtual interfaces that provide
Layer 2 Networks over Layer 3 Networks. VXLAN is often used

View File

@ -106,20 +106,4 @@ config IPDDP_ENCAP
IP packets inside AppleTalk frames; this is useful if your Linux box
is stuck on an AppleTalk network (which hopefully contains a
decapsulator somewhere). Please see
<file:Documentation/networking/ipddp.txt> for more information. If
you said Y to "AppleTalk-IP driver support" above and you say Y
here, then you cannot say Y to "AppleTalk-IP to IP Decapsulation
support", below.
config IPDDP_DECAP
bool "Appletalk-IP to IP Decapsulation support"
depends on IPDDP
help
If you say Y here, the AppleTalk-IP code will be able to decapsulate
AppleTalk-IP frames to IP packets; this is useful if you want your
Linux box to act as an Internet gateway for an AppleTalk network.
Please see <file:Documentation/networking/ipddp.txt> for more
information. If you said Y to "AppleTalk-IP driver support" above
and you say Y here, then you cannot say Y to "IP to AppleTalk-IP
Encapsulation support", above.
<file:Documentation/networking/ipddp.txt> for more information.

View File

@ -514,7 +514,7 @@ static void rlb_update_client(struct rlb_client_info *client_info)
skb->dev = client_info->slave->dev;
if (client_info->tag) {
skb = vlan_put_tag(skb, client_info->vlan_id);
skb = vlan_put_tag(skb, htons(ETH_P_8021Q), client_info->vlan_id);
if (!skb) {
pr_err("%s: Error: failed to insert VLAN tag\n",
client_info->slave->bond->dev->name);
@ -1014,7 +1014,7 @@ static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[])
continue;
}
skb = vlan_put_tag(skb, vlan->vlan_id);
skb = vlan_put_tag(skb, htons(ETH_P_8021Q), vlan->vlan_id);
if (!skb) {
pr_err("%s: Error: failed to insert VLAN tag\n",
bond->dev->name);

View File

@ -428,14 +428,15 @@ int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb,
* @bond_dev: bonding net device that got called
* @vid: vlan id being added
*/
static int bond_vlan_rx_add_vid(struct net_device *bond_dev, uint16_t vid)
static int bond_vlan_rx_add_vid(struct net_device *bond_dev,
__be16 proto, u16 vid)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave, *stop_at;
int i, res;
bond_for_each_slave(bond, slave, i) {
res = vlan_vid_add(slave->dev, vid);
res = vlan_vid_add(slave->dev, proto, vid);
if (res)
goto unwind;
}
@ -453,7 +454,7 @@ unwind:
/* unwind from head to the slave that failed */
stop_at = slave;
bond_for_each_slave_from_to(bond, slave, i, bond->first_slave, stop_at)
vlan_vid_del(slave->dev, vid);
vlan_vid_del(slave->dev, proto, vid);
return res;
}
@ -463,14 +464,15 @@ unwind:
* @bond_dev: bonding net device that got called
* @vid: vlan id being removed
*/
static int bond_vlan_rx_kill_vid(struct net_device *bond_dev, uint16_t vid)
static int bond_vlan_rx_kill_vid(struct net_device *bond_dev,
__be16 proto, u16 vid)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave;
int i, res;
bond_for_each_slave(bond, slave, i)
vlan_vid_del(slave->dev, vid);
vlan_vid_del(slave->dev, proto, vid);
res = bond_del_vlan(bond, vid);
if (res) {
@ -488,7 +490,8 @@ static void bond_add_vlans_on_slave(struct bonding *bond, struct net_device *sla
int res;
list_for_each_entry(vlan, &bond->vlan_list, vlan_list) {
res = vlan_vid_add(slave_dev, vlan->vlan_id);
res = vlan_vid_add(slave_dev, htons(ETH_P_8021Q),
vlan->vlan_id);
if (res)
pr_warning("%s: Failed to add vlan id %d to device %s\n",
bond->dev->name, vlan->vlan_id,
@ -504,7 +507,7 @@ static void bond_del_vlans_from_slave(struct bonding *bond,
list_for_each_entry(vlan, &bond->vlan_list, vlan_list) {
if (!vlan->vlan_id)
continue;
vlan_vid_del(slave_dev, vlan->vlan_id);
vlan_vid_del(slave_dev, htons(ETH_P_8021Q), vlan->vlan_id);
}
}
@ -779,7 +782,7 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
/* rejoin all groups on vlan devices */
list_for_each_entry(vlan, &bond->vlan_list, vlan_list) {
vlan_dev = __vlan_find_dev_deep(bond_dev,
vlan_dev = __vlan_find_dev_deep(bond_dev, htons(ETH_P_8021Q),
vlan->vlan_id);
if (vlan_dev)
__bond_resend_igmp_join_requests(vlan_dev);
@ -796,9 +799,8 @@ static void bond_resend_igmp_join_requests_delayed(struct work_struct *work)
{
struct bonding *bond = container_of(work, struct bonding,
mcast_work.work);
rcu_read_lock();
bond_resend_igmp_join_requests(bond);
rcu_read_unlock();
}
/*
@ -1915,14 +1917,16 @@ err_detach:
bond_detach_slave(bond, new_slave);
if (bond->primary_slave == new_slave)
bond->primary_slave = NULL;
write_unlock_bh(&bond->lock);
if (bond->curr_active_slave == new_slave) {
bond_change_active_slave(bond, NULL);
write_unlock_bh(&bond->lock);
read_lock(&bond->lock);
write_lock_bh(&bond->curr_slave_lock);
bond_change_active_slave(bond, NULL);
bond_select_active_slave(bond);
write_unlock_bh(&bond->curr_slave_lock);
read_unlock(&bond->lock);
} else {
write_unlock_bh(&bond->lock);
}
slave_disable_netpoll(new_slave);
@ -2532,7 +2536,8 @@ static int bond_has_this_ip(struct bonding *bond, __be32 ip)
list_for_each_entry(vlan, &bond->vlan_list, vlan_list) {
rcu_read_lock();
vlan_dev = __vlan_find_dev_deep(bond->dev, vlan->vlan_id);
vlan_dev = __vlan_find_dev_deep(bond->dev, htons(ETH_P_8021Q),
vlan->vlan_id);
rcu_read_unlock();
if (vlan_dev && ip == bond_confirm_addr(vlan_dev, 0, ip))
return 1;
@ -2561,7 +2566,7 @@ static void bond_arp_send(struct net_device *slave_dev, int arp_op, __be32 dest_
return;
}
if (vlan_id) {
skb = vlan_put_tag(skb, vlan_id);
skb = vlan_put_tag(skb, htons(ETH_P_8021Q), vlan_id);
if (!skb) {
pr_err("failed to insert VLAN tag\n");
return;
@ -2623,6 +2628,7 @@ static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
list_for_each_entry(vlan, &bond->vlan_list, vlan_list) {
rcu_read_lock();
vlan_dev = __vlan_find_dev_deep(bond->dev,
htons(ETH_P_8021Q),
vlan->vlan_id);
rcu_read_unlock();
if (vlan_dev == rt->dst.dev) {
@ -4258,6 +4264,37 @@ void bond_set_mode_ops(struct bonding *bond, int mode)
}
}
static int bond_ethtool_get_settings(struct net_device *bond_dev,
struct ethtool_cmd *ecmd)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave;
int i;
unsigned long speed = 0;
ecmd->duplex = DUPLEX_UNKNOWN;
ecmd->port = PORT_OTHER;
/* Since SLAVE_IS_OK returns false for all inactive or down slaves, we
* do not need to check mode. Though link speed might not represent
* the true receive or transmit bandwidth (not all modes are symmetric)
* this is an accurate maximum.
*/
read_lock(&bond->lock);
bond_for_each_slave(bond, slave, i) {
if (SLAVE_IS_OK(slave)) {
if (slave->speed != SPEED_UNKNOWN)
speed += slave->speed;
if (ecmd->duplex == DUPLEX_UNKNOWN &&
slave->duplex != DUPLEX_UNKNOWN)
ecmd->duplex = slave->duplex;
}
}
ethtool_cmd_speed_set(ecmd, speed ? : SPEED_UNKNOWN);
read_unlock(&bond->lock);
return 0;
}
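
As the comment says, the reported speed is the sum over the usable slaves — an upper bound on aggregate throughput rather than a per-path link rate. A standalone sketch of that aggregation, with invented slave states:

#include <stdio.h>

struct slave { int ok; unsigned int speed; };	/* speed in Mb/s; values invented */

int main(void)
{
	const struct slave slaves[] = { {1, 1000}, {1, 1000}, {0, 100} };
	unsigned long speed = 0;

	for (unsigned int i = 0; i < sizeof(slaves) / sizeof(slaves[0]); i++)
		if (slaves[i].ok)
			speed += slaves[i].speed;

	printf("bond reports %lu Mb/s\n", speed);	/* 2000 — the down slave is skipped */
	return 0;
}
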
static void bond_ethtool_get_drvinfo(struct net_device *bond_dev,
struct ethtool_drvinfo *drvinfo)
{
@ -4269,6 +4306,7 @@ static void bond_ethtool_get_drvinfo(struct net_device *bond_dev,
static const struct ethtool_ops bond_ethtool_ops = {
.get_drvinfo = bond_ethtool_get_drvinfo,
.get_settings = bond_ethtool_get_settings,
.get_link = ethtool_op_get_link,
};
@ -4359,9 +4397,9 @@ static void bond_setup(struct net_device *bond_dev)
*/
bond_dev->hw_features = BOND_VLAN_FEATURES |
NETIF_F_HW_VLAN_TX |
NETIF_F_HW_VLAN_RX |
NETIF_F_HW_VLAN_FILTER;
NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_HW_VLAN_CTAG_FILTER;
bond_dev->hw_features &= ~(NETIF_F_ALL_CSUM & ~NETIF_F_HW_CSUM);
bond_dev->features |= bond_dev->hw_features;

View File

@ -32,13 +32,6 @@ config CAIF_SPI_SYNC
help to synchronize to the next transfer in case of over or under-runs.
This option also needs to be enabled on the modem.
config CAIF_SHM
tristate "CAIF shared memory protocol driver"
depends on CAIF && U5500_MBOX
default n
---help---
The CAIF shared memory protocol driver for the STE UX5500 platform.
config CAIF_HSI
tristate "CAIF HSI transport driver"
depends on CAIF

View File

@ -7,9 +7,5 @@ obj-$(CONFIG_CAIF_TTY) += caif_serial.o
cfspi_slave-objs := caif_spi.o caif_spi_slave.o
obj-$(CONFIG_CAIF_SPI_SLAVE) += cfspi_slave.o
# Shared memory
caif_shm-objs := caif_shmcore.o caif_shm_u5500.o
obj-$(CONFIG_CAIF_SHM) += caif_shm.o
# HSI interface
obj-$(CONFIG_CAIF_HSI) += caif_hsi.o

View File

@ -1,8 +1,7 @@
/*
* Copyright (C) ST-Ericsson AB 2010
* Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
* Author: Daniel Martensson / daniel.martensson@stericsson.com
* Dmitry.Tarnyagin / dmitry.tarnyagin@stericsson.com
* Author: Daniel Martensson
* Dmitry.Tarnyagin / dmitry.tarnyagin@lockless.no
* License terms: GNU General Public License (GPL) version 2.
*/
@ -25,7 +24,7 @@
#include <net/caif/caif_hsi.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Daniel Martensson<daniel.martensson@stericsson.com>");
MODULE_AUTHOR("Daniel Martensson");
MODULE_DESCRIPTION("CAIF HSI driver");
/* Returns the number of padding bytes for alignment. */

View File

@ -1,6 +1,6 @@
/*
* Copyright (C) ST-Ericsson AB 2010
* Author: Sjur Brendeland / sjur.brandeland@stericsson.com
* Author: Sjur Brendeland
* License terms: GNU General Public License (GPL) version 2
*/
@ -21,7 +21,7 @@
#include <linux/debugfs.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Sjur Brendeland<sjur.brandeland@stericsson.com>");
MODULE_AUTHOR("Sjur Brendeland");
MODULE_DESCRIPTION("CAIF serial device TTY line discipline");
MODULE_LICENSE("GPL");
MODULE_ALIAS_LDISC(N_CAIF);

View File

@ -1,128 +0,0 @@
/*
* Copyright (C) ST-Ericsson AB 2010
* Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
* Author: Amarnath Revanna / amarnath.bangalore.revanna@stericsson.com
* License terms: GNU General Public License (GPL) version 2
*/
#define pr_fmt(fmt) KBUILD_MODNAME ":" fmt
#include <linux/init.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <mach/mbox-db5500.h>
#include <net/caif/caif_shm.h>
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("CAIF Shared Memory protocol driver");
#define MAX_SHM_INSTANCES 1
enum {
MBX_ACC0,
MBX_ACC1,
MBX_DSP
};
static struct shmdev_layer shmdev_lyr[MAX_SHM_INSTANCES];
static unsigned int shm_start;
static unsigned int shm_size;
module_param(shm_size, uint , 0440);
MODULE_PARM_DESC(shm_total_size, "Start of SHM shared memory");
module_param(shm_start, uint , 0440);
MODULE_PARM_DESC(shm_total_start, "Total Size of SHM shared memory");
static int shmdev_send_msg(u32 dev_id, u32 mbx_msg)
{
/* Always block until msg is written successfully */
mbox_send(shmdev_lyr[dev_id].hmbx, mbx_msg, true);
return 0;
}
static int shmdev_mbx_setup(void *pshmdrv_cb, struct shmdev_layer *pshm_dev,
void *pshm_drv)
{
/*
* For UX5500, we have only 1 SHM instance which uses MBX0
* for communication with the peer modem
*/
pshm_dev->hmbx = mbox_setup(MBX_ACC0, pshmdrv_cb, pshm_drv);
if (!pshm_dev->hmbx)
return -ENODEV;
else
return 0;
}
static int __init caif_shmdev_init(void)
{
int i, result;
/* Loop is currently overkill, there is only one instance */
for (i = 0; i < MAX_SHM_INSTANCES; i++) {
shmdev_lyr[i].shm_base_addr = shm_start;
shmdev_lyr[i].shm_total_sz = shm_size;
if (((char *)shmdev_lyr[i].shm_base_addr == NULL)
|| (shmdev_lyr[i].shm_total_sz <= 0)) {
pr_warn("ERROR,"
"Shared memory Address and/or Size incorrect"
", Bailing out ...\n");
result = -EINVAL;
goto clean;
}
pr_info("SHM AREA (instance %d) STARTS"
" AT %p\n", i, (char *)shmdev_lyr[i].shm_base_addr);
shmdev_lyr[i].shm_id = i;
shmdev_lyr[i].pshmdev_mbxsend = shmdev_send_msg;
shmdev_lyr[i].pshmdev_mbxsetup = shmdev_mbx_setup;
/*
* Finally, CAIF core module is called with details in place:
* 1. SHM base address
* 2. SHM size
* 3. MBX handle
*/
result = caif_shmcore_probe(&shmdev_lyr[i]);
if (result) {
pr_warn("ERROR[%d],"
"Could not probe SHM core (instance %d)"
" Bailing out ...\n", result, i);
goto clean;
}
}
return 0;
clean:
/*
* For now, we assume that even if one instance of SHM fails, we bail
* out of the driver support completely. For this, we need to release
* any memory allocated and unregister any instance of SHM net device.
*/
for (i = 0; i < MAX_SHM_INSTANCES; i++) {
if (shmdev_lyr[i].pshm_netdev)
unregister_netdev(shmdev_lyr[i].pshm_netdev);
}
return result;
}
static void __exit caif_shmdev_exit(void)
{
int i;
for (i = 0; i < MAX_SHM_INSTANCES; i++) {
caif_shmcore_remove(shmdev_lyr[i].pshm_netdev);
kfree((void *)shmdev_lyr[i].shm_base_addr);
}
}
module_init(caif_shmdev_init);
module_exit(caif_shmdev_exit);

View File

@ -1,747 +0,0 @@
/*
* Copyright (C) ST-Ericsson AB 2010
* Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
* Authors: Amarnath Revanna / amarnath.bangalore.revanna@stericsson.com,
* Daniel Martensson / daniel.martensson@stericsson.com
* License terms: GNU General Public License (GPL) version 2
*/
#define pr_fmt(fmt) KBUILD_MODNAME ":" fmt
#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/if_arp.h>
#include <linux/io.h>
#include <net/caif/caif_device.h>
#include <net/caif/caif_shm.h>
#define NR_TX_BUF 6
#define NR_RX_BUF 6
#define TX_BUF_SZ 0x2000
#define RX_BUF_SZ 0x2000
#define CAIF_NEEDED_HEADROOM 32
#define CAIF_FLOW_ON 1
#define CAIF_FLOW_OFF 0
#define LOW_WATERMARK 3
#define HIGH_WATERMARK 4
/* Maximum number of CAIF buffers per shared memory buffer. */
#define SHM_MAX_FRMS_PER_BUF 10
/*
* Size in bytes of the descriptor area
* (With end of descriptor signalling)
*/
#define SHM_CAIF_DESC_SIZE ((SHM_MAX_FRMS_PER_BUF + 1) * \
sizeof(struct shm_pck_desc))
/*
* Offset to the first CAIF frame within a shared memory buffer.
* Aligned on 32 bytes.
*/
#define SHM_CAIF_FRM_OFS (SHM_CAIF_DESC_SIZE + (SHM_CAIF_DESC_SIZE % 32))
/* Number of bytes for CAIF shared memory header. */
#define SHM_HDR_LEN 1
/* Number of padding bytes for the complete CAIF frame. */
#define SHM_FRM_PAD_LEN 4
#define CAIF_MAX_MTU 4096
#define SHM_SET_FULL(x) (((x+1) & 0x0F) << 0)
#define SHM_GET_FULL(x) (((x >> 0) & 0x0F) - 1)
#define SHM_SET_EMPTY(x) (((x+1) & 0x0F) << 4)
#define SHM_GET_EMPTY(x) (((x >> 4) & 0x0F) - 1)
#define SHM_FULL_MASK (0x0F << 0)
#define SHM_EMPTY_MASK (0x0F << 4)
struct shm_pck_desc {
/*
* Offset from start of shared memory area to start of
* shared memory CAIF frame.
*/
u32 frm_ofs;
u32 frm_len;
};
struct buf_list {
unsigned char *desc_vptr;
u32 phy_addr;
u32 index;
u32 len;
u32 frames;
u32 frm_ofs;
struct list_head list;
};
struct shm_caif_frm {
/* Number of bytes of padding before the CAIF frame. */
u8 hdr_ofs;
};
struct shmdrv_layer {
/* caif_dev_common must always be first in the structure*/
struct caif_dev_common cfdev;
u32 shm_tx_addr;
u32 shm_rx_addr;
u32 shm_base_addr;
u32 tx_empty_available;
spinlock_t lock;
struct list_head tx_empty_list;
struct list_head tx_pend_list;
struct list_head tx_full_list;
struct list_head rx_empty_list;
struct list_head rx_pend_list;
struct list_head rx_full_list;
struct workqueue_struct *pshm_tx_workqueue;
struct workqueue_struct *pshm_rx_workqueue;
struct work_struct shm_tx_work;
struct work_struct shm_rx_work;
struct sk_buff_head sk_qhead;
struct shmdev_layer *pshm_dev;
};
static int shm_netdev_open(struct net_device *shm_netdev)
{
netif_wake_queue(shm_netdev);
return 0;
}
static int shm_netdev_close(struct net_device *shm_netdev)
{
netif_stop_queue(shm_netdev);
return 0;
}
int caif_shmdrv_rx_cb(u32 mbx_msg, void *priv)
{
struct buf_list *pbuf;
struct shmdrv_layer *pshm_drv;
struct list_head *pos;
u32 avail_emptybuff = 0;
unsigned long flags = 0;
pshm_drv = priv;
/* Check for received buffers. */
if (mbx_msg & SHM_FULL_MASK) {
int idx;
spin_lock_irqsave(&pshm_drv->lock, flags);
/* Check whether we have any outstanding buffers. */
if (list_empty(&pshm_drv->rx_empty_list)) {
/* Release spin lock. */
spin_unlock_irqrestore(&pshm_drv->lock, flags);
/* We print even in IRQ context... */
pr_warn("No empty Rx buffers to fill: "
"mbx_msg:%x\n", mbx_msg);
/* Bail out. */
goto err_sync;
}
pbuf =
list_entry(pshm_drv->rx_empty_list.next,
struct buf_list, list);
idx = pbuf->index;
/* Check buffer synchronization. */
if (idx != SHM_GET_FULL(mbx_msg)) {
/* We print even in IRQ context... */
pr_warn(
"phyif_shm_mbx_msg_cb: RX full out of sync:"
" idx:%d, msg:%x SHM_GET_FULL(mbx_msg):%x\n",
idx, mbx_msg, SHM_GET_FULL(mbx_msg));
spin_unlock_irqrestore(&pshm_drv->lock, flags);
/* Bail out. */
goto err_sync;
}
list_del_init(&pbuf->list);
list_add_tail(&pbuf->list, &pshm_drv->rx_full_list);
spin_unlock_irqrestore(&pshm_drv->lock, flags);
/* Schedule RX work queue. */
if (!work_pending(&pshm_drv->shm_rx_work))
queue_work(pshm_drv->pshm_rx_workqueue,
&pshm_drv->shm_rx_work);
}
/* Check for emptied buffers. */
if (mbx_msg & SHM_EMPTY_MASK) {
int idx;
spin_lock_irqsave(&pshm_drv->lock, flags);
/* Check whether we have any outstanding buffers. */
if (list_empty(&pshm_drv->tx_full_list)) {
/* We print even in IRQ context... */
pr_warn("No TX to empty: msg:%x\n", mbx_msg);
spin_unlock_irqrestore(&pshm_drv->lock, flags);
/* Bail out. */
goto err_sync;
}
pbuf =
list_entry(pshm_drv->tx_full_list.next,
struct buf_list, list);
idx = pbuf->index;
/* Check buffer synchronization. */
if (idx != SHM_GET_EMPTY(mbx_msg)) {
spin_unlock_irqrestore(&pshm_drv->lock, flags);
/* We print even in IRQ context... */
pr_warn("TX empty "
"out of sync:idx:%d, msg:%x\n", idx, mbx_msg);
/* Bail out. */
goto err_sync;
}
list_del_init(&pbuf->list);
/* Reset buffer parameters. */
pbuf->frames = 0;
pbuf->frm_ofs = SHM_CAIF_FRM_OFS;
list_add_tail(&pbuf->list, &pshm_drv->tx_empty_list);
/* Check the available no. of buffers in the empty list */
list_for_each(pos, &pshm_drv->tx_empty_list)
avail_emptybuff++;
/* Check whether we have to wake up the transmitter. */
if ((avail_emptybuff > HIGH_WATERMARK) &&
(!pshm_drv->tx_empty_available)) {
pshm_drv->tx_empty_available = 1;
spin_unlock_irqrestore(&pshm_drv->lock, flags);
pshm_drv->cfdev.flowctrl
(pshm_drv->pshm_dev->pshm_netdev,
CAIF_FLOW_ON);
/* Schedule the work queue. if required */
if (!work_pending(&pshm_drv->shm_tx_work))
queue_work(pshm_drv->pshm_tx_workqueue,
&pshm_drv->shm_tx_work);
} else
spin_unlock_irqrestore(&pshm_drv->lock, flags);
}
return 0;
err_sync:
return -EIO;
}
static void shm_rx_work_func(struct work_struct *rx_work)
{
struct shmdrv_layer *pshm_drv;
struct buf_list *pbuf;
unsigned long flags = 0;
struct sk_buff *skb;
char *p;
int ret;
pshm_drv = container_of(rx_work, struct shmdrv_layer, shm_rx_work);
while (1) {
struct shm_pck_desc *pck_desc;
spin_lock_irqsave(&pshm_drv->lock, flags);
/* Check for received buffers. */
if (list_empty(&pshm_drv->rx_full_list)) {
spin_unlock_irqrestore(&pshm_drv->lock, flags);
break;
}
pbuf =
list_entry(pshm_drv->rx_full_list.next, struct buf_list,
list);
list_del_init(&pbuf->list);
spin_unlock_irqrestore(&pshm_drv->lock, flags);
/* Retrieve pointer to start of the packet descriptor area. */
pck_desc = (struct shm_pck_desc *) pbuf->desc_vptr;
/*
* Check whether descriptor contains a CAIF shared memory
* frame.
*/
while (pck_desc->frm_ofs) {
unsigned int frm_buf_ofs;
unsigned int frm_pck_ofs;
unsigned int frm_pck_len;
/*
* Check whether offset is within buffer limits
* (lower).
*/
if (pck_desc->frm_ofs <
(pbuf->phy_addr - pshm_drv->shm_base_addr))
break;
/*
* Check whether offset is within buffer limits
* (higher).
*/
if (pck_desc->frm_ofs >
((pbuf->phy_addr - pshm_drv->shm_base_addr) +
pbuf->len))
break;
/* Calculate offset from start of buffer. */
frm_buf_ofs =
pck_desc->frm_ofs - (pbuf->phy_addr -
pshm_drv->shm_base_addr);
/*
* Calculate offset and length of CAIF packet while
* taking care of the shared memory header.
*/
frm_pck_ofs =
frm_buf_ofs + SHM_HDR_LEN +
(*(pbuf->desc_vptr + frm_buf_ofs));
frm_pck_len =
(pck_desc->frm_len - SHM_HDR_LEN -
(*(pbuf->desc_vptr + frm_buf_ofs)));
/* Check whether CAIF packet is within buffer limits */
if ((frm_pck_ofs + pck_desc->frm_len) > pbuf->len)
break;
/* Get a suitable CAIF packet and copy in data. */
skb = netdev_alloc_skb(pshm_drv->pshm_dev->pshm_netdev,
frm_pck_len + 1);
if (skb == NULL) {
pr_info("OOM: Try next frame in descriptor\n");
break;
}
p = skb_put(skb, frm_pck_len);
memcpy(p, pbuf->desc_vptr + frm_pck_ofs, frm_pck_len);
skb->protocol = htons(ETH_P_CAIF);
skb_reset_mac_header(skb);
skb->dev = pshm_drv->pshm_dev->pshm_netdev;
/* Push received packet up the stack. */
ret = netif_rx_ni(skb);
if (!ret) {
pshm_drv->pshm_dev->pshm_netdev->stats.
rx_packets++;
pshm_drv->pshm_dev->pshm_netdev->stats.
rx_bytes += pck_desc->frm_len;
} else
++pshm_drv->pshm_dev->pshm_netdev->stats.
rx_dropped;
/* Move to next packet descriptor. */
pck_desc++;
}
spin_lock_irqsave(&pshm_drv->lock, flags);
list_add_tail(&pbuf->list, &pshm_drv->rx_pend_list);
spin_unlock_irqrestore(&pshm_drv->lock, flags);
}
/* Schedule the work queue. if required */
if (!work_pending(&pshm_drv->shm_tx_work))
queue_work(pshm_drv->pshm_tx_workqueue, &pshm_drv->shm_tx_work);
}
static void shm_tx_work_func(struct work_struct *tx_work)
{
u32 mbox_msg;
unsigned int frmlen, avail_emptybuff, append = 0;
unsigned long flags = 0;
struct buf_list *pbuf = NULL;
struct shmdrv_layer *pshm_drv;
struct shm_caif_frm *frm;
struct sk_buff *skb;
struct shm_pck_desc *pck_desc;
struct list_head *pos;
pshm_drv = container_of(tx_work, struct shmdrv_layer, shm_tx_work);
do {
/* Initialize mailbox message. */
mbox_msg = 0x00;
avail_emptybuff = 0;
spin_lock_irqsave(&pshm_drv->lock, flags);
/* Check for pending receive buffers. */
if (!list_empty(&pshm_drv->rx_pend_list)) {
pbuf = list_entry(pshm_drv->rx_pend_list.next,
struct buf_list, list);
list_del_init(&pbuf->list);
list_add_tail(&pbuf->list, &pshm_drv->rx_empty_list);
/*
* Value index is never changed,
* so read access should be safe.
*/
mbox_msg |= SHM_SET_EMPTY(pbuf->index);
}
skb = skb_peek(&pshm_drv->sk_qhead);
if (skb == NULL)
goto send_msg;
/* Check the available no. of buffers in the empty list */
list_for_each(pos, &pshm_drv->tx_empty_list)
avail_emptybuff++;
if ((avail_emptybuff < LOW_WATERMARK) &&
pshm_drv->tx_empty_available) {
/* Update blocking condition. */
pshm_drv->tx_empty_available = 0;
spin_unlock_irqrestore(&pshm_drv->lock, flags);
pshm_drv->cfdev.flowctrl
(pshm_drv->pshm_dev->pshm_netdev,
CAIF_FLOW_OFF);
spin_lock_irqsave(&pshm_drv->lock, flags);
}
/*
* We simply return back to the caller if we do not have space
* either in Tx pending list or Tx empty list. In this case,
* we hold the received skb in the skb list, waiting to
* be transmitted once Tx buffers become available
*/
if (list_empty(&pshm_drv->tx_empty_list))
goto send_msg;
/* Get the first free Tx buffer. */
pbuf = list_entry(pshm_drv->tx_empty_list.next,
struct buf_list, list);
do {
if (append) {
skb = skb_peek(&pshm_drv->sk_qhead);
if (skb == NULL)
break;
}
frm = (struct shm_caif_frm *)
(pbuf->desc_vptr + pbuf->frm_ofs);
frm->hdr_ofs = 0;
frmlen = 0;
frmlen += SHM_HDR_LEN + frm->hdr_ofs + skb->len;
/* Add tail padding if needed. */
if (frmlen % SHM_FRM_PAD_LEN)
frmlen += SHM_FRM_PAD_LEN -
(frmlen % SHM_FRM_PAD_LEN);
/*
* Verify that packet, header and additional padding
* can fit within the buffer frame area.
*/
if (frmlen >= (pbuf->len - pbuf->frm_ofs))
break;
if (!append) {
list_del_init(&pbuf->list);
append = 1;
}
skb = skb_dequeue(&pshm_drv->sk_qhead);
if (skb == NULL)
break;
/* Copy in CAIF frame. */
skb_copy_bits(skb, 0, pbuf->desc_vptr +
pbuf->frm_ofs + SHM_HDR_LEN +
frm->hdr_ofs, skb->len);
pshm_drv->pshm_dev->pshm_netdev->stats.tx_packets++;
pshm_drv->pshm_dev->pshm_netdev->stats.tx_bytes +=
frmlen;
dev_kfree_skb_irq(skb);
/* Fill in the shared memory packet descriptor area. */
pck_desc = (struct shm_pck_desc *) (pbuf->desc_vptr);
/* Forward to current frame. */
pck_desc += pbuf->frames;
pck_desc->frm_ofs = (pbuf->phy_addr -
pshm_drv->shm_base_addr) +
pbuf->frm_ofs;
pck_desc->frm_len = frmlen;
/* Terminate packet descriptor area. */
pck_desc++;
pck_desc->frm_ofs = 0;
/* Update buffer parameters. */
pbuf->frames++;
pbuf->frm_ofs += frmlen + (frmlen % 32);
} while (pbuf->frames < SHM_MAX_FRMS_PER_BUF);
/* Assign buffer as full. */
list_add_tail(&pbuf->list, &pshm_drv->tx_full_list);
append = 0;
mbox_msg |= SHM_SET_FULL(pbuf->index);
send_msg:
spin_unlock_irqrestore(&pshm_drv->lock, flags);
if (mbox_msg)
pshm_drv->pshm_dev->pshmdev_mbxsend
(pshm_drv->pshm_dev->shm_id, mbox_msg);
} while (mbox_msg);
}
static int shm_netdev_tx(struct sk_buff *skb, struct net_device *shm_netdev)
{
struct shmdrv_layer *pshm_drv;
pshm_drv = netdev_priv(shm_netdev);
skb_queue_tail(&pshm_drv->sk_qhead, skb);
/* Schedule Tx work queue. for deferred processing of skbs*/
if (!work_pending(&pshm_drv->shm_tx_work))
queue_work(pshm_drv->pshm_tx_workqueue, &pshm_drv->shm_tx_work);
return 0;
}
static const struct net_device_ops netdev_ops = {
.ndo_open = shm_netdev_open,
.ndo_stop = shm_netdev_close,
.ndo_start_xmit = shm_netdev_tx,
};
static void shm_netdev_setup(struct net_device *pshm_netdev)
{
struct shmdrv_layer *pshm_drv;
pshm_netdev->netdev_ops = &netdev_ops;
pshm_netdev->mtu = CAIF_MAX_MTU;
pshm_netdev->type = ARPHRD_CAIF;
pshm_netdev->hard_header_len = CAIF_NEEDED_HEADROOM;
pshm_netdev->tx_queue_len = 0;
pshm_netdev->destructor = free_netdev;
pshm_drv = netdev_priv(pshm_netdev);
/* Initialize structures in a clean state. */
memset(pshm_drv, 0, sizeof(struct shmdrv_layer));
pshm_drv->cfdev.link_select = CAIF_LINK_LOW_LATENCY;
}
int caif_shmcore_probe(struct shmdev_layer *pshm_dev)
{
int result, j;
struct shmdrv_layer *pshm_drv = NULL;
pshm_dev->pshm_netdev = alloc_netdev(sizeof(struct shmdrv_layer),
"cfshm%d", shm_netdev_setup);
if (!pshm_dev->pshm_netdev)
return -ENOMEM;
pshm_drv = netdev_priv(pshm_dev->pshm_netdev);
pshm_drv->pshm_dev = pshm_dev;
/*
* Initialization starts with the verification of the
* availability of MBX driver by calling its setup function.
* MBX driver must be available by this time for proper
* functioning of SHM driver.
*/
if ((pshm_dev->pshmdev_mbxsetup
(caif_shmdrv_rx_cb, pshm_dev, pshm_drv)) != 0) {
pr_warn("Could not config. SHM Mailbox,"
" Bailing out.....\n");
free_netdev(pshm_dev->pshm_netdev);
return -ENODEV;
}
skb_queue_head_init(&pshm_drv->sk_qhead);
pr_info("SHM DEVICE[%d] PROBED BY DRIVER, NEW SHM DRIVER"
" INSTANCE AT pshm_drv =0x%p\n",
pshm_drv->pshm_dev->shm_id, pshm_drv);
if (pshm_dev->shm_total_sz <
(NR_TX_BUF * TX_BUF_SZ + NR_RX_BUF * RX_BUF_SZ)) {
pr_warn("ERROR, Amount of available"
" Phys. SHM cannot accommodate current SHM "
"driver configuration, Bailing out ...\n");
free_netdev(pshm_dev->pshm_netdev);
return -ENOMEM;
}
pshm_drv->shm_base_addr = pshm_dev->shm_base_addr;
pshm_drv->shm_tx_addr = pshm_drv->shm_base_addr;
if (pshm_dev->shm_loopback)
pshm_drv->shm_rx_addr = pshm_drv->shm_tx_addr;
else
pshm_drv->shm_rx_addr = pshm_dev->shm_base_addr +
(NR_TX_BUF * TX_BUF_SZ);
spin_lock_init(&pshm_drv->lock);
INIT_LIST_HEAD(&pshm_drv->tx_empty_list);
INIT_LIST_HEAD(&pshm_drv->tx_pend_list);
INIT_LIST_HEAD(&pshm_drv->tx_full_list);
INIT_LIST_HEAD(&pshm_drv->rx_empty_list);
INIT_LIST_HEAD(&pshm_drv->rx_pend_list);
INIT_LIST_HEAD(&pshm_drv->rx_full_list);
INIT_WORK(&pshm_drv->shm_tx_work, shm_tx_work_func);
INIT_WORK(&pshm_drv->shm_rx_work, shm_rx_work_func);
pshm_drv->pshm_tx_workqueue =
create_singlethread_workqueue("shm_tx_work");
pshm_drv->pshm_rx_workqueue =
create_singlethread_workqueue("shm_rx_work");
for (j = 0; j < NR_TX_BUF; j++) {
struct buf_list *tx_buf =
kmalloc(sizeof(struct buf_list), GFP_KERNEL);
if (tx_buf == NULL) {
free_netdev(pshm_dev->pshm_netdev);
return -ENOMEM;
}
tx_buf->index = j;
tx_buf->phy_addr = pshm_drv->shm_tx_addr + (TX_BUF_SZ * j);
tx_buf->len = TX_BUF_SZ;
tx_buf->frames = 0;
tx_buf->frm_ofs = SHM_CAIF_FRM_OFS;
if (pshm_dev->shm_loopback)
tx_buf->desc_vptr = (unsigned char *)tx_buf->phy_addr;
else
/*
* FIXME: the result of ioremap is not a pointer - arnd
*/
tx_buf->desc_vptr =
ioremap(tx_buf->phy_addr, TX_BUF_SZ);
list_add_tail(&tx_buf->list, &pshm_drv->tx_empty_list);
}
for (j = 0; j < NR_RX_BUF; j++) {
struct buf_list *rx_buf =
kmalloc(sizeof(struct buf_list), GFP_KERNEL);
if (rx_buf == NULL) {
free_netdev(pshm_dev->pshm_netdev);
return -ENOMEM;
}
rx_buf->index = j;
rx_buf->phy_addr = pshm_drv->shm_rx_addr + (RX_BUF_SZ * j);
rx_buf->len = RX_BUF_SZ;
if (pshm_dev->shm_loopback)
rx_buf->desc_vptr = (unsigned char *)rx_buf->phy_addr;
else
rx_buf->desc_vptr =
ioremap(rx_buf->phy_addr, RX_BUF_SZ);
list_add_tail(&rx_buf->list, &pshm_drv->rx_empty_list);
}
pshm_drv->tx_empty_available = 1;
result = register_netdev(pshm_dev->pshm_netdev);
if (result)
pr_warn("ERROR[%d], SHM could not, "
"register with NW FRMWK Bailing out ...\n", result);
return result;
}
void caif_shmcore_remove(struct net_device *pshm_netdev)
{
struct buf_list *pbuf;
struct shmdrv_layer *pshm_drv = NULL;
pshm_drv = netdev_priv(pshm_netdev);
while (!(list_empty(&pshm_drv->tx_pend_list))) {
pbuf =
list_entry(pshm_drv->tx_pend_list.next,
struct buf_list, list);
list_del(&pbuf->list);
kfree(pbuf);
}
while (!(list_empty(&pshm_drv->tx_full_list))) {
pbuf =
list_entry(pshm_drv->tx_full_list.next,
struct buf_list, list);
list_del(&pbuf->list);
kfree(pbuf);
}
while (!(list_empty(&pshm_drv->tx_empty_list))) {
pbuf =
list_entry(pshm_drv->tx_empty_list.next,
struct buf_list, list);
list_del(&pbuf->list);
kfree(pbuf);
}
while (!(list_empty(&pshm_drv->rx_full_list))) {
pbuf =
list_entry(pshm_drv->tx_full_list.next,
struct buf_list, list);
list_del(&pbuf->list);
kfree(pbuf);
}
while (!(list_empty(&pshm_drv->rx_pend_list))) {
pbuf =
list_entry(pshm_drv->tx_pend_list.next,
struct buf_list, list);
list_del(&pbuf->list);
kfree(pbuf);
}
while (!(list_empty(&pshm_drv->rx_empty_list))) {
pbuf =
list_entry(pshm_drv->rx_empty_list.next,
struct buf_list, list);
list_del(&pbuf->list);
kfree(pbuf);
}
/* Destroy work queues. */
destroy_workqueue(pshm_drv->pshm_tx_workqueue);
destroy_workqueue(pshm_drv->pshm_rx_workqueue);
unregister_netdev(pshm_netdev);
}

View File

@ -1,7 +1,6 @@
/*
* Copyright (C) ST-Ericsson AB 2010
* Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
* Author: Daniel Martensson / Daniel.Martensson@stericsson.com
* Author: Daniel Martensson
* License terms: GNU General Public License (GPL) version 2.
*/
@ -29,7 +28,7 @@
#endif /* CONFIG_CAIF_SPI_SYNC */
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Daniel Martensson<daniel.martensson@stericsson.com>");
MODULE_AUTHOR("Daniel Martensson");
MODULE_DESCRIPTION("CAIF SPI driver");
/* Returns the number of padding bytes for alignment. */
@ -864,6 +863,7 @@ static int __init cfspi_init_module(void)
driver_remove_file(&cfspi_spi_driver.driver,
&driver_attr_up_head_align);
err_create_up_head_align:
platform_driver_unregister(&cfspi_spi_driver);
err_dev_register:
return result;
}
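The added platform_driver_unregister() restores the usual kernel error-path idiom: each err_* label undoes, in reverse order, everything that succeeded before the failing step, so falling through the labels unwinds exactly as much as was set up. A generic sketch of the pattern (example_driver and example_attr are placeholders, not this driver's symbols):

	static int __init example_init(void)
	{
		int result;

		result = platform_driver_register(&example_driver);	/* step 1 */
		if (result)
			goto err_register;

		result = driver_create_file(&example_driver.driver,
					    &example_attr);		/* step 2 */
		if (result)
			goto err_create_file;

		return 0;

	err_create_file:
		platform_driver_unregister(&example_driver);	/* undo step 1 */
	err_register:
		return result;
	}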

View File

@ -1,7 +1,6 @@
/*
* Copyright (C) ST-Ericsson AB 2010
* Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
* Author: Daniel Martensson / Daniel.Martensson@stericsson.com
* Author: Daniel Martensson
* License terms: GNU General Public License (GPL) version 2.
*/
#include <linux/init.h>

View File

@ -65,7 +65,7 @@ config CAN_LEDS
config CAN_AT91
tristate "Atmel AT91 onchip CAN controller"
depends on ARCH_AT91SAM9263 || ARCH_AT91SAM9X5
depends on ARM
---help---
This is a driver for the SoC CAN controller in Atmel's AT91SAM9263
and AT91SAM9X5 processors.

View File

@ -27,6 +27,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/rtnetlink.h>
#include <linux/skbuff.h>
@ -155,19 +156,20 @@ struct at91_priv {
canid_t mb0_id;
};
static const struct at91_devtype_data at91_devtype_data[] = {
[AT91_DEVTYPE_SAM9263] = {
.rx_first = 1,
.rx_split = 8,
.rx_last = 11,
.tx_shift = 2,
},
[AT91_DEVTYPE_SAM9X5] = {
.rx_first = 0,
.rx_split = 4,
.rx_last = 5,
.tx_shift = 1,
},
static const struct at91_devtype_data at91_at91sam9263_data = {
.rx_first = 1,
.rx_split = 8,
.rx_last = 11,
.tx_shift = 2,
.type = AT91_DEVTYPE_SAM9263,
};
static const struct at91_devtype_data at91_at91sam9x5_data = {
.rx_first = 0,
.rx_split = 4,
.rx_last = 5,
.tx_shift = 1,
.type = AT91_DEVTYPE_SAM9X5,
};
static const struct can_bittiming_const at91_bittiming_const = {
@ -1249,10 +1251,42 @@ static struct attribute_group at91_sysfs_attr_group = {
.attrs = at91_sysfs_attrs,
};
#if defined(CONFIG_OF)
static const struct of_device_id at91_can_dt_ids[] = {
{
.compatible = "atmel,at91sam9x5-can",
.data = &at91_at91sam9x5_data,
}, {
.compatible = "atmel,at91sam9263-can",
.data = &at91_at91sam9263_data,
}, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(of, at91_can_dt_ids);
#else
#define at91_can_dt_ids NULL
#endif
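The #else branch defines at91_can_dt_ids as NULL so that the .of_match_table assignment further down compiles when CONFIG_OF is off. An equivalent idiom, sketched here only for comparison, is of_match_ptr() from <linux/of.h>, which drops the reference in !CONFIG_OF builds (the table must then stay defined, or be marked __maybe_unused, to avoid an unused-variable warning):

	.driver = {
		.name = KBUILD_MODNAME,
		.owner = THIS_MODULE,
		/* evaluates to NULL when CONFIG_OF is disabled */
		.of_match_table = of_match_ptr(at91_can_dt_ids),
	},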
static const struct at91_devtype_data *at91_can_get_driver_data(struct platform_device *pdev)
{
if (pdev->dev.of_node) {
const struct of_device_id *match;
match = of_match_node(at91_can_dt_ids, pdev->dev.of_node);
if (!match) {
dev_err(&pdev->dev, "no matching node found in dtb\n");
return NULL;
}
return (const struct at91_devtype_data *)match->data;
}
return (const struct at91_devtype_data *)
platform_get_device_id(pdev)->driver_data;
}
static int at91_can_probe(struct platform_device *pdev)
{
const struct at91_devtype_data *devtype_data;
enum at91_devtype devtype;
struct net_device *dev;
struct at91_priv *priv;
struct resource *res;
@ -1260,8 +1294,12 @@ static int at91_can_probe(struct platform_device *pdev)
void __iomem *addr;
int err, irq;
devtype = pdev->id_entry->driver_data;
devtype_data = &at91_devtype_data[devtype];
devtype_data = at91_can_get_driver_data(pdev);
if (!devtype_data) {
dev_err(&pdev->dev, "no driver data\n");
err = -ENODEV;
goto exit;
}
clk = clk_get(&pdev->dev, "can_clk");
if (IS_ERR(clk)) {
@ -1310,7 +1348,6 @@ static int at91_can_probe(struct platform_device *pdev)
priv->dev = dev;
priv->reg_base = addr;
priv->devtype_data = *devtype_data;
priv->devtype_data.type = devtype;
priv->clk = clk;
priv->pdata = pdev->dev.platform_data;
priv->mb0_id = 0x7ff;
@ -1373,10 +1410,10 @@ static int at91_can_remove(struct platform_device *pdev)
static const struct platform_device_id at91_can_id_table[] = {
{
.name = "at91_can",
.driver_data = AT91_DEVTYPE_SAM9263,
.driver_data = (kernel_ulong_t)&at91_at91sam9x5_data,
}, {
.name = "at91sam9x5_can",
.driver_data = AT91_DEVTYPE_SAM9X5,
.driver_data = (kernel_ulong_t)&at91_at91sam9263_data,
}, {
/* sentinel */
}
@ -1389,6 +1426,7 @@ static struct platform_driver at91_can_driver = {
.driver = {
.name = KBUILD_MODNAME,
.owner = THIS_MODULE,
.of_match_table = at91_can_dt_ids,
},
.id_table = at91_can_id_table,
};

View File

@ -412,7 +412,7 @@ static int bfin_can_err(struct net_device *dev, u16 isrc, u16 status)
return 0;
}
irqreturn_t bfin_can_interrupt(int irq, void *dev_id)
static irqreturn_t bfin_can_interrupt(int irq, void *dev_id)
{
struct net_device *dev = dev_id;
struct bfin_can_priv *priv = netdev_priv(dev);
@ -504,7 +504,7 @@ static int bfin_can_close(struct net_device *dev)
return 0;
}
struct net_device *alloc_bfin_candev(void)
static struct net_device *alloc_bfin_candev(void)
{
struct net_device *dev;
struct bfin_can_priv *priv;

View File

@ -269,7 +269,7 @@ struct mcp251x_priv {
#define MCP251X_IS(_model) \
static inline int mcp251x_is_##_model(struct spi_device *spi) \
{ \
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev); \
struct mcp251x_priv *priv = spi_get_drvdata(spi); \
return priv->model == CAN_MCP251X_MCP##_model; \
}
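For reference, MCP251X_IS(2510) now expands to the following inline helper; the rest of the file applies the same spi_get_drvdata() conversion to every former dev_get_drvdata(&spi->dev) call:

	static inline int mcp251x_is_2510(struct spi_device *spi)
	{
		struct mcp251x_priv *priv = spi_get_drvdata(spi);

		return priv->model == CAN_MCP251X_MCP2510;
	}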
@ -305,7 +305,7 @@ static void mcp251x_clean(struct net_device *net)
*/
static int mcp251x_spi_trans(struct spi_device *spi, int len)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
struct spi_transfer t = {
.tx_buf = priv->spi_tx_buf,
.rx_buf = priv->spi_rx_buf,
@ -333,7 +333,7 @@ static int mcp251x_spi_trans(struct spi_device *spi, int len)
static u8 mcp251x_read_reg(struct spi_device *spi, uint8_t reg)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
u8 val = 0;
priv->spi_tx_buf[0] = INSTRUCTION_READ;
@ -348,7 +348,7 @@ static u8 mcp251x_read_reg(struct spi_device *spi, uint8_t reg)
static void mcp251x_read_2regs(struct spi_device *spi, uint8_t reg,
uint8_t *v1, uint8_t *v2)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
priv->spi_tx_buf[0] = INSTRUCTION_READ;
priv->spi_tx_buf[1] = reg;
@ -361,7 +361,7 @@ static void mcp251x_read_2regs(struct spi_device *spi, uint8_t reg,
static void mcp251x_write_reg(struct spi_device *spi, u8 reg, uint8_t val)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
priv->spi_tx_buf[0] = INSTRUCTION_WRITE;
priv->spi_tx_buf[1] = reg;
@ -373,7 +373,7 @@ static void mcp251x_write_reg(struct spi_device *spi, u8 reg, uint8_t val)
static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
u8 mask, uint8_t val)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
priv->spi_tx_buf[0] = INSTRUCTION_BIT_MODIFY;
priv->spi_tx_buf[1] = reg;
@ -386,7 +386,7 @@ static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
static void mcp251x_hw_tx_frame(struct spi_device *spi, u8 *buf,
int len, int tx_buf_idx)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
if (mcp251x_is_2510(spi)) {
int i;
@ -403,7 +403,7 @@ static void mcp251x_hw_tx_frame(struct spi_device *spi, u8 *buf,
static void mcp251x_hw_tx(struct spi_device *spi, struct can_frame *frame,
int tx_buf_idx)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
u32 sid, eid, exide, rtr;
u8 buf[SPI_TRANSFER_BUF_LEN];
@ -434,7 +434,7 @@ static void mcp251x_hw_tx(struct spi_device *spi, struct can_frame *frame,
static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf,
int buf_idx)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
if (mcp251x_is_2510(spi)) {
int i, len;
@ -454,7 +454,7 @@ static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf,
static void mcp251x_hw_rx(struct spi_device *spi, int buf_idx)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
struct sk_buff *skb;
struct can_frame *frame;
u8 buf[SPI_TRANSFER_BUF_LEN];
@ -550,7 +550,7 @@ static int mcp251x_do_set_mode(struct net_device *net, enum can_mode mode)
static int mcp251x_set_normal_mode(struct spi_device *spi)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
unsigned long timeout;
/* Enable interrupts */
@ -620,7 +620,7 @@ static int mcp251x_setup(struct net_device *net, struct mcp251x_priv *priv,
static int mcp251x_hw_reset(struct spi_device *spi)
{
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
int ret;
unsigned long timeout;
@ -1026,7 +1026,7 @@ static int mcp251x_can_probe(struct spi_device *spi)
CAN_CTRLMODE_LOOPBACK | CAN_CTRLMODE_LISTENONLY;
priv->model = spi_get_device_id(spi)->driver_data;
priv->net = net;
dev_set_drvdata(&spi->dev, priv);
spi_set_drvdata(spi, priv);
priv->spi = spi;
mutex_init(&priv->mcp_lock);
@ -1124,7 +1124,7 @@ error_out:
static int mcp251x_can_remove(struct spi_device *spi)
{
struct mcp251x_platform_data *pdata = spi->dev.platform_data;
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
struct net_device *net = priv->net;
unregister_candev(net);
@ -1144,11 +1144,13 @@ static int mcp251x_can_remove(struct spi_device *spi)
return 0;
}
#ifdef CONFIG_PM
static int mcp251x_can_suspend(struct spi_device *spi, pm_message_t state)
#ifdef CONFIG_PM_SLEEP
static int mcp251x_can_suspend(struct device *dev)
{
struct spi_device *spi = to_spi_device(dev);
struct mcp251x_platform_data *pdata = spi->dev.platform_data;
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
struct net_device *net = priv->net;
priv->force_quit = 1;
@ -1176,10 +1178,11 @@ static int mcp251x_can_suspend(struct spi_device *spi, pm_message_t state)
return 0;
}
static int mcp251x_can_resume(struct spi_device *spi)
static int mcp251x_can_resume(struct device *dev)
{
struct spi_device *spi = to_spi_device(dev);
struct mcp251x_platform_data *pdata = spi->dev.platform_data;
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct mcp251x_priv *priv = spi_get_drvdata(spi);
if (priv->after_suspend & AFTER_SUSPEND_POWER) {
pdata->power_enable(1);
@ -1197,11 +1200,11 @@ static int mcp251x_can_resume(struct spi_device *spi)
enable_irq(spi->irq);
return 0;
}
#else
#define mcp251x_can_suspend NULL
#define mcp251x_can_resume NULL
#endif
static SIMPLE_DEV_PM_OPS(mcp251x_can_pm_ops, mcp251x_can_suspend,
mcp251x_can_resume);
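SIMPLE_DEV_PM_OPS() packs the two callbacks into a struct dev_pm_ops covering all system-sleep transitions; its expansion is roughly:

	static const struct dev_pm_ops mcp251x_can_pm_ops = {
		.suspend  = mcp251x_can_suspend,
		.resume   = mcp251x_can_resume,
		.freeze   = mcp251x_can_suspend,
		.thaw     = mcp251x_can_resume,
		.poweroff = mcp251x_can_suspend,
		.restore  = mcp251x_can_resume,
	};

When CONFIG_PM_SLEEP is disabled these fields compile away to an empty ops table, which is why the #ifdef around the callbacks moved from CONFIG_PM to CONFIG_PM_SLEEP and the old "#define ... NULL" fallbacks could go.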
static const struct spi_device_id mcp251x_id_table[] = {
{ "mcp2510", CAN_MCP251X_MCP2510 },
{ "mcp2515", CAN_MCP251X_MCP2515 },
@ -1213,29 +1216,15 @@ MODULE_DEVICE_TABLE(spi, mcp251x_id_table);
static struct spi_driver mcp251x_can_driver = {
.driver = {
.name = DEVICE_NAME,
.bus = &spi_bus_type,
.owner = THIS_MODULE,
.pm = &mcp251x_can_pm_ops,
},
.id_table = mcp251x_id_table,
.probe = mcp251x_can_probe,
.remove = mcp251x_can_remove,
.suspend = mcp251x_can_suspend,
.resume = mcp251x_can_resume,
};
static int __init mcp251x_can_init(void)
{
return spi_register_driver(&mcp251x_can_driver);
}
static void __exit mcp251x_can_exit(void)
{
spi_unregister_driver(&mcp251x_can_driver);
}
module_init(mcp251x_can_init);
module_exit(mcp251x_can_exit);
module_spi_driver(mcp251x_can_driver);
MODULE_AUTHOR("Chris Elston <celston@katalix.com>, "
"Christian Pellegrin <chripell@evolware.org>");

View File

@ -168,12 +168,12 @@ static inline int ems_pci_check_chan(const struct sja1000_priv *priv)
unsigned char res;
/* Make sure SJA1000 is in reset mode */
priv->write_reg(priv, REG_MOD, 1);
priv->write_reg(priv, SJA1000_MOD, 1);
priv->write_reg(priv, REG_CDR, CDR_PELICAN);
priv->write_reg(priv, SJA1000_CDR, CDR_PELICAN);
/* read reset-values */
res = priv->read_reg(priv, REG_CDR);
res = priv->read_reg(priv, SJA1000_CDR);
if (res == CDR_PELICAN)
return 1;

View File

@ -126,11 +126,11 @@ static irqreturn_t ems_pcmcia_interrupt(int irq, void *dev_id)
static inline int ems_pcmcia_check_chan(struct sja1000_priv *priv)
{
/* Make sure SJA1000 is in reset mode */
ems_pcmcia_write_reg(priv, REG_MOD, 1);
ems_pcmcia_write_reg(priv, REG_CDR, CDR_PELICAN);
ems_pcmcia_write_reg(priv, SJA1000_MOD, 1);
ems_pcmcia_write_reg(priv, SJA1000_CDR, CDR_PELICAN);
/* read reset-values */
if (ems_pcmcia_read_reg(priv, REG_CDR) == CDR_PELICAN)
if (ems_pcmcia_read_reg(priv, SJA1000_CDR) == CDR_PELICAN)
return 1;
return 0;

View File

@ -159,9 +159,9 @@ static int number_of_sja1000_chip(void __iomem *base_addr)
for (i = 0; i < MAX_NO_OF_CHANNELS; i++) {
/* reset chip */
iowrite8(MOD_RM, base_addr +
(i * KVASER_PCI_PORT_BYTES) + REG_MOD);
(i * KVASER_PCI_PORT_BYTES) + SJA1000_MOD);
status = ioread8(base_addr +
(i * KVASER_PCI_PORT_BYTES) + REG_MOD);
(i * KVASER_PCI_PORT_BYTES) + SJA1000_MOD);
/* check reset bit */
if (!(status & MOD_RM))
break;

View File

@ -402,7 +402,7 @@ static void peak_pciec_write_reg(const struct sja1000_priv *priv,
int c = (priv->reg_base - card->reg_base) / PEAK_PCI_CHAN_SIZE;
/* sja1000 register changes control the leds state */
if (port == REG_MOD)
if (port == SJA1000_MOD)
switch (val) {
case MOD_RM:
/* Reset Mode: set led on */

View File

@ -196,7 +196,7 @@ static void pcan_write_canreg(const struct sja1000_priv *priv, int port, u8 v)
int c = (priv->reg_base - card->ioport_addr) / PCC_CHAN_SIZE;
/* sja1000 register changes control the leds state */
if (port == REG_MOD)
if (port == SJA1000_MOD)
switch (v) {
case MOD_RM:
/* Reset Mode: set led on */
@ -509,11 +509,11 @@ static void pcan_free_channels(struct pcan_pccard *card)
static inline int pcan_channel_present(struct sja1000_priv *priv)
{
/* make sure SJA1000 is in reset mode */
pcan_write_canreg(priv, REG_MOD, 1);
pcan_write_canreg(priv, REG_CDR, CDR_PELICAN);
pcan_write_canreg(priv, SJA1000_MOD, 1);
pcan_write_canreg(priv, SJA1000_CDR, CDR_PELICAN);
/* read reset-values */
if (pcan_read_canreg(priv, REG_CDR) == CDR_PELICAN)
if (pcan_read_canreg(priv, SJA1000_CDR) == CDR_PELICAN)
return 1;
return 0;

View File

@ -348,20 +348,20 @@ static inline int plx_pci_check_sja1000(const struct sja1000_priv *priv)
*/
if ((priv->read_reg(priv, REG_CR) & REG_CR_BASICCAN_INITIAL_MASK) ==
REG_CR_BASICCAN_INITIAL &&
(priv->read_reg(priv, SJA1000_REG_SR) == REG_SR_BASICCAN_INITIAL) &&
(priv->read_reg(priv, REG_IR) == REG_IR_BASICCAN_INITIAL))
(priv->read_reg(priv, SJA1000_SR) == REG_SR_BASICCAN_INITIAL) &&
(priv->read_reg(priv, SJA1000_IR) == REG_IR_BASICCAN_INITIAL))
flag = 1;
/* Bring the SJA1000 into the PeliCAN mode*/
priv->write_reg(priv, REG_CDR, CDR_PELICAN);
priv->write_reg(priv, SJA1000_CDR, CDR_PELICAN);
/*
* Check registers after reset in the PeliCAN mode.
* See states on p. 23 of the Datasheet.
*/
if (priv->read_reg(priv, REG_MOD) == REG_MOD_PELICAN_INITIAL &&
priv->read_reg(priv, SJA1000_REG_SR) == REG_SR_PELICAN_INITIAL &&
priv->read_reg(priv, REG_IR) == REG_IR_PELICAN_INITIAL)
if (priv->read_reg(priv, SJA1000_MOD) == REG_MOD_PELICAN_INITIAL &&
priv->read_reg(priv, SJA1000_SR) == REG_SR_PELICAN_INITIAL &&
priv->read_reg(priv, SJA1000_IR) == REG_IR_PELICAN_INITIAL)
return flag;
return 0;

View File

@ -91,14 +91,14 @@ static void sja1000_write_cmdreg(struct sja1000_priv *priv, u8 val)
* the write_reg() operation - especially on SMP systems.
*/
spin_lock_irqsave(&priv->cmdreg_lock, flags);
priv->write_reg(priv, REG_CMR, val);
priv->read_reg(priv, SJA1000_REG_SR);
priv->write_reg(priv, SJA1000_CMR, val);
priv->read_reg(priv, SJA1000_SR);
spin_unlock_irqrestore(&priv->cmdreg_lock, flags);
}
static int sja1000_is_absent(struct sja1000_priv *priv)
{
return (priv->read_reg(priv, REG_MOD) == 0xFF);
return (priv->read_reg(priv, SJA1000_MOD) == 0xFF);
}
static int sja1000_probe_chip(struct net_device *dev)
@ -116,11 +116,11 @@ static int sja1000_probe_chip(struct net_device *dev)
static void set_reset_mode(struct net_device *dev)
{
struct sja1000_priv *priv = netdev_priv(dev);
unsigned char status = priv->read_reg(priv, REG_MOD);
unsigned char status = priv->read_reg(priv, SJA1000_MOD);
int i;
/* disable interrupts */
priv->write_reg(priv, REG_IER, IRQ_OFF);
priv->write_reg(priv, SJA1000_IER, IRQ_OFF);
for (i = 0; i < 100; i++) {
/* check reset bit */
@ -129,9 +129,10 @@ static void set_reset_mode(struct net_device *dev)
return;
}
priv->write_reg(priv, REG_MOD, MOD_RM); /* reset chip */
/* reset chip */
priv->write_reg(priv, SJA1000_MOD, MOD_RM);
udelay(10);
status = priv->read_reg(priv, REG_MOD);
status = priv->read_reg(priv, SJA1000_MOD);
}
netdev_err(dev, "setting SJA1000 into reset mode failed!\n");
@ -140,7 +141,7 @@ static void set_reset_mode(struct net_device *dev)
static void set_normal_mode(struct net_device *dev)
{
struct sja1000_priv *priv = netdev_priv(dev);
unsigned char status = priv->read_reg(priv, REG_MOD);
unsigned char status = priv->read_reg(priv, SJA1000_MOD);
int i;
for (i = 0; i < 100; i++) {
@ -149,22 +150,22 @@ static void set_normal_mode(struct net_device *dev)
priv->can.state = CAN_STATE_ERROR_ACTIVE;
/* enable interrupts */
if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
priv->write_reg(priv, REG_IER, IRQ_ALL);
priv->write_reg(priv, SJA1000_IER, IRQ_ALL);
else
priv->write_reg(priv, REG_IER,
priv->write_reg(priv, SJA1000_IER,
IRQ_ALL & ~IRQ_BEI);
return;
}
/* set chip to normal mode */
if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
priv->write_reg(priv, REG_MOD, MOD_LOM);
priv->write_reg(priv, SJA1000_MOD, MOD_LOM);
else
priv->write_reg(priv, REG_MOD, 0x00);
priv->write_reg(priv, SJA1000_MOD, 0x00);
udelay(10);
status = priv->read_reg(priv, REG_MOD);
status = priv->read_reg(priv, SJA1000_MOD);
}
netdev_err(dev, "setting SJA1000 into normal mode failed!\n");
@ -179,9 +180,9 @@ static void sja1000_start(struct net_device *dev)
set_reset_mode(dev);
/* Clear error counters and error code capture */
priv->write_reg(priv, REG_TXERR, 0x0);
priv->write_reg(priv, REG_RXERR, 0x0);
priv->read_reg(priv, REG_ECC);
priv->write_reg(priv, SJA1000_TXERR, 0x0);
priv->write_reg(priv, SJA1000_RXERR, 0x0);
priv->read_reg(priv, SJA1000_ECC);
/* leave reset mode */
set_normal_mode(dev);
@ -217,8 +218,8 @@ static int sja1000_set_bittiming(struct net_device *dev)
netdev_info(dev, "setting BTR0=0x%02x BTR1=0x%02x\n", btr0, btr1);
priv->write_reg(priv, REG_BTR0, btr0);
priv->write_reg(priv, REG_BTR1, btr1);
priv->write_reg(priv, SJA1000_BTR0, btr0);
priv->write_reg(priv, SJA1000_BTR1, btr1);
return 0;
}
@ -228,8 +229,8 @@ static int sja1000_get_berr_counter(const struct net_device *dev,
{
struct sja1000_priv *priv = netdev_priv(dev);
bec->txerr = priv->read_reg(priv, REG_TXERR);
bec->rxerr = priv->read_reg(priv, REG_RXERR);
bec->txerr = priv->read_reg(priv, SJA1000_TXERR);
bec->rxerr = priv->read_reg(priv, SJA1000_RXERR);
return 0;
}
@ -247,20 +248,20 @@ static void chipset_init(struct net_device *dev)
struct sja1000_priv *priv = netdev_priv(dev);
/* set clock divider and output control register */
priv->write_reg(priv, REG_CDR, priv->cdr | CDR_PELICAN);
priv->write_reg(priv, SJA1000_CDR, priv->cdr | CDR_PELICAN);
/* set acceptance filter (accept all) */
priv->write_reg(priv, REG_ACCC0, 0x00);
priv->write_reg(priv, REG_ACCC1, 0x00);
priv->write_reg(priv, REG_ACCC2, 0x00);
priv->write_reg(priv, REG_ACCC3, 0x00);
priv->write_reg(priv, SJA1000_ACCC0, 0x00);
priv->write_reg(priv, SJA1000_ACCC1, 0x00);
priv->write_reg(priv, SJA1000_ACCC2, 0x00);
priv->write_reg(priv, SJA1000_ACCC3, 0x00);
priv->write_reg(priv, REG_ACCM0, 0xFF);
priv->write_reg(priv, REG_ACCM1, 0xFF);
priv->write_reg(priv, REG_ACCM2, 0xFF);
priv->write_reg(priv, REG_ACCM3, 0xFF);
priv->write_reg(priv, SJA1000_ACCM0, 0xFF);
priv->write_reg(priv, SJA1000_ACCM1, 0xFF);
priv->write_reg(priv, SJA1000_ACCM2, 0xFF);
priv->write_reg(priv, SJA1000_ACCM3, 0xFF);
priv->write_reg(priv, REG_OCR, priv->ocr | OCR_MODE_NORMAL);
priv->write_reg(priv, SJA1000_OCR, priv->ocr | OCR_MODE_NORMAL);
}
/*
@ -289,21 +290,21 @@ static netdev_tx_t sja1000_start_xmit(struct sk_buff *skb,
id = cf->can_id;
if (id & CAN_RTR_FLAG)
fi |= FI_RTR;
fi |= SJA1000_FI_RTR;
if (id & CAN_EFF_FLAG) {
fi |= FI_FF;
dreg = EFF_BUF;
priv->write_reg(priv, REG_FI, fi);
priv->write_reg(priv, REG_ID1, (id & 0x1fe00000) >> (5 + 16));
priv->write_reg(priv, REG_ID2, (id & 0x001fe000) >> (5 + 8));
priv->write_reg(priv, REG_ID3, (id & 0x00001fe0) >> 5);
priv->write_reg(priv, REG_ID4, (id & 0x0000001f) << 3);
fi |= SJA1000_FI_FF;
dreg = SJA1000_EFF_BUF;
priv->write_reg(priv, SJA1000_FI, fi);
priv->write_reg(priv, SJA1000_ID1, (id & 0x1fe00000) >> 21);
priv->write_reg(priv, SJA1000_ID2, (id & 0x001fe000) >> 13);
priv->write_reg(priv, SJA1000_ID3, (id & 0x00001fe0) >> 5);
priv->write_reg(priv, SJA1000_ID4, (id & 0x0000001f) << 3);
} else {
dreg = SFF_BUF;
priv->write_reg(priv, REG_FI, fi);
priv->write_reg(priv, REG_ID1, (id & 0x000007f8) >> 3);
priv->write_reg(priv, REG_ID2, (id & 0x00000007) << 5);
dreg = SJA1000_SFF_BUF;
priv->write_reg(priv, SJA1000_FI, fi);
priv->write_reg(priv, SJA1000_ID1, (id & 0x000007f8) >> 3);
priv->write_reg(priv, SJA1000_ID2, (id & 0x00000007) << 5);
}
for (i = 0; i < dlc; i++)
@ -335,25 +336,25 @@ static void sja1000_rx(struct net_device *dev)
if (skb == NULL)
return;
fi = priv->read_reg(priv, REG_FI);
fi = priv->read_reg(priv, SJA1000_FI);
if (fi & FI_FF) {
if (fi & SJA1000_FI_FF) {
/* extended frame format (EFF) */
dreg = EFF_BUF;
id = (priv->read_reg(priv, REG_ID1) << (5 + 16))
| (priv->read_reg(priv, REG_ID2) << (5 + 8))
| (priv->read_reg(priv, REG_ID3) << 5)
| (priv->read_reg(priv, REG_ID4) >> 3);
dreg = SJA1000_EFF_BUF;
id = (priv->read_reg(priv, SJA1000_ID1) << 21)
| (priv->read_reg(priv, SJA1000_ID2) << 13)
| (priv->read_reg(priv, SJA1000_ID3) << 5)
| (priv->read_reg(priv, SJA1000_ID4) >> 3);
id |= CAN_EFF_FLAG;
} else {
/* standard frame format (SFF) */
dreg = SFF_BUF;
id = (priv->read_reg(priv, REG_ID1) << 3)
| (priv->read_reg(priv, REG_ID2) >> 5);
dreg = SJA1000_SFF_BUF;
id = (priv->read_reg(priv, SJA1000_ID1) << 3)
| (priv->read_reg(priv, SJA1000_ID2) >> 5);
}
cf->can_dlc = get_can_dlc(fi & 0x0F);
if (fi & FI_RTR) {
if (fi & SJA1000_FI_RTR) {
id |= CAN_RTR_FLAG;
} else {
for (i = 0; i < cf->can_dlc; i++)
@ -414,7 +415,7 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
priv->can.can_stats.bus_error++;
stats->rx_errors++;
ecc = priv->read_reg(priv, REG_ECC);
ecc = priv->read_reg(priv, SJA1000_ECC);
cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
@ -448,7 +449,7 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
if (isrc & IRQ_ALI) {
/* arbitration lost interrupt */
netdev_dbg(dev, "arbitration lost interrupt\n");
alc = priv->read_reg(priv, REG_ALC);
alc = priv->read_reg(priv, SJA1000_ALC);
priv->can.can_stats.arbitration_lost++;
stats->tx_errors++;
cf->can_id |= CAN_ERR_LOSTARB;
@ -457,8 +458,8 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
if (state != priv->can.state && (state == CAN_STATE_ERROR_WARNING ||
state == CAN_STATE_ERROR_PASSIVE)) {
uint8_t rxerr = priv->read_reg(priv, REG_RXERR);
uint8_t txerr = priv->read_reg(priv, REG_TXERR);
uint8_t rxerr = priv->read_reg(priv, SJA1000_RXERR);
uint8_t txerr = priv->read_reg(priv, SJA1000_TXERR);
cf->can_id |= CAN_ERR_CRTL;
if (state == CAN_STATE_ERROR_WARNING) {
priv->can.can_stats.error_warning++;
@ -494,15 +495,16 @@ irqreturn_t sja1000_interrupt(int irq, void *dev_id)
int n = 0;
/* Shared interrupts and IRQ off? */
if (priv->read_reg(priv, REG_IER) == IRQ_OFF)
if (priv->read_reg(priv, SJA1000_IER) == IRQ_OFF)
return IRQ_NONE;
if (priv->pre_irq)
priv->pre_irq(priv);
while ((isrc = priv->read_reg(priv, REG_IR)) && (n < SJA1000_MAX_IRQ)) {
while ((isrc = priv->read_reg(priv, SJA1000_IR)) &&
(n < SJA1000_MAX_IRQ)) {
n++;
status = priv->read_reg(priv, SJA1000_REG_SR);
status = priv->read_reg(priv, SJA1000_SR);
/* check for absent controller due to hw unplug */
if (status == 0xFF && sja1000_is_absent(priv))
return IRQ_NONE;
@ -519,7 +521,7 @@ irqreturn_t sja1000_interrupt(int irq, void *dev_id)
} else {
/* transmission complete */
stats->tx_bytes +=
priv->read_reg(priv, REG_FI) & 0xf;
priv->read_reg(priv, SJA1000_FI) & 0xf;
stats->tx_packets++;
can_get_echo_skb(dev, 0);
}
@ -530,7 +532,7 @@ irqreturn_t sja1000_interrupt(int irq, void *dev_id)
/* receive interrupt */
while (status & SR_RBS) {
sja1000_rx(dev);
status = priv->read_reg(priv, SJA1000_REG_SR);
status = priv->read_reg(priv, SJA1000_SR);
/* check for absent controller */
if (status == 0xFF && sja1000_is_absent(priv))
return IRQ_NONE;

View File

@ -54,46 +54,46 @@
#define SJA1000_MAX_IRQ 20 /* max. number of interrupts handled in ISR */
/* SJA1000 registers - manual section 6.4 (Pelican Mode) */
#define REG_MOD 0x00
#define REG_CMR 0x01
#define SJA1000_REG_SR 0x02
#define REG_IR 0x03
#define REG_IER 0x04
#define REG_ALC 0x0B
#define REG_ECC 0x0C
#define REG_EWL 0x0D
#define REG_RXERR 0x0E
#define REG_TXERR 0x0F
#define REG_ACCC0 0x10
#define REG_ACCC1 0x11
#define REG_ACCC2 0x12
#define REG_ACCC3 0x13
#define REG_ACCM0 0x14
#define REG_ACCM1 0x15
#define REG_ACCM2 0x16
#define REG_ACCM3 0x17
#define REG_RMC 0x1D
#define REG_RBSA 0x1E
#define SJA1000_MOD 0x00
#define SJA1000_CMR 0x01
#define SJA1000_SR 0x02
#define SJA1000_IR 0x03
#define SJA1000_IER 0x04
#define SJA1000_ALC 0x0B
#define SJA1000_ECC 0x0C
#define SJA1000_EWL 0x0D
#define SJA1000_RXERR 0x0E
#define SJA1000_TXERR 0x0F
#define SJA1000_ACCC0 0x10
#define SJA1000_ACCC1 0x11
#define SJA1000_ACCC2 0x12
#define SJA1000_ACCC3 0x13
#define SJA1000_ACCM0 0x14
#define SJA1000_ACCM1 0x15
#define SJA1000_ACCM2 0x16
#define SJA1000_ACCM3 0x17
#define SJA1000_RMC 0x1D
#define SJA1000_RBSA 0x1E
/* Common registers - manual section 6.5 */
#define REG_BTR0 0x06
#define REG_BTR1 0x07
#define REG_OCR 0x08
#define REG_CDR 0x1F
#define SJA1000_BTR0 0x06
#define SJA1000_BTR1 0x07
#define SJA1000_OCR 0x08
#define SJA1000_CDR 0x1F
#define REG_FI 0x10
#define SFF_BUF 0x13
#define EFF_BUF 0x15
#define SJA1000_FI 0x10
#define SJA1000_SFF_BUF 0x13
#define SJA1000_EFF_BUF 0x15
#define FI_FF 0x80
#define FI_RTR 0x40
#define SJA1000_FI_FF 0x80
#define SJA1000_FI_RTR 0x40
#define REG_ID1 0x11
#define REG_ID2 0x12
#define REG_ID3 0x13
#define REG_ID4 0x14
#define SJA1000_ID1 0x11
#define SJA1000_ID2 0x12
#define SJA1000_ID3 0x13
#define SJA1000_ID4 0x14
#define CAN_RAM 0x20
#define SJA1000_CAN_RAM 0x20
/* mode register */
#define MOD_RM 0x01

View File

@ -306,6 +306,7 @@ static int el3_isa_match(struct device *pdev, unsigned int ndev)
if (!dev)
return -ENOMEM;
SET_NETDEV_DEV(dev, pdev);
netdev_boot_setup_check(dev);
if (!request_region(ioaddr, EL3_IO_EXTENT, "3c509-isa")) {
@ -595,6 +596,7 @@ static int __init el3_eisa_probe (struct device *device)
return -ENOMEM;
}
SET_NETDEV_DEV(dev, device);
netdev_boot_setup_check(dev);
el3_dev_fill(dev, phys_addr, ioaddr, irq, if_port, EL3_EISA);

View File

@ -1690,7 +1690,7 @@ typhoon_rx(struct typhoon *tp, struct basic_ring *rxRing, volatile __le32 * read
skb_checksum_none_assert(new_skb);
if (rx->rxStatus & TYPHOON_RX_VLAN)
__vlan_hwaccel_put_tag(new_skb,
__vlan_hwaccel_put_tag(new_skb, htons(ETH_P_8021Q),
ntohl(rx->vlanTag) & 0xffff);
netif_receive_skb(new_skb);
@ -2445,9 +2445,9 @@ typhoon_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
* settings -- so we only allow the user to toggle the TX processing.
*/
dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
NETIF_F_HW_VLAN_TX;
NETIF_F_HW_VLAN_CTAG_TX;
dev->features = dev->hw_features |
NETIF_F_HW_VLAN_RX | NETIF_F_RXCSUM;
NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_RXCSUM;
if(register_netdev(dev) < 0) {
err_msg = "unable to register netdev";

Some files were not shown because too many files have changed in this diff.