
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next

Pull networking updates from David Miller:

 1) Add BQL support to via-rhine, from Tino Reichardt.

 2) Integrate SWITCHDEV layer support into the DSA layer, so DSA drivers
    can support hw switch offloading.  From Florian Fainelli.

 3) Allow 'ip address' commands to initiate multicast group join/leave,
    from Madhu Challa.

 4) Many ipv4 FIB lookup optimizations from Alexander Duyck.

 5) Support EBPF in cls_bpf classifier and act_bpf action, from Daniel
    Borkmann.

 6) Remove the ugly compat support in ARP for ugly layers like ax25,
    rose, etc.  And use this to clean up the neigh layer, then use it to
    implement MPLS support.  All from Eric Biederman.

 7) Support L3 forwarding offloading in switches, from Scott Feldman.

 8) Collapse the LOCAL and MAIN ipv4 FIB tables when possible, to speed
    up route lookups even further.  From Alexander Duyck.

 9) Many improvements and bug fixes to the rhashtable implementation,
    from Herbert Xu and Thomas Graf.  In particular, in the case where
    an rhashtable user bulk adds a large number of items into an empty
    table, we expand the table much more sanely.

10) Don't make the tcp_metrics hash table per-namespace, from Eric
    Biederman.

11) Extend EBPF to access SKB fields, from Alexei Starovoitov.

12) Split out new connection request sockets so that they can be
    established in the main hash table.  Much less false sharing since
    hash lookups go direct to the request sockets instead of having to
    go first to the listener then to the request socks hashed
    underneath.  From Eric Dumazet.

13) Add async I/O support for crypto AF_ALG sockets, from Tadeusz Struk
    (a usage sketch follows this list).

14) Support stable privacy address generation for RFC7217 in IPV6.  From
    Hannes Frederic Sowa.

15) Hash network namespace into IP frag IDs, also from Hannes Frederic
    Sowa.

16) Convert PTP get/set methods to use 64-bit time, from Richard
    Cochran.
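
For context on 13), below is a minimal synchronous sketch of driving an
AF_ALG skcipher socket, assuming only the standard linux/if_alg.h
interface; the key, IV and data are placeholders:

#include <string.h>
#include <unistd.h>
#include <sys/uio.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

/* Sketch: one-block AES-CBC encryption over AF_ALG (synchronous read). */
static int afalg_encrypt_block(const unsigned char key[16],
			       const unsigned char iv[16],
			       const unsigned char in[16],
			       unsigned char out[16])
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "cbc(aes)",
	};
	char cbuf[CMSG_SPACE(sizeof(__u32)) +
		  CMSG_SPACE(sizeof(struct af_alg_iv) + 16)] = {0};
	struct msghdr msg = {0};
	struct cmsghdr *cmsg;
	struct af_alg_iv *alg_iv;
	struct iovec iov;
	int tfmfd, opfd, ret = -1;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	if (tfmfd < 0)
		return -1;
	if (bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa)) ||
	    setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, 16))
		goto out_tfm;
	opfd = accept(tfmfd, NULL, 0);
	if (opfd < 0)
		goto out_tfm;

	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);

	cmsg = CMSG_FIRSTHDR(&msg);		/* select encryption */
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_OP;
	cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
	*(__u32 *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;

	cmsg = CMSG_NXTHDR(&msg, cmsg);		/* pass the IV */
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_IV;
	cmsg->cmsg_len = CMSG_LEN(sizeof(struct af_alg_iv) + 16);
	alg_iv = (struct af_alg_iv *)CMSG_DATA(cmsg);
	alg_iv->ivlen = 16;
	memcpy(alg_iv->iv, iv, 16);

	iov.iov_base = (void *)in;
	iov.iov_len = 16;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;

	if (sendmsg(opfd, &msg, 0) == 16 && read(opfd, out, 16) == 16)
		ret = 0;

	close(opfd);
out_tfm:
	close(tfmfd);
	return ret;
}

The new asynchronous path is exercised when the read on opfd is queued
through AIO instead of read(), i.e. when recvmsg sees a non-synchronous
kiocb.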

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1816 commits)
  fm10k: Bump driver version to 0.15.2
  fm10k: corrected VF multicast update
  fm10k: mbx_update_max_size does not drop all oversized messages
  fm10k: reset head instead of calling update_max_size
  fm10k: renamed mbx_tx_dropped to mbx_tx_oversized
  fm10k: update xcast mode before synchronizing multicast addresses
  fm10k: start service timer on probe
  fm10k: fix function header comment
  fm10k: comment next_vf_mbx flow
  fm10k: don't handle mailbox events in iov_event path and always process mailbox
  fm10k: use separate workqueue for fm10k driver
  fm10k: Set PF queues to unlimited bandwidth during virtualization
  fm10k: expose tx_timeout_count as an ethtool stat
  fm10k: only increment tx_timeout_count in Tx hang path
  fm10k: remove extraneous "Reset interface" message
  fm10k: separate PF only stats so that VF does not display them
  fm10k: use hw->mac.max_queues for stats
  fm10k: only show actual queues, not the maximum in hardware
  fm10k: allow creation of VLAN on default vid
  fm10k: fix unused warnings
  ...
Linus Torvalds 2015-04-15 09:00:47 -07:00
commit 6c373ca893
1451 changed files with 55648 additions and 32591 deletions


@ -188,6 +188,14 @@ Description:
Indicates the interface unique physical port identifier within
the NIC, as a string.
What: /sys/class/net/<iface>/phys_port_name
Date: March 2015
KernelVersion: 4.0
Contact: netdev@vger.kernel.org
Description:
Indicates the interface physical port name within the NIC,
as a string.
What: /sys/class/net/<iface>/speed
Date: October 2009
KernelVersion: 2.6.33


@ -24,6 +24,14 @@ Description:
Indicates the number of transmit timeout events seen by this
network interface transmit queue.
What: /sys/class/<iface>/queues/tx-<queue>/tx_maxrate
Date: March 2015
KernelVersion: 4.1
Contact: netdev@vger.kernel.org
Description:
A max-rate in Mbps set for the queue; a value of zero means the rate
limit is disabled, which is the default.
What: /sys/class/<iface>/queues/tx-<queue>/xps_cpus
Date: November 2010
KernelVersion: 2.6.38


@ -14,7 +14,11 @@ Required properties for all the ethernet interfaces:
- "enet_csr": Ethernet control and status register address space
- "ring_csr": Descriptor ring control and status register address space
- "ring_cmd": Descriptor ring command register address space
- interrupts: Ethernet main interrupt
- interrupts: Two interrupt specifiers can be specified.
- First is the Rx interrupt. This irq is mandatory.
- Second is the Tx completion interrupt.
This is supported only on SGMII based 1GbE and 10GbE interfaces.
- port-id: Port number (0 or 1)
- clocks: Reference to the clock entry.
- local-mac-address: MAC address assigned to this device
- phy-connection-type: Interface type between ethernet device and PHY device
@ -49,6 +53,7 @@ Example:
<0x0 0X10000000 0x0 0X200>;
reg-names = "enet_csr", "ring_csr", "ring_cmd";
interrupts = <0x0 0x3c 0x4>;
port-id = <0>;
clocks = <&menetclk 0>;
local-mac-address = [00 01 73 00 00 01];
phy-connection-type = "rgmii";


@ -6,11 +6,14 @@ Required properties:
- spi-max-frequency: maximal bus speed; should be set to 7500000 depending
on sync or async operation mode
- reg: the chipselect index
- interrupts: the interrupt generated by the device
- interrupts: the interrupt generated by the device. Non-high-level
interrupts can cause deadlocks while handling the ISR.
Optional properties:
- reset-gpio: GPIO spec for the rstn pin
- sleep-gpio: GPIO spec for the slp_tr pin
- xtal-trim: u8 value for fine tuning the internal capacitance
arrays of xtal pins: 0 = +0 pF, 0xf = +4.5 pF
Example:
@ -18,6 +21,7 @@ Example:
compatible = "atmel,at86rf231";
spi-max-frequency = <7500000>;
reg = <0>;
interrupts = <19 1>;
interrupts = <19 4>;
interrupt-parent = <&gpio3>;
xtal-trim = /bits/ 8 <0x06>;
};


@ -13,11 +13,15 @@ Required properties:
- cca-gpio: GPIO spec for the CCA pin
- vreg-gpio: GPIO spec for the VREG pin
- reset-gpio: GPIO spec for the RESET pin
Optional properties:
- amplified: include if the CC2520 is connected to a CC2591 amplifier
Example:
cc2520@0 {
compatible = "ti,cc2520";
reg = <0>;
spi-max-frequency = <4000000>;
amplified;
pinctrl-names = "default";
pinctrl-0 = <&cc2520_cape_pins>;
fifo-gpio = <&gpio1 18 0>;


@ -49,6 +49,7 @@ Required properties:
- compatible: Should be "ti,netcp-1.0"
- clocks: phandle to the reference clocks for the subsystem.
- dma-id: Navigator packet dma instance id.
- ranges: address range of NetCP (includes Ethernet SS, PA and SA)
Optional properties:
- reg: register location and the size for the following register
@ -64,10 +65,30 @@ NetCP device properties: Device specification for NetCP sub-modules.
1Gb/10Gb (gbe/xgbe) ethernet switch sub-module specifications.
Required properties:
- label: Must be "netcp-gbe" for 1Gb & "netcp-xgbe" for 10Gb.
- compatible: Must be one of the below:
"ti,netcp-gbe" for 1GbE on NetCP 1.4
"ti,netcp-gbe-5" for N-port 1GbE on NetCP 1.5 (N=5)
"ti,netcp-gbe-9" for N-port 1GbE on NetCP 1.5 (N=9)
"ti,netcp-gbe-2" for N-port 1GbE on NetCP 1.5 (N=2)
"ti,netcp-xgbe" for 10 GbE
- reg: register location and the size for the following register
regions in the specified order.
- subsystem registers
- serdes registers
- switch subsystem registers
- sgmii port3/4 module registers (only for NetCP 1.4)
- switch module registers
- serdes registers (only for 10G)
NetCP 1.4 ethss, here is the order
index #0 - switch subsystem registers
index #1 - sgmii port3/4 module registers
index #2 - switch module registers
NetCP 1.5 ethss 9 port, 5 port and 2 port
index #0 - switch subsystem registers
index #1 - switch module registers
index #2 - serdes registers
- tx-channel: the navigator packet dma channel name for tx.
- tx-queue: the navigator queue number associated with the tx dma channel.
- interfaces: specification for each of the switch port to be registered as a
@ -120,14 +141,13 @@ Optional properties:
Example binding:
netcp: netcp@2090000 {
netcp: netcp@2000000 {
reg = <0x2620110 0x8>;
reg-names = "efuse";
compatible = "ti,netcp-1.0";
#address-cells = <1>;
#size-cells = <1>;
ranges;
ranges = <0 0x2000000 0xfffff>;
clocks = <&papllclk>, <&clkcpgmac>, <&chipclk12>;
dma-coherent;
/* big-endian; */
@ -137,9 +157,9 @@ netcp: netcp@2090000 {
#address-cells = <1>;
#size-cells = <1>;
ranges;
gbe@0x2090000 {
gbe@90000 {
label = "netcp-gbe";
reg = <0x2090000 0xf00>;
reg = <0x90000 0x300>, <0x90400 0x400>, <0x90800 0x700>;
/* enable-ale; */
tx-queue = <648>;
tx-channel = <8>;


@ -2,10 +2,13 @@
Required properties:
- compatible: Should be "cdns,[<chip>-]{macb|gem}"
Use "cdns,at91sam9260-macb" Atmel at91sam9260 and at91sam9263 SoCs.
Use "cdns,at91sam9260-macb" for Atmel at91sam9 SoCs or the 10/100Mbit IP
available on sama5d3 SoCs.
Use "cdns,at32ap7000-macb" for other 10/100 usage or use the generic form: "cdns,macb".
Use "cdns,pc302-gem" for Picochip picoXcell pc302 and later devices based on
the Cadence GEM, or the generic form: "cdns,gem".
Use "cdns,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
Use "cdns,sama5d4-gem" for the Gigabit IP available on Atmel sama5d4 SoCs.
- reg: Address and length of the register set for the device
- interrupts: Should contain macb interrupt
- phy-mode: See ethernet.txt file in the same directory.


@ -0,0 +1,35 @@
* NXP Semiconductors NXP NCI NFC Controllers
Required properties:
- compatible: Should be "nxp,nxp-nci-i2c".
- clock-frequency: I²C work frequency.
- reg: address on the bus
- interrupt-parent: phandle for the interrupt gpio controller
- interrupts: GPIO interrupt to which the chip is connected
- enable-gpios: Output GPIO pin used for enabling/disabling the chip
- firmware-gpios: Output GPIO pin used to enter firmware download mode
Optional SoC Specific Properties:
- pinctrl-names: Contains only one value - "default".
- pinctrl-0: Specifies the pin control groups used for this controller.
Example (for ARM-based BeagleBone with NPC100 NFC controller on I2C2):
&i2c2 {
status = "okay";
npc100: npc100@29 {
compatible = "nxp,nxp-nci-i2c";
reg = <0x29>;
clock-frequency = <100000>;
interrupt-parent = <&gpio1>;
interrupts = <29 GPIO_ACTIVE_HIGH>;
enable-gpios = <&gpio0 30 GPIO_ACTIVE_HIGH>;
firmware-gpios = <&gpio0 31 GPIO_ACTIVE_HIGH>;
};
};


@ -35,10 +35,11 @@ Optional properties:
- reset-names: Should contain the reset signal name "stmmaceth", if a
reset phandle is given
- max-frame-size: See ethernet.txt file in the same directory
- clocks: If present, the first clock should be the GMAC main clock,
further clocks may be specified in derived bindings.
- clocks: If present, the first clock should be the GMAC main clock and
the second clock should be the peripheral's register interface clock. Further
clocks may be specified in derived bindings.
- clock-names: One name for each entry in the clocks property, the
first one should be "stmmaceth".
first one should be "stmmaceth" and the second one should be "pclk".
- clk_ptp_ref: this is the PTP reference clock; in case of the PTP is
available this clock is used for programming the Timestamp Addend Register.
If not passed then the system clock will be used and this is fine on some


@ -22,7 +22,8 @@ This file contains
4.1.3 RAW socket option CAN_RAW_LOOPBACK
4.1.4 RAW socket option CAN_RAW_RECV_OWN_MSGS
4.1.5 RAW socket option CAN_RAW_FD_FRAMES
4.1.6 RAW socket returned message flags
4.1.6 RAW socket option CAN_RAW_JOIN_FILTERS
4.1.7 RAW socket returned message flags
4.2 Broadcast Manager protocol sockets (SOCK_DGRAM)
4.2.1 Broadcast Manager operations
4.2.2 Broadcast Manager message flags
@ -601,7 +602,22 @@ solution for a couple of reasons:
CAN FD frames by checking if the device maximum transfer unit is CANFD_MTU.
The CAN device MTU can be retrieved e.g. with a SIOCGIFMTU ioctl() syscall.
4.1.6 RAW socket returned message flags
4.1.6 RAW socket option CAN_RAW_JOIN_FILTERS
The CAN_RAW socket can set multiple CAN identifier specific filters that
lead to multiple filters in the af_can.c filter processing. These filters
are independent from each other, which leads to logically OR'ed filters
when applied (see 4.1.1).
This socket option joins the given CAN filters so that only CAN frames
that match *all* given CAN filters are passed to user space. The
semantics of the applied filters is therefore changed to a logical AND.
This is especially useful when the filterset is a combination of filters
where the CAN_INV_FILTER flag is set in order to notch single CAN IDs or
CAN ID ranges from the incoming traffic.
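
A minimal user space sketch of combining this option with an inverted
filter might look as follows (IDs and masks are only illustrative):

#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

/* Sketch: accept only IDs 0x200..0x2FF, except 0x234.
 * With CAN_RAW_JOIN_FILTERS both filters must match (logical AND). */
static int join_filters(int s)
{
	struct can_filter rfilter[2];
	int join = 1;

	rfilter[0].can_id   = 0x200;                  /* range 0x200..0x2FF */
	rfilter[0].can_mask = 0x700;
	rfilter[1].can_id   = 0x234 | CAN_INV_FILTER; /* notch out 0x234 */
	rfilter[1].can_mask = CAN_SFF_MASK;

	if (setsockopt(s, SOL_CAN_RAW, CAN_RAW_FILTER,
		       &rfilter, sizeof(rfilter)) < 0)
		return -1;

	/* switch the filter semantics from OR to AND */
	return setsockopt(s, SOL_CAN_RAW, CAN_RAW_JOIN_FILTERS,
			  &join, sizeof(join));
}
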
4.1.7 RAW socket returned message flags
When using recvmsg() call, the msg->msg_flags may contain following flags:


@ -280,7 +280,8 @@ Possible BPF extensions are shown in the following table:
rxhash skb->hash
cpu raw_smp_processor_id()
vlan_tci skb_vlan_tag_get(skb)
vlan_pr skb_vlan_tag_present(skb)
vlan_avail skb_vlan_tag_present(skb)
vlan_tpid skb->vlan_proto
rand prandom_u32()
These extensions can also be prefixed with '#'.


@ -42,10 +42,10 @@ Additional Configurations
Jumbo Frames
------------
Jumbo Frames support is enabled by changing the MTU to a value larger than
the default of 1500. Use the ifconfig command to increase the MTU size.
the default of 1500. Use the ip command to increase the MTU size.
For example:
ifconfig eth<x> mtu 9000 up
ip link set dev eth<x> mtu 9000
This setting is not saved across reboots.


@ -388,6 +388,16 @@ tcp_mtu_probing - INTEGER
1 - Disabled by default, enabled when an ICMP black hole detected
2 - Always enabled, use initial MSS of tcp_base_mss.
tcp_probe_interval - INTEGER
Controls how often to restart TCP Packetization-Layer Path MTU
Discovery reprobing. The default is to reprobe every 10 minutes, as
per RFC4821.
tcp_probe_threshold - INTEGER
Controls when TCP Packetization-Layer Path MTU Discovery probing
will stop, with respect to the width of the search range in bytes.
Default is 8 bytes.
tcp_no_metrics_save - BOOLEAN
By default, TCP saves various connection metrics in the route cache
when the connection closes, so that connections established in the
@ -1116,11 +1126,23 @@ arp_accept - BOOLEAN
gratuitous arp frame, the arp table will be updated regardless
if this setting is on or off.
mcast_solicit - INTEGER
The maximum number of multicast probes in INCOMPLETE state,
when the associated hardware address is unknown. Defaults
to 3.
ucast_solicit - INTEGER
The maximum number of unicast probes in PROBE state, when
the hardware address is being reconfirmed. Defaults to 3.
app_solicit - INTEGER
The maximum number of probes to send to the user space ARP daemon
via netlink before dropping back to multicast probes (see
mcast_solicit). Defaults to 0.
mcast_resolicit). Defaults to 0.
mcast_resolicit - INTEGER
The maximum number of multicast probes after unicast and
app probes in PROBE state. Defaults to 0.
disable_policy - BOOLEAN
Disable IPSEC policy (SPD) for this interface
@ -1198,6 +1220,17 @@ anycast_src_echo_reply - BOOLEAN
FALSE: disabled
Default: FALSE
idgen_delay - INTEGER
Controls the delay in seconds after which time to retry
privacy stable address generation if a DAD conflict is
detected.
Default: 1 (as specified in RFC7217)
idgen_retries - INTEGER
Controls the number of retries to generate a stable privacy
address if a DAD conflict is detected.
Default: 3 (as specified in RFC7217)
mld_qrv - INTEGER
Controls the MLD query robustness variable (see RFC3810 9.1).
Default: 2 (as specified by RFC3810 9.1)
@ -1518,6 +1551,20 @@ use_optimistic - BOOLEAN
0: disabled (default)
1: enabled
stable_secret - IPv6 address
This IPv6 address will be used as a secret when generating IPv6
addresses, both for link-local and for autoconfigured
ones. All addresses generated after setting this secret will
be stable privacy ones by default. This can be changed via the
addrgenmode option of ip-link. conf/default/stable_secret is used
as the secret for the namespace; the interface specific ones can
override that. Writes to conf/all/stable_secret are refused.
It is recommended to generate this secret during installation
of a system and keep it stable after that.
By default the stable secret is unset.
icmp/*:
ratelimit - INTEGER
Limit the maximal rates for sending ICMPv6 packets.


@ -22,6 +22,27 @@ backup_only - BOOLEAN
If set, disable the director function while the server is
in backup mode to avoid packet loops for DR/TUN methods.
conn_reuse_mode - INTEGER
1 - default
Controls how ipvs will deal with connections in which port reuse
is detected. It is a bitmap, with the values being:
0: disable any special handling on port reuse. The new
connection will be delivered to the same real server that was
servicing the previous connection. This will effectively
disable expire_nodest_conn.
bit 1: enable rescheduling of new connections when it is safe.
That is, whenever expire_nodest_conn is set and, for TCP sockets,
when the connection is in TIME_WAIT state (which is only possible
if you use NAT mode).
bit 2: bit 1 plus, for TCP connections, when connections are in
FIN_WAIT state, as this is the last state seen by the load
balancer in Direct Routing mode. This bit helps when adding new
real servers to a very busy cluster.
conntrack - BOOLEAN
0 - disabled (default)
not 0 - enabled


@ -39,7 +39,7 @@ Channel Bonding documentation can be found in the Linux kernel source:
The driver information previously displayed in the /proc filesystem is not
supported in this release. Alternatively, you can use ethtool (version 1.6
or later), lspci, and ifconfig to obtain the same information.
or later), lspci, and iproute2 to obtain the same information.
Instructions on updating ethtool can be found in the section "Additional
Configurations" later in this document.
@ -90,7 +90,7 @@ select m for "Intel(R) PRO/10GbE support" located at:
3. Assign an IP address to the interface by entering the following, where
x is the interface number:
ifconfig ethx <IP_address>
ip addr add <IP_address> dev ethx
4. Verify that the interface works. Enter the following, where <IP_address>
is the IP address for another machine on the same subnet as the interface
@ -177,7 +177,7 @@ NOTE: These changes are only suggestions, and serve as a starting point for
tuning your network performance.
The changes are made in three major ways, listed in order of greatest effect:
- Use ifconfig to modify the mtu (maximum transmission unit) and the txqueuelen
- Use ip link to modify the mtu (maximum transmission unit) and the txqueuelen
parameter.
- Use sysctl to modify /proc parameters (essentially kernel tuning)
- Use setpci to modify the MMRBC field in PCI-X configuration space to increase
@ -202,7 +202,7 @@ setpci -d 8086:1a48 e6.b=2e
# to change as well.
# set the txqueuelen
# your ixgb adapter should be loaded as eth1 for this to work, change if needed
ifconfig eth1 mtu 9000 txqueuelen 1000 up
ip link set dev eth1 mtu 9000 txqueuelen 1000 up
# call the sysctl utility to modify /proc/sys entries
sysctl -p ./sysctl_ixgb.conf
- END ixgb_perf.sh
@ -297,10 +297,10 @@ Additional Configurations
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
The maximum value for the MTU is 16114. Use the ifconfig command to
The maximum value for the MTU is 16114. Use the ip command to
increase the MTU size. For example:
ifconfig ethx mtu 9000 up
ip link set dev ethx mtu 9000
The maximum MTU setting for Jumbo Frames is 16114. This value coincides
with the maximum Jumbo Frames size of 16128.


@ -70,10 +70,10 @@ Avago 1000BASE-T SFP ABCU-5710RZ
82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Laser turns off for SFP+ when ifconfig down
Laser turns off for SFP+ when device is down
-------------------------------------------
"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig up" turns on the laser.
"ip link set down" turns off the laser for 82599-based SFP+ fiber adapters.
"ip link set up" turns on the laser.
82598-BASED ADAPTERS
@ -213,13 +213,13 @@ Additional Configurations
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
The maximum value for the MTU is 16110. Use the ifconfig command to
The maximum value for the MTU is 16110. Use the ip command to
increase the MTU size. For example:
ifconfig ethx mtu 9000 up
ip link set dev ethx mtu 9000
The maximum MTU setting for Jumbo Frames is 16110. This value coincides
with the maximum Jumbo Frames size of 16128.
The maximum MTU setting for Jumbo Frames is 9710. This value coincides
with the maximum Jumbo Frames size of 9728.
Generic Receive Offload, aka GRO
--------------------------------


@ -0,0 +1,20 @@
/proc/sys/net/mpls/* Variables:
platform_labels - INTEGER
Number of entries in the platform label table. It is not
possible to configure forwarding for label values equal to or
greater than the number of platform labels.
A dense utilization of the entries in the platform label table
is possible and expected as the platform labels are locally
allocated.
If the number of platform label table entries is set to 0 no
label will be recognized by the kernel and mpls forwarding
will be disabled.
Reducing this value will remove all label routing entries that
no longer fit in the table.
Possible values: 0 - 1048575
Default: 0
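
As a small illustrative sketch (the label count is arbitrary), the table
could be sized from a program like this:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: make labels 0..count-1 usable; valid counts are 0..1048575. */
static int set_platform_labels(unsigned int count)
{
	char buf[16];
	int fd, len, ret;

	if (count > 1048575)
		return -1;
	fd = open("/proc/sys/net/mpls/platform_labels", O_WRONLY);
	if (fd < 0)
		return -1;
	len = snprintf(buf, sizeof(buf), "%u\n", count);
	ret = (write(fd, buf, len) == len) ? 0 : -1;
	close(fd);
	return ret;
}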


@ -440,9 +440,10 @@ and the following flags apply:
+++ Capture process:
from include/linux/if_packet.h
#define TP_STATUS_COPY 2
#define TP_STATUS_LOSING 4
#define TP_STATUS_CSUMNOTREADY 8
#define TP_STATUS_COPY (1 << 1)
#define TP_STATUS_LOSING (1 << 2)
#define TP_STATUS_CSUMNOTREADY (1 << 3)
#define TP_STATUS_CSUM_VALID (1 << 7)
TP_STATUS_COPY : This flag indicates that the frame (and associated
meta information) has been truncated because it's
@ -466,6 +467,12 @@ TP_STATUS_CSUMNOTREADY: currently it's used for outgoing IP packets which
reading the packet we should not try to check the
checksum.
TP_STATUS_CSUM_VALID : This flag indicates that at least the transport
header checksum of the packet has been already
validated on the kernel side. If the flag is not set
then we are free to check the checksum by ourselves
provided that TP_STATUS_CSUMNOTREADY is also not set.
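
A short sketch of how a capture loop might combine these two flags when
deciding whether to verify a checksum itself (tp_status being the
frame's status word):

#include <stdbool.h>
#include <linux/if_packet.h>

/* Sketch: should user space still verify the transport checksum? */
static bool need_csum_check(unsigned int tp_status)
{
	if (tp_status & TP_STATUS_CSUM_VALID)
		return false;	/* kernel already validated it */
	if (tp_status & TP_STATUS_CSUMNOTREADY)
		return false;	/* checksum left to offload, nothing to verify */
	return true;		/* free to check it ourselves */
}
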
for convenience there are also the following defines:
#define TP_STATUS_KERNEL 0


@ -3,13 +3,11 @@
HOWTO for the linux packet generator
------------------------------------
Date: 041221
Enable CONFIG_NET_PKTGEN to compile and build pktgen.o either in kernel
or as module. Module is preferred. insmod pktgen if needed. Once running
pktgen creates a thread on each CPU where each thread has affinity to its CPU.
Monitoring and controlling is done via /proc. Easiest to select a suitable
a sample script and configure.
Enable CONFIG_NET_PKTGEN to compile and build pktgen either in-kernel
or as a module. A module is preferred; modprobe pktgen if needed. Once
running, pktgen creates a thread for each CPU with affinity to that CPU.
Monitoring and controlling is done via /proc. It is easiest to select a
suitable sample script and configure that.
On a dual CPU:
@ -27,7 +25,7 @@ For monitoring and control pktgen creates:
Tuning NIC for max performance
==============================
The default NIC setting are (likely) not tuned for pktgen's artificial
The default NIC settings are (likely) not tuned for pktgen's artificial
overload type of benchmarking, as this could hurt the normal use-case.
Specifically increasing the TX ring buffer in the NIC:
@ -35,20 +33,20 @@ Specifically increasing the TX ring buffer in the NIC:
A larger TX ring can improve pktgen's performance, while it can hurt
in the general case, 1) because the TX ring buffer might get larger
than the CPUs L1/L2 cache, 2) because it allow more queueing in the
than the CPU's L1/L2 cache, 2) because it allows more queueing in the
NIC HW layer (which is bad for bufferbloat).
One should be careful to conclude, that packets/descriptors in the HW
One should hesitate to conclude that packets/descriptors in the HW
TX ring cause delay. Drivers usually delay cleaning up the
ring-buffers (for various performance reasons), thus packets stalling
the TX ring, might just be waiting for cleanup.
ring-buffers for various performance reasons, and packets stalling
the TX ring might just be waiting for cleanup.
This cleanup issues is specifically the case, for the driver ixgbe
(Intel 82599 chip). This driver (ixgbe) combine TX+RX ring cleanups,
This cleanup issue is specifically the case for the driver ixgbe
(Intel 82599 chip). This driver (ixgbe) combines TX+RX ring cleanups,
and the cleanup interval is affected by the ethtool --coalesce setting
of parameter "rx-usecs".
For ixgbe use e.g "30" resulting in approx 33K interrupts/sec (1/30*10^6):
For ixgbe use e.g. "30" resulting in approx 33K interrupts/sec (1/30*10^6):
# ethtool -C ethX rx-usecs 30
@ -60,15 +58,16 @@ Running:
Stopped: eth1
Result: OK: max_before_softirq=10000
Most important the devices assigned to thread. Note! A device can only belong
to one thread.
Most important are the devices assigned to the thread. Note that a
device can only belong to one thread.
Viewing devices
===============
Parm section holds configured info. Current hold running stats.
Result is printed after run or after interruption. Example:
The Params section holds configured information. The Current section
holds running statistics. The Result is printed after a run or after
interruption. Example:
/proc/net/pktgen/eth1
@ -93,7 +92,8 @@ Result: OK: 13101142(c12220741+d880401) usec, 10000000 (60byte,0frags)
Configuring threads and devices
================================
This is done via the /proc interface easiest done via pgset in the scripts
This is done via the /proc interface, and most easily done via pgset
as defined in the sample scripts.
Examples:
@ -192,10 +192,11 @@ Examples:
pgset "rate 300M" set rate to 300 Mb/s
pgset "ratep 1000000" set rate to 1Mpps
Example scripts
===============
Sample scripts
==============
A collection of small tutorial scripts for pktgen is in examples dir.
A collection of small tutorial scripts for pktgen is in the
samples/pktgen directory:
pktgen.conf-1-1 # 1 CPU 1 dev
pktgen.conf-1-2 # 1 CPU 2 dev
@ -206,25 +207,26 @@ pktgen.conf-1-1-ip6 # 1 CPU 1 dev ipv6
pktgen.conf-1-1-ip6-rdos # 1 CPU 1 dev ipv6 w. route DoS
pktgen.conf-1-1-flows # 1 CPU 1 dev multiple flows.
Run in shell: ./pktgen.conf-X-Y It does all the setup including sending.
Run in shell: ./pktgen.conf-X-Y
This does all the setup including sending.
Interrupt affinity
===================
Note when adding devices to a specific CPU there good idea to also assign
/proc/irq/XX/smp_affinity so the TX-interrupts gets bound to the same CPU.
as this reduces cache bouncing when freeing skb's.
Note that when adding devices to a specific CPU it is a good idea to
also assign /proc/irq/XX/smp_affinity so that the TX interrupts are bound
to the same CPU. This reduces cache bouncing when freeing skbs.
Enable IPsec
============
Default IPsec transformation with ESP encapsulation plus Transport mode
could be enabled by simply setting:
Default IPsec transformation with ESP encapsulation plus transport mode
can be enabled by simply setting:
pgset "flag IPSEC"
pgset "flows 1"
To avoid breaking existing testbed scripts for using AH type and tunnel mode,
user could use "pgset spi SPI_VALUE" to specify which formal of transformation
you can use "pgset spi SPI_VALUE" to specify which transformation mode
to employ.


@ -62,11 +62,10 @@ Socket Interface
================
AF_RDS, PF_RDS, SOL_RDS
These constants haven't been assigned yet, because RDS isn't in
mainline yet. Currently, the kernel module assigns some constant
and publishes it to user space through two sysctl files
/proc/sys/net/rds/pf_rds
/proc/sys/net/rds/sol_rds
AF_RDS and PF_RDS are the domain type to be used with socket(2)
to create RDS sockets. SOL_RDS is the socket-level to be used
with setsockopt(2) and getsockopt(2) for RDS specific socket
options.
fd = socket(PF_RDS, SOCK_SEQPACKET, 0);
This creates a new, unbound RDS socket.


@ -38,7 +38,7 @@ The corresponding adapter's LED will blink multiple times.
3. Features supported:
a. Jumbo frames. Xframe I/II supports MTU up to 9600 bytes,
modifiable using ifconfig command.
modifiable using ip command.
b. Offloads. Supports checksum offload(TCP/UDP/IP) on transmit
and receive, TSO.


@ -421,6 +421,15 @@ best CPUs to share a given queue are probably those that share the cache
with the CPU that processes transmit completions for that queue
(transmit interrupts).
Per TX Queue rate limitation:
=============================
These are rate-limitation mechanisms implemented in HW, where currently
a max-rate attribute is supported, set by writing a Mbps value to
/sys/class/net/<dev>/queues/tx-<n>/tx_maxrate
A value of zero means the limit is disabled, and this is the default.
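
A sketch of setting this from a program (device name and queue index
are placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: cap tx queue <queue> of <dev> at <mbps> Mbps; 0 removes the cap. */
static int set_tx_maxrate(const char *dev, int queue, unsigned int mbps)
{
	char path[128], val[16];
	int fd, len, ret;

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/queues/tx-%d/tx_maxrate", dev, queue);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	len = snprintf(val, sizeof(val), "%u\n", mbps);
	ret = (write(fd, val, len) == len) ? 0 : -1;
	close(fd);
	return ret;
}

/* e.g. set_tx_maxrate("eth0", 0, 300) limits tx-0 of eth0 to 300 Mb/s */
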
Further Information
===================


@ -39,7 +39,7 @@ iii) PCI-SIG's I/O Virtualization
iv) Jumbo frames
X3100 Series supports MTU up to 9600 bytes, modifiable using
ifconfig command.
ip command.
v) Offloads supported: (Enabled by default)
Checksum offload (TCP/UDP/IP) on transmit and receive paths


@ -6353,6 +6353,7 @@ F: drivers/scsi/megaraid/
MELLANOX ETHERNET DRIVER (mlx4_en)
M: Amir Vadai <amirv@mellanox.com>
M: Ido Shamay <idos@mellanox.com>
L: netdev@vger.kernel.org
S: Supported
W: http://www.mellanox.com
@ -6972,6 +6973,13 @@ S: Supported
F: drivers/block/nvme*
F: include/linux/nvme.h
NXP-NCI NFC DRIVER
M: Clément Perrochaud <clement.perrochaud@effinnov.com>
R: Charles Gorand <charles.gorand@effinnov.com>
L: linux-nfc@lists.01.org (moderated for non-subscribers)
S: Supported
F: drivers/nfc/nxp-nci
NXP TDA998X DRM DRIVER
M: Russell King <rmk+kernel@arm.linux.org.uk>
S: Supported
@ -8380,7 +8388,6 @@ F: block/partitions/ibm.c
S390 NETWORK DRIVERS
M: Ursula Braun <ursula.braun@de.ibm.com>
M: Frank Blaschka <blaschka@linux.vnet.ibm.com>
M: linux390@de.ibm.com
L: linux-s390@vger.kernel.org
W: http://www.ibm.com/developerworks/linux/linux390/
@ -9856,7 +9863,7 @@ F: include/linux/wl12xx.h
TIPC NETWORK LAYER
M: Jon Maloy <jon.maloy@ericsson.com>
M: Allan Stephens <allan.stephens@windriver.com>
M: Ying Xue <ying.xue@windriver.com>
L: netdev@vger.kernel.org (core kernel code)
L: tipc-discussion@lists.sourceforge.net (user apps, general discussion)
W: http://tipc.sourceforge.net/


@ -842,7 +842,7 @@
};
macb0: ethernet@fffc4000 {
compatible = "cdns,at32ap7000-macb", "cdns,macb";
compatible = "cdns,at91sam9260-macb", "cdns,macb";
reg = <0xfffc4000 0x100>;
interrupts = <21 IRQ_TYPE_LEVEL_HIGH 3>;
pinctrl-names = "default";


@ -845,7 +845,7 @@
};
macb0: ethernet@fffbc000 {
compatible = "cdns,at32ap7000-macb", "cdns,macb";
compatible = "cdns,at91sam9260-macb", "cdns,macb";
reg = <0xfffbc000 0x100>;
interrupts = <21 IRQ_TYPE_LEVEL_HIGH 3>;
pinctrl-names = "default";


@ -956,7 +956,7 @@
};
macb0: ethernet@fffbc000 {
compatible = "cdns,at32ap7000-macb", "cdns,macb";
compatible = "cdns,at91sam9260-macb", "cdns,macb";
reg = <0xfffbc000 0x100>;
interrupts = <25 IRQ_TYPE_LEVEL_HIGH 3>;
pinctrl-names = "default";


@ -53,7 +53,7 @@
};
macb0: ethernet@f802c000 {
compatible = "cdns,at32ap7000-macb", "cdns,macb";
compatible = "cdns,at91sam9260-macb", "cdns,macb";
reg = <0xf802c000 0x100>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH 3>;
pinctrl-names = "default";


@ -41,7 +41,7 @@
};
macb1: ethernet@f8030000 {
compatible = "cdns,at32ap7000-macb", "cdns,macb";
compatible = "cdns,at91sam9260-macb", "cdns,macb";
reg = <0xf8030000 0x100>;
interrupts = <27 IRQ_TYPE_LEVEL_HIGH 3>;
pinctrl-names = "default";


@ -41,7 +41,7 @@
};
macb1: ethernet@f802c000 {
compatible = "cdns,at32ap7000-macb", "cdns,macb";
compatible = "cdns,at91sam9260-macb", "cdns,macb";
reg = <0xf802c000 0x100>;
interrupts = <35 IRQ_TYPE_LEVEL_HIGH 3>;
pinctrl-names = "default";


@ -45,6 +45,10 @@
status = "ok";
};
&sgenet1 {
status = "ok";
};
&xgenet {
status = "ok";
};


@ -186,6 +186,16 @@
clock-output-names = "sge0clk";
};
sge1clk: sge1clk@1f21c000 {
compatible = "apm,xgene-device-clock";
#clock-cells = <1>;
clocks = <&socplldiv2 0>;
reg = <0x0 0x1f21c000 0x0 0x1000>;
reg-names = "csr-reg";
csr-mask = <0xc>;
clock-output-names = "sge1clk";
};
xge0clk: xge0clk@1f61c000 {
compatible = "apm,xgene-device-clock";
#clock-cells = <1>;
@ -628,13 +638,30 @@
<0x0 0x1f200000 0x0 0Xc300>,
<0x0 0x1B000000 0x0 0X200>;
reg-names = "enet_csr", "ring_csr", "ring_cmd";
interrupts = <0x0 0xA0 0x4>;
interrupts = <0x0 0xA0 0x4>,
<0x0 0xA1 0x4>;
dma-coherent;
clocks = <&sge0clk 0>;
local-mac-address = [00 00 00 00 00 00];
phy-connection-type = "sgmii";
};
sgenet1: ethernet@1f210030 {
compatible = "apm,xgene1-sgenet";
status = "disabled";
reg = <0x0 0x1f210030 0x0 0xd100>,
<0x0 0x1f200000 0x0 0Xc300>,
<0x0 0x1B000000 0x0 0X8000>;
reg-names = "enet_csr", "ring_csr", "ring_cmd";
interrupts = <0x0 0xAC 0x4>,
<0x0 0xAD 0x4>;
port-id = <1>;
dma-coherent;
clocks = <&sge1clk 0>;
local-mac-address = [00 00 00 00 00 00];
phy-connection-type = "sgmii";
};
xgenet: ethernet@1f610000 {
compatible = "apm,xgene1-xgenet";
status = "disabled";
@ -642,7 +669,8 @@
<0x0 0x1f600000 0x0 0Xc300>,
<0x0 0x18000000 0x0 0X200>;
reg-names = "enet_csr", "ring_csr", "ring_cmd";
interrupts = <0x0 0x60 0x4>;
interrupts = <0x0 0x60 0x4>,
<0x0 0x61 0x4>;
dma-coherent;
clocks = <&xge0clk 0>;
/* mac address will be overwritten by the bootloader */


@ -126,7 +126,7 @@ config PPC
select IRQ_FORCED_THREADING
select HAVE_RCU_TABLE_FREE if SMP
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_BPF_JIT if PPC64
select HAVE_BPF_JIT
select HAVE_ARCH_JUMP_LABEL
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_HAS_GCOV_PROFILE_ALL


@ -23,6 +23,8 @@
#define PPC_STL stringify_in_c(std)
#define PPC_STLU stringify_in_c(stdu)
#define PPC_LCMPI stringify_in_c(cmpdi)
#define PPC_LCMPLI stringify_in_c(cmpldi)
#define PPC_LCMP stringify_in_c(cmpd)
#define PPC_LONG stringify_in_c(.llong)
#define PPC_LONG_ALIGN stringify_in_c(.balign 8)
#define PPC_TLNEI stringify_in_c(tdnei)
@ -52,6 +54,8 @@
#define PPC_STL stringify_in_c(stw)
#define PPC_STLU stringify_in_c(stwu)
#define PPC_LCMPI stringify_in_c(cmpwi)
#define PPC_LCMPLI stringify_in_c(cmplwi)
#define PPC_LCMP stringify_in_c(cmpw)
#define PPC_LONG stringify_in_c(.long)
#define PPC_LONG_ALIGN stringify_in_c(.balign 4)
#define PPC_TLNEI stringify_in_c(twnei)


@ -213,6 +213,8 @@
#define PPC_INST_LWZ 0x80000000
#define PPC_INST_STD 0xf8000000
#define PPC_INST_STDU 0xf8000001
#define PPC_INST_STW 0x90000000
#define PPC_INST_STWU 0x94000000
#define PPC_INST_MFLR 0x7c0802a6
#define PPC_INST_MTLR 0x7c0803a6
#define PPC_INST_CMPWI 0x2c000000


@ -1,4 +1,4 @@
#
# Arch-specific network modules
#
obj-$(CONFIG_BPF_JIT) += bpf_jit_64.o bpf_jit_comp.o
obj-$(CONFIG_BPF_JIT) += bpf_jit_asm.o bpf_jit_comp.o


@ -10,12 +10,25 @@
#ifndef _BPF_JIT_H
#define _BPF_JIT_H
#ifdef CONFIG_PPC64
#define BPF_PPC_STACK_R3_OFF 48
#define BPF_PPC_STACK_LOCALS 32
#define BPF_PPC_STACK_BASIC (48+64)
#define BPF_PPC_STACK_SAVE (18*8)
#define BPF_PPC_STACKFRAME (BPF_PPC_STACK_BASIC+BPF_PPC_STACK_LOCALS+ \
BPF_PPC_STACK_SAVE)
#define BPF_PPC_SLOWPATH_FRAME (48+64)
#else
#define BPF_PPC_STACK_R3_OFF 24
#define BPF_PPC_STACK_LOCALS 16
#define BPF_PPC_STACK_BASIC (24+32)
#define BPF_PPC_STACK_SAVE (18*4)
#define BPF_PPC_STACKFRAME (BPF_PPC_STACK_BASIC+BPF_PPC_STACK_LOCALS+ \
BPF_PPC_STACK_SAVE)
#define BPF_PPC_SLOWPATH_FRAME (24+32)
#endif
#define REG_SZ (BITS_PER_LONG/8)
/*
* Generated code register usage:
@ -57,7 +70,11 @@ DECLARE_LOAD_FUNC(sk_load_half);
DECLARE_LOAD_FUNC(sk_load_byte);
DECLARE_LOAD_FUNC(sk_load_byte_msh);
#ifdef CONFIG_PPC64
#define FUNCTION_DESCR_SIZE 24
#else
#define FUNCTION_DESCR_SIZE 0
#endif
/*
* 16-bit immediate helper macros: HA() is for use with sign-extending instrs
@ -86,7 +103,12 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
#define PPC_LIS(r, i) PPC_ADDIS(r, 0, i)
#define PPC_STD(r, base, i) EMIT(PPC_INST_STD | ___PPC_RS(r) | \
___PPC_RA(base) | ((i) & 0xfffc))
#define PPC_STDU(r, base, i) EMIT(PPC_INST_STDU | ___PPC_RS(r) | \
___PPC_RA(base) | ((i) & 0xfffc))
#define PPC_STW(r, base, i) EMIT(PPC_INST_STW | ___PPC_RS(r) | \
___PPC_RA(base) | ((i) & 0xfffc))
#define PPC_STWU(r, base, i) EMIT(PPC_INST_STWU | ___PPC_RS(r) | \
___PPC_RA(base) | ((i) & 0xfffc))
#define PPC_LBZ(r, base, i) EMIT(PPC_INST_LBZ | ___PPC_RT(r) | \
___PPC_RA(base) | IMM_L(i))
@ -98,6 +120,17 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
___PPC_RA(base) | IMM_L(i))
#define PPC_LHBRX(r, base, b) EMIT(PPC_INST_LHBRX | ___PPC_RT(r) | \
___PPC_RA(base) | ___PPC_RB(b))
#ifdef CONFIG_PPC64
#define PPC_BPF_LL(r, base, i) do { PPC_LD(r, base, i); } while(0)
#define PPC_BPF_STL(r, base, i) do { PPC_STD(r, base, i); } while(0)
#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
#else
#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
#endif
/* Convenience helpers for the above with 'far' offsets: */
#define PPC_LBZ_OFFS(r, base, i) do { if ((i) < 32768) PPC_LBZ(r, base, i); \
else { PPC_ADDIS(r, base, IMM_HA(i)); \
@ -115,6 +148,29 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
else { PPC_ADDIS(r, base, IMM_HA(i)); \
PPC_LHZ(r, r, IMM_L(i)); } } while(0)
#ifdef CONFIG_PPC64
#define PPC_LL_OFFS(r, base, i) do { PPC_LD_OFFS(r, base, i); } while(0)
#else
#define PPC_LL_OFFS(r, base, i) do { PPC_LWZ_OFFS(r, base, i); } while(0)
#endif
#ifdef CONFIG_SMP
#ifdef CONFIG_PPC64
#define PPC_BPF_LOAD_CPU(r) \
do { BUILD_BUG_ON(FIELD_SIZEOF(struct paca_struct, paca_index) != 2); \
PPC_LHZ_OFFS(r, 13, offsetof(struct paca_struct, paca_index)); \
} while (0)
#else
#define PPC_BPF_LOAD_CPU(r) \
do { BUILD_BUG_ON(FIELD_SIZEOF(struct thread_info, cpu) != 4); \
PPC_LHZ_OFFS(r, (1 & ~(THREAD_SIZE - 1)), \
offsetof(struct thread_info, cpu)); \
} while(0)
#endif
#else
#define PPC_BPF_LOAD_CPU(r) do { PPC_LI(r, 0); } while(0)
#endif
#define PPC_CMPWI(a, i) EMIT(PPC_INST_CMPWI | ___PPC_RA(a) | IMM_L(i))
#define PPC_CMPDI(a, i) EMIT(PPC_INST_CMPDI | ___PPC_RA(a) | IMM_L(i))
#define PPC_CMPLWI(a, i) EMIT(PPC_INST_CMPLWI | ___PPC_RA(a) | IMM_L(i))
@ -196,6 +252,12 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
PPC_ORI(d, d, (uintptr_t)(i) & 0xffff); \
} } while (0);
#ifdef CONFIG_PPC64
#define PPC_FUNC_ADDR(d,i) do { PPC_LI64(d, i); } while(0)
#else
#define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
#endif
#define PPC_LHBRX_OFFS(r, base, i) \
do { PPC_LI32(r, i); PPC_LHBRX(r, r, base); } while(0)
#ifdef __LITTLE_ENDIAN__


@ -34,13 +34,13 @@
*/
.globl sk_load_word
sk_load_word:
cmpdi r_addr, 0
PPC_LCMPI r_addr, 0
blt bpf_slow_path_word_neg
.globl sk_load_word_positive_offset
sk_load_word_positive_offset:
/* Are we accessing past headlen? */
subi r_scratch1, r_HL, 4
cmpd r_scratch1, r_addr
PPC_LCMP r_scratch1, r_addr
blt bpf_slow_path_word
/* Nope, just hitting the header. cr0 here is eq or gt! */
#ifdef __LITTLE_ENDIAN__
@ -52,12 +52,12 @@ sk_load_word_positive_offset:
.globl sk_load_half
sk_load_half:
cmpdi r_addr, 0
PPC_LCMPI r_addr, 0
blt bpf_slow_path_half_neg
.globl sk_load_half_positive_offset
sk_load_half_positive_offset:
subi r_scratch1, r_HL, 2
cmpd r_scratch1, r_addr
PPC_LCMP r_scratch1, r_addr
blt bpf_slow_path_half
#ifdef __LITTLE_ENDIAN__
lhbrx r_A, r_D, r_addr
@ -68,11 +68,11 @@ sk_load_half_positive_offset:
.globl sk_load_byte
sk_load_byte:
cmpdi r_addr, 0
PPC_LCMPI r_addr, 0
blt bpf_slow_path_byte_neg
.globl sk_load_byte_positive_offset
sk_load_byte_positive_offset:
cmpd r_HL, r_addr
PPC_LCMP r_HL, r_addr
ble bpf_slow_path_byte
lbzx r_A, r_D, r_addr
blr
@ -83,11 +83,11 @@ sk_load_byte_positive_offset:
*/
.globl sk_load_byte_msh
sk_load_byte_msh:
cmpdi r_addr, 0
PPC_LCMPI r_addr, 0
blt bpf_slow_path_byte_msh_neg
.globl sk_load_byte_msh_positive_offset
sk_load_byte_msh_positive_offset:
cmpd r_HL, r_addr
PPC_LCMP r_HL, r_addr
ble bpf_slow_path_byte_msh
lbzx r_X, r_D, r_addr
rlwinm r_X, r_X, 2, 32-4-2, 31-2
@ -101,13 +101,13 @@ sk_load_byte_msh_positive_offset:
*/
#define bpf_slow_path_common(SIZE) \
mflr r0; \
std r0, 16(r1); \
PPC_STL r0, PPC_LR_STKOFF(r1); \
/* R3 goes in parameter space of caller's frame */ \
std r_skb, (BPF_PPC_STACKFRAME+48)(r1); \
std r_A, (BPF_PPC_STACK_BASIC+(0*8))(r1); \
std r_X, (BPF_PPC_STACK_BASIC+(1*8))(r1); \
addi r5, r1, BPF_PPC_STACK_BASIC+(2*8); \
stdu r1, -BPF_PPC_SLOWPATH_FRAME(r1); \
PPC_STL r_skb, (BPF_PPC_STACKFRAME+BPF_PPC_STACK_R3_OFF)(r1); \
PPC_STL r_A, (BPF_PPC_STACK_BASIC+(0*REG_SZ))(r1); \
PPC_STL r_X, (BPF_PPC_STACK_BASIC+(1*REG_SZ))(r1); \
addi r5, r1, BPF_PPC_STACK_BASIC+(2*REG_SZ); \
PPC_STLU r1, -BPF_PPC_SLOWPATH_FRAME(r1); \
/* R3 = r_skb, as passed */ \
mr r4, r_addr; \
li r6, SIZE; \
@ -115,19 +115,19 @@ sk_load_byte_msh_positive_offset:
nop; \
/* R3 = 0 on success */ \
addi r1, r1, BPF_PPC_SLOWPATH_FRAME; \
ld r0, 16(r1); \
ld r_A, (BPF_PPC_STACK_BASIC+(0*8))(r1); \
ld r_X, (BPF_PPC_STACK_BASIC+(1*8))(r1); \
PPC_LL r0, PPC_LR_STKOFF(r1); \
PPC_LL r_A, (BPF_PPC_STACK_BASIC+(0*REG_SZ))(r1); \
PPC_LL r_X, (BPF_PPC_STACK_BASIC+(1*REG_SZ))(r1); \
mtlr r0; \
cmpdi r3, 0; \
PPC_LCMPI r3, 0; \
blt bpf_error; /* cr0 = LT */ \
ld r_skb, (BPF_PPC_STACKFRAME+48)(r1); \
PPC_LL r_skb, (BPF_PPC_STACKFRAME+BPF_PPC_STACK_R3_OFF)(r1); \
/* Great success! */
bpf_slow_path_word:
bpf_slow_path_common(4)
/* Data value is on stack, and cr0 != LT */
lwz r_A, BPF_PPC_STACK_BASIC+(2*8)(r1)
lwz r_A, BPF_PPC_STACK_BASIC+(2*REG_SZ)(r1)
blr
bpf_slow_path_half:
@ -154,12 +154,12 @@ bpf_slow_path_byte_msh:
*/
#define sk_negative_common(SIZE) \
mflr r0; \
std r0, 16(r1); \
PPC_STL r0, PPC_LR_STKOFF(r1); \
/* R3 goes in parameter space of caller's frame */ \
std r_skb, (BPF_PPC_STACKFRAME+48)(r1); \
std r_A, (BPF_PPC_STACK_BASIC+(0*8))(r1); \
std r_X, (BPF_PPC_STACK_BASIC+(1*8))(r1); \
stdu r1, -BPF_PPC_SLOWPATH_FRAME(r1); \
PPC_STL r_skb, (BPF_PPC_STACKFRAME+BPF_PPC_STACK_R3_OFF)(r1); \
PPC_STL r_A, (BPF_PPC_STACK_BASIC+(0*REG_SZ))(r1); \
PPC_STL r_X, (BPF_PPC_STACK_BASIC+(1*REG_SZ))(r1); \
PPC_STLU r1, -BPF_PPC_SLOWPATH_FRAME(r1); \
/* R3 = r_skb, as passed */ \
mr r4, r_addr; \
li r5, SIZE; \
@ -167,19 +167,19 @@ bpf_slow_path_byte_msh:
nop; \
/* R3 != 0 on success */ \
addi r1, r1, BPF_PPC_SLOWPATH_FRAME; \
ld r0, 16(r1); \
ld r_A, (BPF_PPC_STACK_BASIC+(0*8))(r1); \
ld r_X, (BPF_PPC_STACK_BASIC+(1*8))(r1); \
PPC_LL r0, PPC_LR_STKOFF(r1); \
PPC_LL r_A, (BPF_PPC_STACK_BASIC+(0*REG_SZ))(r1); \
PPC_LL r_X, (BPF_PPC_STACK_BASIC+(1*REG_SZ))(r1); \
mtlr r0; \
cmpldi r3, 0; \
PPC_LCMPLI r3, 0; \
beq bpf_error_slow; /* cr0 = EQ */ \
mr r_addr, r3; \
ld r_skb, (BPF_PPC_STACKFRAME+48)(r1); \
PPC_LL r_skb, (BPF_PPC_STACKFRAME+BPF_PPC_STACK_R3_OFF)(r1); \
/* Great success! */
bpf_slow_path_word_neg:
lis r_scratch1,-32 /* SKF_LL_OFF */
cmpd r_addr, r_scratch1 /* addr < SKF_* */
PPC_LCMP r_addr, r_scratch1 /* addr < SKF_* */
blt bpf_error /* cr0 = LT */
.globl sk_load_word_negative_offset
sk_load_word_negative_offset:
@ -189,7 +189,7 @@ sk_load_word_negative_offset:
bpf_slow_path_half_neg:
lis r_scratch1,-32 /* SKF_LL_OFF */
cmpd r_addr, r_scratch1 /* addr < SKF_* */
PPC_LCMP r_addr, r_scratch1 /* addr < SKF_* */
blt bpf_error /* cr0 = LT */
.globl sk_load_half_negative_offset
sk_load_half_negative_offset:
@ -199,7 +199,7 @@ sk_load_half_negative_offset:
bpf_slow_path_byte_neg:
lis r_scratch1,-32 /* SKF_LL_OFF */
cmpd r_addr, r_scratch1 /* addr < SKF_* */
PPC_LCMP r_addr, r_scratch1 /* addr < SKF_* */
blt bpf_error /* cr0 = LT */
.globl sk_load_byte_negative_offset
sk_load_byte_negative_offset:
@ -209,7 +209,7 @@ sk_load_byte_negative_offset:
bpf_slow_path_byte_msh_neg:
lis r_scratch1,-32 /* SKF_LL_OFF */
cmpd r_addr, r_scratch1 /* addr < SKF_* */
PPC_LCMP r_addr, r_scratch1 /* addr < SKF_* */
blt bpf_error /* cr0 = LT */
.globl sk_load_byte_msh_negative_offset
sk_load_byte_msh_negative_offset:
@ -221,7 +221,7 @@ sk_load_byte_msh_negative_offset:
bpf_error_slow:
/* fabricate a cr0 = lt */
li r_scratch1, -1
cmpdi r_scratch1, 0
PPC_LCMPI r_scratch1, 0
bpf_error:
/* Entered with cr0 = lt */
li r3, 0


@ -1,8 +1,9 @@
/* bpf_jit_comp.c: BPF JIT compiler for PPC64
/* bpf_jit_comp.c: BPF JIT compiler
*
* Copyright 2011 Matt Evans <matt@ozlabs.org>, IBM Corporation
*
* Based on the x86 BPF compiler, by Eric Dumazet (eric.dumazet@gmail.com)
* Ported to ppc32 by Denis Kirjanov <kda@linux-powerpc.org>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@ -36,11 +37,11 @@ static void bpf_jit_build_prologue(struct bpf_prog *fp, u32 *image,
if (ctx->seen & SEEN_DATAREF) {
/* If we call any helpers (for loads), save LR */
EMIT(PPC_INST_MFLR | __PPC_RT(R0));
PPC_STD(0, 1, 16);
PPC_BPF_STL(0, 1, PPC_LR_STKOFF);
/* Back up non-volatile regs. */
PPC_STD(r_D, 1, -(8*(32-r_D)));
PPC_STD(r_HL, 1, -(8*(32-r_HL)));
PPC_BPF_STL(r_D, 1, -(REG_SZ*(32-r_D)));
PPC_BPF_STL(r_HL, 1, -(REG_SZ*(32-r_HL)));
}
if (ctx->seen & SEEN_MEM) {
/*
@ -49,11 +50,10 @@ static void bpf_jit_build_prologue(struct bpf_prog *fp, u32 *image,
*/
for (i = r_M; i < (r_M+16); i++) {
if (ctx->seen & (1 << (i-r_M)))
PPC_STD(i, 1, -(8*(32-i)));
PPC_BPF_STL(i, 1, -(REG_SZ*(32-i)));
}
}
EMIT(PPC_INST_STDU | __PPC_RS(R1) | __PPC_RA(R1) |
(-BPF_PPC_STACKFRAME & 0xfffc));
PPC_BPF_STLU(1, 1, -BPF_PPC_STACKFRAME);
}
if (ctx->seen & SEEN_DATAREF) {
@ -67,7 +67,7 @@ static void bpf_jit_build_prologue(struct bpf_prog *fp, u32 *image,
data_len));
PPC_LWZ_OFFS(r_HL, r_skb, offsetof(struct sk_buff, len));
PPC_SUB(r_HL, r_HL, r_scratch1);
PPC_LD_OFFS(r_D, r_skb, offsetof(struct sk_buff, data));
PPC_LL_OFFS(r_D, r_skb, offsetof(struct sk_buff, data));
}
if (ctx->seen & SEEN_XREG) {
@ -99,16 +99,16 @@ static void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
if (ctx->seen & (SEEN_MEM | SEEN_DATAREF)) {
PPC_ADDI(1, 1, BPF_PPC_STACKFRAME);
if (ctx->seen & SEEN_DATAREF) {
PPC_LD(0, 1, 16);
PPC_BPF_LL(0, 1, PPC_LR_STKOFF);
PPC_MTLR(0);
PPC_LD(r_D, 1, -(8*(32-r_D)));
PPC_LD(r_HL, 1, -(8*(32-r_HL)));
PPC_BPF_LL(r_D, 1, -(REG_SZ*(32-r_D)));
PPC_BPF_LL(r_HL, 1, -(REG_SZ*(32-r_HL)));
}
if (ctx->seen & SEEN_MEM) {
/* Restore any saved non-vol registers */
for (i = r_M; i < (r_M+16); i++) {
if (ctx->seen & (1 << (i-r_M)))
PPC_LD(i, 1, -(8*(32-i)));
PPC_BPF_LL(i, 1, -(REG_SZ*(32-i)));
}
}
}
@ -355,7 +355,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
ifindex) != 4);
BUILD_BUG_ON(FIELD_SIZEOF(struct net_device,
type) != 2);
PPC_LD_OFFS(r_scratch1, r_skb, offsetof(struct sk_buff,
PPC_LL_OFFS(r_scratch1, r_skb, offsetof(struct sk_buff,
dev));
PPC_CMPDI(r_scratch1, 0);
if (ctx->pc_ret0 != -1) {
@ -411,20 +411,8 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
PPC_SRWI(r_A, r_A, 5);
break;
case BPF_ANC | SKF_AD_CPU:
#ifdef CONFIG_SMP
/*
* PACA ptr is r13:
* raw_smp_processor_id() = local_paca->paca_index
*/
BUILD_BUG_ON(FIELD_SIZEOF(struct paca_struct,
paca_index) != 2);
PPC_LHZ_OFFS(r_A, 13,
offsetof(struct paca_struct, paca_index));
#else
PPC_LI(r_A, 0);
#endif
PPC_BPF_LOAD_CPU(r_A);
break;
/*** Absolute loads from packet header/data ***/
case BPF_LD | BPF_W | BPF_ABS:
func = CHOOSE_LOAD_FUNC(K, sk_load_word);
@ -437,7 +425,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
common_load:
/* Load from [K]. */
ctx->seen |= SEEN_DATAREF;
PPC_LI64(r_scratch1, func);
PPC_FUNC_ADDR(r_scratch1, func);
PPC_MTLR(r_scratch1);
PPC_LI32(r_addr, K);
PPC_BLRL();
@ -463,7 +451,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
* in the helper functions.
*/
ctx->seen |= SEEN_DATAREF | SEEN_XREG;
PPC_LI64(r_scratch1, func);
PPC_FUNC_ADDR(r_scratch1, func);
PPC_MTLR(r_scratch1);
PPC_ADDI(r_addr, r_X, IMM_L(K));
if (K >= 32768)
@ -685,9 +673,11 @@ void bpf_jit_compile(struct bpf_prog *fp)
if (image) {
bpf_flush_icache(code_base, code_base + (proglen/4));
#ifdef CONFIG_PPC64
/* Function descriptor nastiness: Address + TOC */
((u64 *)image)[0] = (u64)code_base;
((u64 *)image)[1] = local_paca->kernel_toc;
#endif
fp->bpf_func = (void *)image;
fp->jited = true;
}


@ -57,7 +57,6 @@ enum interruption_class {
IRQIO_TAP,
IRQIO_VMR,
IRQIO_LCS,
IRQIO_CLW,
IRQIO_CTC,
IRQIO_APB,
IRQIO_ADM,


@ -79,7 +79,6 @@ static const struct irq_class irqclass_sub_desc[] = {
{.irq = IRQIO_TAP, .name = "TAP", .desc = "[I/O] Tape"},
{.irq = IRQIO_VMR, .name = "VMR", .desc = "[I/O] Unit Record Devices"},
{.irq = IRQIO_LCS, .name = "LCS", .desc = "[I/O] LCS"},
{.irq = IRQIO_CLW, .name = "CLW", .desc = "[I/O] CLAW"},
{.irq = IRQIO_CTC, .name = "CTC", .desc = "[I/O] CTC"},
{.irq = IRQIO_APB, .name = "APB", .desc = "[I/O] AP Bus"},
{.irq = IRQIO_ADM, .name = "ADM", .desc = "[I/O] EADM Subchannel"},


@ -456,7 +456,7 @@ int gxio_mpipe_equeue_init(gxio_mpipe_equeue_t *equeue,
EXPORT_SYMBOL_GPL(gxio_mpipe_equeue_init);
int gxio_mpipe_set_timestamp(gxio_mpipe_context_t *context,
const struct timespec *ts)
const struct timespec64 *ts)
{
cycles_t cycles = get_cycles();
return gxio_mpipe_set_timestamp_aux(context, (uint64_t)ts->tv_sec,
@ -466,7 +466,7 @@ int gxio_mpipe_set_timestamp(gxio_mpipe_context_t *context,
EXPORT_SYMBOL_GPL(gxio_mpipe_set_timestamp);
int gxio_mpipe_get_timestamp(gxio_mpipe_context_t *context,
struct timespec *ts)
struct timespec64 *ts)
{
int ret;
cycles_t cycles_prev, cycles_now, clock_rate;


@ -1830,7 +1830,7 @@ extern int gxio_mpipe_link_set_attr(gxio_mpipe_link_t *link, uint32_t attr,
* code.
*/
extern int gxio_mpipe_get_timestamp(gxio_mpipe_context_t *context,
struct timespec *ts);
struct timespec64 *ts);
/* Set the timestamp of mPIPE.
*
@ -1840,7 +1840,7 @@ extern int gxio_mpipe_get_timestamp(gxio_mpipe_context_t *context,
* code.
*/
extern int gxio_mpipe_set_timestamp(gxio_mpipe_context_t *context,
const struct timespec *ts);
const struct timespec64 *ts);
/* Adjust the timestamp of mPIPE.
*


@ -358,8 +358,8 @@ int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len)
npages = (off + n + PAGE_SIZE - 1) >> PAGE_SHIFT;
if (WARN_ON(npages == 0))
return -EINVAL;
sg_init_table(sgl->sg, npages);
/* Add one extra for linking */
sg_init_table(sgl->sg, npages + 1);
for (i = 0, len = n; i < npages; i++) {
int plen = min_t(int, len, PAGE_SIZE - off);
@ -369,18 +369,26 @@ int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len)
off = 0;
len -= plen;
}
sg_mark_end(sgl->sg + npages - 1);
sgl->npages = npages;
return n;
}
EXPORT_SYMBOL_GPL(af_alg_make_sg);
void af_alg_link_sg(struct af_alg_sgl *sgl_prev, struct af_alg_sgl *sgl_new)
{
sg_unmark_end(sgl_prev->sg + sgl_prev->npages - 1);
sg_chain(sgl_prev->sg, sgl_prev->npages + 1, sgl_new->sg);
}
EXPORT_SYMBOL_GPL(af_alg_link_sg);
void af_alg_free_sg(struct af_alg_sgl *sgl)
{
int i;
i = 0;
do {
for (i = 0; i < sgl->npages; i++)
put_page(sgl->pages[i]);
} while (!sg_is_last(sgl->sg + (i++)));
}
EXPORT_SYMBOL_GPL(af_alg_free_sg);


@ -34,8 +34,8 @@ struct hash_ctx {
struct ahash_request req;
};
static int hash_sendmsg(struct kiocb *unused, struct socket *sock,
struct msghdr *msg, size_t ignored)
static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
size_t ignored)
{
int limit = ALG_MAX_PAGES * PAGE_SIZE;
struct sock *sk = sock->sk;
@ -56,8 +56,8 @@ static int hash_sendmsg(struct kiocb *unused, struct socket *sock,
ctx->more = 0;
while (iov_iter_count(&msg->msg_iter)) {
int len = iov_iter_count(&msg->msg_iter);
while (msg_data_left(msg)) {
int len = msg_data_left(msg);
if (len > limit)
len = limit;
@ -139,8 +139,8 @@ unlock:
return err ?: size;
}
static int hash_recvmsg(struct kiocb *unused, struct socket *sock,
struct msghdr *msg, size_t len, int flags)
static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);


@ -55,8 +55,8 @@ struct rng_ctx {
struct crypto_rng *drng;
};
static int rng_recvmsg(struct kiocb *unused, struct socket *sock,
struct msghdr *msg, size_t len, int flags)
static int rng_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);


@ -39,6 +39,7 @@ struct skcipher_ctx {
struct af_alg_completion completion;
atomic_t inflight;
unsigned used;
unsigned int len;
@ -49,9 +50,65 @@ struct skcipher_ctx {
struct ablkcipher_request req;
};
struct skcipher_async_rsgl {
struct af_alg_sgl sgl;
struct list_head list;
};
struct skcipher_async_req {
struct kiocb *iocb;
struct skcipher_async_rsgl first_sgl;
struct list_head list;
struct scatterlist *tsg;
char iv[];
};
#define GET_SREQ(areq, ctx) (struct skcipher_async_req *)((char *)areq + \
crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req)))
#define GET_REQ_SIZE(ctx) \
crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req))
#define GET_IV_SIZE(ctx) \
crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(&ctx->req))
#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \
sizeof(struct scatterlist) - 1)
static void skcipher_free_async_sgls(struct skcipher_async_req *sreq)
{
struct skcipher_async_rsgl *rsgl, *tmp;
struct scatterlist *sgl;
struct scatterlist *sg;
int i, n;
list_for_each_entry_safe(rsgl, tmp, &sreq->list, list) {
af_alg_free_sg(&rsgl->sgl);
if (rsgl != &sreq->first_sgl)
kfree(rsgl);
}
sgl = sreq->tsg;
n = sg_nents(sgl);
for_each_sg(sgl, sg, n, i)
put_page(sg_page(sg));
kfree(sreq->tsg);
}
static void skcipher_async_cb(struct crypto_async_request *req, int err)
{
struct sock *sk = req->data;
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
struct skcipher_async_req *sreq = GET_SREQ(req, ctx);
struct kiocb *iocb = sreq->iocb;
atomic_dec(&ctx->inflight);
skcipher_free_async_sgls(sreq);
kfree(req);
iocb->ki_complete(iocb, err, err);
}
static inline int skcipher_sndbuf(struct sock *sk)
{
struct alg_sock *ask = alg_sk(sk);
@ -96,7 +153,7 @@ static int skcipher_alloc_sgl(struct sock *sk)
return 0;
}
static void skcipher_pull_sgl(struct sock *sk, int used)
static void skcipher_pull_sgl(struct sock *sk, int used, int put)
{
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
@ -123,8 +180,8 @@ static void skcipher_pull_sgl(struct sock *sk, int used)
if (sg[i].length)
return;
put_page(sg_page(sg + i));
if (put)
put_page(sg_page(sg + i));
sg_assign_page(sg + i, NULL);
}
@ -143,7 +200,7 @@ static void skcipher_free_sgl(struct sock *sk)
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
skcipher_pull_sgl(sk, ctx->used);
skcipher_pull_sgl(sk, ctx->used, 1);
}
static int skcipher_wait_for_wmem(struct sock *sk, unsigned flags)
@ -239,8 +296,8 @@ static void skcipher_data_wakeup(struct sock *sk)
rcu_read_unlock();
}
static int skcipher_sendmsg(struct kiocb *unused, struct socket *sock,
struct msghdr *msg, size_t size)
static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
size_t size)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
@ -424,8 +481,153 @@ unlock:
return err ?: size;
}
static int skcipher_recvmsg(struct kiocb *unused, struct socket *sock,
struct msghdr *msg, size_t ignored, int flags)
static int skcipher_all_sg_nents(struct skcipher_ctx *ctx)
{
struct skcipher_sg_list *sgl;
struct scatterlist *sg;
int nents = 0;
list_for_each_entry(sgl, &ctx->tsgl, list) {
sg = sgl->sg;
while (!sg->length)
sg++;
nents += sg_nents(sg);
}
return nents;
}
static int skcipher_recvmsg_async(struct socket *sock, struct msghdr *msg,
int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
struct skcipher_sg_list *sgl;
struct scatterlist *sg;
struct skcipher_async_req *sreq;
struct ablkcipher_request *req;
struct skcipher_async_rsgl *last_rsgl = NULL;
unsigned int txbufs = 0, len = 0, tx_nents = skcipher_all_sg_nents(ctx);
unsigned int reqlen = sizeof(struct skcipher_async_req) +
GET_REQ_SIZE(ctx) + GET_IV_SIZE(ctx);
int err = -ENOMEM;
bool mark = false;
lock_sock(sk);
req = kmalloc(reqlen, GFP_KERNEL);
if (unlikely(!req))
goto unlock;
sreq = GET_SREQ(req, ctx);
sreq->iocb = msg->msg_iocb;
memset(&sreq->first_sgl, '\0', sizeof(struct skcipher_async_rsgl));
INIT_LIST_HEAD(&sreq->list);
sreq->tsg = kcalloc(tx_nents, sizeof(*sg), GFP_KERNEL);
if (unlikely(!sreq->tsg)) {
kfree(req);
goto unlock;
}
sg_init_table(sreq->tsg, tx_nents);
memcpy(sreq->iv, ctx->iv, GET_IV_SIZE(ctx));
ablkcipher_request_set_tfm(req, crypto_ablkcipher_reqtfm(&ctx->req));
ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
skcipher_async_cb, sk);
while (iov_iter_count(&msg->msg_iter)) {
struct skcipher_async_rsgl *rsgl;
int used;
if (!ctx->used) {
err = skcipher_wait_for_data(sk, flags);
if (err)
goto free;
}
sgl = list_first_entry(&ctx->tsgl,
struct skcipher_sg_list, list);
sg = sgl->sg;
while (!sg->length)
sg++;
used = min_t(unsigned long, ctx->used,
iov_iter_count(&msg->msg_iter));
used = min_t(unsigned long, used, sg->length);
if (txbufs == tx_nents) {
struct scatterlist *tmp;
int x;
/* Ran out of tx slots in async request
* need to expand */
tmp = kcalloc(tx_nents * 2, sizeof(*tmp),
GFP_KERNEL);
if (!tmp)
goto free;
sg_init_table(tmp, tx_nents * 2);
for (x = 0; x < tx_nents; x++)
sg_set_page(&tmp[x], sg_page(&sreq->tsg[x]),
sreq->tsg[x].length,
sreq->tsg[x].offset);
kfree(sreq->tsg);
sreq->tsg = tmp;
tx_nents *= 2;
mark = true;
}
/* Need to take over the tx sgl from ctx
* to the async req - these sgls will be freed later */
sg_set_page(sreq->tsg + txbufs++, sg_page(sg), sg->length,
sg->offset);
if (list_empty(&sreq->list)) {
rsgl = &sreq->first_sgl;
list_add_tail(&rsgl->list, &sreq->list);
} else {
rsgl = kmalloc(sizeof(*rsgl), GFP_KERNEL);
if (!rsgl) {
err = -ENOMEM;
goto free;
}
list_add_tail(&rsgl->list, &sreq->list);
}
used = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, used);
err = used;
if (used < 0)
goto free;
if (last_rsgl)
af_alg_link_sg(&last_rsgl->sgl, &rsgl->sgl);
last_rsgl = rsgl;
len += used;
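		/* Editorial note: hand the tx pages over to the async request
		 * without dropping their page references (put == 0);
		 * skcipher_free_async_sgls() releases them once the request
		 * completes. */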
skcipher_pull_sgl(sk, used, 0);
iov_iter_advance(&msg->msg_iter, used);
}
if (mark)
sg_mark_end(sreq->tsg + txbufs - 1);
ablkcipher_request_set_crypt(req, sreq->tsg, sreq->first_sgl.sgl.sg,
len, sreq->iv);
err = ctx->enc ? crypto_ablkcipher_encrypt(req) :
crypto_ablkcipher_decrypt(req);
if (err == -EINPROGRESS) {
atomic_inc(&ctx->inflight);
err = -EIOCBQUEUED;
goto unlock;
}
free:
skcipher_free_async_sgls(sreq);
kfree(req);
unlock:
skcipher_wmem_wakeup(sk);
release_sock(sk);
return err;
}
static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
@ -439,7 +641,7 @@ static int skcipher_recvmsg(struct kiocb *unused, struct socket *sock,
long copied = 0;
lock_sock(sk);
while (iov_iter_count(&msg->msg_iter)) {
while (msg_data_left(msg)) {
sgl = list_first_entry(&ctx->tsgl,
struct skcipher_sg_list, list);
sg = sgl->sg;
@ -453,7 +655,7 @@ static int skcipher_recvmsg(struct kiocb *unused, struct socket *sock,
goto unlock;
}
used = min_t(unsigned long, ctx->used, iov_iter_count(&msg->msg_iter));
used = min_t(unsigned long, ctx->used, msg_data_left(msg));
used = af_alg_make_sg(&ctx->rsgl, &msg->msg_iter, used);
err = used;
@ -484,7 +686,7 @@ free:
goto unlock;
copied += used;
skcipher_pull_sgl(sk, used);
skcipher_pull_sgl(sk, used, 1);
iov_iter_advance(&msg->msg_iter, used);
}
@ -497,6 +699,13 @@ unlock:
return copied ?: err;
}
static int skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
size_t ignored, int flags)
{
return (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) ?
skcipher_recvmsg_async(sock, msg, flags) :
skcipher_recvmsg_sync(sock, msg, flags);
}
static unsigned int skcipher_poll(struct file *file, struct socket *sock,
poll_table *wait)
@ -555,12 +764,25 @@ static int skcipher_setkey(void *private, const u8 *key, unsigned int keylen)
return crypto_ablkcipher_setkey(private, key, keylen);
}
static void skcipher_wait(struct sock *sk)
{
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
int ctr = 0;
while (atomic_read(&ctx->inflight) && ctr++ < 100)
msleep(100);
}
static void skcipher_sock_destruct(struct sock *sk)
{
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(&ctx->req);
if (atomic_read(&ctx->inflight))
skcipher_wait(sk);
skcipher_free_sgl(sk);
sock_kzfree_s(sk, ctx->iv, crypto_ablkcipher_ivsize(tfm));
sock_kfree_s(sk, ctx, ctx->len);
@ -592,6 +814,7 @@ static int skcipher_accept_parent(void *private, struct sock *sk)
ctx->more = 0;
ctx->merge = 0;
ctx->enc = 0;
atomic_set(&ctx->inflight, 0);
af_alg_init_completion(&ctx->completion);
ask->private = ctx;
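
For context, here is a minimal userspace sketch of the AF_ALG "skcipher" flow these handlers service. It exercises the synchronous path (no msg_iocb, so skcipher_recvmsg() falls through to skcipher_recvmsg_sync()); the key, IV and buffer contents are placeholders, error handling is omitted, and none of this is part of the patch itself.

/* Minimal AF_ALG skcipher round trip (illustrative only; no error handling). */
#include <stddef.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "cbc(aes)",
	};
	char key[16] = { 0 }, iv[16] = { 0 };
	char pt[16] = "0123456789abcde", ct[16];
	char cbuf[CMSG_SPACE(sizeof(__u32)) +
		  CMSG_SPACE(offsetof(struct af_alg_iv, iv) + sizeof(iv))] = { 0 };
	struct iovec iov = { .iov_base = pt, .iov_len = sizeof(pt) };
	struct msghdr msg = {
		.msg_control	= cbuf,
		.msg_controllen	= sizeof(cbuf),
		.msg_iov	= &iov,
		.msg_iovlen	= 1,
	};
	struct cmsghdr *cmsg;
	struct af_alg_iv *alg_iv;
	int tfmfd, opfd;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	opfd = accept(tfmfd, NULL, 0);

	/* Operation type and IV travel as ancillary data. */
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type	 = ALG_SET_OP;
	cmsg->cmsg_len	 = CMSG_LEN(sizeof(__u32));
	*(__u32 *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;

	cmsg = CMSG_NXTHDR(&msg, cmsg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type	 = ALG_SET_IV;
	cmsg->cmsg_len	 = CMSG_LEN(offsetof(struct af_alg_iv, iv) + sizeof(iv));
	alg_iv = (struct af_alg_iv *)CMSG_DATA(cmsg);
	alg_iv->ivlen = sizeof(iv);
	memcpy(alg_iv->iv, iv, sizeof(iv));

	sendmsg(opfd, &msg, 0);		/* handled by skcipher_sendmsg() */
	read(opfd, ct, sizeof(ct));	/* handled by skcipher_recvmsg_sync() */

	close(opfd);
	close(tfmfd);
	return 0;
}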

View File

@ -73,9 +73,6 @@
#undef GENERAL_DEBUG
#undef EXTRA_DEBUG
#undef NS_USE_DESTRUCTORS /* For now keep this undefined unless you know
you're going to use only raw ATM */
/* Do not touch these */
#ifdef TX_DEBUG
@ -138,11 +135,6 @@ static void process_tsq(ns_dev * card);
static void drain_scq(ns_dev * card, scq_info * scq, int pos);
static void process_rsq(ns_dev * card);
static void dequeue_rx(ns_dev * card, ns_rsqe * rsqe);
#ifdef NS_USE_DESTRUCTORS
static void ns_sb_destructor(struct sk_buff *sb);
static void ns_lb_destructor(struct sk_buff *lb);
static void ns_hb_destructor(struct sk_buff *hb);
#endif /* NS_USE_DESTRUCTORS */
static void recycle_rx_buf(ns_dev * card, struct sk_buff *skb);
static void recycle_iovec_rx_bufs(ns_dev * card, struct iovec *iov, int count);
static void recycle_iov_buf(ns_dev * card, struct sk_buff *iovb);
@ -2169,9 +2161,6 @@ static void dequeue_rx(ns_dev * card, ns_rsqe * rsqe)
} else {
skb_put(skb, len);
dequeue_sm_buf(card, skb);
#ifdef NS_USE_DESTRUCTORS
skb->destructor = ns_sb_destructor;
#endif /* NS_USE_DESTRUCTORS */
ATM_SKB(skb)->vcc = vcc;
__net_timestamp(skb);
vcc->push(vcc, skb);
@ -2190,9 +2179,6 @@ static void dequeue_rx(ns_dev * card, ns_rsqe * rsqe)
} else {
skb_put(sb, len);
dequeue_sm_buf(card, sb);
#ifdef NS_USE_DESTRUCTORS
sb->destructor = ns_sb_destructor;
#endif /* NS_USE_DESTRUCTORS */
ATM_SKB(sb)->vcc = vcc;
__net_timestamp(sb);
vcc->push(vcc, sb);
@ -2208,9 +2194,6 @@ static void dequeue_rx(ns_dev * card, ns_rsqe * rsqe)
atomic_inc(&vcc->stats->rx_drop);
} else {
dequeue_lg_buf(card, skb);
#ifdef NS_USE_DESTRUCTORS
skb->destructor = ns_lb_destructor;
#endif /* NS_USE_DESTRUCTORS */
skb_push(skb, NS_SMBUFSIZE);
skb_copy_from_linear_data(sb, skb->data,
NS_SMBUFSIZE);
@ -2322,9 +2305,6 @@ static void dequeue_rx(ns_dev * card, ns_rsqe * rsqe)
card->index);
#endif /* EXTRA_DEBUG */
ATM_SKB(hb)->vcc = vcc;
#ifdef NS_USE_DESTRUCTORS
hb->destructor = ns_hb_destructor;
#endif /* NS_USE_DESTRUCTORS */
__net_timestamp(hb);
vcc->push(vcc, hb);
atomic_inc(&vcc->stats->rx);
@ -2337,68 +2317,6 @@ static void dequeue_rx(ns_dev * card, ns_rsqe * rsqe)
}
#ifdef NS_USE_DESTRUCTORS
static void ns_sb_destructor(struct sk_buff *sb)
{
ns_dev *card;
u32 stat;
card = (ns_dev *) ATM_SKB(sb)->vcc->dev->dev_data;
stat = readl(card->membase + STAT);
card->sbfqc = ns_stat_sfbqc_get(stat);
card->lbfqc = ns_stat_lfbqc_get(stat);
do {
sb = __dev_alloc_skb(NS_SMSKBSIZE, GFP_KERNEL);
if (sb == NULL)
break;
NS_PRV_BUFTYPE(sb) = BUF_SM;
skb_queue_tail(&card->sbpool.queue, sb);
skb_reserve(sb, NS_AAL0_HEADER);
push_rxbufs(card, sb);
} while (card->sbfqc < card->sbnr.min);
}
static void ns_lb_destructor(struct sk_buff *lb)
{
ns_dev *card;
u32 stat;
card = (ns_dev *) ATM_SKB(lb)->vcc->dev->dev_data;
stat = readl(card->membase + STAT);
card->sbfqc = ns_stat_sfbqc_get(stat);
card->lbfqc = ns_stat_lfbqc_get(stat);
do {
lb = __dev_alloc_skb(NS_LGSKBSIZE, GFP_KERNEL);
if (lb == NULL)
break;
NS_PRV_BUFTYPE(lb) = BUF_LG;
skb_queue_tail(&card->lbpool.queue, lb);
skb_reserve(lb, NS_SMBUFSIZE);
push_rxbufs(card, lb);
} while (card->lbfqc < card->lbnr.min);
}
static void ns_hb_destructor(struct sk_buff *hb)
{
ns_dev *card;
card = (ns_dev *) ATM_SKB(hb)->vcc->dev->dev_data;
while (card->hbpool.count < card->hbnr.init) {
hb = __dev_alloc_skb(NS_HBUFSIZE, GFP_KERNEL);
if (hb == NULL)
break;
NS_PRV_BUFTYPE(hb) = BUF_NONE;
skb_queue_tail(&card->hbpool.queue, hb);
card->hbpool.count++;
}
}
#endif /* NS_USE_DESTRUCTORS */
static void recycle_rx_buf(ns_dev * card, struct sk_buff *skb)
{
if (unlikely(NS_PRV_BUFTYPE(skb) == BUF_NONE)) {
@ -2427,9 +2345,6 @@ static void recycle_iov_buf(ns_dev * card, struct sk_buff *iovb)
static void dequeue_sm_buf(ns_dev * card, struct sk_buff *sb)
{
skb_unlink(sb, &card->sbpool.queue);
#ifdef NS_USE_DESTRUCTORS
if (card->sbfqc < card->sbnr.min)
#else
if (card->sbfqc < card->sbnr.init) {
struct sk_buff *new_sb;
if ((new_sb = dev_alloc_skb(NS_SMSKBSIZE)) != NULL) {
@ -2440,7 +2355,6 @@ static void dequeue_sm_buf(ns_dev * card, struct sk_buff *sb)
}
}
if (card->sbfqc < card->sbnr.init)
#endif /* NS_USE_DESTRUCTORS */
{
struct sk_buff *new_sb;
if ((new_sb = dev_alloc_skb(NS_SMSKBSIZE)) != NULL) {
@ -2455,9 +2369,6 @@ static void dequeue_sm_buf(ns_dev * card, struct sk_buff *sb)
static void dequeue_lg_buf(ns_dev * card, struct sk_buff *lb)
{
skb_unlink(lb, &card->lbpool.queue);
#ifdef NS_USE_DESTRUCTORS
if (card->lbfqc < card->lbnr.min)
#else
if (card->lbfqc < card->lbnr.init) {
struct sk_buff *new_lb;
if ((new_lb = dev_alloc_skb(NS_LGSKBSIZE)) != NULL) {
@ -2468,7 +2379,6 @@ static void dequeue_lg_buf(ns_dev * card, struct sk_buff *lb)
}
}
if (card->lbfqc < card->lbnr.init)
#endif /* NS_USE_DESTRUCTORS */
{
struct sk_buff *new_lb;
if ((new_lb = dev_alloc_skb(NS_LGSKBSIZE)) != NULL) {

View File

@ -26,6 +26,7 @@ config BCMA_HOST_PCI_POSSIBLE
config BCMA_HOST_PCI
bool "Support for BCMA on PCI-host bus"
depends on BCMA_HOST_PCI_POSSIBLE
select BCMA_DRIVER_PCI
default y
config BCMA_DRIVER_PCI_HOSTMODE
@ -44,6 +45,22 @@ config BCMA_HOST_SOC
If unsure, say N
config BCMA_DRIVER_PCI
bool "BCMA Broadcom PCI core driver"
depends on BCMA && PCI
default y
help
The BCMA bus may carry one of several versions of the PCIe core. This
driver supports:
1) PCIe core working in clientmode
2) PCIe Gen 2 clientmode core
In general, a PCIe (Gen 2) clientmode core is required on PCIe-hosted
buses; it is responsible for initialization and basic hardware
management.
This driver is also a prerequisite for hostmode PCIe core support.
config BCMA_DRIVER_MIPS
bool "BCMA Broadcom MIPS core driver"
depends on BCMA && MIPS

View File

@ -3,8 +3,8 @@ bcma-y += driver_chipcommon.o driver_chipcommon_pmu.o
bcma-y += driver_chipcommon_b.o
bcma-$(CONFIG_BCMA_SFLASH) += driver_chipcommon_sflash.o
bcma-$(CONFIG_BCMA_NFLASH) += driver_chipcommon_nflash.o
bcma-y += driver_pci.o
bcma-y += driver_pcie2.o
bcma-$(CONFIG_BCMA_DRIVER_PCI) += driver_pci.o
bcma-$(CONFIG_BCMA_DRIVER_PCI) += driver_pcie2.o
bcma-$(CONFIG_BCMA_DRIVER_PCI_HOSTMODE) += driver_pci_host.o
bcma-$(CONFIG_BCMA_DRIVER_MIPS) += driver_mips.o
bcma-$(CONFIG_BCMA_DRIVER_GMAC_CMN) += driver_gmac_cmn.o

View File

@ -26,6 +26,7 @@ bool bcma_wait_value(struct bcma_device *core, u16 reg, u32 mask, u32 value,
int timeout);
void bcma_prepare_core(struct bcma_bus *bus, struct bcma_device *core);
void bcma_init_bus(struct bcma_bus *bus);
void bcma_unregister_cores(struct bcma_bus *bus);
int bcma_bus_register(struct bcma_bus *bus);
void bcma_bus_unregister(struct bcma_bus *bus);
int __init bcma_bus_early_register(struct bcma_bus *bus);
@ -42,6 +43,9 @@ int bcma_bus_scan(struct bcma_bus *bus);
int bcma_sprom_get(struct bcma_bus *bus);
/* driver_chipcommon.c */
void bcma_core_chipcommon_early_init(struct bcma_drv_cc *cc);
void bcma_core_chipcommon_init(struct bcma_drv_cc *cc);
void bcma_chipco_bcm4331_ext_pa_lines_ctl(struct bcma_drv_cc *cc, bool enable);
#ifdef CONFIG_BCMA_DRIVER_MIPS
void bcma_chipco_serial_init(struct bcma_drv_cc *cc);
extern struct platform_device bcma_pflash_dev;
@ -52,6 +56,8 @@ int bcma_core_chipcommon_b_init(struct bcma_drv_cc_b *ccb);
void bcma_core_chipcommon_b_free(struct bcma_drv_cc_b *ccb);
/* driver_chipcommon_pmu.c */
void bcma_pmu_early_init(struct bcma_drv_cc *cc);
void bcma_pmu_init(struct bcma_drv_cc *cc);
u32 bcma_pmu_get_alp_clock(struct bcma_drv_cc *cc);
u32 bcma_pmu_get_cpu_clock(struct bcma_drv_cc *cc);
@ -100,7 +106,35 @@ static inline void __exit bcma_host_soc_unregister_driver(void)
#endif /* CONFIG_BCMA_HOST_SOC && CONFIG_OF */
/* driver_pci.c */
#ifdef CONFIG_BCMA_DRIVER_PCI
u32 bcma_pcie_read(struct bcma_drv_pci *pc, u32 address);
void bcma_core_pci_early_init(struct bcma_drv_pci *pc);
void bcma_core_pci_init(struct bcma_drv_pci *pc);
void bcma_core_pci_up(struct bcma_drv_pci *pc);
void bcma_core_pci_down(struct bcma_drv_pci *pc);
#else
static inline void bcma_core_pci_early_init(struct bcma_drv_pci *pc)
{
WARN_ON(pc->core->bus->hosttype == BCMA_HOSTTYPE_PCI);
}
static inline void bcma_core_pci_init(struct bcma_drv_pci *pc)
{
/* Initialization is required for PCI hosted bus */
WARN_ON(pc->core->bus->hosttype == BCMA_HOSTTYPE_PCI);
}
#endif
/* driver_pcie2.c */
#ifdef CONFIG_BCMA_DRIVER_PCI
void bcma_core_pcie2_init(struct bcma_drv_pcie2 *pcie2);
void bcma_core_pcie2_up(struct bcma_drv_pcie2 *pcie2);
#else
static inline void bcma_core_pcie2_init(struct bcma_drv_pcie2 *pcie2)
{
/* Initialization is required for PCI hosted bus */
WARN_ON(pcie2->core->bus->hosttype == BCMA_HOSTTYPE_PCI);
}
#endif
extern int bcma_chipco_watchdog_register(struct bcma_drv_cc *cc);
@ -117,6 +151,39 @@ static inline void bcma_core_pci_hostmode_init(struct bcma_drv_pci *pc)
}
#endif /* CONFIG_BCMA_DRIVER_PCI_HOSTMODE */
/**************************************************
* driver_mips.c
**************************************************/
#ifdef CONFIG_BCMA_DRIVER_MIPS
unsigned int bcma_core_mips_irq(struct bcma_device *dev);
void bcma_core_mips_early_init(struct bcma_drv_mips *mcore);
void bcma_core_mips_init(struct bcma_drv_mips *mcore);
#else
static inline unsigned int bcma_core_mips_irq(struct bcma_device *dev)
{
return 0;
}
static inline void bcma_core_mips_early_init(struct bcma_drv_mips *mcore)
{
}
static inline void bcma_core_mips_init(struct bcma_drv_mips *mcore)
{
}
#endif
/**************************************************
* driver_gmac_cmn.c
**************************************************/
#ifdef CONFIG_BCMA_DRIVER_GMAC_CMN
void bcma_core_gmac_cmn_init(struct bcma_drv_gmac_cmn *gc);
#else
static inline void bcma_core_gmac_cmn_init(struct bcma_drv_gmac_cmn *gc)
{
}
#endif
#ifdef CONFIG_BCMA_DRIVER_GPIO
/* driver_gpio.c */
int bcma_gpio_init(struct bcma_drv_cc *cc);

View File

@ -17,6 +17,8 @@
#include "bcma_private.h"
#define BCMA_GPIO_MAX_PINS 32
static inline struct bcma_drv_cc *bcma_gpio_get_cc(struct gpio_chip *chip)
{
return container_of(chip, struct bcma_drv_cc, gpio);
@ -76,7 +78,7 @@ static void bcma_gpio_free(struct gpio_chip *chip, unsigned gpio)
bcma_chipco_gpio_pullup(cc, 1 << gpio, 0);
}
#if IS_BUILTIN(CONFIG_BCM47XX)
#if IS_BUILTIN(CONFIG_BCM47XX) || IS_BUILTIN(CONFIG_ARCH_BCM_5301X)
static int bcma_gpio_to_irq(struct gpio_chip *chip, unsigned gpio)
{
struct bcma_drv_cc *cc = bcma_gpio_get_cc(chip);
@ -204,6 +206,7 @@ static void bcma_gpio_irq_domain_exit(struct bcma_drv_cc *cc)
int bcma_gpio_init(struct bcma_drv_cc *cc)
{
struct bcma_bus *bus = cc->core->bus;
struct gpio_chip *chip = &cc->gpio;
int err;
@ -215,14 +218,14 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
chip->set = bcma_gpio_set_value;
chip->direction_input = bcma_gpio_direction_input;
chip->direction_output = bcma_gpio_direction_output;
#if IS_BUILTIN(CONFIG_BCM47XX)
#if IS_BUILTIN(CONFIG_BCM47XX) || IS_BUILTIN(CONFIG_ARCH_BCM_5301X)
chip->to_irq = bcma_gpio_to_irq;
#endif
#if IS_BUILTIN(CONFIG_OF)
if (cc->core->bus->hosttype == BCMA_HOSTTYPE_SOC)
chip->of_node = cc->core->dev.of_node;
#endif
switch (cc->core->bus->chipinfo.id) {
switch (bus->chipinfo.id) {
case BCMA_CHIP_ID_BCM5357:
case BCMA_CHIP_ID_BCM53572:
chip->ngpio = 32;
@ -231,13 +234,17 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
chip->ngpio = 16;
}
/* There is just one SoC in one device and its GPIO addresses should be
* deterministic to address them more easily. The other buses could get
* a random base number. */
if (cc->core->bus->hosttype == BCMA_HOSTTYPE_SOC)
chip->base = 0;
else
chip->base = -1;
/*
* On MIPS we register GPIO devices (LEDs, buttons) using absolute GPIO
* pin numbers. We don't have Device Tree there and we can't really use
* relative (per chip) numbers.
* So use a predictable base for BCM47XX and a "random" one for all others.
*/
#if IS_BUILTIN(CONFIG_BCM47XX)
chip->base = bus->num * BCMA_GPIO_MAX_PINS;
#else
chip->base = -1;
#endif
err = bcma_gpio_irq_domain_init(cc);
if (err)

View File

@ -282,39 +282,6 @@ void bcma_core_pci_power_save(struct bcma_bus *bus, bool up)
}
EXPORT_SYMBOL_GPL(bcma_core_pci_power_save);
int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
bool enable)
{
struct pci_dev *pdev;
u32 coremask, tmp;
int err = 0;
if (!pc || core->bus->hosttype != BCMA_HOSTTYPE_PCI) {
/* This bcma device is not on a PCI host-bus. So the IRQs are
* not routed through the PCI core.
* So we must not enable routing through the PCI core. */
goto out;
}
pdev = pc->core->bus->host_pci;
err = pci_read_config_dword(pdev, BCMA_PCI_IRQMASK, &tmp);
if (err)
goto out;
coremask = BIT(core->core_index) << 8;
if (enable)
tmp |= coremask;
else
tmp &= ~coremask;
err = pci_write_config_dword(pdev, BCMA_PCI_IRQMASK, tmp);
out:
return err;
}
EXPORT_SYMBOL_GPL(bcma_core_pci_irq_ctl);
static void bcma_core_pci_extend_L1timer(struct bcma_drv_pci *pc, bool extend)
{
u32 w;
@ -328,28 +295,12 @@ static void bcma_core_pci_extend_L1timer(struct bcma_drv_pci *pc, bool extend)
bcma_pcie_read(pc, BCMA_CORE_PCI_DLLP_PMTHRESHREG);
}
void bcma_core_pci_up(struct bcma_bus *bus)
void bcma_core_pci_up(struct bcma_drv_pci *pc)
{
struct bcma_drv_pci *pc;
if (bus->hosttype != BCMA_HOSTTYPE_PCI)
return;
pc = &bus->drv_pci[0];
bcma_core_pci_extend_L1timer(pc, true);
}
EXPORT_SYMBOL_GPL(bcma_core_pci_up);
void bcma_core_pci_down(struct bcma_bus *bus)
void bcma_core_pci_down(struct bcma_drv_pci *pc)
{
struct bcma_drv_pci *pc;
if (bus->hosttype != BCMA_HOSTTYPE_PCI)
return;
pc = &bus->drv_pci[0];
bcma_core_pci_extend_L1timer(pc, false);
}
EXPORT_SYMBOL_GPL(bcma_core_pci_down);

View File

@ -11,6 +11,7 @@
#include "bcma_private.h"
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/bcma/bcma.h>
#include <asm/paccess.h>

View File

@ -10,6 +10,7 @@
#include "bcma_private.h"
#include <linux/bcma/bcma.h>
#include <linux/pci.h>
/**************************************************
* R/W ops.
@ -156,14 +157,23 @@ static void pciedev_reg_pm_clk_period(struct bcma_drv_pcie2 *pcie2)
void bcma_core_pcie2_init(struct bcma_drv_pcie2 *pcie2)
{
struct bcma_chipinfo *ci = &pcie2->core->bus->chipinfo;
struct bcma_bus *bus = pcie2->core->bus;
struct bcma_chipinfo *ci = &bus->chipinfo;
u32 tmp;
tmp = pcie2_read32(pcie2, BCMA_CORE_PCIE2_SPROM(54));
if ((tmp & 0xe) >> 1 == 2)
bcma_core_pcie2_cfg_write(pcie2, 0x4e0, 0x17);
/* TODO: Do we need pcie_reqsize? */
switch (bus->chipinfo.id) {
case BCMA_CHIP_ID_BCM4360:
case BCMA_CHIP_ID_BCM4352:
pcie2->reqsize = 1024;
break;
default:
pcie2->reqsize = 128;
break;
}
if (ci->id == BCMA_CHIP_ID_BCM4360 && ci->rev > 3)
bcma_core_pcie2_war_delay_perst_enab(pcie2, true);
@ -173,3 +183,18 @@ void bcma_core_pcie2_init(struct bcma_drv_pcie2 *pcie2)
pciedev_crwlpciegen2_180(pcie2);
pciedev_crwlpciegen2_182(pcie2);
}
/**************************************************
* Runtime ops.
**************************************************/
void bcma_core_pcie2_up(struct bcma_drv_pcie2 *pcie2)
{
struct bcma_bus *bus = pcie2->core->bus;
struct pci_dev *dev = bus->host_pci;
int err;
err = pcie_set_readrq(dev, pcie2->reqsize);
if (err)
bcma_err(bus, "Error setting PCI_EXP_DEVCTL_READRQ: %d\n", err);
}

View File

@ -213,16 +213,26 @@ static int bcma_host_pci_probe(struct pci_dev *dev,
/* Initialize struct, detect chip */
bcma_init_bus(bus);
/* Scan bus to find out generation of PCIe core */
err = bcma_bus_scan(bus);
if (err)
goto err_pci_unmap_mmio;
if (bcma_find_core(bus, BCMA_CORE_PCIE2))
bus->host_is_pcie2 = true;
/* Register */
err = bcma_bus_register(bus);
if (err)
goto err_pci_unmap_mmio;
goto err_unregister_cores;
pci_set_drvdata(dev, bus);
out:
return err;
err_unregister_cores:
bcma_unregister_cores(bus);
err_pci_unmap_mmio:
pci_iounmap(dev, bus->mmio);
err_pci_release_regions:
@ -283,9 +293,12 @@ static const struct pci_device_id bcma_pci_bridge_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4357) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4358) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4359) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4360) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4365) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43a0) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43a9) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43aa) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43b1) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4727) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43227) }, /* 0xa8db, BCM43217 (sic!) */
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43228) }, /* 0xa8dc */
@ -310,3 +323,65 @@ void __exit bcma_host_pci_exit(void)
{
pci_unregister_driver(&bcma_pci_bridge_driver);
}
/**************************************************
* Runtime ops for drivers.
**************************************************/
/* See also pcicore_up */
void bcma_host_pci_up(struct bcma_bus *bus)
{
if (bus->hosttype != BCMA_HOSTTYPE_PCI)
return;
if (bus->host_is_pcie2)
bcma_core_pcie2_up(&bus->drv_pcie2);
else
bcma_core_pci_up(&bus->drv_pci[0]);
}
EXPORT_SYMBOL_GPL(bcma_host_pci_up);
/* See also pcicore_down */
void bcma_host_pci_down(struct bcma_bus *bus)
{
if (bus->hosttype != BCMA_HOSTTYPE_PCI)
return;
if (!bus->host_is_pcie2)
bcma_core_pci_down(&bus->drv_pci[0]);
}
EXPORT_SYMBOL_GPL(bcma_host_pci_down);
/* See also si_pci_setup */
int bcma_host_pci_irq_ctl(struct bcma_bus *bus, struct bcma_device *core,
bool enable)
{
struct pci_dev *pdev;
u32 coremask, tmp;
int err = 0;
if (bus->hosttype != BCMA_HOSTTYPE_PCI) {
/* This bcma device is not on a PCI host bus, so its IRQs are
* not routed through the PCI core and we must not enable
* routing through it. */
goto out;
}
pdev = bus->host_pci;
err = pci_read_config_dword(pdev, BCMA_PCI_IRQMASK, &tmp);
if (err)
goto out;
coremask = BIT(core->core_index) << 8;
if (enable)
tmp |= coremask;
else
tmp &= ~coremask;
err = pci_write_config_dword(pdev, BCMA_PCI_IRQMASK, tmp);
out:
return err;
}
EXPORT_SYMBOL_GPL(bcma_host_pci_irq_ctl);
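
A hedged sketch of how a bus consumer is expected to use these new host helpers; the surrounding driver function is hypothetical, only the bcma_host_pci_*() calls come from this change.

/* Hypothetical consumer of the new host helpers (illustrative only). */
static int example_wifi_attach(struct bcma_device *core)
{
	struct bcma_bus *bus = core->bus;
	int err;

	/* Bring the host interface up; this picks the PCIe or PCIe Gen 2
	 * core automatically based on bus->host_is_pcie2. */
	bcma_host_pci_up(bus);

	/* Route this core's IRQ through the PCI core; the helper itself
	 * bails out on non-PCI hosted buses. */
	err = bcma_host_pci_irq_ctl(bus, core, true);
	if (err)
		return err;

	return 0;
}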

View File

@ -363,7 +363,7 @@ static int bcma_register_devices(struct bcma_bus *bus)
return 0;
}
static void bcma_unregister_cores(struct bcma_bus *bus)
void bcma_unregister_cores(struct bcma_bus *bus)
{
struct bcma_device *core, *tmp;

View File

@ -2,9 +2,17 @@
menu "Bluetooth device drivers"
depends on BT
config BT_INTEL
tristate
config BT_BCM
tristate
select FW_LOADER
config BT_HCIBTUSB
tristate "HCI USB driver"
depends on USB
select BT_INTEL
help
Bluetooth HCI USB driver.
This driver is required if you want to use Bluetooth devices with
@ -13,6 +21,17 @@ config BT_HCIBTUSB
Say Y here to compile support for Bluetooth USB devices into the
kernel or say M to compile it as module (btusb).
config BT_HCIBTUSB_BCM
bool "Broadcom protocol support"
depends on BT_HCIBTUSB
select BT_BCM
default y
help
The Broadcom protocol support enables firmware and patchram
download support for Broadcom Bluetooth controllers.
Say Y here to compile support for Broadcom protocol.
config BT_HCIBTSDIO
tristate "HCI SDIO driver"
depends on MMC
@ -62,6 +81,7 @@ config BT_HCIUART_BCSP
config BT_HCIUART_ATH3K
bool "Atheros AR300x serial support"
depends on BT_HCIUART
select BT_HCIUART_H4
help
HCIATH3K (HCI Atheros AR300x) is a serial protocol for
communication between host and Atheros AR300x Bluetooth devices.
@ -94,6 +114,27 @@ config BT_HCIUART_3WIRE
Say Y here to compile support for Three-wire UART protocol.
config BT_HCIUART_INTEL
bool "Intel protocol support"
depends on BT_HCIUART
select BT_INTEL
help
The Intel protocol support enables Bluetooth HCI over serial
port interface for Intel Bluetooth controllers.
Say Y here to compile support for Intel protocol.
config BT_HCIUART_BCM
bool "Broadcom protocol support"
depends on BT_HCIUART
select BT_HCIUART_H4
select BT_BCM
help
The Broadcom protocol support enables Bluetooth HCI over serial
port interface for Broadcom Bluetooth controllers.
Say Y here to compile support for Broadcom protocol.
config BT_HCIBCM203X
tristate "HCI BCM203x USB driver"
depends on USB

View File

@ -15,10 +15,12 @@ obj-$(CONFIG_BT_HCIBTUART) += btuart_cs.o
obj-$(CONFIG_BT_HCIBTUSB) += btusb.o
obj-$(CONFIG_BT_HCIBTSDIO) += btsdio.o
obj-$(CONFIG_BT_INTEL) += btintel.o
obj-$(CONFIG_BT_ATH3K) += ath3k.o
obj-$(CONFIG_BT_MRVL) += btmrvl.o
obj-$(CONFIG_BT_MRVL_SDIO) += btmrvl_sdio.o
obj-$(CONFIG_BT_WILINK) += btwilink.o
obj-$(CONFIG_BT_BCM) += btbcm.o
btmrvl-y := btmrvl_main.o
btmrvl-$(CONFIG_DEBUG_FS) += btmrvl_debugfs.o
@ -29,6 +31,8 @@ hci_uart-$(CONFIG_BT_HCIUART_BCSP) += hci_bcsp.o
hci_uart-$(CONFIG_BT_HCIUART_LL) += hci_ll.o
hci_uart-$(CONFIG_BT_HCIUART_ATH3K) += hci_ath.o
hci_uart-$(CONFIG_BT_HCIUART_3WIRE) += hci_h5.o
hci_uart-$(CONFIG_BT_HCIUART_INTEL) += hci_intel.o
hci_uart-$(CONFIG_BT_HCIUART_BCM) += hci_bcm.o
hci_uart-objs := $(hci_uart-y)
ccflags-y += -D__CHECK_ENDIAN__

View File

@ -65,6 +65,7 @@ static const struct usb_device_id ath3k_table[] = {
/* Atheros AR3011 with sflash firmware*/
{ USB_DEVICE(0x0489, 0xE027) },
{ USB_DEVICE(0x0489, 0xE03D) },
{ USB_DEVICE(0x04F2, 0xAFF1) },
{ USB_DEVICE(0x0930, 0x0215) },
{ USB_DEVICE(0x0CF3, 0x3002) },
{ USB_DEVICE(0x0CF3, 0xE019) },

View File

@ -0,0 +1,387 @@
/*
*
* Bluetooth support for Broadcom devices
*
* Copyright (C) 2015 Intel Corporation
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/module.h>
#include <linux/firmware.h>
#include <asm/unaligned.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "btbcm.h"
#define VERSION "0.1"
#define BDADDR_BCM20702A0 (&(bdaddr_t) {{0x00, 0xa0, 0x02, 0x70, 0x20, 0x00}})
int btbcm_check_bdaddr(struct hci_dev *hdev)
{
struct hci_rp_read_bd_addr *bda;
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_BD_ADDR, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
int err = PTR_ERR(skb);
BT_ERR("%s: BCM: Reading device address failed (%d)",
hdev->name, err);
return err;
}
if (skb->len != sizeof(*bda)) {
BT_ERR("%s: BCM: Device address length mismatch", hdev->name);
kfree_skb(skb);
return -EIO;
}
bda = (struct hci_rp_read_bd_addr *)skb->data;
if (bda->status) {
BT_ERR("%s: BCM: Device address result failed (%02x)",
hdev->name, bda->status);
kfree_skb(skb);
return -bt_to_errno(bda->status);
}
/* The address 00:20:70:02:A0:00 indicates a BCM20702A0 controller
* with no configured address.
*/
if (!bacmp(&bda->bdaddr, BDADDR_BCM20702A0)) {
BT_INFO("%s: BCM: Using default device address (%pMR)",
hdev->name, &bda->bdaddr);
set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
}
kfree_skb(skb);
return 0;
}
EXPORT_SYMBOL_GPL(btbcm_check_bdaddr);
int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
{
struct sk_buff *skb;
int err;
skb = __hci_cmd_sync(hdev, 0xfc01, 6, bdaddr, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: BCM: Change address command failed (%d)",
hdev->name, err);
return err;
}
kfree_skb(skb);
return 0;
}
EXPORT_SYMBOL_GPL(btbcm_set_bdaddr);
static int btbcm_reset(struct hci_dev *hdev)
{
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
int err = PTR_ERR(skb);
BT_ERR("%s: BCM: Reset failed (%d)", hdev->name, err);
return err;
}
kfree_skb(skb);
return 0;
}
static struct sk_buff *btbcm_read_local_version(struct hci_dev *hdev)
{
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: BCM: Reading local version info failed (%ld)",
hdev->name, PTR_ERR(skb));
return skb;
}
if (skb->len != sizeof(struct hci_rp_read_local_version)) {
BT_ERR("%s: BCM: Local version length mismatch", hdev->name);
kfree_skb(skb);
return ERR_PTR(-EIO);
}
return skb;
}
static struct sk_buff *btbcm_read_verbose_config(struct hci_dev *hdev)
{
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, 0xfc79, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: BCM: Read verbose config info failed (%ld)",
hdev->name, PTR_ERR(skb));
return skb;
}
if (skb->len != 7) {
BT_ERR("%s: BCM: Verbose config length mismatch", hdev->name);
kfree_skb(skb);
return ERR_PTR(-EIO);
}
return skb;
}
static struct sk_buff *btbcm_read_usb_product(struct hci_dev *hdev)
{
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, 0xfc5a, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: BCM: Read USB product info failed (%ld)",
hdev->name, PTR_ERR(skb));
return skb;
}
if (skb->len != 5) {
BT_ERR("%s: BCM: USB product length mismatch", hdev->name);
kfree_skb(skb);
return ERR_PTR(-EIO);
}
return skb;
}
static const struct {
u16 subver;
const char *name;
} bcm_uart_subver_table[] = {
{ 0x410e, "BCM43341B0" }, /* 002.001.014 */
{ }
};
static const struct {
u16 subver;
const char *name;
} bcm_usb_subver_table[] = {
{ 0x210b, "BCM43142A0" }, /* 001.001.011 */
{ 0x2112, "BCM4314A0" }, /* 001.001.018 */
{ 0x2118, "BCM20702A0" }, /* 001.001.024 */
{ 0x2126, "BCM4335A0" }, /* 001.001.038 */
{ 0x220e, "BCM20702A1" }, /* 001.002.014 */
{ 0x230f, "BCM4354A2" }, /* 001.003.015 */
{ 0x4106, "BCM4335B0" }, /* 002.001.006 */
{ 0x410e, "BCM20702B0" }, /* 002.001.014 */
{ 0x6109, "BCM4335C0" }, /* 003.001.009 */
{ 0x610c, "BCM4354" }, /* 003.001.012 */
{ }
};
int btbcm_setup_patchram(struct hci_dev *hdev)
{
const struct hci_command_hdr *cmd;
const struct firmware *fw;
const u8 *fw_ptr;
size_t fw_size;
char fw_name[64];
u16 opcode, subver, rev, pid, vid;
const char *hw_name = NULL;
struct sk_buff *skb;
struct hci_rp_read_local_version *ver;
int i, err;
/* Reset */
err = btbcm_reset(hdev);
if (err)
return err;
/* Read Local Version Info */
skb = btbcm_read_local_version(hdev);
if (IS_ERR(skb))
return PTR_ERR(skb);
ver = (struct hci_rp_read_local_version *)skb->data;
rev = le16_to_cpu(ver->hci_rev);
subver = le16_to_cpu(ver->lmp_subver);
kfree_skb(skb);
/* Read Verbose Config Version Info */
skb = btbcm_read_verbose_config(hdev);
if (IS_ERR(skb))
return PTR_ERR(skb);
BT_INFO("%s: BCM: chip id %u", hdev->name, skb->data[1]);
kfree_skb(skb);
switch ((rev & 0xf000) >> 12) {
case 0:
for (i = 0; bcm_uart_subver_table[i].name; i++) {
if (subver == bcm_uart_subver_table[i].subver) {
hw_name = bcm_uart_subver_table[i].name;
break;
}
}
snprintf(fw_name, sizeof(fw_name), "brcm/%s.hcd",
hw_name ? : "BCM");
break;
case 1:
case 2:
/* Read USB Product Info */
skb = btbcm_read_usb_product(hdev);
if (IS_ERR(skb))
return PTR_ERR(skb);
vid = get_unaligned_le16(skb->data + 1);
pid = get_unaligned_le16(skb->data + 3);
kfree_skb(skb);
for (i = 0; bcm_usb_subver_table[i].name; i++) {
if (subver == bcm_usb_subver_table[i].subver) {
hw_name = bcm_usb_subver_table[i].name;
break;
}
}
snprintf(fw_name, sizeof(fw_name), "brcm/%s-%4.4x-%4.4x.hcd",
hw_name ? : "BCM", vid, pid);
break;
default:
return 0;
}
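	/* Worked example (editorial, not part of the patch): a USB controller
	 * reporting subver 0x220e (BCM20702A1) with VID 0x0a5c and PID 0x21e8
	 * would request the firmware file brcm/BCM20702A1-0a5c-21e8.hcd. */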
BT_INFO("%s: %s (%3.3u.%3.3u.%3.3u) build %4.4u", hdev->name,
hw_name ? : "BCM", (subver & 0x7000) >> 13,
(subver & 0x1f00) >> 8, (subver & 0x00ff), rev & 0x0fff);
err = request_firmware(&fw, fw_name, &hdev->dev);
if (err < 0) {
BT_INFO("%s: BCM: patch %s not found", hdev->name, fw_name);
return 0;
}
/* Start Download */
skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: BCM: Download Minidrv command failed (%d)",
hdev->name, err);
goto reset;
}
kfree_skb(skb);
/* 50 msec delay after Download Minidrv completes */
msleep(50);
fw_ptr = fw->data;
fw_size = fw->size;
while (fw_size >= sizeof(*cmd)) {
const u8 *cmd_param;
cmd = (struct hci_command_hdr *)fw_ptr;
fw_ptr += sizeof(*cmd);
fw_size -= sizeof(*cmd);
if (fw_size < cmd->plen) {
BT_ERR("%s: BCM: patch %s is corrupted", hdev->name,
fw_name);
err = -EINVAL;
goto reset;
}
cmd_param = fw_ptr;
fw_ptr += cmd->plen;
fw_size -= cmd->plen;
opcode = le16_to_cpu(cmd->opcode);
skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: BCM: patch command %04x failed (%d)",
hdev->name, opcode, err);
goto reset;
}
kfree_skb(skb);
}
/* 250 msec delay after Launch Ram completes */
msleep(250);
reset:
/* Reset */
err = btbcm_reset(hdev);
if (err)
goto done;
/* Read Local Version Info */
skb = btbcm_read_local_version(hdev);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
goto done;
}
ver = (struct hci_rp_read_local_version *)skb->data;
rev = le16_to_cpu(ver->hci_rev);
subver = le16_to_cpu(ver->lmp_subver);
kfree_skb(skb);
BT_INFO("%s: %s (%3.3u.%3.3u.%3.3u) build %4.4u", hdev->name,
hw_name ? : "BCM", (subver & 0x7000) >> 13,
(subver & 0x1f00) >> 8, (subver & 0x00ff), rev & 0x0fff);
btbcm_check_bdaddr(hdev);
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
done:
release_firmware(fw);
return err;
}
EXPORT_SYMBOL_GPL(btbcm_setup_patchram);
int btbcm_setup_apple(struct hci_dev *hdev)
{
struct sk_buff *skb;
/* Read Verbose Config Version Info */
skb = btbcm_read_verbose_config(hdev);
if (IS_ERR(skb))
return PTR_ERR(skb);
BT_INFO("%s: BCM: chip id %u build %4.4u", hdev->name, skb->data[1],
get_unaligned_le16(skb->data + 5));
kfree_skb(skb);
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
return 0;
}
EXPORT_SYMBOL_GPL(btbcm_setup_apple);
MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
MODULE_DESCRIPTION("Bluetooth support for Broadcom devices ver " VERSION);
MODULE_VERSION(VERSION);
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,54 @@
/*
*
* Bluetooth support for Broadcom devices
*
* Copyright (C) 2015 Intel Corporation
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#if IS_ENABLED(CONFIG_BT_BCM)
int btbcm_check_bdaddr(struct hci_dev *hdev);
int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
int btbcm_setup_patchram(struct hci_dev *hdev);
int btbcm_setup_apple(struct hci_dev *hdev);
#else
static inline int btbcm_check_bdaddr(struct hci_dev *hdev)
{
return -EOPNOTSUPP;
}
static inline int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
{
return -EOPNOTSUPP;
}
static inline int btbcm_setup_patchram(struct hci_dev *hdev)
{
return 0;
}
static inline int btbcm_setup_apple(struct hci_dev *hdev)
{
return 0;
}
#endif
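
A hedged sketch of how a transport driver would hook these shared helpers into an hci_dev; btusb wires up something equivalent for devices flagged as Broadcom, but the function below is illustrative only.

/* Illustrative only: intended division of labour between the shared
 * btbcm helpers and a transport driver's hci_dev callbacks. */
static void example_bcm_hook_hdev(struct hci_dev *hdev)
{
	hdev->setup	 = btbcm_setup_patchram; /* reset, patchram download, bdaddr check */
	hdev->set_bdaddr = btbcm_set_bdaddr;	 /* vendor command 0xfc01 */
}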

View File

@ -0,0 +1,101 @@
/*
*
* Bluetooth support for Intel devices
*
* Copyright (C) 2015 Intel Corporation
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/module.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "btintel.h"
#define VERSION "0.1"
#define BDADDR_INTEL (&(bdaddr_t) {{0x00, 0x8b, 0x9e, 0x19, 0x03, 0x00}})
int btintel_check_bdaddr(struct hci_dev *hdev)
{
struct hci_rp_read_bd_addr *bda;
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_BD_ADDR, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
int err = PTR_ERR(skb);
BT_ERR("%s: Reading Intel device address failed (%d)",
hdev->name, err);
return err;
}
if (skb->len != sizeof(*bda)) {
BT_ERR("%s: Intel device address length mismatch", hdev->name);
kfree_skb(skb);
return -EIO;
}
bda = (struct hci_rp_read_bd_addr *)skb->data;
if (bda->status) {
BT_ERR("%s: Intel device address result failed (%02x)",
hdev->name, bda->status);
kfree_skb(skb);
return -bt_to_errno(bda->status);
}
/* For some Intel based controllers, the default Bluetooth device
* address 00:03:19:9E:8B:00 can be found. These controllers are
* fully operational, but have the danger of duplicate addresses
* and that in turn can cause problems with Bluetooth operation.
*/
if (!bacmp(&bda->bdaddr, BDADDR_INTEL)) {
BT_ERR("%s: Found Intel default device address (%pMR)",
hdev->name, &bda->bdaddr);
set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
}
kfree_skb(skb);
return 0;
}
EXPORT_SYMBOL_GPL(btintel_check_bdaddr);
int btintel_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
{
struct sk_buff *skb;
int err;
skb = __hci_cmd_sync(hdev, 0xfc31, 6, bdaddr, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: Changing Intel device address failed (%d)",
hdev->name, err);
return err;
}
kfree_skb(skb);
return 0;
}
EXPORT_SYMBOL_GPL(btintel_set_bdaddr);
MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
MODULE_DESCRIPTION("Bluetooth support for Intel devices ver " VERSION);
MODULE_VERSION(VERSION);
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,89 @@
/*
*
* Bluetooth support for Intel devices
*
* Copyright (C) 2015 Intel Corporation
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
struct intel_version {
u8 status;
u8 hw_platform;
u8 hw_variant;
u8 hw_revision;
u8 fw_variant;
u8 fw_revision;
u8 fw_build_num;
u8 fw_build_ww;
u8 fw_build_yy;
u8 fw_patch_num;
} __packed;
struct intel_boot_params {
__u8 status;
__u8 otp_format;
__u8 otp_content;
__u8 otp_patch;
__le16 dev_revid;
__u8 secure_boot;
__u8 key_from_hdr;
__u8 key_type;
__u8 otp_lock;
__u8 api_lock;
__u8 debug_lock;
bdaddr_t otp_bdaddr;
__u8 min_fw_build_nn;
__u8 min_fw_build_cw;
__u8 min_fw_build_yy;
__u8 limited_cce;
__u8 unlocked_state;
} __packed;
struct intel_bootup {
__u8 zero;
__u8 num_cmds;
__u8 source;
__u8 reset_type;
__u8 reset_reason;
__u8 ddc_status;
} __packed;
struct intel_secure_send_result {
__u8 result;
__le16 opcode;
__u8 status;
} __packed;
#if IS_ENABLED(CONFIG_BT_INTEL)
int btintel_check_bdaddr(struct hci_dev *hdev);
int btintel_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
#else
static inline int btintel_check_bdaddr(struct hci_dev *hdev)
{
return -EOPNOTSUPP;
}
static inline int btintel_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
{
return -EOPNOTSUPP;
}
#endif

View File

@ -111,6 +111,7 @@ struct btmrvl_private {
/* Vendor specific Bluetooth commands */
#define BT_CMD_PSCAN_WIN_REPORT_ENABLE 0xFC03
#define BT_CMD_ROUTE_SCO_TO_HOST 0xFC1D
#define BT_CMD_SET_BDADDR 0xFC22
#define BT_CMD_AUTO_SLEEP_MODE 0xFC23
#define BT_CMD_HOST_SLEEP_CONFIG 0xFC59

View File

@ -230,6 +230,18 @@ int btmrvl_send_module_cfg_cmd(struct btmrvl_private *priv, u8 subcmd)
}
EXPORT_SYMBOL_GPL(btmrvl_send_module_cfg_cmd);
static int btmrvl_enable_sco_routing_to_host(struct btmrvl_private *priv)
{
int ret;
u8 subcmd = 0;
ret = btmrvl_send_sync_cmd(priv, BT_CMD_ROUTE_SCO_TO_HOST, &subcmd, 1);
if (ret)
BT_ERR("BT_CMD_ROUTE_SCO_TO_HOST command failed: %#x", ret);
return ret;
}
int btmrvl_pscan_window_reporting(struct btmrvl_private *priv, u8 subcmd)
{
struct btmrvl_sdio_card *card = priv->btmrvl_dev.card;
@ -558,6 +570,8 @@ static int btmrvl_setup(struct hci_dev *hdev)
btmrvl_check_device_tree(priv);
btmrvl_enable_sco_routing_to_host(priv);
btmrvl_pscan_window_reporting(priv, 0x01);
priv->btmrvl_dev.psmode = 1;

View File

@ -28,7 +28,10 @@
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#define VERSION "0.7"
#include "btintel.h"
#include "btbcm.h"
#define VERSION "0.8"
static bool disable_scofix;
static bool force_scofix;
@ -52,6 +55,8 @@ static struct usb_driver btusb_driver;
#define BTUSB_SWAVE 0x1000
#define BTUSB_INTEL_NEW 0x2000
#define BTUSB_AMP 0x4000
#define BTUSB_QCA_ROME 0x8000
#define BTUSB_BCM_APPLE 0x10000
static const struct usb_device_id btusb_table[] = {
/* Generic Bluetooth USB device */
@ -61,7 +66,8 @@ static const struct usb_device_id btusb_table[] = {
{ USB_DEVICE_INFO(0xe0, 0x01, 0x04), .driver_info = BTUSB_AMP },
/* Apple-specific (Broadcom) devices */
{ USB_VENDOR_AND_INTERFACE_INFO(0x05ac, 0xff, 0x01, 0x01) },
{ USB_VENDOR_AND_INTERFACE_INFO(0x05ac, 0xff, 0x01, 0x01),
.driver_info = BTUSB_BCM_APPLE },
/* MediaTek MT76x0E */
{ USB_DEVICE(0x0e8d, 0x763f) },
@ -107,13 +113,7 @@ static const struct usb_device_id btusb_table[] = {
{ USB_DEVICE(0x0c10, 0x0000) },
/* Broadcom BCM20702A0 */
{ USB_DEVICE(0x0489, 0xe042) },
{ USB_DEVICE(0x04ca, 0x2003) },
{ USB_DEVICE(0x0b05, 0x17b5) },
{ USB_DEVICE(0x0b05, 0x17cb) },
{ USB_DEVICE(0x413c, 0x8197) },
{ USB_DEVICE(0x13d3, 0x3404),
.driver_info = BTUSB_BCM_PATCHRAM },
/* Broadcom BCM20702B0 (Dynex/Insignia) */
{ USB_DEVICE(0x19ff, 0x0239), .driver_info = BTUSB_BCM_PATCHRAM },
@ -135,10 +135,12 @@ static const struct usb_device_id btusb_table[] = {
.driver_info = BTUSB_BCM_PATCHRAM },
/* Belkin F8065bf - Broadcom based */
{ USB_VENDOR_AND_INTERFACE_INFO(0x050d, 0xff, 0x01, 0x01) },
{ USB_VENDOR_AND_INTERFACE_INFO(0x050d, 0xff, 0x01, 0x01),
.driver_info = BTUSB_BCM_PATCHRAM },
/* IMC Networks - Broadcom based */
{ USB_VENDOR_AND_INTERFACE_INFO(0x13d3, 0xff, 0x01, 0x01) },
{ USB_VENDOR_AND_INTERFACE_INFO(0x13d3, 0xff, 0x01, 0x01),
.driver_info = BTUSB_BCM_PATCHRAM },
/* Intel Bluetooth USB Bootloader (RAM module) */
{ USB_DEVICE(0x8087, 0x0a5a),
@ -159,6 +161,7 @@ static const struct usb_device_id blacklist_table[] = {
/* Atheros 3011 with sflash firmware */
{ USB_DEVICE(0x0489, 0xe027), .driver_info = BTUSB_IGNORE },
{ USB_DEVICE(0x0489, 0xe03d), .driver_info = BTUSB_IGNORE },
{ USB_DEVICE(0x04f2, 0xaff1), .driver_info = BTUSB_IGNORE },
{ USB_DEVICE(0x0930, 0x0215), .driver_info = BTUSB_IGNORE },
{ USB_DEVICE(0x0cf3, 0x3002), .driver_info = BTUSB_IGNORE },
{ USB_DEVICE(0x0cf3, 0xe019), .driver_info = BTUSB_IGNORE },
@ -212,6 +215,10 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x0489, 0xe036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe03c), .driver_info = BTUSB_ATH3012 },
/* QCA ROME chipset */
{ USB_DEVICE(0x0cf3, 0xe300), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x0cf3, 0xe360), .driver_info = BTUSB_QCA_ROME },
/* Broadcom BCM2035 */
{ USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
{ USB_DEVICE(0x0a5c, 0x200a), .driver_info = BTUSB_WRONG_SCO_MTU },
@ -337,17 +344,9 @@ struct btusb_data {
int (*recv_event)(struct hci_dev *hdev, struct sk_buff *skb);
int (*recv_bulk)(struct btusb_data *data, void *buffer, int count);
};
static int btusb_wait_on_bit_timeout(void *word, int bit, unsigned long timeout,
unsigned mode)
{
might_sleep();
if (!test_bit(bit, word))
return 0;
return out_of_line_wait_on_bit_timeout(word, bit, bit_wait_timeout,
mode, timeout);
}
int (*setup_on_usb)(struct hci_dev *hdev);
};
static inline void btusb_free_frags(struct btusb_data *data)
{
@ -888,6 +887,15 @@ static int btusb_open(struct hci_dev *hdev)
BT_DBG("%s", hdev->name);
/* Patch the USB firmware files prior to starting any URBs on the HCI
* path. It is safer to use the USB bulk channel for downloading the
* USB patch.
*/
if (data->setup_on_usb) {
err = data->setup_on_usb(hdev);
if (err < 0)
return err;
}
err = usb_autopm_get_interface(data->intf);
if (err < 0)
return err;
@ -1263,6 +1271,28 @@ static void btusb_waker(struct work_struct *work)
usb_autopm_put_interface(data->intf);
}
static struct sk_buff *btusb_read_local_version(struct hci_dev *hdev)
{
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION failed (%ld)",
hdev->name, PTR_ERR(skb));
return skb;
}
if (skb->len != sizeof(struct hci_rp_read_local_version)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION event length mismatch",
hdev->name);
kfree_skb(skb);
return ERR_PTR(-EIO);
}
return skb;
}
static int btusb_setup_bcm92035(struct hci_dev *hdev)
{
struct sk_buff *skb;
@ -1287,12 +1317,9 @@ static int btusb_setup_csr(struct hci_dev *hdev)
BT_DBG("%s", hdev->name);
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("Reading local version failed (%ld)", -PTR_ERR(skb));
skb = btusb_read_local_version(hdev);
if (IS_ERR(skb))
return -PTR_ERR(skb);
}
rp = (struct hci_rp_read_local_version *)skb->data;
@ -1318,39 +1345,6 @@ static int btusb_setup_csr(struct hci_dev *hdev)
return ret;
}
struct intel_version {
u8 status;
u8 hw_platform;
u8 hw_variant;
u8 hw_revision;
u8 fw_variant;
u8 fw_revision;
u8 fw_build_num;
u8 fw_build_ww;
u8 fw_build_yy;
u8 fw_patch_num;
} __packed;
struct intel_boot_params {
__u8 status;
__u8 otp_format;
__u8 otp_content;
__u8 otp_patch;
__le16 dev_revid;
__u8 secure_boot;
__u8 key_from_hdr;
__u8 key_type;
__u8 otp_lock;
__u8 api_lock;
__u8 debug_lock;
bdaddr_t otp_bdaddr;
__u8 min_fw_build_nn;
__u8 min_fw_build_cw;
__u8 min_fw_build_yy;
__u8 limited_cce;
__u8 unlocked_state;
} __packed;
static const struct firmware *btusb_setup_intel_get_fw(struct hci_dev *hdev,
struct intel_version *ver)
{
@ -1507,51 +1501,6 @@ static int btusb_setup_intel_patching(struct hci_dev *hdev,
return 0;
}
#define BDADDR_INTEL (&(bdaddr_t) {{0x00, 0x8b, 0x9e, 0x19, 0x03, 0x00}})
static int btusb_check_bdaddr_intel(struct hci_dev *hdev)
{
struct sk_buff *skb;
struct hci_rp_read_bd_addr *rp;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_BD_ADDR, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s reading Intel device address failed (%ld)",
hdev->name, PTR_ERR(skb));
return PTR_ERR(skb);
}
if (skb->len != sizeof(*rp)) {
BT_ERR("%s Intel device address length mismatch", hdev->name);
kfree_skb(skb);
return -EIO;
}
rp = (struct hci_rp_read_bd_addr *)skb->data;
if (rp->status) {
BT_ERR("%s Intel device address result failed (%02x)",
hdev->name, rp->status);
kfree_skb(skb);
return -bt_to_errno(rp->status);
}
/* For some Intel based controllers, the default Bluetooth device
* address 00:03:19:9E:8B:00 can be found. These controllers are
* fully operational, but have the danger of duplicate addresses
* and that in turn can cause problems with Bluetooth operation.
*/
if (!bacmp(&rp->bdaddr, BDADDR_INTEL)) {
BT_ERR("%s found Intel default device address (%pMR)",
hdev->name, &rp->bdaddr);
set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
}
kfree_skb(skb);
return 0;
}
static int btusb_setup_intel(struct hci_dev *hdev)
{
struct sk_buff *skb;
@ -1624,7 +1573,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
BT_INFO("%s: Intel device is already patched. patch num: %02x",
hdev->name, ver->fw_patch_num);
kfree_skb(skb);
btusb_check_bdaddr_intel(hdev);
btintel_check_bdaddr(hdev);
return 0;
}
@ -1637,7 +1586,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
fw = btusb_setup_intel_get_fw(hdev, ver);
if (!fw) {
kfree_skb(skb);
btusb_check_bdaddr_intel(hdev);
btintel_check_bdaddr(hdev);
return 0;
}
fw_ptr = fw->data;
@ -1718,7 +1667,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
BT_INFO("%s: Intel Bluetooth firmware patch completed and activated",
hdev->name);
btusb_check_bdaddr_intel(hdev);
btintel_check_bdaddr(hdev);
return 0;
exit_mfg_disable:
@ -1734,7 +1683,7 @@ exit_mfg_disable:
BT_INFO("%s: Intel Bluetooth firmware patch completed", hdev->name);
btusb_check_bdaddr_intel(hdev);
btintel_check_bdaddr(hdev);
return 0;
exit_mfg_deactivate:
@ -1755,7 +1704,7 @@ exit_mfg_deactivate:
BT_INFO("%s: Intel Bluetooth firmware patch completed and deactivated",
hdev->name);
btusb_check_bdaddr_intel(hdev);
btintel_check_bdaddr(hdev);
return 0;
}
@ -1797,6 +1746,38 @@ static int btusb_recv_bulk_intel(struct btusb_data *data, void *buffer,
return btusb_recv_bulk(data, buffer, count);
}
static void btusb_intel_bootup(struct btusb_data *data, const void *ptr,
unsigned int len)
{
const struct intel_bootup *evt = ptr;
if (len != sizeof(*evt))
return;
if (test_and_clear_bit(BTUSB_BOOTING, &data->flags)) {
smp_mb__after_atomic();
wake_up_bit(&data->flags, BTUSB_BOOTING);
}
}
static void btusb_intel_secure_send_result(struct btusb_data *data,
const void *ptr, unsigned int len)
{
const struct intel_secure_send_result *evt = ptr;
if (len != sizeof(*evt))
return;
if (evt->result)
set_bit(BTUSB_FIRMWARE_FAILED, &data->flags);
if (test_and_clear_bit(BTUSB_DOWNLOADING, &data->flags) &&
test_bit(BTUSB_FIRMWARE_LOADED, &data->flags)) {
smp_mb__after_atomic();
wake_up_bit(&data->flags, BTUSB_DOWNLOADING);
}
}
static int btusb_recv_event_intel(struct hci_dev *hdev, struct sk_buff *skb)
{
struct btusb_data *data = hci_get_drvdata(hdev);
@ -1804,32 +1785,27 @@ static int btusb_recv_event_intel(struct hci_dev *hdev, struct sk_buff *skb)
if (test_bit(BTUSB_BOOTLOADER, &data->flags)) {
struct hci_event_hdr *hdr = (void *)skb->data;
/* When the firmware loading completes the device sends
* out a vendor specific event indicating the result of
* the firmware loading.
*/
if (skb->len == 7 && hdr->evt == 0xff && hdr->plen == 0x05 &&
skb->data[2] == 0x06) {
if (skb->data[3] != 0x00)
test_bit(BTUSB_FIRMWARE_FAILED, &data->flags);
if (skb->len > HCI_EVENT_HDR_SIZE && hdr->evt == 0xff &&
hdr->plen > 0) {
const void *ptr = skb->data + HCI_EVENT_HDR_SIZE + 1;
unsigned int len = skb->len - HCI_EVENT_HDR_SIZE - 1;
if (test_and_clear_bit(BTUSB_DOWNLOADING,
&data->flags) &&
test_bit(BTUSB_FIRMWARE_LOADED, &data->flags)) {
smp_mb__after_atomic();
wake_up_bit(&data->flags, BTUSB_DOWNLOADING);
}
}
/* When switching to the operational firmware the device
* sends a vendor specific event indicating that the bootup
* completed.
*/
if (skb->len == 9 && hdr->evt == 0xff && hdr->plen == 0x07 &&
skb->data[2] == 0x02) {
if (test_and_clear_bit(BTUSB_BOOTING, &data->flags)) {
smp_mb__after_atomic();
wake_up_bit(&data->flags, BTUSB_BOOTING);
switch (skb->data[2]) {
case 0x02:
/* When switching to the operational firmware
* the device sends a vendor specific event
* indicating that the bootup completed.
*/
btusb_intel_bootup(data, ptr, len);
break;
case 0x06:
/* When the firmware loading completes the
* device sends out a vendor specific event
* indicating the result of the firmware
* loading.
*/
btusb_intel_secure_send_result(data, ptr, len);
break;
}
}
}
@ -2031,7 +2007,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
if (ver->fw_variant == 0x23) {
kfree_skb(skb);
clear_bit(BTUSB_BOOTLOADER, &data->flags);
btusb_check_bdaddr_intel(hdev);
btintel_check_bdaddr(hdev);
return 0;
}
@ -2197,9 +2173,9 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
* and thus just timeout if that happens and fail the setup
* of this device.
*/
err = btusb_wait_on_bit_timeout(&data->flags, BTUSB_DOWNLOADING,
msecs_to_jiffies(5000),
TASK_INTERRUPTIBLE);
err = wait_on_bit_timeout(&data->flags, BTUSB_DOWNLOADING,
TASK_INTERRUPTIBLE,
msecs_to_jiffies(5000));
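	/* Editorial note: wait_on_bit_timeout() takes (word, bit, mode,
	 * timeout), whereas the removed btusb_wait_on_bit_timeout() wrapper
	 * took (word, bit, timeout, mode) - hence the reordered arguments. */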
if (err == 1) {
BT_ERR("%s: Firmware loading interrupted", hdev->name);
err = -EINTR;
@ -2250,9 +2226,9 @@ done:
*/
BT_INFO("%s: Waiting for device to boot", hdev->name);
err = btusb_wait_on_bit_timeout(&data->flags, BTUSB_BOOTING,
msecs_to_jiffies(1000),
TASK_INTERRUPTIBLE);
err = wait_on_bit_timeout(&data->flags, BTUSB_BOOTING,
TASK_INTERRUPTIBLE,
msecs_to_jiffies(1000));
if (err == 1) {
BT_ERR("%s: Device boot interrupted", hdev->name);
@ -2315,15 +2291,19 @@ static void btusb_hw_error_intel(struct hci_dev *hdev, u8 code)
kfree_skb(skb);
}
static int btusb_set_bdaddr_intel(struct hci_dev *hdev, const bdaddr_t *bdaddr)
static int btusb_shutdown_intel(struct hci_dev *hdev)
{
struct sk_buff *skb;
long ret;
skb = __hci_cmd_sync(hdev, 0xfc31, 6, bdaddr, HCI_INIT_TIMEOUT);
/* Some platforms have an issue with the BT LED when the interface is
* brought down or the BT radio is turned off: the LED takes about 5
* seconds to go off. This command turns off the BT LED immediately.
*/
skb = __hci_cmd_sync(hdev, 0xfc3f, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: changing Intel device address failed (%ld)",
BT_ERR("%s: turning off Intel device LED failed (%ld)",
hdev->name, ret);
return ret;
}
@ -2355,211 +2335,6 @@ static int btusb_set_bdaddr_marvell(struct hci_dev *hdev,
return 0;
}
#define BDADDR_BCM20702A0 (&(bdaddr_t) {{0x00, 0xa0, 0x02, 0x70, 0x20, 0x00}})
static int btusb_setup_bcm_patchram(struct hci_dev *hdev)
{
struct btusb_data *data = hci_get_drvdata(hdev);
struct usb_device *udev = data->udev;
char fw_name[64];
const struct firmware *fw;
const u8 *fw_ptr;
size_t fw_size;
const struct hci_command_hdr *cmd;
const u8 *cmd_param;
u16 opcode;
struct sk_buff *skb;
struct hci_rp_read_local_version *ver;
struct hci_rp_read_bd_addr *bda;
long ret;
snprintf(fw_name, sizeof(fw_name), "brcm/%s-%04x-%04x.hcd",
udev->product ? udev->product : "BCM",
le16_to_cpu(udev->descriptor.idVendor),
le16_to_cpu(udev->descriptor.idProduct));
ret = request_firmware(&fw, fw_name, &hdev->dev);
if (ret < 0) {
BT_INFO("%s: BCM: patch %s not found", hdev->name, fw_name);
return 0;
}
/* Reset */
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: HCI_OP_RESET failed (%ld)", hdev->name, ret);
goto done;
}
kfree_skb(skb);
/* Read Local Version Info */
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION failed (%ld)",
hdev->name, ret);
goto done;
}
if (skb->len != sizeof(*ver)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION event length mismatch",
hdev->name);
kfree_skb(skb);
ret = -EIO;
goto done;
}
ver = (struct hci_rp_read_local_version *)skb->data;
BT_INFO("%s: BCM: patching hci_ver=%02x hci_rev=%04x lmp_ver=%02x "
"lmp_subver=%04x", hdev->name, ver->hci_ver, ver->hci_rev,
ver->lmp_ver, ver->lmp_subver);
kfree_skb(skb);
/* Start Download */
skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: BCM: Download Minidrv command failed (%ld)",
hdev->name, ret);
goto reset_fw;
}
kfree_skb(skb);
/* 50 msec delay after Download Minidrv completes */
msleep(50);
fw_ptr = fw->data;
fw_size = fw->size;
while (fw_size >= sizeof(*cmd)) {
cmd = (struct hci_command_hdr *)fw_ptr;
fw_ptr += sizeof(*cmd);
fw_size -= sizeof(*cmd);
if (fw_size < cmd->plen) {
BT_ERR("%s: BCM: patch %s is corrupted",
hdev->name, fw_name);
ret = -EINVAL;
goto reset_fw;
}
cmd_param = fw_ptr;
fw_ptr += cmd->plen;
fw_size -= cmd->plen;
opcode = le16_to_cpu(cmd->opcode);
skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: BCM: patch command %04x failed (%ld)",
hdev->name, opcode, ret);
goto reset_fw;
}
kfree_skb(skb);
}
/* 250 msec delay after Launch Ram completes */
msleep(250);
reset_fw:
/* Reset */
skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: HCI_OP_RESET failed (%ld)", hdev->name, ret);
goto done;
}
kfree_skb(skb);
/* Read Local Version Info */
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION failed (%ld)",
hdev->name, ret);
goto done;
}
if (skb->len != sizeof(*ver)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION event length mismatch",
hdev->name);
kfree_skb(skb);
ret = -EIO;
goto done;
}
ver = (struct hci_rp_read_local_version *)skb->data;
BT_INFO("%s: BCM: firmware hci_ver=%02x hci_rev=%04x lmp_ver=%02x "
"lmp_subver=%04x", hdev->name, ver->hci_ver, ver->hci_rev,
ver->lmp_ver, ver->lmp_subver);
kfree_skb(skb);
/* Read BD Address */
skb = __hci_cmd_sync(hdev, HCI_OP_READ_BD_ADDR, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: HCI_OP_READ_BD_ADDR failed (%ld)",
hdev->name, ret);
goto done;
}
if (skb->len != sizeof(*bda)) {
BT_ERR("%s: HCI_OP_READ_BD_ADDR event length mismatch",
hdev->name);
kfree_skb(skb);
ret = -EIO;
goto done;
}
bda = (struct hci_rp_read_bd_addr *)skb->data;
if (bda->status) {
BT_ERR("%s: HCI_OP_READ_BD_ADDR error status (%02x)",
hdev->name, bda->status);
kfree_skb(skb);
ret = -bt_to_errno(bda->status);
goto done;
}
/* The address 00:20:70:02:A0:00 indicates a BCM20702A0 controller
* with no configured address.
*/
if (!bacmp(&bda->bdaddr, BDADDR_BCM20702A0)) {
BT_INFO("%s: BCM: using default device address (%pMR)",
hdev->name, &bda->bdaddr);
set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
}
kfree_skb(skb);
done:
release_firmware(fw);
return ret;
}
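Note on the legacy Broadcom setup above (removed here in favour of the shared btbcm helpers used later in this patch): the sequence is reset, read the local version, issue the vendor-specific Download Minidrv command (0xfc2e), replay the HCI commands embedded in the .hcd firmware file, reset again, re-read the version and finally read the BD address. BDADDR_BCM20702A0 is the byte-reversed bdaddr_t form of 00:20:70:02:A0:00, the factory default reported by an unprovisioned BCM20702A0, which is why a match sets HCI_QUIRK_INVALID_BDADDR.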
static int btusb_set_bdaddr_bcm(struct hci_dev *hdev, const bdaddr_t *bdaddr)
{
struct sk_buff *skb;
long ret;
skb = __hci_cmd_sync(hdev, 0xfc01, 6, bdaddr, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
ret = PTR_ERR(skb);
BT_ERR("%s: BCM: Change address command failed (%ld)",
hdev->name, ret);
return ret;
}
kfree_skb(skb);
return 0;
}
static int btusb_set_bdaddr_ath3012(struct hci_dev *hdev,
const bdaddr_t *bdaddr)
{
@ -2585,6 +2360,258 @@ static int btusb_set_bdaddr_ath3012(struct hci_dev *hdev,
return 0;
}
#define QCA_DFU_PACKET_LEN 4096
#define QCA_GET_TARGET_VERSION 0x09
#define QCA_CHECK_STATUS 0x05
#define QCA_DFU_DOWNLOAD 0x01
#define QCA_SYSCFG_UPDATED 0x40
#define QCA_PATCH_UPDATED 0x80
#define QCA_DFU_TIMEOUT 3000
struct qca_version {
__le32 rom_version;
__le32 patch_version;
__le32 ram_version;
__le32 ref_clock;
__u8 reserved[4];
} __packed;
struct qca_rampatch_version {
__le16 rom_version;
__le16 patch_version;
} __packed;
struct qca_device_info {
u32 rom_version;
u8 rampatch_hdr; /* length of header in rampatch */
u8 nvm_hdr; /* length of header in NVM */
u8 ver_offset; /* offset of version structure in rampatch */
};
static const struct qca_device_info qca_devices_table[] = {
{ 0x00000100, 20, 4, 10 }, /* Rome 1.0 */
{ 0x00000101, 20, 4, 10 }, /* Rome 1.1 */
{ 0x00000201, 28, 4, 18 }, /* Rome 2.1 */
{ 0x00000300, 28, 4, 18 }, /* Rome 3.0 */
{ 0x00000302, 28, 4, 18 }, /* Rome 3.2 */
};
static int btusb_qca_send_vendor_req(struct hci_dev *hdev, u8 request,
void *data, u16 size)
{
struct btusb_data *btdata = hci_get_drvdata(hdev);
struct usb_device *udev = btdata->udev;
int pipe, err;
u8 *buf;
buf = kmalloc(size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
/* Some USB hosts have been found to have interoperability (IOT) issues
 * with this controller, so do not wait until the HCI layer is ready.
 */
pipe = usb_rcvctrlpipe(udev, 0);
err = usb_control_msg(udev, pipe, request, USB_TYPE_VENDOR | USB_DIR_IN,
0, 0, buf, size, USB_CTRL_SET_TIMEOUT);
if (err < 0) {
BT_ERR("%s: Failed to access otp area (%d)", hdev->name, err);
goto done;
}
memcpy(data, buf, size);
done:
kfree(buf);
return err;
}
static int btusb_setup_qca_download_fw(struct hci_dev *hdev,
const struct firmware *firmware,
size_t hdr_size)
{
struct btusb_data *btdata = hci_get_drvdata(hdev);
struct usb_device *udev = btdata->udev;
size_t count, size, sent = 0;
int pipe, len, err;
u8 *buf;
buf = kmalloc(QCA_DFU_PACKET_LEN, GFP_KERNEL);
if (!buf)
return -ENOMEM;
count = firmware->size;
size = min_t(size_t, count, hdr_size);
memcpy(buf, firmware->data, size);
/* Patches are sent to the controller over USB because the binary
 * format is suited to that channel: the USB control path carries the
 * patch headers and USB bulk transfers carry the patch body.
 */
pipe = usb_sndctrlpipe(udev, 0);
err = usb_control_msg(udev, pipe, QCA_DFU_DOWNLOAD, USB_TYPE_VENDOR,
0, 0, buf, size, USB_CTRL_SET_TIMEOUT);
if (err < 0) {
BT_ERR("%s: Failed to send headers (%d)", hdev->name, err);
goto done;
}
sent += size;
count -= size;
while (count) {
size = min_t(size_t, count, QCA_DFU_PACKET_LEN);
memcpy(buf, firmware->data + sent, size);
pipe = usb_sndbulkpipe(udev, 0x02);
err = usb_bulk_msg(udev, pipe, buf, size, &len,
QCA_DFU_TIMEOUT);
if (err < 0) {
BT_ERR("%s: Failed to send body at %zd of %zd (%d)",
hdev->name, sent, firmware->size, err);
break;
}
if (size != len) {
BT_ERR("%s: Failed to get bulk buffer", hdev->name);
err = -EILSEQ;
break;
}
sent += size;
count -= size;
}
done:
kfree(buf);
return err;
}
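A worked example of the chunking above, with hypothetical sizes chosen only for illustration: a 20480-byte rampatch with a 28-byte header (the Rome 2.1+ value from qca_devices_table above) would be sent as one 28-byte control transfer followed by bulk transfers of 4096, 4096, 4096, 4096 and 4068 bytes, since the remaining 20452 bytes are split into QCA_DFU_PACKET_LEN-sized chunks.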
static int btusb_setup_qca_load_rampatch(struct hci_dev *hdev,
struct qca_version *ver,
const struct qca_device_info *info)
{
struct qca_rampatch_version *rver;
const struct firmware *fw;
u32 ver_rom, ver_patch;
u16 rver_rom, rver_patch;
char fwname[64];
int err;
ver_rom = le32_to_cpu(ver->rom_version);
ver_patch = le32_to_cpu(ver->patch_version);
snprintf(fwname, sizeof(fwname), "qca/rampatch_usb_%08x.bin", ver_rom);
err = request_firmware(&fw, fwname, &hdev->dev);
if (err) {
BT_ERR("%s: failed to request rampatch file: %s (%d)",
hdev->name, fwname, err);
return err;
}
BT_INFO("%s: using rampatch file: %s", hdev->name, fwname);
rver = (struct qca_rampatch_version *)(fw->data + info->ver_offset);
rver_rom = le16_to_cpu(rver->rom_version);
rver_patch = le16_to_cpu(rver->patch_version);
BT_INFO("%s: QCA: patch rome 0x%x build 0x%x, firmware rome 0x%x "
"build 0x%x", hdev->name, rver_rom, rver_patch, ver_rom,
ver_patch);
if (rver_rom != ver_rom || rver_patch <= ver_patch) {
BT_ERR("%s: rampatch file version did not match with firmware",
hdev->name);
err = -EINVAL;
goto done;
}
err = btusb_setup_qca_download_fw(hdev, fw, info->rampatch_hdr);
done:
release_firmware(fw);
return err;
}
static int btusb_setup_qca_load_nvm(struct hci_dev *hdev,
struct qca_version *ver,
const struct qca_device_info *info)
{
const struct firmware *fw;
char fwname[64];
int err;
snprintf(fwname, sizeof(fwname), "qca/nvm_usb_%08x.bin",
le32_to_cpu(ver->rom_version));
err = request_firmware(&fw, fwname, &hdev->dev);
if (err) {
BT_ERR("%s: failed to request NVM file: %s (%d)",
hdev->name, fwname, err);
return err;
}
BT_INFO("%s: using NVM file: %s", hdev->name, fwname);
err = btusb_setup_qca_download_fw(hdev, fw, info->nvm_hdr);
release_firmware(fw);
return err;
}
static int btusb_setup_qca(struct hci_dev *hdev)
{
const struct qca_device_info *info = NULL;
struct qca_version ver;
u32 ver_rom;
u8 status;
int i, err;
err = btusb_qca_send_vendor_req(hdev, QCA_GET_TARGET_VERSION, &ver,
sizeof(ver));
if (err < 0)
return err;
ver_rom = le32_to_cpu(ver.rom_version);
for (i = 0; i < ARRAY_SIZE(qca_devices_table); i++) {
if (ver_rom == qca_devices_table[i].rom_version)
info = &qca_devices_table[i];
}
if (!info) {
BT_ERR("%s: don't support firmware rome 0x%x", hdev->name,
ver_rom);
return -ENODEV;
}
err = btusb_qca_send_vendor_req(hdev, QCA_CHECK_STATUS, &status,
sizeof(status));
if (err < 0)
return err;
if (!(status & QCA_PATCH_UPDATED)) {
err = btusb_setup_qca_load_rampatch(hdev, &ver, info);
if (err < 0)
return err;
}
if (!(status & QCA_SYSCFG_UPDATED)) {
err = btusb_setup_qca_load_nvm(hdev, &ver, info);
if (err < 0)
return err;
}
return 0;
}
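For example, with a hypothetical status value: a controller reporting rom_version 0x00000302 matches the Rome 3.2 table entry, so setup would request qca/rampatch_usb_00000302.bin and qca/nvm_usb_00000302.bin; if the status byte read back were 0x40, QCA_SYSCFG_UPDATED is set and the NVM download is skipped, while the rampatch is still downloaded because QCA_PATCH_UPDATED is clear.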
static int btusb_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
@ -2701,23 +2728,29 @@ static int btusb_probe(struct usb_interface *intf,
if (id->driver_info & BTUSB_BCM92035)
hdev->setup = btusb_setup_bcm92035;
#ifdef CONFIG_BT_HCIBTUSB_BCM
if (id->driver_info & BTUSB_BCM_PATCHRAM) {
hdev->setup = btusb_setup_bcm_patchram;
hdev->set_bdaddr = btusb_set_bdaddr_bcm;
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
hdev->setup = btbcm_setup_patchram;
hdev->set_bdaddr = btbcm_set_bdaddr;
}
if (id->driver_info & BTUSB_BCM_APPLE)
hdev->setup = btbcm_setup_apple;
#endif
if (id->driver_info & BTUSB_INTEL) {
hdev->setup = btusb_setup_intel;
hdev->set_bdaddr = btusb_set_bdaddr_intel;
hdev->shutdown = btusb_shutdown_intel;
hdev->set_bdaddr = btintel_set_bdaddr;
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
}
if (id->driver_info & BTUSB_INTEL_NEW) {
hdev->send = btusb_send_frame_intel;
hdev->setup = btusb_setup_intel_new;
hdev->hw_error = btusb_hw_error_intel;
hdev->set_bdaddr = btusb_set_bdaddr_intel;
hdev->set_bdaddr = btintel_set_bdaddr;
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
}
@ -2734,9 +2767,15 @@ static int btusb_probe(struct usb_interface *intf,
if (id->driver_info & BTUSB_ATH3012) {
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
}
if (id->driver_info & BTUSB_QCA_ROME) {
data->setup_on_usb = btusb_setup_qca;
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
}
if (id->driver_info & BTUSB_AMP) {
/* AMP controllers do not support SCO packets */
data->isoc = NULL;
@ -2772,6 +2811,8 @@ static int btusb_probe(struct usb_interface *intf,
/* Fake CSR devices with broken commands */
if (bcdDevice <= 0x100)
hdev->setup = btusb_setup_csr;
set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
}
if (id->driver_info & BTUSB_SNIFFER) {


@ -45,6 +45,7 @@ struct ath_struct {
struct hci_uart *hu;
unsigned int cur_sleep;
struct sk_buff *rx_skb;
struct sk_buff_head txq;
struct work_struct ctxtsw;
};
@ -136,6 +137,8 @@ static int ath_close(struct hci_uart *hu)
skb_queue_purge(&ath->txq);
kfree_skb(ath->rx_skb);
cancel_work_sync(&ath->ctxtsw);
hu->priv = NULL;
@ -187,40 +190,42 @@ static struct sk_buff *ath_dequeue(struct hci_uart *hu)
return skb_dequeue(&ath->txq);
}
/* Recv data */
static int ath_recv(struct hci_uart *hu, void *data, int count)
{
int ret;
static const struct h4_recv_pkt ath_recv_pkts[] = {
{ H4_RECV_ACL, .recv = hci_recv_frame },
{ H4_RECV_SCO, .recv = hci_recv_frame },
{ H4_RECV_EVENT, .recv = hci_recv_frame },
};
ret = hci_recv_stream_fragment(hu->hdev, data, count);
if (ret < 0) {
BT_ERR("Frame Reassembly Failed");
return ret;
/* Recv data */
static int ath_recv(struct hci_uart *hu, const void *data, int count)
{
struct ath_struct *ath = hu->priv;
ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count,
ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts));
if (IS_ERR(ath->rx_skb)) {
int err = PTR_ERR(ath->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
return err;
}
return count;
}
static struct hci_uart_proto athp = {
.id = HCI_UART_ATH3K,
.open = ath_open,
.close = ath_close,
.recv = ath_recv,
.enqueue = ath_enqueue,
.dequeue = ath_dequeue,
.flush = ath_flush,
static const struct hci_uart_proto athp = {
.id = HCI_UART_ATH3K,
.name = "ATH3K",
.open = ath_open,
.close = ath_close,
.recv = ath_recv,
.enqueue = ath_enqueue,
.dequeue = ath_dequeue,
.flush = ath_flush,
};
int __init ath_init(void)
{
int err = hci_uart_register_proto(&athp);
if (!err)
BT_INFO("HCIATH3K protocol initialized");
else
BT_ERR("HCIATH3K protocol registration failed");
return err;
return hci_uart_register_proto(&athp);
}
int __exit ath_deinit(void)


@ -0,0 +1,153 @@
/*
*
* Bluetooth HCI UART driver for Broadcom devices
*
* Copyright (C) 2015 Intel Corporation
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "btbcm.h"
#include "hci_uart.h"
struct bcm_data {
struct sk_buff *rx_skb;
struct sk_buff_head txq;
};
static int bcm_open(struct hci_uart *hu)
{
struct bcm_data *bcm;
BT_DBG("hu %p", hu);
bcm = kzalloc(sizeof(*bcm), GFP_KERNEL);
if (!bcm)
return -ENOMEM;
skb_queue_head_init(&bcm->txq);
hu->priv = bcm;
return 0;
}
static int bcm_close(struct hci_uart *hu)
{
struct bcm_data *bcm = hu->priv;
BT_DBG("hu %p", hu);
skb_queue_purge(&bcm->txq);
kfree_skb(bcm->rx_skb);
kfree(bcm);
hu->priv = NULL;
return 0;
}
static int bcm_flush(struct hci_uart *hu)
{
struct bcm_data *bcm = hu->priv;
BT_DBG("hu %p", hu);
skb_queue_purge(&bcm->txq);
return 0;
}
static int bcm_setup(struct hci_uart *hu)
{
BT_DBG("hu %p", hu);
hu->hdev->set_bdaddr = btbcm_set_bdaddr;
return btbcm_setup_patchram(hu->hdev);
}
static const struct h4_recv_pkt bcm_recv_pkts[] = {
{ H4_RECV_ACL, .recv = hci_recv_frame },
{ H4_RECV_SCO, .recv = hci_recv_frame },
{ H4_RECV_EVENT, .recv = hci_recv_frame },
};
static int bcm_recv(struct hci_uart *hu, const void *data, int count)
{
struct bcm_data *bcm = hu->priv;
if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
return -EUNATCH;
bcm->rx_skb = h4_recv_buf(hu->hdev, bcm->rx_skb, data, count,
bcm_recv_pkts, ARRAY_SIZE(bcm_recv_pkts));
if (IS_ERR(bcm->rx_skb)) {
int err = PTR_ERR(bcm->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
return err;
}
return count;
}
static int bcm_enqueue(struct hci_uart *hu, struct sk_buff *skb)
{
struct bcm_data *bcm = hu->priv;
BT_DBG("hu %p skb %p", hu, skb);
/* Prepend skb with frame type */
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
skb_queue_tail(&bcm->txq, skb);
return 0;
}
static struct sk_buff *bcm_dequeue(struct hci_uart *hu)
{
struct bcm_data *bcm = hu->priv;
return skb_dequeue(&bcm->txq);
}
static const struct hci_uart_proto bcm_proto = {
.id = HCI_UART_BCM,
.name = "BCM",
.open = bcm_open,
.close = bcm_close,
.flush = bcm_flush,
.setup = bcm_setup,
.recv = bcm_recv,
.enqueue = bcm_enqueue,
.dequeue = bcm_dequeue,
};
int __init bcm_init(void)
{
return hci_uart_register_proto(&bcm_proto);
}
int __exit bcm_deinit(void)
{
return hci_uart_unregister_proto(&bcm_proto);
}


@ -47,8 +47,6 @@
#include "hci_uart.h"
#define VERSION "0.3"
static bool txcrc = 1;
static bool hciextn = 1;
@ -554,10 +552,10 @@ static u16 bscp_get_crc(struct bcsp_struct *bcsp)
}
/* Recv data */
static int bcsp_recv(struct hci_uart *hu, void *data, int count)
static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
{
struct bcsp_struct *bcsp = hu->priv;
unsigned char *ptr;
const unsigned char *ptr;
BT_DBG("hu %p count %d rx_state %d rx_count %ld",
hu, count, bcsp->rx_state, bcsp->rx_count);
@ -735,8 +733,9 @@ static int bcsp_close(struct hci_uart *hu)
return 0;
}
static struct hci_uart_proto bcsp = {
static const struct hci_uart_proto bcsp = {
.id = HCI_UART_BCSP,
.name = "BCSP",
.open = bcsp_open,
.close = bcsp_close,
.enqueue = bcsp_enqueue,
@ -747,14 +746,7 @@ static struct hci_uart_proto bcsp = {
int __init bcsp_init(void)
{
int err = hci_uart_register_proto(&bcsp);
if (!err)
BT_INFO("HCI BCSP protocol initialized");
else
BT_ERR("HCI BCSP protocol registration failed");
return err;
return hci_uart_register_proto(&bcsp);
}
int __exit bcsp_deinit(void)


@ -40,17 +40,14 @@
#include <linux/signal.h>
#include <linux/ioctl.h>
#include <linux/skbuff.h>
#include <asm/unaligned.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "hci_uart.h"
#define VERSION "1.2"
struct h4_struct {
unsigned long rx_state;
unsigned long rx_count;
struct sk_buff *rx_skb;
struct sk_buff_head txq;
};
@ -117,18 +114,26 @@ static int h4_enqueue(struct hci_uart *hu, struct sk_buff *skb)
return 0;
}
static const struct h4_recv_pkt h4_recv_pkts[] = {
{ H4_RECV_ACL, .recv = hci_recv_frame },
{ H4_RECV_SCO, .recv = hci_recv_frame },
{ H4_RECV_EVENT, .recv = hci_recv_frame },
};
/* Recv data */
static int h4_recv(struct hci_uart *hu, void *data, int count)
static int h4_recv(struct hci_uart *hu, const void *data, int count)
{
int ret;
struct h4_struct *h4 = hu->priv;
if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
return -EUNATCH;
ret = hci_recv_stream_fragment(hu->hdev, data, count);
if (ret < 0) {
BT_ERR("Frame Reassembly Failed");
return ret;
h4->rx_skb = h4_recv_buf(hu->hdev, h4->rx_skb, data, count,
h4_recv_pkts, ARRAY_SIZE(h4_recv_pkts));
if (IS_ERR(h4->rx_skb)) {
int err = PTR_ERR(h4->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
return err;
}
return count;
@ -140,8 +145,9 @@ static struct sk_buff *h4_dequeue(struct hci_uart *hu)
return skb_dequeue(&h4->txq);
}
static struct hci_uart_proto h4p = {
static const struct hci_uart_proto h4p = {
.id = HCI_UART_H4,
.name = "H4",
.open = h4_open,
.close = h4_close,
.recv = h4_recv,
@ -152,17 +158,105 @@ static struct hci_uart_proto h4p = {
int __init h4_init(void)
{
int err = hci_uart_register_proto(&h4p);
if (!err)
BT_INFO("HCI H4 protocol initialized");
else
BT_ERR("HCI H4 protocol registration failed");
return err;
return hci_uart_register_proto(&h4p);
}
int __exit h4_deinit(void)
{
return hci_uart_unregister_proto(&h4p);
}
struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
const unsigned char *buffer, int count,
const struct h4_recv_pkt *pkts, int pkts_count)
{
while (count) {
int i, len;
if (!skb) {
for (i = 0; i < pkts_count; i++) {
if (buffer[0] != (&pkts[i])->type)
continue;
skb = bt_skb_alloc((&pkts[i])->maxlen,
GFP_ATOMIC);
if (!skb)
return ERR_PTR(-ENOMEM);
bt_cb(skb)->pkt_type = (&pkts[i])->type;
bt_cb(skb)->expect = (&pkts[i])->hlen;
break;
}
/* Check for invalid packet type */
if (!skb)
return ERR_PTR(-EILSEQ);
count -= 1;
buffer += 1;
}
len = min_t(uint, bt_cb(skb)->expect - skb->len, count);
memcpy(skb_put(skb, len), buffer, len);
count -= len;
buffer += len;
/* Check for partial packet */
if (skb->len < bt_cb(skb)->expect)
continue;
for (i = 0; i < pkts_count; i++) {
if (bt_cb(skb)->pkt_type == (&pkts[i])->type)
break;
}
if (i >= pkts_count) {
kfree_skb(skb);
return ERR_PTR(-EILSEQ);
}
if (skb->len == (&pkts[i])->hlen) {
u16 dlen;
switch ((&pkts[i])->lsize) {
case 0:
/* No variable data length */
(&pkts[i])->recv(hdev, skb);
skb = NULL;
break;
case 1:
/* Single octet variable length */
dlen = skb->data[(&pkts[i])->loff];
bt_cb(skb)->expect += dlen;
if (skb_tailroom(skb) < dlen) {
kfree_skb(skb);
return ERR_PTR(-EMSGSIZE);
}
break;
case 2:
/* Double octet variable length */
dlen = get_unaligned_le16(skb->data +
(&pkts[i])->loff);
bt_cb(skb)->expect += dlen;
if (skb_tailroom(skb) < dlen) {
kfree_skb(skb);
return ERR_PTR(-EMSGSIZE);
}
break;
default:
/* Unsupported variable length */
kfree_skb(skb);
return ERR_PTR(-EILSEQ);
}
} else {
/* Complete frame */
(&pkts[i])->recv(hdev, skb);
skb = NULL;
}
}
return skb;
}
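A worked example of h4_recv_buf(), with the input bytes chosen purely for illustration: feeding 04 0e 04 01 03 0c 00 against h4_recv_pkts matches HCI_EVENT_PKT on the leading 0x04, allocates an event skb expecting the 2-byte event header, reads the single-octet length field 0x04 at offset 1, extends the expected length to 6, copies the remaining four payload bytes and, once skb->len equals the expected length, hands the completed Command Complete event to hci_recv_frame().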


@ -511,10 +511,10 @@ static void h5_reset_rx(struct h5 *h5)
clear_bit(H5_RX_ESC, &h5->flags);
}
static int h5_recv(struct hci_uart *hu, void *data, int count)
static int h5_recv(struct hci_uart *hu, const void *data, int count)
{
struct h5 *h5 = hu->priv;
unsigned char *ptr = data;
const unsigned char *ptr = data;
BT_DBG("%s pending %zu count %d", hu->hdev->name, h5->rx_pending,
count);
@ -743,8 +743,9 @@ static int h5_flush(struct hci_uart *hu)
return 0;
}
static struct hci_uart_proto h5p = {
static const struct hci_uart_proto h5p = {
.id = HCI_UART_3WIRE,
.name = "Three-wire (H5)",
.open = h5_open,
.close = h5_close,
.recv = h5_recv,
@ -755,14 +756,7 @@ static struct hci_uart_proto h5p = {
int __init h5_init(void)
{
int err = hci_uart_register_proto(&h5p);
if (!err)
BT_INFO("HCI Three-wire UART (H5) protocol initialized");
else
BT_ERR("HCI Three-wire UART (H5) protocol init failed");
return err;
return hci_uart_register_proto(&h5p);
}
int __exit h5_deinit(void)


@ -0,0 +1,31 @@
/*
*
* Bluetooth HCI UART driver for Intel devices
*
* Copyright (C) 2015 Intel Corporation
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "hci_uart.h"


@ -44,13 +44,15 @@
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "btintel.h"
#include "btbcm.h"
#include "hci_uart.h"
#define VERSION "2.2"
#define VERSION "2.3"
static struct hci_uart_proto *hup[HCI_UART_MAX_PROTO];
static const struct hci_uart_proto *hup[HCI_UART_MAX_PROTO];
int hci_uart_register_proto(struct hci_uart_proto *p)
int hci_uart_register_proto(const struct hci_uart_proto *p)
{
if (p->id >= HCI_UART_MAX_PROTO)
return -EINVAL;
@ -60,10 +62,12 @@ int hci_uart_register_proto(struct hci_uart_proto *p)
hup[p->id] = p;
BT_INFO("HCI UART protocol %s registered", p->name);
return 0;
}
int hci_uart_unregister_proto(struct hci_uart_proto *p)
int hci_uart_unregister_proto(const struct hci_uart_proto *p)
{
if (p->id >= HCI_UART_MAX_PROTO)
return -EINVAL;
@ -76,7 +80,7 @@ int hci_uart_unregister_proto(struct hci_uart_proto *p)
return 0;
}
static struct hci_uart_proto *hci_uart_get_proto(unsigned int id)
static const struct hci_uart_proto *hci_uart_get_proto(unsigned int id)
{
if (id >= HCI_UART_MAX_PROTO)
return NULL;
@ -261,6 +265,54 @@ static int hci_uart_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
return 0;
}
static int hci_uart_setup(struct hci_dev *hdev)
{
struct hci_uart *hu = hci_get_drvdata(hdev);
struct hci_rp_read_local_version *ver;
struct sk_buff *skb;
if (hu->proto->setup)
return hu->proto->setup(hu);
if (!test_bit(HCI_UART_VND_DETECT, &hu->hdev_flags))
return 0;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: Reading local version information failed (%ld)",
hdev->name, PTR_ERR(skb));
return 0;
}
if (skb->len != sizeof(*ver)) {
BT_ERR("%s: Event length mismatch for version information",
hdev->name);
goto done;
}
ver = (struct hci_rp_read_local_version *)skb->data;
switch (le16_to_cpu(ver->manufacturer)) {
#ifdef CONFIG_BT_HCIUART_INTEL
case 2:
hdev->set_bdaddr = btintel_set_bdaddr;
btintel_check_bdaddr(hdev);
break;
#endif
#ifdef CONFIG_BT_HCIUART_BCM
case 15:
hdev->set_bdaddr = btbcm_set_bdaddr;
btbcm_check_bdaddr(hdev);
break;
#endif
}
done:
kfree_skb(skb);
return 0;
}
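The manufacturer values tested in hci_uart_setup() are Bluetooth SIG company identifiers: 2 is Intel Corporation and 15 is Broadcom, which is why vendor detection installs the btintel or btbcm address helpers respectively.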
/* ------ LDISC part ------ */
/* hci_uart_tty_open
*
@ -316,7 +368,7 @@ static int hci_uart_tty_open(struct tty_struct *tty)
*/
static void hci_uart_tty_close(struct tty_struct *tty)
{
struct hci_uart *hu = (void *)tty->disc_data;
struct hci_uart *hu = tty->disc_data;
struct hci_dev *hdev;
BT_DBG("tty %p", tty);
@ -355,7 +407,7 @@ static void hci_uart_tty_close(struct tty_struct *tty)
*/
static void hci_uart_tty_wakeup(struct tty_struct *tty)
{
struct hci_uart *hu = (void *)tty->disc_data;
struct hci_uart *hu = tty->disc_data;
BT_DBG("");
@ -383,9 +435,10 @@ static void hci_uart_tty_wakeup(struct tty_struct *tty)
*
* Return Value: None
*/
static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data, char *flags, int count)
static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
char *flags, int count)
{
struct hci_uart *hu = (void *)tty->disc_data;
struct hci_uart *hu = tty->disc_data;
if (!hu || tty != hu->tty)
return;
@ -394,7 +447,7 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data, char *f
return;
spin_lock(&hu->rx_lock);
hu->proto->recv(hu, (void *) data, count);
hu->proto->recv(hu, data, count);
if (hu->hdev)
hu->hdev->stat.byte_rx += count;
@ -426,6 +479,7 @@ static int hci_uart_register_dev(struct hci_uart *hu)
hdev->close = hci_uart_close;
hdev->flush = hci_uart_flush;
hdev->send = hci_uart_send_frame;
hdev->setup = hci_uart_setup;
SET_HCIDEV_DEV(hdev, hu->tty->dev);
if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags))
@ -458,7 +512,7 @@ static int hci_uart_register_dev(struct hci_uart *hu)
static int hci_uart_set_proto(struct hci_uart *hu, int id)
{
struct hci_uart_proto *p;
const struct hci_uart_proto *p;
int err;
p = hci_uart_get_proto(id);
@ -486,9 +540,10 @@ static int hci_uart_set_flags(struct hci_uart *hu, unsigned long flags)
BIT(HCI_UART_RESET_ON_INIT) |
BIT(HCI_UART_CREATE_AMP) |
BIT(HCI_UART_INIT_PENDING) |
BIT(HCI_UART_EXT_CONFIG);
BIT(HCI_UART_EXT_CONFIG) |
BIT(HCI_UART_VND_DETECT);
if ((flags & ~valid_flags))
if (flags & ~valid_flags)
return -EINVAL;
hu->hdev_flags = flags;
@ -509,10 +564,10 @@ static int hci_uart_set_flags(struct hci_uart *hu, unsigned long flags)
*
* Return Value: Command dependent
*/
static int hci_uart_tty_ioctl(struct tty_struct *tty, struct file * file,
unsigned int cmd, unsigned long arg)
static int hci_uart_tty_ioctl(struct tty_struct *tty, struct file *file,
unsigned int cmd, unsigned long arg)
{
struct hci_uart *hu = (void *)tty->disc_data;
struct hci_uart *hu = tty->disc_data;
int err = 0;
BT_DBG("");
@ -566,19 +621,19 @@ static int hci_uart_tty_ioctl(struct tty_struct *tty, struct file * file,
* We don't provide read/write/poll interface for user space.
*/
static ssize_t hci_uart_tty_read(struct tty_struct *tty, struct file *file,
unsigned char __user *buf, size_t nr)
unsigned char __user *buf, size_t nr)
{
return 0;
}
static ssize_t hci_uart_tty_write(struct tty_struct *tty, struct file *file,
const unsigned char *data, size_t count)
const unsigned char *data, size_t count)
{
return 0;
}
static unsigned int hci_uart_tty_poll(struct tty_struct *tty,
struct file *filp, poll_table *wait)
struct file *filp, poll_table *wait)
{
return 0;
}
@ -626,6 +681,9 @@ static int __init hci_uart_init(void)
#ifdef CONFIG_BT_HCIUART_3WIRE
h5_init();
#endif
#ifdef CONFIG_BT_HCIUART_BCM
bcm_init();
#endif
return 0;
}
@ -649,6 +707,9 @@ static void __exit hci_uart_exit(void)
#ifdef CONFIG_BT_HCIUART_3WIRE
h5_deinit();
#endif
#ifdef CONFIG_BT_HCIUART_BCM
bcm_deinit();
#endif
/* Release tty registration of line discipline */
err = tty_unregister_ldisc(N_HCI);


@ -370,10 +370,10 @@ static inline int ll_check_data_len(struct hci_dev *hdev, struct ll_struct *ll,
}
/* Recv data */
static int ll_recv(struct hci_uart *hu, void *data, int count)
static int ll_recv(struct hci_uart *hu, const void *data, int count)
{
struct ll_struct *ll = hu->priv;
char *ptr;
const char *ptr;
struct hci_event_hdr *eh;
struct hci_acl_hdr *ah;
struct hci_sco_hdr *sh;
@ -505,8 +505,9 @@ static struct sk_buff *ll_dequeue(struct hci_uart *hu)
return skb_dequeue(&ll->txq);
}
static struct hci_uart_proto llp = {
static const struct hci_uart_proto llp = {
.id = HCI_UART_LL,
.name = "LL",
.open = ll_open,
.close = ll_close,
.recv = ll_recv,
@ -517,14 +518,7 @@ static struct hci_uart_proto llp = {
int __init ll_init(void)
{
int err = hci_uart_register_proto(&llp);
if (!err)
BT_INFO("HCILL protocol initialized");
else
BT_ERR("HCILL protocol registration failed");
return err;
return hci_uart_register_proto(&llp);
}
int __exit ll_deinit(void)


@ -35,7 +35,7 @@
#define HCIUARTGETFLAGS _IOR('U', 204, int)
/* UART protocols */
#define HCI_UART_MAX_PROTO 6
#define HCI_UART_MAX_PROTO 8
#define HCI_UART_H4 0
#define HCI_UART_BCSP 1
@ -43,21 +43,26 @@
#define HCI_UART_H4DS 3
#define HCI_UART_LL 4
#define HCI_UART_ATH3K 5
#define HCI_UART_INTEL 6
#define HCI_UART_BCM 7
#define HCI_UART_RAW_DEVICE 0
#define HCI_UART_RESET_ON_INIT 1
#define HCI_UART_CREATE_AMP 2
#define HCI_UART_INIT_PENDING 3
#define HCI_UART_EXT_CONFIG 4
#define HCI_UART_VND_DETECT 5
struct hci_uart;
struct hci_uart_proto {
unsigned int id;
const char *name;
int (*open)(struct hci_uart *hu);
int (*close)(struct hci_uart *hu);
int (*flush)(struct hci_uart *hu);
int (*recv)(struct hci_uart *hu, void *data, int len);
int (*setup)(struct hci_uart *hu);
int (*recv)(struct hci_uart *hu, const void *data, int len);
int (*enqueue)(struct hci_uart *hu, struct sk_buff *skb);
struct sk_buff *(*dequeue)(struct hci_uart *hu);
};
@ -71,7 +76,7 @@ struct hci_uart {
struct work_struct init_ready;
struct work_struct write_work;
struct hci_uart_proto *proto;
const struct hci_uart_proto *proto;
void *priv;
struct sk_buff *tx_skb;
@ -87,14 +92,48 @@ struct hci_uart {
#define HCI_UART_SENDING 1
#define HCI_UART_TX_WAKEUP 2
int hci_uart_register_proto(struct hci_uart_proto *p);
int hci_uart_unregister_proto(struct hci_uart_proto *p);
int hci_uart_register_proto(const struct hci_uart_proto *p);
int hci_uart_unregister_proto(const struct hci_uart_proto *p);
int hci_uart_tx_wakeup(struct hci_uart *hu);
int hci_uart_init_ready(struct hci_uart *hu);
#ifdef CONFIG_BT_HCIUART_H4
int h4_init(void);
int h4_deinit(void);
struct h4_recv_pkt {
u8 type; /* Packet type */
u8 hlen; /* Header length */
u8 loff; /* Data length offset in header */
u8 lsize; /* Data length field size */
u16 maxlen; /* Max overall packet length */
int (*recv)(struct hci_dev *hdev, struct sk_buff *skb);
};
#define H4_RECV_ACL \
.type = HCI_ACLDATA_PKT, \
.hlen = HCI_ACL_HDR_SIZE, \
.loff = 2, \
.lsize = 2, \
.maxlen = HCI_MAX_FRAME_SIZE \
#define H4_RECV_SCO \
.type = HCI_SCODATA_PKT, \
.hlen = HCI_SCO_HDR_SIZE, \
.loff = 2, \
.lsize = 1, \
.maxlen = HCI_MAX_SCO_SIZE
#define H4_RECV_EVENT \
.type = HCI_EVENT_PKT, \
.hlen = HCI_EVENT_HDR_SIZE, \
.loff = 1, \
.lsize = 1, \
.maxlen = HCI_MAX_EVENT_SIZE
struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
const unsigned char *buffer, int count,
const struct h4_recv_pkt *pkts, int pkts_count);
#endif
#ifdef CONFIG_BT_HCIUART_BCSP
@ -116,3 +155,8 @@ int ath_deinit(void);
int h5_init(void);
int h5_deinit(void);
#endif
#ifdef CONFIG_BT_HCIUART_BCM
int bcm_init(void);
int bcm_deinit(void);
#endif


@ -237,18 +237,6 @@ static int fwnet_header_create(struct sk_buff *skb, struct net_device *net,
return -net->hard_header_len;
}
static int fwnet_header_rebuild(struct sk_buff *skb)
{
struct fwnet_header *h = (struct fwnet_header *)skb->data;
if (get_unaligned_be16(&h->h_proto) == ETH_P_IP)
return arp_find((unsigned char *)&h->h_dest, skb);
dev_notice(&skb->dev->dev, "unable to resolve type %04x addresses\n",
be16_to_cpu(h->h_proto));
return 0;
}
static int fwnet_header_cache(const struct neighbour *neigh,
struct hh_cache *hh, __be16 type)
{
@ -282,7 +270,6 @@ static int fwnet_header_parse(const struct sk_buff *skb, unsigned char *haddr)
static const struct header_ops fwnet_header_ops = {
.create = fwnet_header_create,
.rebuild = fwnet_header_rebuild,
.cache = fwnet_header_cache,
.cache_update = fwnet_header_cache_update,
.parse = fwnet_header_parse,


@ -587,8 +587,9 @@ static int mlx4_ib_SET_PORT(struct mlx4_ib_dev *dev, u8 port, int reset_qkey_vio
((__be32 *) mailbox->buf)[1] = cpu_to_be32(cap_mask);
}
err = mlx4_cmd(dev->dev, mailbox->dma, port, 0, MLX4_CMD_SET_PORT,
MLX4_CMD_TIME_CLASS_B, MLX4_CMD_WRAPPED);
err = mlx4_cmd(dev->dev, mailbox->dma, port, MLX4_SET_PORT_IB_OPCODE,
MLX4_CMD_SET_PORT, MLX4_CMD_TIME_CLASS_B,
MLX4_CMD_WRAPPED);
mlx4_free_cmd_mailbox(dev->dev, mailbox);
return err;
@ -1525,8 +1526,8 @@ static void update_gids_task(struct work_struct *work)
memcpy(gids, gw->gids, sizeof gw->gids);
err = mlx4_cmd(dev, mailbox->dma, MLX4_SET_PORT_GID_TABLE << 8 | gw->port,
1, MLX4_CMD_SET_PORT, MLX4_CMD_TIME_CLASS_B,
MLX4_CMD_WRAPPED);
MLX4_SET_PORT_ETH_OPCODE, MLX4_CMD_SET_PORT,
MLX4_CMD_TIME_CLASS_B, MLX4_CMD_WRAPPED);
if (err)
pr_warn("set port command failed\n");
else
@ -1564,7 +1565,7 @@ static void reset_gids_task(struct work_struct *work)
IB_LINK_LAYER_ETHERNET) {
err = mlx4_cmd(dev, mailbox->dma,
MLX4_SET_PORT_GID_TABLE << 8 | gw->port,
1, MLX4_CMD_SET_PORT,
MLX4_SET_PORT_ETH_OPCODE, MLX4_CMD_SET_PORT,
MLX4_CMD_TIME_CLASS_B,
MLX4_CMD_WRAPPED);
if (err)


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -572,11 +572,15 @@ int mlx5_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
int mlx5_ib_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
{
struct mlx5_core_dev *mdev = to_mdev(ibcq->device)->mdev;
void __iomem *uar_page = mdev->priv.uuari.uars[0].map;
mlx5_cq_arm(&to_mcq(ibcq)->mcq,
(flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED ?
MLX5_CQ_DB_REQ_NOT_SOL : MLX5_CQ_DB_REQ_NOT,
to_mdev(ibcq->device)->mdev->priv.uuari.uars[0].map,
MLX5_GET_DOORBELL_LOCK(&to_mdev(ibcq->device)->mdev->priv.cq_uar_lock));
uar_page,
MLX5_GET_DOORBELL_LOCK(&mdev->priv.cq_uar_lock),
to_mcq(ibcq)->mcq.cons_index);
return 0;
}
@ -697,8 +701,6 @@ static int create_cq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
cq->mcq.set_ci_db = cq->db.db;
cq->mcq.arm_db = cq->db.db + 1;
*cq->mcq.set_ci_db = 0;
*cq->mcq.arm_db = 0;
cq->mcq.cqe_sz = cqe_size;
err = alloc_cq_buf(dev, &cq->buf, entries, cqe_size);
@ -782,7 +784,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev, int entries,
cq->cqe_size = cqe_size;
cqb->ctx.cqe_sz_flags = cqe_sz_to_mlx_sz(cqe_size) << 5;
cqb->ctx.log_sz_usr_page = cpu_to_be32((ilog2(entries) << 24) | index);
err = mlx5_vector2eqn(dev, vector, &eqn, &irqn);
err = mlx5_vector2eqn(dev->mdev, vector, &eqn, &irqn);
if (err)
goto err_cqb;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -62,95 +62,6 @@ static char mlx5_version[] =
DRIVER_NAME ": Mellanox Connect-IB Infiniband driver v"
DRIVER_VERSION " (" DRIVER_RELDATE ")\n";
int mlx5_vector2eqn(struct mlx5_ib_dev *dev, int vector, int *eqn, int *irqn)
{
struct mlx5_eq_table *table = &dev->mdev->priv.eq_table;
struct mlx5_eq *eq, *n;
int err = -ENOENT;
spin_lock(&table->lock);
list_for_each_entry_safe(eq, n, &dev->eqs_list, list) {
if (eq->index == vector) {
*eqn = eq->eqn;
*irqn = eq->irqn;
err = 0;
break;
}
}
spin_unlock(&table->lock);
return err;
}
static int alloc_comp_eqs(struct mlx5_ib_dev *dev)
{
struct mlx5_eq_table *table = &dev->mdev->priv.eq_table;
char name[MLX5_MAX_EQ_NAME];
struct mlx5_eq *eq, *n;
int ncomp_vec;
int nent;
int err;
int i;
INIT_LIST_HEAD(&dev->eqs_list);
ncomp_vec = table->num_comp_vectors;
nent = MLX5_COMP_EQ_SIZE;
for (i = 0; i < ncomp_vec; i++) {
eq = kzalloc(sizeof(*eq), GFP_KERNEL);
if (!eq) {
err = -ENOMEM;
goto clean;
}
snprintf(name, MLX5_MAX_EQ_NAME, "mlx5_comp%d", i);
err = mlx5_create_map_eq(dev->mdev, eq,
i + MLX5_EQ_VEC_COMP_BASE, nent, 0,
name, &dev->mdev->priv.uuari.uars[0]);
if (err) {
kfree(eq);
goto clean;
}
mlx5_ib_dbg(dev, "allocated completion EQN %d\n", eq->eqn);
eq->index = i;
spin_lock(&table->lock);
list_add_tail(&eq->list, &dev->eqs_list);
spin_unlock(&table->lock);
}
dev->num_comp_vectors = ncomp_vec;
return 0;
clean:
spin_lock(&table->lock);
list_for_each_entry_safe(eq, n, &dev->eqs_list, list) {
list_del(&eq->list);
spin_unlock(&table->lock);
if (mlx5_destroy_unmap_eq(dev->mdev, eq))
mlx5_ib_warn(dev, "failed to destroy EQ 0x%x\n", eq->eqn);
kfree(eq);
spin_lock(&table->lock);
}
spin_unlock(&table->lock);
return err;
}
static void free_comp_eqs(struct mlx5_ib_dev *dev)
{
struct mlx5_eq_table *table = &dev->mdev->priv.eq_table;
struct mlx5_eq *eq, *n;
spin_lock(&table->lock);
list_for_each_entry_safe(eq, n, &dev->eqs_list, list) {
list_del(&eq->list);
spin_unlock(&table->lock);
if (mlx5_destroy_unmap_eq(dev->mdev, eq))
mlx5_ib_warn(dev, "failed to destroy EQ 0x%x\n", eq->eqn);
kfree(eq);
spin_lock(&table->lock);
}
spin_unlock(&table->lock);
}
static int mlx5_ib_query_device(struct ib_device *ibdev,
struct ib_device_attr *props)
{
@ -1291,10 +1202,6 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
get_ext_port_caps(dev);
err = alloc_comp_eqs(dev);
if (err)
goto err_dealloc;
MLX5_INIT_DOORBELL_LOCK(&dev->uar_lock);
strlcpy(dev->ib_dev.name, "mlx5_%d", IB_DEVICE_NAME_MAX);
@ -1303,7 +1210,8 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
dev->ib_dev.local_dma_lkey = mdev->caps.gen.reserved_lkey;
dev->num_ports = mdev->caps.gen.num_ports;
dev->ib_dev.phys_port_cnt = dev->num_ports;
dev->ib_dev.num_comp_vectors = dev->num_comp_vectors;
dev->ib_dev.num_comp_vectors =
dev->mdev->priv.eq_table.num_comp_vectors;
dev->ib_dev.dma_device = &mdev->pdev->dev;
dev->ib_dev.uverbs_abi_ver = MLX5_IB_UVERBS_ABI_VERSION;
@ -1390,13 +1298,13 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
err = init_node_data(dev);
if (err)
goto err_eqs;
goto err_dealloc;
mutex_init(&dev->cap_mask_mutex);
err = create_dev_resources(&dev->devr);
if (err)
goto err_eqs;
goto err_dealloc;
err = mlx5_ib_odp_init_one(dev);
if (err)
@ -1433,9 +1341,6 @@ err_odp:
err_rsrc:
destroy_dev_resources(&dev->devr);
err_eqs:
free_comp_eqs(dev);
err_dealloc:
ib_dealloc_device((struct ib_device *)dev);
@ -1450,7 +1355,6 @@ static void mlx5_ib_remove(struct mlx5_core_dev *mdev, void *context)
destroy_umrc_res(dev);
mlx5_ib_odp_remove_one(dev);
destroy_dev_resources(&dev->devr);
free_comp_eqs(dev);
ib_dealloc_device(&dev->ib_dev);
}
@ -1458,6 +1362,7 @@ static struct mlx5_interface mlx5_ib_interface = {
.add = mlx5_ib_add,
.remove = mlx5_ib_remove,
.event = mlx5_ib_event,
.protocol = MLX5_INTERFACE_PROTOCOL_IB,
};
static int __init mlx5_ib_init(void)


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -421,9 +421,7 @@ struct mlx5_ib_dev {
struct ib_device ib_dev;
struct mlx5_core_dev *mdev;
MLX5_DECLARE_DOORBELL_LOCK(uar_lock);
struct list_head eqs_list;
int num_ports;
int num_comp_vectors;
/* serialize update of capability mask
*/
struct mutex cap_mask_mutex;
@ -594,7 +592,6 @@ struct ib_xrcd *mlx5_ib_alloc_xrcd(struct ib_device *ibdev,
struct ib_ucontext *context,
struct ib_udata *udata);
int mlx5_ib_dealloc_xrcd(struct ib_xrcd *xrcd);
int mlx5_vector2eqn(struct mlx5_ib_dev *dev, int vector, int *eqn, int *irqn);
int mlx5_ib_get_buf_offset(u64 addr, int page_shift, u32 *offset);
int mlx5_query_ext_port_caps(struct mlx5_ib_dev *dev, u8 port);
int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2014 Mellanox Technologies. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -796,9 +796,6 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
goto err_free;
}
qp->db.db[0] = 0;
qp->db.db[1] = 0;
qp->sq.wrid = kmalloc(qp->sq.wqe_cnt * sizeof(*qp->sq.wrid), GFP_KERNEL);
qp->sq.wr_data = kmalloc(qp->sq.wqe_cnt * sizeof(*qp->sq.wr_data), GFP_KERNEL);
qp->rq.wrid = kmalloc(qp->rq.wqe_cnt * sizeof(*qp->rq.wrid), GFP_KERNEL);
@ -1162,10 +1159,11 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp)
in = kzalloc(sizeof(*in), GFP_KERNEL);
if (!in)
return;
if (qp->state != IB_QPS_RESET) {
mlx5_ib_qp_disable_pagefaults(qp);
if (mlx5_core_qp_modify(dev->mdev, to_mlx5_state(qp->state),
MLX5_QP_STATE_RST, in, sizeof(*in), &qp->mqp))
MLX5_QP_STATE_RST, in, 0, &qp->mqp))
mlx5_ib_warn(dev, "mlx5_ib: modify QP %06x to RESET failed\n",
qp->mqp.qpn);
}


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -165,8 +165,6 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
return err;
}
*srq->db.db = 0;
if (mlx5_buf_alloc(dev->mdev, buf_size, PAGE_SIZE * 2, &srq->buf)) {
mlx5_ib_dbg(dev, "buf alloc failed\n");
err = -ENOMEM;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2013, Mellanox Technologies inc. All rights reserved.
* Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -842,6 +842,13 @@ static void ipoib_set_mcast_list(struct net_device *dev)
queue_work(ipoib_workqueue, &priv->restart_task);
}
static int ipoib_get_iflink(const struct net_device *dev)
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
return priv->parent->ifindex;
}
static u32 ipoib_addr_hash(struct ipoib_neigh_hash *htbl, u8 *daddr)
{
/*
@ -1341,6 +1348,7 @@ static const struct net_device_ops ipoib_netdev_ops = {
.ndo_start_xmit = ipoib_start_xmit,
.ndo_tx_timeout = ipoib_timeout,
.ndo_set_rx_mode = ipoib_set_mcast_list,
.ndo_get_iflink = ipoib_get_iflink,
};
void ipoib_setup(struct net_device *dev)


@ -102,7 +102,6 @@ int __ipoib_vlan_add(struct ipoib_dev_priv *ppriv, struct ipoib_dev_priv *priv,
}
priv->child_type = type;
priv->dev->iflink = ppriv->dev->ifindex;
list_add_tail(&priv->list, &ppriv->child_intfs);
return 0;


@ -389,22 +389,49 @@ zsau_resp[] =
{NULL, ZSAU_UNKNOWN}
};
/* retrieve CID from parsed response
* returns 0 if no CID, -1 if invalid CID, or CID value 1..65535
/* check for and remove fixed string prefix
* If s starts with prefix terminated by a non-alphanumeric character,
* return pointer to the first character after that, otherwise return NULL.
*/
static int cid_of_response(char *s)
static char *skip_prefix(char *s, const char *prefix)
{
int cid;
int rc;
while (*prefix)
if (*s++ != *prefix++)
return NULL;
if (isalnum(*s))
return NULL;
return s;
}
if (s[-1] != ';')
return 0; /* no CID separator */
rc = kstrtoint(s, 10, &cid);
if (rc)
return 0; /* CID not numeric */
if (cid < 1 || cid > 65535)
return -1; /* CID out of range */
return cid;
/* queue event with CID */
static void add_cid_event(struct cardstate *cs, int cid, int type,
void *ptr, int parameter)
{
unsigned long flags;
unsigned next, tail;
struct event_t *event;
gig_dbg(DEBUG_EVENT, "queueing event %d for cid %d", type, cid);
spin_lock_irqsave(&cs->ev_lock, flags);
tail = cs->ev_tail;
next = (tail + 1) % MAX_EVENTS;
if (unlikely(next == cs->ev_head)) {
dev_err(cs->dev, "event queue full\n");
kfree(ptr);
} else {
event = cs->events + tail;
event->type = type;
event->cid = cid;
event->ptr = ptr;
event->arg = NULL;
event->parameter = parameter;
event->at_state = NULL;
cs->ev_tail = next;
}
spin_unlock_irqrestore(&cs->ev_lock, flags);
}
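A minimal standalone sketch (user space, not driver code, copied here purely to illustrate the matching rule of skip_prefix() above): the prefix only matches when it is followed by a non-alphanumeric character, so "ZSAU=ACTIVE" matches the prefix "ZSAU" while "ZSAUX" does not.
#include <ctype.h>
#include <stdio.h>
/* same matching rule as the driver helper above */
static char *skip_prefix(char *s, const char *prefix)
{
	while (*prefix)
		if (*s++ != *prefix++)
			return NULL;
	if (isalnum((unsigned char)*s))
		return NULL;
	return s;
}
int main(void)
{
	char a[] = "ZSAU=ACTIVE";
	char b[] = "ZSAUX";
	printf("%s\n", skip_prefix(a, "ZSAU"));	/* prints "=ACTIVE" */
	printf("%s\n", skip_prefix(b, "ZSAU") ? "match" : "no match");	/* prints "no match" */
	return 0;
}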
/**
@ -417,190 +444,188 @@ static int cid_of_response(char *s)
*/
void gigaset_handle_modem_response(struct cardstate *cs)
{
unsigned char *argv[MAX_REC_PARAMS + 1];
int params;
int i, j;
char *eoc, *psep, *ptr;
const struct resp_type_t *rt;
const struct zsau_resp_t *zr;
int curarg;
unsigned long flags;
unsigned next, tail, head;
struct event_t *event;
int resp_code;
int param_type;
int abort;
size_t len;
int cid;
int rawstring;
int cid, parameter;
u8 type, value;
len = cs->cbytes;
if (!len) {
if (!cs->cbytes) {
/* ignore additional LFs/CRs (M10x config mode or cx100) */
gig_dbg(DEBUG_MCMD, "skipped EOL [%02X]", cs->respdata[0]);
return;
}
cs->respdata[len] = 0;
argv[0] = cs->respdata;
params = 1;
cs->respdata[cs->cbytes] = 0;
if (cs->at_state.getstring) {
/* getstring only allowed without cid at the moment */
/* state machine wants next line verbatim */
cs->at_state.getstring = 0;
rawstring = 1;
cid = 0;
ptr = kstrdup(cs->respdata, GFP_ATOMIC);
gig_dbg(DEBUG_EVENT, "string==%s", ptr ? ptr : "NULL");
add_cid_event(cs, 0, RSP_STRING, ptr, 0);
return;
}
/* look up response type */
for (rt = resp_type; rt->response; ++rt) {
eoc = skip_prefix(cs->respdata, rt->response);
if (eoc)
break;
}
if (!rt->response) {
add_cid_event(cs, 0, RSP_NONE, NULL, 0);
gig_dbg(DEBUG_EVENT, "unknown modem response: '%s'\n",
cs->respdata);
return;
}
/* check for CID */
psep = strrchr(cs->respdata, ';');
if (psep &&
!kstrtoint(psep + 1, 10, &cid) &&
cid >= 1 && cid <= 65535) {
/* valid CID: chop it off */
*psep = 0;
} else {
/* parse line */
for (i = 0; i < len; i++)
switch (cs->respdata[i]) {
case ';':
case ',':
case '=':
if (params > MAX_REC_PARAMS) {
dev_warn(cs->dev,
"too many parameters in response\n");
/* need last parameter (might be CID) */
params--;
}
argv[params++] = cs->respdata + i + 1;
}
rawstring = 0;
cid = params > 1 ? cid_of_response(argv[params - 1]) : 0;
if (cid < 0) {
gigaset_add_event(cs, &cs->at_state, RSP_INVAL,
NULL, 0, NULL);
return;
}
for (j = 1; j < params; ++j)
argv[j][-1] = 0;
gig_dbg(DEBUG_EVENT, "CMD received: %s", argv[0]);
if (cid) {
--params;
gig_dbg(DEBUG_EVENT, "CID: %s", argv[params]);
}
gig_dbg(DEBUG_EVENT, "available params: %d", params - 1);
for (j = 1; j < params; j++)
gig_dbg(DEBUG_EVENT, "param %d: %s", j, argv[j]);
/* no valid CID: leave unchanged */
cid = 0;
}
spin_lock_irqsave(&cs->ev_lock, flags);
head = cs->ev_head;
tail = cs->ev_tail;
gig_dbg(DEBUG_EVENT, "CMD received: %s", cs->respdata);
if (cid)
gig_dbg(DEBUG_EVENT, "CID: %d", cid);
abort = 1;
curarg = 0;
while (curarg < params) {
next = (tail + 1) % MAX_EVENTS;
if (unlikely(next == head)) {
dev_err(cs->dev, "event queue full\n");
break;
}
switch (rt->type) {
case RT_NOTHING:
/* check parameter separator */
if (*eoc)
goto bad_param; /* extra parameter */
event = cs->events + tail;
event->at_state = NULL;
event->cid = cid;
event->ptr = NULL;
event->arg = NULL;
tail = next;
add_cid_event(cs, cid, rt->resp_code, NULL, 0);
break;
if (rawstring) {
resp_code = RSP_STRING;
param_type = RT_STRING;
} else {
for (rt = resp_type; rt->response; ++rt)
if (!strcmp(argv[curarg], rt->response))
case RT_RING:
/* check parameter separator */
if (!*eoc)
eoc = NULL; /* no parameter */
else if (*eoc++ != ',')
goto bad_param;
add_cid_event(cs, 0, rt->resp_code, NULL, cid);
/* process parameters as individual responses */
while (eoc) {
/* look up parameter type */
psep = NULL;
for (rt = resp_type; rt->response; ++rt) {
psep = skip_prefix(eoc, rt->response);
if (psep)
break;
if (!rt->response) {
event->type = RSP_NONE;
gig_dbg(DEBUG_EVENT,
"unknown modem response: '%s'\n",
argv[curarg]);
break;
}
resp_code = rt->resp_code;
param_type = rt->type;
++curarg;
}
event->type = resp_code;
switch (param_type) {
case RT_NOTHING:
break;
case RT_RING:
if (!cid) {
dev_err(cs->dev,
"received RING without CID!\n");
event->type = RSP_INVAL;
abort = 1;
} else {
event->cid = 0;
event->parameter = cid;
abort = 0;
}
break;
case RT_ZSAU:
if (curarg >= params) {
event->parameter = ZSAU_NONE;
break;
}
for (zr = zsau_resp; zr->str; ++zr)
if (!strcmp(argv[curarg], zr->str))
break;
event->parameter = zr->code;
if (!zr->str)
/* all legal parameters are of type RT_STRING */
if (!psep || rt->type != RT_STRING) {
dev_warn(cs->dev,
"%s: unknown parameter %s after ZSAU\n",
__func__, argv[curarg]);
++curarg;
break;
case RT_STRING:
if (curarg < params) {
event->ptr = kstrdup(argv[curarg], GFP_ATOMIC);
if (!event->ptr)
dev_err(cs->dev, "out of memory\n");
++curarg;
"illegal RING parameter: '%s'\n",
eoc);
return;
}
gig_dbg(DEBUG_EVENT, "string==%s",
event->ptr ? (char *) event->ptr : "NULL");
break;
case RT_ZCAU:
event->parameter = -1;
if (curarg + 1 < params) {
u8 type, value;
i = kstrtou8(argv[curarg++], 16, &type);
j = kstrtou8(argv[curarg++], 16, &value);
if (i == 0 && j == 0)
event->parameter = (type << 8) | value;
} else
curarg = params - 1;
break;
case RT_NUMBER:
if (curarg >= params ||
kstrtoint(argv[curarg++], 10, &event->parameter))
event->parameter = -1;
gig_dbg(DEBUG_EVENT, "parameter==%d", event->parameter);
/* skip parameter value separator */
if (*psep++ != '=')
goto bad_param;
/* look up end of parameter */
eoc = strchr(psep, ',');
if (eoc)
*eoc++ = 0;
/* retrieve parameter value */
ptr = kstrdup(psep, GFP_ATOMIC);
/* queue event */
add_cid_event(cs, cid, rt->resp_code, ptr, 0);
}
break;
case RT_ZSAU:
/* check parameter separator */
if (!*eoc) {
/* no parameter */
add_cid_event(cs, cid, rt->resp_code, NULL, ZSAU_NONE);
break;
}
if (*eoc++ != '=')
goto bad_param;
if (resp_code == RSP_ZDLE)
cs->dle = event->parameter;
/* look up parameter value */
for (zr = zsau_resp; zr->str; ++zr)
if (!strcmp(eoc, zr->str))
break;
if (!zr->str)
goto bad_param;
if (abort)
break;
add_cid_event(cs, cid, rt->resp_code, NULL, zr->code);
break;
case RT_STRING:
/* check parameter separator */
if (*eoc++ != '=')
goto bad_param;
/* retrieve parameter value */
ptr = kstrdup(eoc, GFP_ATOMIC);
/* queue event */
add_cid_event(cs, cid, rt->resp_code, ptr, 0);
break;
case RT_ZCAU:
/* check parameter separators */
if (*eoc++ != '=')
goto bad_param;
psep = strchr(eoc, ',');
if (!psep)
goto bad_param;
*psep++ = 0;
/* decode parameter values */
if (kstrtou8(eoc, 16, &type) || kstrtou8(psep, 16, &value)) {
*--psep = ',';
goto bad_param;
}
parameter = (type << 8) | value;
add_cid_event(cs, cid, rt->resp_code, NULL, parameter);
break;
case RT_NUMBER:
/* check parameter separator */
if (*eoc++ != '=')
goto bad_param;
/* decode parameter value */
if (kstrtoint(eoc, 10, &parameter))
goto bad_param;
/* special case ZDLE: set flag before queueing event */
if (rt->resp_code == RSP_ZDLE)
cs->dle = parameter;
add_cid_event(cs, cid, rt->resp_code, NULL, parameter);
break;
bad_param:
/* parameter unexpected, incomplete or malformed */
dev_warn(cs->dev, "bad parameter in response '%s'\n",
cs->respdata);
add_cid_event(cs, cid, rt->resp_code, NULL, -1);
break;
default:
dev_err(cs->dev, "%s: internal error on '%s'\n",
__func__, cs->respdata);
}
cs->ev_tail = tail;
spin_unlock_irqrestore(&cs->ev_lock, flags);
if (curarg != params)
gig_dbg(DEBUG_EVENT,
"invalid number of processed parameters: %d/%d",
curarg, params);
}
EXPORT_SYMBOL_GPL(gigaset_handle_modem_response);
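As a worked example of the rewritten parser, assuming (as in the full driver, not shown in this hunk) that the resp_type table maps the "ZSAU" prefix to RT_ZSAU and that the parameter value appears in zsau_resp: a line such as "ZSAU=ACTIVE;2" first has the trailing ";2" recognised as CID 2 and chopped off, then skip_prefix() matches "ZSAU", the '=' separator is consumed, the value is looked up in zsau_resp, and a single event carrying the resulting ZSAU code is queued for CID 2 via add_cid_event().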


@ -1951,38 +1951,6 @@ static int isdn_net_header(struct sk_buff *skb, struct net_device *dev,
return len;
}
/* We don't need to send arp, because we have point-to-point connections. */
static int
isdn_net_rebuild_header(struct sk_buff *skb)
{
struct net_device *dev = skb->dev;
isdn_net_local *lp = netdev_priv(dev);
int ret = 0;
if (lp->p_encap == ISDN_NET_ENCAP_ETHER) {
struct ethhdr *eth = (struct ethhdr *) skb->data;
/*
* Only ARP/IP is currently supported
*/
if (eth->h_proto != htons(ETH_P_IP)) {
printk(KERN_WARNING
"isdn_net: %s don't know how to resolve type %d addresses?\n",
dev->name, (int) eth->h_proto);
memcpy(eth->h_source, dev->dev_addr, dev->addr_len);
return 0;
}
/*
* Try to get ARP to resolve the header.
*/
#ifdef CONFIG_INET
ret = arp_find(eth->h_dest, skb);
#endif
}
return ret;
}
static int isdn_header_cache(const struct neighbour *neigh, struct hh_cache *hh,
__be16 type)
{
@ -2005,7 +1973,6 @@ static void isdn_header_cache_update(struct hh_cache *hh,
static const struct header_ops isdn_header_ops = {
.create = isdn_net_header,
.rebuild = isdn_net_rebuild_header,
.cache = isdn_header_cache,
.cache_update = isdn_header_cache_update,
};


@ -112,8 +112,8 @@ mISDN_sock_cmsg(struct sock *sk, struct msghdr *msg, struct sk_buff *skb)
}
static int
mISDN_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t len, int flags)
mISDN_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int flags)
{
struct sk_buff *skb;
struct sock *sk = sock->sk;
@ -173,8 +173,7 @@ mISDN_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
}
static int
mISDN_sock_sendmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t len)
mISDN_sock_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
{
struct sock *sk = sock->sk;
struct sk_buff *skb;


@ -1190,7 +1190,6 @@ static int dvb_net_stop(struct net_device *dev)
static const struct header_ops dvb_header_ops = {
.create = eth_header,
.parse = eth_header_parse,
.rebuild = eth_rebuild_header,
};


@ -104,7 +104,6 @@ EXPORT_SYMBOL(arcnet_timeout);
static int arcnet_header(struct sk_buff *skb, struct net_device *dev,
unsigned short type, const void *daddr,
const void *saddr, unsigned len);
static int arcnet_rebuild_header(struct sk_buff *skb);
static int go_tx(struct net_device *dev);
static int debug = ARCNET_DEBUG;
@ -312,7 +311,6 @@ static int choose_mtu(void)
static const struct header_ops arcnet_header_ops = {
.create = arcnet_header,
.rebuild = arcnet_rebuild_header,
};
static const struct net_device_ops arcnet_netdev_ops = {
@ -538,59 +536,6 @@ static int arcnet_header(struct sk_buff *skb, struct net_device *dev,
return proto->build_header(skb, dev, type, _daddr);
}
/*
* Rebuild the ARCnet hard header. This is called after an ARP (or in the
* future other address resolution) has completed on this sk_buff. We now
* let ARP fill in the destination field.
*/
static int arcnet_rebuild_header(struct sk_buff *skb)
{
struct net_device *dev = skb->dev;
struct arcnet_local *lp = netdev_priv(dev);
int status = 0; /* default is failure */
unsigned short type;
uint8_t daddr=0;
struct ArcProto *proto;
/*
* XXX: Why not use skb->mac_len?
*/
if (skb->network_header - skb->mac_header != 2) {
BUGMSG(D_NORMAL,
"rebuild_header: shouldn't be here! (hdrsize=%d)\n",
(int)(skb->network_header - skb->mac_header));
return 0;
}
type = *(uint16_t *) skb_pull(skb, 2);
BUGMSG(D_DURING, "rebuild header for protocol %Xh\n", type);
if (type == ETH_P_IP) {
#ifdef CONFIG_INET
BUGMSG(D_DURING, "rebuild header for ethernet protocol %Xh\n", type);
status = arp_find(&daddr, skb) ? 1 : 0;
BUGMSG(D_DURING, " rebuilt: dest is %d; protocol %Xh\n",
daddr, type);
#endif
} else {
BUGMSG(D_NORMAL,
"I don't understand ethernet protocol %Xh addresses!\n", type);
dev->stats.tx_errors++;
dev->stats.tx_aborted_errors++;
}
/* if we couldn't resolve the address... give up. */
if (!status)
return 0;
/* add the _real_ header this time! */
proto = arc_proto_map[lp->default_proto[daddr]];
proto->build_header(skb, dev, type, daddr);
return 1; /* success */
}
/* Called by the kernel in order to transmit a packet. */
netdev_tx_t arcnet_send_packet(struct sk_buff *skb,
struct net_device *dev)


@ -70,6 +70,7 @@
#define AD_PORT_STANDBY 0x80
#define AD_PORT_SELECTED 0x100
#define AD_PORT_MOVED 0x200
#define AD_PORT_CHURNED (AD_PORT_ACTOR_CHURN | AD_PORT_PARTNER_CHURN)
/* Port Key definitions
* key is determined according to the link speed, duplex and
@ -1013,16 +1014,19 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
/* check if state machine should change state */
/* first, check if port was reinitialized */
if (port->sm_vars & AD_PORT_BEGIN)
if (port->sm_vars & AD_PORT_BEGIN) {
port->sm_rx_state = AD_RX_INITIALIZE;
port->sm_vars |= AD_PORT_CHURNED;
/* check if port is not enabled */
else if (!(port->sm_vars & AD_PORT_BEGIN)
} else if (!(port->sm_vars & AD_PORT_BEGIN)
&& !port->is_enabled && !(port->sm_vars & AD_PORT_MOVED))
port->sm_rx_state = AD_RX_PORT_DISABLED;
/* check if new lacpdu arrived */
else if (lacpdu && ((port->sm_rx_state == AD_RX_EXPIRED) ||
(port->sm_rx_state == AD_RX_DEFAULTED) ||
(port->sm_rx_state == AD_RX_CURRENT))) {
if (port->sm_rx_state != AD_RX_CURRENT)
port->sm_vars |= AD_PORT_CHURNED;
port->sm_rx_timer_counter = 0;
port->sm_rx_state = AD_RX_CURRENT;
} else {
@ -1100,9 +1104,11 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
*/
port->partner_oper.port_state &= ~AD_STATE_SYNCHRONIZATION;
port->sm_vars &= ~AD_PORT_MATCHED;
port->partner_oper.port_state |= AD_STATE_LACP_TIMEOUT;
port->partner_oper.port_state |= AD_STATE_LACP_ACTIVITY;
port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(AD_SHORT_TIMEOUT));
port->actor_oper_port_state |= AD_STATE_EXPIRED;
port->sm_vars |= AD_PORT_CHURNED;
break;
case AD_RX_DEFAULTED:
__update_default_selected(port);
@ -1131,6 +1137,45 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
}
}
/**
* ad_churn_machine - handle port churn's state machine
* @port: the port we're looking at
*
*/
static void ad_churn_machine(struct port *port)
{
if (port->sm_vars & AD_PORT_CHURNED) {
port->sm_vars &= ~AD_PORT_CHURNED;
port->sm_churn_actor_state = AD_CHURN_MONITOR;
port->sm_churn_partner_state = AD_CHURN_MONITOR;
port->sm_churn_actor_timer_counter =
__ad_timer_to_ticks(AD_ACTOR_CHURN_TIMER, 0);
port->sm_churn_partner_timer_counter =
__ad_timer_to_ticks(AD_PARTNER_CHURN_TIMER, 0);
return;
}
if (port->sm_churn_actor_timer_counter &&
!(--port->sm_churn_actor_timer_counter) &&
port->sm_churn_actor_state == AD_CHURN_MONITOR) {
if (port->actor_oper_port_state & AD_STATE_SYNCHRONIZATION) {
port->sm_churn_actor_state = AD_NO_CHURN;
} else {
port->churn_actor_count++;
port->sm_churn_actor_state = AD_CHURN;
}
}
if (port->sm_churn_partner_timer_counter &&
!(--port->sm_churn_partner_timer_counter) &&
port->sm_churn_partner_state == AD_CHURN_MONITOR) {
if (port->partner_oper.port_state & AD_STATE_SYNCHRONIZATION) {
port->sm_churn_partner_state = AD_NO_CHURN;
} else {
port->churn_partner_count++;
port->sm_churn_partner_state = AD_CHURN;
}
}
}
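Editor's note: ad_churn_machine() above implements LACP churn detection: whenever AD_PORT_CHURNED is raised, both sides get a monitor window, and a side that still has not reached synchronization when its window expires is counted as churned. The stand-alone sketch below models just that monitor logic; the names and the 3-tick window are illustrative, the real timer values come from __ad_timer_to_ticks() with the actor/partner churn timers.

#include <stdbool.h>
#include <stdio.h>

enum demo_churn_state { DEMO_CHURN_MONITOR, DEMO_CHURN, DEMO_NO_CHURN };

struct demo_churn_monitor {
        enum demo_churn_state state;
        unsigned int timer;        /* ticks left in the monitor window */
        unsigned int churn_count;  /* times churn was detected */
};

/* Re-arm the monitor window, as done when the CHURNED flag is raised. */
static void demo_churn_arm(struct demo_churn_monitor *m, unsigned int ticks)
{
        m->state = DEMO_CHURN_MONITOR;
        m->timer = ticks;
}

/* One periodic tick: if the window runs out while the side is still not in
 * sync, count a churn event; if it reached sync, settle into "no churn". */
static void demo_churn_tick(struct demo_churn_monitor *m, bool in_sync)
{
        if (m->state != DEMO_CHURN_MONITOR || !m->timer || --m->timer)
                return;
        if (in_sync) {
                m->state = DEMO_NO_CHURN;
        } else {
                m->churn_count++;
                m->state = DEMO_CHURN;
        }
}

int main(void)
{
        struct demo_churn_monitor actor = { DEMO_NO_CHURN, 0, 0 };

        demo_churn_arm(&actor, 3);               /* e.g. port reinitialized */
        for (int i = 0; i < 3; i++)
                demo_churn_tick(&actor, false);  /* never reaches sync */

        printf("state=%d, churned %u time(s)\n", actor.state, actor.churn_count);
        return 0;
}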
/**
* ad_tx_machine - handle a port's tx state machine
* @port: the port we're looking at
@ -1383,8 +1428,10 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr)
else
port->aggregator->is_individual = true;
port->aggregator->actor_admin_aggregator_key = port->actor_admin_port_key;
port->aggregator->actor_oper_aggregator_key = port->actor_oper_port_key;
port->aggregator->actor_admin_aggregator_key =
port->actor_admin_port_key;
port->aggregator->actor_oper_aggregator_key =
port->actor_oper_port_key;
port->aggregator->partner_system =
port->partner_oper.system;
port->aggregator->partner_system_priority =
@ -1710,14 +1757,9 @@ static void ad_initialize_port(struct port *port, int lacp_fast)
};
if (port) {
port->actor_port_number = 1;
port->actor_port_priority = 0xff;
port->actor_system = null_mac_addr;
port->actor_system_priority = 0xffff;
port->actor_port_aggregator_identifier = 0;
port->ntt = false;
port->actor_admin_port_key = 1;
port->actor_oper_port_key = 1;
port->actor_admin_port_state = AD_STATE_AGGREGATION |
AD_STATE_LACP_ACTIVITY;
port->actor_oper_port_state = AD_STATE_AGGREGATION |
@ -1731,7 +1773,7 @@ static void ad_initialize_port(struct port *port, int lacp_fast)
port->is_enabled = true;
/* private parameters */
port->sm_vars = 0x3;
port->sm_vars = AD_PORT_BEGIN | AD_PORT_LACP_ENABLED;
port->sm_rx_state = 0;
port->sm_rx_timer_counter = 0;
port->sm_periodic_state = 0;
@ -1739,12 +1781,17 @@ static void ad_initialize_port(struct port *port, int lacp_fast)
port->sm_mux_state = 0;
port->sm_mux_timer_counter = 0;
port->sm_tx_state = 0;
port->sm_tx_timer_counter = 0;
port->slave = NULL;
port->aggregator = NULL;
port->next_port_in_aggregator = NULL;
port->transaction_id = 0;
port->sm_churn_actor_timer_counter = 0;
port->sm_churn_actor_state = 0;
port->churn_actor_count = 0;
port->sm_churn_partner_timer_counter = 0;
port->sm_churn_partner_state = 0;
port->churn_partner_count = 0;
memcpy(&port->lacpdu, &lacpdu, sizeof(lacpdu));
}
}
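Editor's note: ad_initialize_port() now spells the initial sm_vars out as AD_PORT_BEGIN | AD_PORT_LACP_ENABLED instead of the magic 0x3. The tiny sketch below only shows that the named form is bit-for-bit equivalent; the 0x1/0x2 values are an inference from the power-of-two scheme visible in the AD_PORT_* hunk above, not copied from the driver's header.

#include <assert.h>
#include <stdio.h>

/* Assumed flag values - treat as illustrative, not authoritative. */
#define DEMO_PORT_BEGIN        0x1u
#define DEMO_PORT_LACP_ENABLED 0x2u

int main(void)
{
        unsigned int sm_vars = DEMO_PORT_BEGIN | DEMO_PORT_LACP_ENABLED;

        assert(sm_vars == 0x3);   /* same bits as the old magic constant */
        printf("sm_vars = 0x%x\n", sm_vars);
        return 0;
}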
@ -1916,8 +1963,6 @@ void bond_3ad_bind_slave(struct slave *slave)
* lacpdu's are sent in one second)
*/
port->sm_tx_timer_counter = ad_ticks_per_sec/AD_MAX_TX_IN_SECOND;
port->aggregator = NULL;
port->next_port_in_aggregator = NULL;
__disable_port(port);
@ -2164,6 +2209,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
ad_port_selection_logic(port, &update_slave_arr);
ad_mux_machine(port, &update_slave_arr);
ad_tx_machine(port);
ad_churn_machine(port);
/* turn off the BEGIN bit, since we already handled it */
if (port->sm_vars & AD_PORT_BEGIN)
@ -2279,8 +2325,8 @@ void bond_3ad_adapter_speed_changed(struct slave *slave)
spin_lock_bh(&slave->bond->mode_lock);
port->actor_admin_port_key &= ~AD_SPEED_KEY_MASKS;
port->actor_oper_port_key = port->actor_admin_port_key |=
(__get_link_speed(port) << 1);
port->actor_admin_port_key |= __get_link_speed(port) << 1;
port->actor_oper_port_key = port->actor_admin_port_key;
netdev_dbg(slave->bond->dev, "Port %d changed speed\n", port->actor_port_number);
/* there is no need to reselect a new aggregator, just signal the
* state machines to reinitialize
@ -2312,8 +2358,8 @@ void bond_3ad_adapter_duplex_changed(struct slave *slave)
spin_lock_bh(&slave->bond->mode_lock);
port->actor_admin_port_key &= ~AD_DUPLEX_KEY_MASKS;
port->actor_oper_port_key = port->actor_admin_port_key |=
__get_duplex(port);
port->actor_admin_port_key |= __get_duplex(port);
port->actor_oper_port_key = port->actor_admin_port_key;
netdev_dbg(slave->bond->dev, "Port %d slave %s changed duplex\n",
port->actor_port_number, slave->dev->name);
if (port->actor_oper_port_key & AD_DUPLEX_KEY_MASKS)
@ -2354,21 +2400,19 @@ void bond_3ad_handle_link_change(struct slave *slave, char link)
* on link up we are forcing recheck on the duplex and speed since
* some of the adaptors (ce1000.lan) report.
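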
*/
port->actor_admin_port_key &= ~(AD_DUPLEX_KEY_MASKS|AD_SPEED_KEY_MASKS);
if (link == BOND_LINK_UP) {
port->is_enabled = true;
port->actor_admin_port_key &= ~AD_DUPLEX_KEY_MASKS;
port->actor_oper_port_key = port->actor_admin_port_key |=
__get_duplex(port);
port->actor_admin_port_key &= ~AD_SPEED_KEY_MASKS;
port->actor_oper_port_key = port->actor_admin_port_key |=
(__get_link_speed(port) << 1);
port->actor_admin_port_key |=
(__get_link_speed(port) << 1) | __get_duplex(port);
if (port->actor_admin_port_key & AD_DUPLEX_KEY_MASKS)
port->sm_vars |= AD_PORT_LACP_ENABLED;
} else {
/* link has failed */
port->is_enabled = false;
port->actor_admin_port_key &= ~AD_DUPLEX_KEY_MASKS;
port->actor_oper_port_key = (port->actor_admin_port_key &=
~AD_SPEED_KEY_MASKS);
port->sm_vars &= ~AD_PORT_LACP_ENABLED;
}
port->actor_oper_port_key = port->actor_admin_port_key;
netdev_dbg(slave->bond->dev, "Port %d changed link status to %s\n",
port->actor_port_number,
link == BOND_LINK_UP ? "UP" : "DOWN");
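Editor's note: the speed-, duplex- and link-change handlers above now update actor_admin_port_key first and then copy it into actor_oper_port_key in a single step, rather than through the old combined-assignment chains. A stand-alone sketch of that pattern follows; the mask widths are assumptions chosen only to mirror the bit 0 = duplex, speed-shifted-by-1 layout that the hunks themselves show.

#include <stdio.h>

/* Assumed masks for illustration only. */
#define DEMO_DUPLEX_MASK 0x01u
#define DEMO_SPEED_MASK  0x3Eu

static unsigned int demo_rebuild_admin_key(unsigned int admin_key,
                                           unsigned int speed_code,
                                           unsigned int duplex)
{
        admin_key &= ~(DEMO_DUPLEX_MASK | DEMO_SPEED_MASK);
        admin_key |= (speed_code << 1) | duplex;
        return admin_key;
}

int main(void)
{
        /* 0x400 stands in for unrelated key bits that must survive the
         * update; speed code 3 and full duplex are arbitrary inputs. */
        unsigned int admin = demo_rebuild_admin_key(0x400 | 0x06, 3, 1);
        unsigned int oper  = admin;   /* oper key now simply mirrors admin */

        printf("admin=0x%x oper=0x%x\n", admin, oper);
        return 0;
}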
@ -2485,6 +2529,9 @@ int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond,
if (skb->protocol != PKT_TYPE_LACPDU)
return RX_HANDLER_ANOTHER;
if (!MAC_ADDRESS_EQUAL(eth_hdr(skb)->h_dest, lacpdu_mcast_addr))
return RX_HANDLER_ANOTHER;
lacpdu = skb_header_pointer(skb, 0, sizeof(_lacpdu), &_lacpdu);
if (!lacpdu)
return RX_HANDLER_ANOTHER;
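Editor's note: the new check in bond_3ad_lacpdu_recv() drops frames whose destination is not the IEEE Slow Protocols multicast group before treating them as LACPDUs. The stand-alone sketch below shows the same comparison with memcmp(); the kernel uses its MAC_ADDRESS_EQUAL() helper, and 01:80:C2:00:00:02 is the group address LACPDUs are sent to.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static const unsigned char demo_lacpdu_mcast[6] = {
        0x01, 0x80, 0xC2, 0x00, 0x00, 0x02
};

static bool demo_is_lacpdu_dest(const unsigned char *h_dest)
{
        return memcmp(h_dest, demo_lacpdu_mcast,
                      sizeof(demo_lacpdu_mcast)) == 0;
}

int main(void)
{
        const unsigned char good[6] = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 };
        const unsigned char bad[6]  = { 0x01, 0x00, 0x5E, 0x00, 0x00, 0x01 };

        printf("slow-protocols dest: %d, other multicast: %d\n",
               demo_is_lacpdu_dest(good), demo_is_lacpdu_dest(bad));
        return 0;
}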

View File

@ -928,6 +928,39 @@ static inline void slave_disable_netpoll(struct slave *slave)
static void bond_poll_controller(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave = NULL;
struct list_head *iter;
struct ad_info ad_info;
struct netpoll_info *ni;
const struct net_device_ops *ops;
if (BOND_MODE(bond) == BOND_MODE_8023AD)
if (bond_3ad_get_active_agg_info(bond, &ad_info))
return;
rcu_read_lock_bh();
bond_for_each_slave_rcu(bond, slave, iter) {
ops = slave->dev->netdev_ops;
if (!bond_slave_is_up(slave) || !ops->ndo_poll_controller)
continue;
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
struct aggregator *agg =
SLAVE_AD_INFO(slave)->port.aggregator;
if (agg &&
agg->aggregator_identifier != ad_info.aggregator_id)
continue;
}
ni = rcu_dereference_bh(slave->dev->npinfo);
if (down_trylock(&ni->dev_lock))
continue;
ops->ndo_poll_controller(slave->dev);
up(&ni->dev_lock);
}
rcu_read_unlock_bh();
}
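Editor's note: bond_poll_controller() now walks the slave list itself: under rcu_read_lock_bh() it skips slaves that are down, lack ndo_poll_controller, or (in 802.3ad mode) sit outside the active aggregator, and it only polls a slave whose netpoll dev_lock it can take without blocking. The kernel lock is a semaphore taken with down_trylock(); the stand-alone sketch below uses a pthread mutex purely to show the try-or-skip pattern (build with cc -pthread; names are illustrative).

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct demo_slave {
        const char *name;
        bool up;
        pthread_mutex_t dev_lock;   /* stands in for netpoll_info->dev_lock */
};

/* Poll each usable slave, but never block on one that is busy. */
static void demo_poll_all(struct demo_slave *slaves, int n)
{
        for (int i = 0; i < n; i++) {
                struct demo_slave *s = &slaves[i];

                if (!s->up)
                        continue;
                if (pthread_mutex_trylock(&s->dev_lock) != 0)
                        continue;               /* busy - skip, don't wait */
                printf("polled %s\n", s->name);
                pthread_mutex_unlock(&s->dev_lock);
        }
}

int main(void)
{
        struct demo_slave slaves[2] = {
                { "eth0", true,  PTHREAD_MUTEX_INITIALIZER },
                { "eth1", false, PTHREAD_MUTEX_INITIALIZER },
        };

        demo_poll_all(slaves, 2);
        return 0;
}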
static void bond_netpoll_cleanup(struct net_device *bond_dev)
@ -2900,6 +2933,8 @@ static int bond_slave_netdev_event(unsigned long event,
if (old_duplex != slave->duplex)
bond_3ad_adapter_duplex_changed(slave);
}
/* Fallthrough */
case NETDEV_DOWN:
/* Refresh slave-array if applicable!
* If the setup does not use miimon or arpmon (mode-specific!),
* then these events will not cause the slave-array to be
@ -2911,10 +2946,6 @@ static int bond_slave_netdev_event(unsigned long event,
if (bond_mode_uses_xmit_hash(bond))
bond_update_slave_arr(bond, NULL);
break;
case NETDEV_DOWN:
if (bond_mode_uses_xmit_hash(bond))
bond_update_slave_arr(bond, NULL);
break;
case NETDEV_CHANGEMTU:
/* TODO: Should slaves be allowed to
* independently alter their MTU? For
@ -4008,6 +4039,7 @@ static const struct net_device_ops bond_netdev_ops = {
.ndo_fix_features = bond_fix_features,
.ndo_bridge_setlink = ndo_dflt_netdev_switch_port_bridge_setlink,
.ndo_bridge_dellink = ndo_dflt_netdev_switch_port_bridge_dellink,
.ndo_features_check = passthru_features_check,
};
static const struct device_type bond_type = {

View File

@ -176,18 +176,51 @@ static void bond_info_show_slave(struct seq_file *seq,
slave->link_failure_count);
seq_printf(seq, "Permanent HW addr: %pM\n", slave->perm_hwaddr);
seq_printf(seq, "Slave queue ID: %d\n", slave->queue_id);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
const struct aggregator *agg
= SLAVE_AD_INFO(slave)->port.aggregator;
const struct port *port = &SLAVE_AD_INFO(slave)->port;
const struct aggregator *agg = port->aggregator;
if (agg)
if (agg) {
seq_printf(seq, "Aggregator ID: %d\n",
agg->aggregator_identifier);
else
seq_printf(seq, "Actor Churn State: %s\n",
bond_3ad_churn_desc(port->sm_churn_actor_state));
seq_printf(seq, "Partner Churn State: %s\n",
bond_3ad_churn_desc(port->sm_churn_partner_state));
seq_printf(seq, "Actor Churned Count: %d\n",
port->churn_actor_count);
seq_printf(seq, "Partner Churned Count: %d\n",
port->churn_partner_count);
seq_puts(seq, "details actor lacp pdu:\n");
seq_printf(seq, " system priority: %d\n",
port->actor_system_priority);
seq_printf(seq, " port key: %d\n",
port->actor_oper_port_key);
seq_printf(seq, " port priority: %d\n",
port->actor_port_priority);
seq_printf(seq, " port number: %d\n",
port->actor_port_number);
seq_printf(seq, " port state: %d\n",
port->actor_oper_port_state);
seq_puts(seq, "details partner lacp pdu:\n");
seq_printf(seq, " system priority: %d\n",
port->partner_oper.system_priority);
seq_printf(seq, " oper key: %d\n",
port->partner_oper.key);
seq_printf(seq, " port priority: %d\n",
port->partner_oper.port_priority);
seq_printf(seq, " port number: %d\n",
port->partner_oper.port_number);
seq_printf(seq, " port state: %d\n",
port->partner_oper.port_state);
} else {
seq_puts(seq, "Aggregator ID: N/A\n");
}
}
seq_printf(seq, "Slave queue ID: %d\n", slave->queue_id);
}
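Editor's note: with the 802.3ad block above, the per-slave section of /proc/net/bonding/<bond> gains churn state/counters and decoded actor/partner LACPDU details. The stand-alone sketch below prints a block shaped like the new output so readers can see what to expect; every value (and the indentation width) is a placeholder, not real driver output.

#include <stdio.h>

int main(void)
{
        /* Placeholder values only - the real numbers come from the slave's
         * port and partner_oper structures shown in the hunk above. */
        printf("Aggregator ID: %d\n", 1);
        printf("Actor Churn State: %s\n", "none");
        printf("Partner Churn State: %s\n", "none");
        printf("Actor Churned Count: %d\n", 0);
        printf("Partner Churned Count: %d\n", 0);
        printf("details actor lacp pdu:\n");
        printf("    system priority: %d\n", 65535);
        printf("    port key: %d\n", 15);
        printf("    port priority: %d\n", 255);
        printf("    port number: %d\n", 1);
        printf("    port state: %d\n", 61);
        /* ...the partner block repeats the same fields, with "oper key". */
        return 0;
}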
static int bond_info_seq_show(struct seq_file *seq, void *v)

Some files were not shown because too many files have changed in this diff.