Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next

Pull networking updates from David Miller:

 1) Add TX fast path in mac80211, from Johannes Berg.

 2) Add TSO/GRO support to ibmveth, from Thomas Falcon

 3) Move away from cached routes in ipv6, just like ipv4, from Martin
    KaFai Lau.

 4) Lots of new rhashtable tests, from Thomas Graf.

 5) Run ingress qdisc lockless, from Alexei Starovoitov.

 6) Allow servers to fetch TCP packet headers for SYN packets of new
    connections, for fingerprinting.  From Eric Dumazet.  (See the
    userspace sketch after this list.)

 7) Add mode parameter to pktgen, for testing receive.  From Alexei
    Starovoitov.

 8) Cache access optimizations via simplifications of build_skb(), from
    Alexander Duyck.

 9) Move page frag allocator under mm/, also from Alexander.

10) Add xmit_more support to hv_netvsc, from KY Srinivasan.

11) Add a counter guard in case we try to perform endless reclassify
    loops in the packet scheduler.

12) Extend flow dissector to be programmable and use it in the new "Flower"
    classifier.  From Jiri Pirko.

13) AF_PACKET fanout rollover fixes, performance improvements, and new
    statistics.  From Willem de Bruijn.

14) Add netdev driver for GENEVE tunnels, from John W Linville.

15) Add ingress netfilter hooks and filtering, from Pablo Neira Ayuso.

16) Fix handling of epoll edge triggers in TCP, from Eric Dumazet.

17) Add an ECN retry fallback for the initial TCP handshake, from Daniel
    Borkmann.

18) Add tail call support to BPF, from Alexei Starovoitov.

19) Add several pktgen helper scripts, from Jesper Dangaard Brouer.

20) Add zerocopy support to AF_UNIX, from Hannes Frederic Sowa.

21) Favor even port numbers for allocation to connect() requests, and
    odd port numbers for bind(0), in an effort to help avoid
    ip_local_port_range exhaustion.  From Eric Dumazet.

22) Add Cavium ThunderX driver, from Sunil Goutham.

23) Allow bpf programs to access skb_iif and dev->ifindex SKB metadata,
    from Alexei Starovoitov.

24) Add support for T6 chips in cxgb4vf driver, from Hariprasad Shenai.

25) Double TCP Small Queues default to 256K to accommodate situations
    like the XEN driver and wireless aggregation.  From Wei Liu.

26) Add more entropy inputs to flow dissector, from Tom Herbert.

27) Add CDG congestion control algorithm to TCP, from Kenneth Klette
    Jonassen.

28) Convert ipset over to RCU locking, from Jozsef Kadlecsik.

29) Track and act upon link status of ipv4 route nexthops, from Andy
    Gospodarek.
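
Item 6 above corresponds, as far as one can tell, to the new
TCP_SAVE_SYN/TCP_SAVED_SYN socket options.  A minimal, hedged userspace
sketch (listener details are arbitrary, error handling abbreviated):

/* Sketch: fetch the raw SYN headers of an accepted connection for
 * passive fingerprinting.  Assumes the TCP_SAVE_SYN/TCP_SAVED_SYN
 * option names/values from this release's uapi headers. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef TCP_SAVE_SYN
#define TCP_SAVE_SYN  27        /* values from include/uapi/linux/tcp.h */
#define TCP_SAVED_SYN 28
#endif

int main(void)
{
        int one = 1, lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_port = htons(8080),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };

        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        /* Ask the kernel to keep the SYN headers of incoming connections. */
        setsockopt(lfd, IPPROTO_TCP, TCP_SAVE_SYN, &one, sizeof(one));
        bind(lfd, (struct sockaddr *)&a, sizeof(a));
        listen(lfd, 16);

        int cfd = accept(lfd, NULL, NULL);
        unsigned char syn[512];
        socklen_t len = sizeof(syn);

        /* Read back the saved IP+TCP headers of the client's SYN; the
         * kernel may release the saved data after a successful read. */
        if (cfd >= 0 &&
            getsockopt(cfd, IPPROTO_TCP, TCP_SAVED_SYN, syn, &len) == 0)
                printf("saved SYN: %u header bytes\n", (unsigned)len);

        close(cfd);
        close(lfd);
        return 0;
}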

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1670 commits)
  bridge: vlan: flush the dynamically learned entries on port vlan delete
  bridge: multicast: add a comment to br_port_state_selection about blocking state
  net: inet_diag: export IPV6_V6ONLY sockopt
  stmmac: troubleshoot unexpected bits in des0 & des1
  net: ipv4 sysctl option to ignore routes when nexthop link is down
  net: track link-status of ipv4 nexthops
  net: switchdev: ignore unsupported bridge flags
  net: Cavium: Fix MAC address setting in shutdown state
  drivers: net: xgene: fix for ACPI support without ACPI
  ip: report the original address of ICMP messages
  net/mlx5e: Prefetch skb data on RX
  net/mlx5e: Pop cq outside mlx5e_get_cqe
  net/mlx5e: Remove mlx5e_cq.sqrq back-pointer
  net/mlx5e: Remove extra spaces
  net/mlx5e: Avoid TX CQE generation if more xmit packets expected
  net/mlx5e: Avoid redundant dev_kfree_skb() upon NOP completion
  net/mlx5e: Remove re-assignment of wq type in mlx5e_enable_rq()
  net/mlx5e: Use skb_shinfo(skb)->gso_segs rather than counting them
  net/mlx5e: Static mapping of netdev priv resources to/from netdev TX queues
  net/mlx4_en: Use HW counters for rx/tx bytes/packets in PF device
  ...
Linus Torvalds 2015-06-24 16:49:49 -07:00
commit e0456717e4
1418 changed files with 109727 additions and 27780 deletions

@ -0,0 +1,8 @@
What: /sys/bus/pci/drivers/janz-cmodio/.../modulbus_number
Date: May 2010
KernelVersion: 2.6.35
Contact: Ira W. Snyder <ira.snyder@gmail.com>
Description:
Value representing the HEX switch S2 of the janz carrier board CMOD-IO or CAN-PCI2
Read-only: value of the configuration switch (0..15)

@ -39,6 +39,25 @@ Description:
Format is a string, e.g: 00:11:22:33:44:55 for an Ethernet MAC
address.
What: /sys/class/net/<bridge iface>/bridge/group_fwd_mask
Date: January 2012
KernelVersion: 3.2
Contact: netdev@vger.kernel.org
Description:
Bitmask to allow forwarding of link local frames with address
01-80-C2-00-00-0X on a bridge device. Only values that set bits
not matching BR_GROUPFWD_RESTRICTED in net/bridge/br_private.h
are allowed.
Default value 0 does not forward any link local frames.
Restricted bits:
0: 01-80-C2-00-00-00 Bridge Group Address used for STP
1: 01-80-C2-00-00-01 (MAC Control) 802.3 used for MAC PAUSE
2: 01-80-C2-00-00-02 (Link Aggregation) 802.3ad
Any values not setting these bits can be used. Take special
care when forwarding control frames e.g. 802.1X-PAE or LLDP.
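
As a hedged illustration, forwarding of LLDP frames (01-80-C2-00-00-0E,
i.e. bit 14) could be enabled by writing 16384 (0x4000) to this
attribute; the bridge name "br0" below is an assumption:

/* Sketch: set bit 14 of group_fwd_mask on an assumed bridge "br0" so
 * LLDP (01-80-C2-00-00-0E) link local frames are forwarded.  Bits 0-2
 * (BR_GROUPFWD_RESTRICTED) cannot be set. */
#include <stdio.h>

int main(void)
{
        const char *path = "/sys/class/net/br0/bridge/group_fwd_mask";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        fprintf(f, "16384\n");  /* 1 << 14, i.e. 0x4000 */
        return fclose(f) ? 1 : 0;
}
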
What: /sys/class/net/<iface>/broadcast
Date: April 2005
KernelVersion: 2.6.12

@ -0,0 +1,19 @@
What: /sys/class/net/<iface>/termination
Date: May 2010
KernelVersion: 2.6.35
Contact: Ira W. Snyder <ira.snyder@gmail.com>
Description:
Value representing the CAN bus termination
Default: 1 (termination active)
Reading: get actual termination state
Writing: set actual termination state (0=no termination, 1=termination active)
What: /sys/class/net/<iface>/fwinfo
Date: May 2015
KernelVersion: 3.19
Contact: Andreas Gröger <andreas24groeger@gmail.com>
Description:
Firmware stamp of ican3 module
Read-only: 32 byte string identification of the ICAN3 module
(known values: "JANZ-ICAN3 ICANOS 1.xx", "JANZ-ICAN3 CAL/CANopen 1.xx")

@ -1,48 +0,0 @@
* AMD 10GbE PHY driver (amd-xgbe-phy)
Required properties:
- compatible: Should be "amd,xgbe-phy-seattle-v1a" and
"ethernet-phy-ieee802.3-c45"
- reg: Address and length of the register sets for the device
- SerDes Rx/Tx registers
- SerDes integration registers (1/2)
- SerDes integration registers (2/2)
- interrupt-parent: Should be the phandle for the interrupt controller
that services interrupts for this device
- interrupts: Should contain the amd-xgbe-phy interrupt.
Optional properties:
- amd,speed-set: Speed capabilities of the device
0 - 1GbE and 10GbE (default)
1 - 2.5GbE and 10GbE
The following optional properties are represented by an array with each
value corresponding to a particular speed. The first array value represents
the setting for the 1GbE speed, the second value for the 2.5GbE speed and
the third value for the 10GbE speed. All three values are required if the
property is used.
- amd,serdes-blwc: Baseline wandering correction enablement
0 - Off
1 - On
- amd,serdes-cdr-rate: CDR rate speed selection
- amd,serdes-pq-skew: PQ (data sampling) skew
- amd,serdes-tx-amp: TX amplitude boost
- amd,serdes-dfe-tap-config: DFE taps available to run
- amd,serdes-dfe-tap-enable: DFE taps to enable
Example:
xgbe_phy@e1240800 {
compatible = "amd,xgbe-phy-seattle-v1a", "ethernet-phy-ieee802.3-c45";
reg = <0 0xe1240800 0 0x00400>,
<0 0xe1250000 0 0x00060>,
<0 0xe1250080 0 0x00004>;
interrupt-parent = <&gic>;
interrupts = <0 323 4>;
amd,speed-set = <0>;
amd,serdes-blwc = <1>, <1>, <0>;
amd,serdes-cdr-rate = <2>, <2>, <7>;
amd,serdes-pq-skew = <10>, <10>, <30>;
amd,serdes-tx-amp = <15>, <15>, <10>;
amd,serdes-dfe-tap-config = <3>, <3>, <1>;
amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
};

@ -5,12 +5,16 @@ Required properties:
- reg: Address and length of the register sets for the device
- MAC registers
- PCS registers
- SerDes Rx/Tx registers
- SerDes integration registers (1/2)
- SerDes integration registers (2/2)
- interrupt-parent: Should be the phandle for the interrupt controller
that services interrupts for this device
- interrupts: Should contain the amd-xgbe interrupt(s). The first interrupt
listed is required and is the general device interrupt. If the optional
amd,per-channel-interrupt property is specified, then one additional
interrupt for each DMA channel supported by the device should be specified
interrupt for each DMA channel supported by the device should be specified.
The last interrupt listed should be the PCS auto-negotiation interrupt.
- clocks:
- DMA clock for the amd-xgbe device (used for calculating the
correct Rx interrupt watchdog timer value on a DMA channel
@ -19,7 +23,6 @@ Required properties:
- clock-names: Should be the names of the clocks
- "dma_clk" for the DMA clock
- "ptp_clk" for the PTP clock
- phy-handle: See ethernet.txt file in the same directory
- phy-mode: See ethernet.txt file in the same directory
Optional properties:
@ -29,19 +32,46 @@ Optional properties:
- amd,per-channel-interrupt: Indicates that Rx and Tx complete will generate
a unique interrupt for each DMA channel - this requires an additional
interrupt be configured for each DMA channel
- amd,speed-set: Speed capabilities of the device
0 - 1GbE and 10GbE (default)
1 - 2.5GbE and 10GbE
The following optional properties are represented by an array with each
value corresponding to a particular speed. The first array value represents
the setting for the 1GbE speed, the second value for the 2.5GbE speed and
the third value for the 10GbE speed. All three values are required if the
property is used.
- amd,serdes-blwc: Baseline wandering correction enablement
0 - Off
1 - On
- amd,serdes-cdr-rate: CDR rate speed selection
- amd,serdes-pq-skew: PQ (data sampling) skew
- amd,serdes-tx-amp: TX amplitude boost
- amd,serdes-dfe-tap-config: DFE taps available to run
- amd,serdes-dfe-tap-enable: DFE taps to enable
Example:
xgbe@e0700000 {
compatible = "amd,xgbe-seattle-v1a";
reg = <0 0xe0700000 0 0x80000>,
<0 0xe0780000 0 0x80000>;
<0 0xe0780000 0 0x80000>,
<0 0xe1240800 0 0x00400>,
<0 0xe1250000 0 0x00060>,
<0 0xe1250080 0 0x00004>;
interrupt-parent = <&gic>;
interrupts = <0 325 4>,
<0 326 1>, <0 327 1>, <0 328 1>, <0 329 1>;
<0 326 1>, <0 327 1>, <0 328 1>, <0 329 1>,
<0 323 4>;
amd,per-channel-interrupt;
clocks = <&xgbe_dma_clk>, <&xgbe_ptp_clk>;
clock-names = "dma_clk", "ptp_clk";
phy-handle = <&phy>;
phy-mode = "xgmii";
mac-address = [ 02 a1 a2 a3 a4 a5 ];
amd,speed-set = <0>;
amd,serdes-blwc = <1>, <1>, <0>;
amd,serdes-cdr-rate = <2>, <2>, <7>;
amd,serdes-pq-skew = <10>, <10>, <30>;
amd,serdes-tx-amp = <15>, <15>, <10>;
amd,serdes-dfe-tap-config = <3>, <3>, <1>;
amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
};

@ -0,0 +1,15 @@
* EZchip NPS Management Ethernet port driver
Required properties:
- compatible: Should be "ezchip,nps-mgt-enet"
- reg: Address and length of the register set for the device
- interrupts: Should contain the ENET interrupt
Examples:
ethernet@f0003000 {
compatible = "ezchip,nps-mgt-enet";
reg = <0xf0003000 0x44>;
interrupts = <7>;
mac-address = [ 00 11 22 33 44 55 ];
};

@ -0,0 +1,35 @@
* IPQ806x DWMAC Ethernet controller
The device inherits all the properties of the dwmac/stmmac devices
described in the file net/stmmac.txt with the following changes.
Required properties:
- compatible: should be "qcom,ipq806x-gmac" along with "snps,dwmac"
and any applicable more detailed version number
described in net/stmmac.txt
- qcom,nss-common: should contain a phandle to a syscon device mapping the
nss-common registers.
- qcom,qsgmii-csr: should contain a phandle to a syscon device mapping the
qsgmii-csr registers.
Example:
gmac: ethernet@37000000 {
device_type = "network";
compatible = "qcom,ipq806x-gmac";
reg = <0x37000000 0x200000>;
interrupts = <GIC_SPI 220 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "macirq";
qcom,nss-common = <&nss_common>;
qcom,qsgmii-csr = <&qsgmii_csr>;
clocks = <&gcc GMAC_CORE1_CLK>;
clock-names = "stmmaceth";
resets = <&gcc GMAC_CORE1_RESET>;
reset-names = "stmmaceth";
};

@ -7,8 +7,10 @@ Required properties:
Use "cdns,at32ap7000-macb" for other 10/100 usage or use the generic form: "cdns,macb".
Use "cdns,pc302-gem" for Picochip picoXcell pc302 and later devices based on
the Cadence GEM, or the generic form: "cdns,gem".
Use "cdns,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
Use "cdns,sama5d4-gem" for the Gigabit IP available on Atmel sama5d4 SoCs.
Use "atmel,sama5d2-gem" for the GEM IP (10/100) available on Atmel sama5d2 SoCs.
Use "atmel,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
Use "atmel,sama5d4-gem" for the GEM IP (10/100) available on Atmel sama5d4 SoCs.
Use "cdns,zynqmp-gem" for Zynq Ultrascale+ MPSoC.
- reg: Address and length of the register set for the device
- interrupts: Should contain macb interrupt
- phy-mode: See ethernet.txt file in the same directory.

@ -0,0 +1,29 @@
* Marvell International Ltd. NCI NFC Controller
Required properties:
- compatible: Should be "mrvl,nfc-uart".
Optional SoC specific properties:
- pinctrl-names: Contains only one value - "default".
- pinctrl-0: Specifies the pin control groups used for this controller.
- reset-n-io: Output GPIO pin used to reset the chip (active low).
- hci-muxed: Specifies that the chip is muxing NCI over HCI frames.
Optional UART-based chip specific properties:
- flow-control: Specifies that the chip is using RTS/CTS.
- break-control: Specifies that the chip needs specific break management.
Example (for ARM-based BeagleBoard Black with 88W8887 on UART5):
&uart5 {
status = "okay";
nfcmrvluart: nfcmrvluart@5 {
compatible = "mrvl,nfc-uart";
reset-n-io = <&gpio3 16 0>;
hci-muxed;
flow-control;
}
};

@ -1,7 +1,7 @@
* STMicroelectronics SAS. ST21NFCB NFC Controller
* STMicroelectronics SAS. ST NCI NFC Controller
Required properties:
- compatible: Should be "st,st21nfcb-i2c".
- compatible: Should be "st,st21nfcb-i2c" or "st,st21nfcc-i2c".
- clock-frequency: I²C work frequency.
- reg: address on the bus
- interrupt-parent: phandle for the interrupt gpio controller

@ -18,6 +18,9 @@ Optional SoC Specific Properties:
"IRQ Status Read" erratum.
- en2-rf-quirk: Specify that the trf7970a being used has the "EN2 RF"
erratum.
- t5t-rmb-extra-byte-quirk: Specify that the trf7970a has the erratum
where an extra byte is returned by Read Multiple Block commands issued
to Type 5 tags.
Example (for ARM-based BeagleBone with TRF7970A on SPI1):
@ -39,6 +42,7 @@ Example (for ARM-based BeagleBone with TRF7970A on SPI1):
autosuspend-delay = <30000>;
irq-status-read-quirk;
en2-rf-quirk;
t5t-rmb-extra-byte-quirk;
status = "okay";
};
};

@ -0,0 +1,20 @@
* NXP LPC1850 GMAC ethernet controller
This device is a platform glue layer for stmmac.
Please see stmmac.txt for the other unchanged properties.
Required properties:
- compatible: Should contain "nxp,lpc1850-dwmac"
Examples:
mac: ethernet@40010000 {
compatible = "nxp,lpc1850-dwmac", "snps,dwmac-3.611", "snps,dwmac";
reg = <0x40010000 0x2000>;
interrupts = <5>;
interrupt-names = "macirq";
clocks = <&ccu1 CLK_CPU_ETHERNET>;
clock-names = "stmmaceth";
resets = <&rgu 22>;
reset-names = "stmmaceth";
}

@ -30,6 +30,9 @@ Optional Properties:
- max-speed: Maximum PHY supported speed (10, 100, 1000...)
- broken-turn-around: If set, indicates the PHY device does not correctly
release the turn around line low at the end of a MDIO transaction.
Example:
ethernet-phy@0 {

@ -0,0 +1,48 @@
* Renesas Electronics Ethernet AVB
This file provides information on what the device node for the Ethernet AVB
interface contains.
Required properties:
- compatible: "renesas,etheravb-r8a7790" if the device is a part of R8A7790 SoC.
"renesas,etheravb-r8a7794" if the device is a part of R8A7794 SoC.
- reg: offset and length of (1) the register block and (2) the stream buffer.
- interrupts: interrupt specifier for the sole interrupt.
- phy-mode: see ethernet.txt file in the same directory.
- phy-handle: see ethernet.txt file in the same directory.
- #address-cells: number of address cells for the MDIO bus, must be equal to 1.
- #size-cells: number of size cells on the MDIO bus, must be equal to 0.
- clocks: clock phandle and specifier pair.
- pinctrl-0: phandle, referring to a default pin configuration node.
Optional properties:
- interrupt-parent: the phandle for the interrupt controller that services
interrupts for this device.
- pinctrl-names: pin configuration state name ("default").
- renesas,no-ether-link: boolean, specify when a board does not provide a proper
AVB_LINK signal.
- renesas,ether-link-active-low: boolean, specify when the AVB_LINK signal is
active-low instead of normal active-high.
Example:
ethernet@e6800000 {
compatible = "renesas,etheravb-r8a7790";
reg = <0 0xe6800000 0 0x800>, <0 0xee0e8000 0 0x4000>;
interrupt-parent = <&gic>;
interrupts = <0 163 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_ETHERAVB>;
phy-mode = "rmii";
phy-handle = <&phy0>;
pinctrl-0 = <&ether_pins>;
pinctrl-names = "default";
renesas,no-ether-link;
#address-cells = <1>;
#size-cells = <0>;
phy0: ethernet-phy@0 {
reg = <0>;
interrupt-parent = <&gpio2>;
interrupts = <15 IRQ_TYPE_LEVEL_LOW>;
};
};

@ -3,7 +3,7 @@ Rockchip SoC RK3288 10/100/1000 Ethernet driver(GMAC)
The device node has following properties.
Required properties:
- compatible: Can be "rockchip,rk3288-gmac".
- compatible: Can be one of "rockchip,rk3288-gmac", "rockchip,rk3368-gmac"
- reg: addresses and length of the register sets for the device.
- interrupts: Should contain the GMAC interrupts.
- interrupt-names: Should contain the interrupt names "macirq".

@ -0,0 +1,25 @@
* Texas Instruments - dp83867 Giga bit ethernet phy
Required properties:
- reg - The ID number for the phy, usually a small integer
- ti,rx-internal-delay - RGMII Receive Clock Delay - see dt-bindings/net/ti-dp83867.h
for applicable values
- ti,tx-internal-delay - RGMII Transmit Clock Delay - see dt-bindings/net/ti-dp83867.h
for applicable values
- ti,fifo-depth - Transmit FIFO depth - see dt-bindings/net/ti-dp83867.h
for applicable values
Default child nodes are standard Ethernet PHY device
nodes as described in Documentation/devicetree/bindings/net/phy.txt
Example:
ethernet-phy@0 {
reg = <0>;
ti,rx-internal-delay = <DP83867_RGMIIDCTL_2_25_NS>;
ti,tx-internal-delay = <DP83867_RGMIIDCTL_2_75_NS>;
ti,fifo-depth = <DP83867_PHYCR_FIFO_DEPTH_4_B_NIB>;
};
Datasheet can be found:
http://www.ti.com/product/DP83867IR/datasheet

@ -51,6 +51,7 @@ Table of Contents
3.4 Configuring Bonding Manually via Sysfs
3.5 Configuration with Interfaces Support
3.6 Overriding Configuration for Special Cases
3.7 Configuring LACP for 802.3ad mode in a more secure way
4. Querying Bonding Configuration
4.1 Bonding Configuration
@ -178,6 +179,27 @@ active_slave
active slave, or the empty string if there is no active slave or
the current mode does not use an active slave.
ad_actor_sys_prio
In an AD system, this specifies the system priority. The allowed range
is 1 - 65535. If the value is not specified, it takes 65535 as the
default value.
This parameter has effect only in 802.3ad mode and is available through
SysFs interface.
ad_actor_system
In an AD system, this specifies the mac-address for the actor in
protocol packet exchanges (LACPDUs). The value cannot be NULL or
multicast. It is preferred to have the local-admin bit set for this
mac but the driver does not enforce it. If the value is not given, the
system defaults to using the master's mac address as the actor's system
address.
This parameter has effect only in 802.3ad mode and is available through
SysFs interface.
ad_select
Specifies the 802.3ad aggregation selection logic to use. The
@ -220,6 +242,21 @@ ad_select
This option was added in bonding version 3.4.0.
ad_user_port_key
In an AD system, the port-key has three parts as shown below -
Bits Use
00 Duplex
01-05 Speed
06-15 User-defined
This defines the upper 10 bits of the port key. The values can be
from 0 - 1023. If not given, the system defaults to 0.
This parameter has effect only in 802.3ad mode and is available through
SysFs interface.
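
To make the three-part layout concrete, a small sketch composing a full
port key (the duplex and speed codes below are placeholders, not values
taken from this document):

/* Sketch: compose an 802.3ad port key from the documented parts:
 * bit 0 = duplex, bits 1-5 = speed, bits 6-15 = user-defined
 * (the 10 bits set via ad_user_port_key, range 0-1023). */
#include <stdio.h>

static unsigned int port_key(unsigned int duplex,  /* 1 bit   */
                             unsigned int speed,   /* 5 bits  */
                             unsigned int user)    /* 10 bits */
{
        return ((user & 0x3ff) << 6) | ((speed & 0x1f) << 1) | (duplex & 0x1);
}

int main(void)
{
        /* Example only: full duplex, speed code 5, user part 0x2a. */
        printf("port key = 0x%04x\n", port_key(1, 5, 0x2a));
        return 0;
}
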
all_slaves_active
Specifies that duplicate frames (received on inactive ports) should be
@ -1622,6 +1659,53 @@ output port selection.
This feature first appeared in bonding driver version 3.7.0 and support for
output slave selection was limited to round-robin and active-backup modes.
3.7 Configuring LACP for 802.3ad mode in a more secure way
----------------------------------------------------------
When using 802.3ad bonding mode, the Actor (host) and Partner (switch)
exchange LACPDUs. These LACPDUs cannot be sniffed, because they are
destined to link local mac addresses (which switches/bridges are not
supposed to forward). However, most of the values are easily predictable
or are simply the machine's MAC address (which is trivially known to all
other hosts in the same L2). This implies that other machines in the L2
domain can spoof LACPDU packets from other hosts to the switch and potentially
cause mayhem by joining (from the point of view of the switch) another
machine's aggregate, thus receiving a portion of that host's incoming
traffic and / or spoofing traffic from that machine themselves (potentially
even successfully terminating some portion of flows). Though this is not
a likely scenario, one could avoid this possibility by simply configuring
a few bonding parameters:
(a) ad_actor_system : You can set a random mac-address that can be used for
these LACPDU exchanges. The value cannot be NULL or multicast.
Also it's preferable to set the local-admin bit. The following shell code
generates a random mac-address as described above.
# sys_mac_addr=$(printf '%02x:%02x:%02x:%02x:%02x:%02x' \
$(( (RANDOM & 0xFE) | 0x02 )) \
$(( RANDOM & 0xFF )) \
$(( RANDOM & 0xFF )) \
$(( RANDOM & 0xFF )) \
$(( RANDOM & 0xFF )) \
$(( RANDOM & 0xFF )))
# echo $sys_mac_addr > /sys/class/net/bond0/bonding/ad_actor_system
(b) ad_actor_sys_prio : Randomize the system priority. The default value
is 65535, but the system accepts values from 1 - 65535. The following shell
code generates a random priority and sets it.
# sys_prio=$(( 1 + RANDOM + RANDOM ))
# echo $sys_prio > /sys/class/net/bond0/bonding/ad_actor_sys_prio
(c) ad_user_port_key : Use the user portion of the port-key. The default
keeps this empty. These are the upper 10 bits of the port-key and the value
ranges from 0 - 1023. The following shell code generates these 10 bits and
sets them.
# usr_port_key=$(( RANDOM & 0x3FF ))
# echo $usr_port_key > /sys/class/net/bond0/bonding/ad_user_port_key
4 Querying Bonding Configuration
=================================

@ -268,6 +268,9 @@ solution for a couple of reasons:
struct can_frame {
canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */
__u8 can_dlc; /* frame payload length in byte (0 .. 8) */
__u8 __pad; /* padding */
__u8 __res0; /* reserved / padding */
__u8 __res1; /* reserved / padding */
__u8 data[8] __attribute__((aligned(8)));
};
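
For context, a minimal userspace sketch that sends one such frame over
an assumed "can0" interface using a CAN_RAW socket (interface name and
CAN ID are illustrative):

/* Sketch: send a single CAN frame; struct can_frame comes from
 * <linux/can.h> and has the layout shown above. */
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr;
        struct sockaddr_can addr = { .can_family = AF_CAN };
        struct can_frame frame = { .can_id = 0x123, .can_dlc = 2,
                                   .data = { 0xde, 0xad } };

        if (s < 0) {
                perror("socket");
                return 1;
        }
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
        if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) {  /* resolve ifindex */
                perror("SIOCGIFINDEX");
                return 1;
        }
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        if (write(s, &frame, sizeof(frame)) != sizeof(frame))
                perror("write");
        close(s);
        return 0;
}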

@ -8,6 +8,7 @@ the data center network to provide multi-bit feedback to the end hosts.
To enable it on end hosts:
sysctl -w net.ipv4.tcp_congestion_control=dctcp
sysctl -w net.ipv4.tcp_ecn_fallback=0 (optional)
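
Besides the system-wide sysctl above, the congestion control algorithm
can also be requested per socket; a minimal sketch (the socket setup is
illustrative):

/* Sketch: select DCTCP on one TCP socket via TCP_CONGESTION instead of
 * changing the system-wide default. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        const char *cc = "dctcp";

        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc)) < 0)
                perror("TCP_CONGESTION");  /* e.g. dctcp not available */
        return 0;
}
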
All switches in the data center network running DCTCP must support ECN
marking and be configured for marking when reaching defined switch buffer

@ -30,8 +30,8 @@ int sd = socket(PF_IEEE802154, SOCK_DGRAM, 0);
The address family, socket addresses etc. are defined in the
include/net/af_ieee802154.h header or in the special header
in our userspace package (see either linux-zigbee sourceforge download page
or git tree at git://linux-zigbee.git.sourceforge.net/gitroot/linux-zigbee).
in the userspace package (see either http://wpan.cakelab.org/ or the
git tree at https://github.com/linux-wpan/wpan-tools).
One can use SOCK_RAW for passing raw data towards device xmit function. YMMV.
@ -49,15 +49,6 @@ Like with WiFi, there are several types of devices implementing IEEE 802.15.4.
Those types of devices require different approach to be hooked into Linux kernel.
MLME - MAC Level Management
============================
Most of IEEE 802.15.4 MLME interfaces are directly mapped on netlink commands.
See the include/net/nl802154.h header. Our userspace tools package
(see above) provides CLI configuration utility for radio interfaces and simple
coordinator for IEEE 802.15.4 networks as an example users of MLME protocol.
HardMAC
=======
@ -75,8 +66,6 @@ net_device with a pointer to struct ieee802154_mlme_ops instance. The fields
assoc_req, assoc_resp, disassoc_req, start_req, and scan_req are optional.
All other fields are required.
We provide an example of simple HardMAC driver at drivers/ieee802154/fakehard.c
SoftMAC
=======
@ -89,7 +78,8 @@ stack interface for network sniffers (e.g. WireShark).
This layer is going to be extended soon.
See header include/net/mac802154.h and several drivers in drivers/ieee802154/.
See header include/net/mac802154.h and several drivers in
drivers/net/ieee802154/.
Device drivers API
@ -114,18 +104,17 @@ Moreover IEEE 802.15.4 device operations structure should be filled.
Fake drivers
============
In addition there are two drivers available which simulate real devices with
HardMAC (fakehard) and SoftMAC (fakelb - IEEE 802.15.4 loopback driver)
interfaces. This option provides possibility to test and debug stack without
usage of real hardware.
In addition there is a driver available which simulates a real device with
SoftMAC (fakelb - IEEE 802.15.4 loopback driver) interface. This option
provides possibility to test and debug stack without usage of real hardware.
See sources in drivers/ieee802154 folder for more details.
See sources in drivers/net/ieee802154 folder for more details.
6LoWPAN Linux implementation
============================
The IEEE 802.15.4 standard specifies an MTU of 128 bytes, yielding about 80
The IEEE 802.15.4 standard specifies an MTU of 127 bytes, yielding about 80
octets of actual MAC payload once security is turned on, on a wireless link
with a link throughput of 250 kbps or less. The 6LoWPAN adaptation format
[RFC4944] was specified to carry IPv6 datagrams over such constrained links,
In September 2011 the standard update was published - [RFC6282].
It deprecates HC1 and HC2 compression and defines IPHC encoding format which is
used in this Linux implementation.
All the code related to 6lowpan you may find in files: net/ieee802154/6lowpan.*
All the code related to 6lowpan you may find in files: net/6lowpan/*
and net/ieee802154/6lowpan/*
To setup 6lowpan interface you need (busybox release > 1.17.0):
1. Add IEEE802.15.4 interface and initialize PANid;

@ -267,6 +267,15 @@ tcp_ecn - INTEGER
but do not request ECN on outgoing connections.
Default: 2
tcp_ecn_fallback - BOOLEAN
If the kernel detects that ECN connection misbehaves, enable fall
back to non-ECN. Currently, this knob implements the fallback
from RFC3168, section 6.1.1.1., but we reserve that in future,
additional detection mechanisms could be implemented under this
knob. The value is not used, if tcp_ecn or per route (or congestion
control) ECN settings are disabled.
Default: 1 (fallback enabled)
tcp_fack - BOOLEAN
Enable FACK congestion avoidance and fast retransmission.
The value is not used, if tcp_sack is not enabled.
@ -742,8 +751,10 @@ IP Variables:
ip_local_port_range - 2 INTEGERS
Defines the local port range that is used by TCP and UDP to
choose the local port. The first number is the first, the
second the last local port number. The default values are
32768 and 61000 respectively.
second the last local port number.
If possible, it is better if these numbers have different parity
(one even and one odd value).
The default values are 32768 and 60999 respectively.
ip_local_reserved_ports - list of comma separated ranges
Specify the ports which are reserved for known third-party
@ -766,7 +777,7 @@ ip_local_reserved_ports - list of comma separated ranges
ip_local_port_range, e.g.:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32000 61000
32000 60999
$ cat /proc/sys/net/ipv4/ip_local_reserved_ports
8080,9148
@ -1213,6 +1224,14 @@ auto_flowlabels - BOOLEAN
FALSE: disabled
Default: false
flowlabel_state_ranges - BOOLEAN
Split the flow label number space into two ranges. 0-0x7FFFF is
reserved for the IPv6 flow manager facility, 0x80000-0xFFFFF
is reserved for stateless flow labels as described in RFC6437.
TRUE: enabled
FALSE: disabled
Default: true
anycast_src_echo_reply - BOOLEAN
Controls the use of anycast addresses as source addresses for ICMPv6
echo reply

@ -1,6 +1,6 @@
HOWTO for the linux packet generator
HOWTO for the linux packet generator
------------------------------------
Enable CONFIG_NET_PKTGEN to compile and build pktgen either in-kernel
@ -50,17 +50,33 @@ For ixgbe use e.g. "30" resulting in approx 33K interrupts/sec (1/30*10^6):
# ethtool -C ethX rx-usecs 30
Viewing threads
===============
/proc/net/pktgen/kpktgend_0
Name: kpktgend_0 max_before_softirq: 10000
Running:
Stopped: eth1
Result: OK: max_before_softirq=10000
Kernel threads
==============
Pktgen creates a thread for each CPU with affinity to that CPU.
Which is controlled through procfile /proc/net/pktgen/kpktgend_X.
Most important are the devices assigned to the thread. Note that a
device can only belong to one thread.
Example: /proc/net/pktgen/kpktgend_0
Running:
Stopped: eth4@0
Result: OK: add_device=eth4@0
Most important are the devices assigned to the thread.
The two basic thread commands are:
* add_device DEVICE@NAME -- adds a single device
* rem_device_all -- remove all associated devices
When adding a device to a thread, a corresponding procfile is created
which is used for configuring this device. Thus, device names need to
be unique.
To support adding the same device to multiple threads, which is useful
with multi queue NICs, the device naming scheme is extended with "@":
device@something
The part after "@" can be anything, but it is customary to use the thread
number.
Viewing devices
===============
@ -69,29 +85,32 @@ The Params section holds configured information. The Current section
holds running statistics. The Result is printed after a run or after
interruption. Example:
/proc/net/pktgen/eth1
/proc/net/pktgen/eth4@0
Params: count 10000000 min_pkt_size: 60 max_pkt_size: 60
frags: 0 delay: 0 clone_skb: 1000000 ifname: eth1
Params: count 100000 min_pkt_size: 60 max_pkt_size: 60
frags: 0 delay: 0 clone_skb: 64 ifname: eth4@0
flows: 0 flowlen: 0
dst_min: 10.10.11.2 dst_max:
src_min: src_max:
src_mac: 00:00:00:00:00:00 dst_mac: 00:04:23:AC:FD:82
udp_src_min: 9 udp_src_max: 9 udp_dst_min: 9 udp_dst_max: 9
src_mac_count: 0 dst_mac_count: 0
Flags:
Current:
pkts-sofar: 10000000 errors: 39664
started: 1103053986245187us stopped: 1103053999346329us idle: 880401us
seq_num: 10000011 cur_dst_mac_offset: 0 cur_src_mac_offset: 0
cur_saddr: 0x10a0a0a cur_daddr: 0x20b0a0a
cur_udp_dst: 9 cur_udp_src: 9
queue_map_min: 0 queue_map_max: 0
dst_min: 192.168.81.2 dst_max:
src_min: src_max:
src_mac: 90:e2:ba:0a:56:b4 dst_mac: 00:1b:21:3c:9d:f8
udp_src_min: 9 udp_src_max: 109 udp_dst_min: 9 udp_dst_max: 9
src_mac_count: 0 dst_mac_count: 0
Flags: UDPSRC_RND NO_TIMESTAMP QUEUE_MAP_CPU
Current:
pkts-sofar: 100000 errors: 0
started: 623913381008us stopped: 623913396439us idle: 25us
seq_num: 100001 cur_dst_mac_offset: 0 cur_src_mac_offset: 0
cur_saddr: 192.168.8.3 cur_daddr: 192.168.81.2
cur_udp_dst: 9 cur_udp_src: 42
cur_queue_map: 0
flows: 0
Result: OK: 13101142(c12220741+d880401) usec, 10000000 (60byte,0frags)
763292pps 390Mb/sec (390805504bps) errors: 39664
Result: OK: 15430(c15405+d25) usec, 100000 (60byte,0frags)
6480562pps 3110Mb/sec (3110669760bps) errors: 0
Configuring threads and devices
================================
Configuring devices
===================
This is done via the /proc interface, and most easily done via pgset
as defined in the sample scripts.
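
As a hedged sketch of what pgset does underneath, each command is just a
string written to the corresponding /proc/net/pktgen file; the thread
number, "eth4@0" device name and addresses below are taken from the
examples above or assumed:

/* Sketch: drive pktgen directly by writing command strings to its
 * /proc files, which is all the pgset helper does. */
#include <stdio.h>

static int pg_write(const char *path, const char *cmd)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", cmd);
        return fclose(f);
}

int main(void)
{
        pg_write("/proc/net/pktgen/kpktgend_0", "rem_device_all");
        pg_write("/proc/net/pktgen/kpktgend_0", "add_device eth4@0");
        pg_write("/proc/net/pktgen/eth4@0", "count 100000");
        pg_write("/proc/net/pktgen/eth4@0", "pkt_size 60");
        pg_write("/proc/net/pktgen/eth4@0", "dst 192.168.81.2");
        pg_write("/proc/net/pktgen/eth4@0", "dst_mac 00:1b:21:3c:9d:f8");
        /* Writing "start" to pgctrl blocks until the run finishes. */
        pg_write("/proc/net/pktgen/pgctrl", "start");
        return 0;
}
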
@ -126,7 +145,7 @@ Examples:
To select queue 1 of a given device,
use queue_map_min=1 and queue_map_max=1
pgset "src_mac_count 1" Sets the number of MACs we'll range through.
pgset "src_mac_count 1" Sets the number of MACs we'll range through.
The 'minimum' MAC is what you set with srcmac.
pgset "dst_mac_count 1" Sets the number of MACs we'll range through.
@ -145,6 +164,7 @@ Examples:
UDPCSUM,
IPSEC # IPsec encapsulation (needs CONFIG_XFRM)
NODE_ALLOC # node specific memory allocation
NO_TIMESTAMP # disable timestamping
pgset spi SPI_VALUE Set specific SA used to transform packet.
@ -192,24 +212,43 @@ Examples:
pgset "rate 300M" set rate to 300 Mb/s
pgset "ratep 1000000" set rate to 1Mpps
pgset "xmit_mode netif_receive" RX inject into stack netif_receive_skb()
Works with "burst" but not with "clone_skb".
Default xmit_mode is "start_xmit".
Sample scripts
==============
A collection of small tutorial scripts for pktgen is in the
samples/pktgen directory:
A collection of tutorial scripts and helpers for pktgen is in the
samples/pktgen directory. The helper parameters.sh file supports easy
and consistent parameter parsing across the sample scripts.
Usage example and help:
./pktgen_sample01_simple.sh -i eth4 -m 00:1B:21:3C:9D:F8 -d 192.168.8.2
Usage: ./pktgen_sample01_simple.sh [-vx] -i ethX
-i : ($DEV) output interface/device (required)
-s : ($PKT_SIZE) packet size
-d : ($DEST_IP) destination IP
-m : ($DST_MAC) destination MAC-addr
-t : ($THREADS) threads to start
-c : ($SKB_CLONE) SKB clones send before alloc new SKB
-b : ($BURST) HW level bursting of SKBs
-v : ($VERBOSE) verbose
-x : ($DEBUG) debug
The global variables being set are also listed. E.g. the required
interface/device parameter "-i" sets variable $DEV. Copy the
pktgen_sampleXX scripts and modify them to fit your own needs.
The old scripts:
pktgen.conf-1-1 # 1 CPU 1 dev
pktgen.conf-1-2 # 1 CPU 2 dev
pktgen.conf-2-1 # 2 CPU's 1 dev
pktgen.conf-2-2 # 2 CPU's 2 dev
pktgen.conf-1-1-rdos # 1 CPU 1 dev w. route DoS
pktgen.conf-1-1-ip6 # 1 CPU 1 dev ipv6
pktgen.conf-1-1-ip6-rdos # 1 CPU 1 dev ipv6 w. route DoS
pktgen.conf-1-1-flows # 1 CPU 1 dev multiple flows.
Run in shell: ./pktgen.conf-X-Y
This does all the setup including sending.
Interrupt affinity
===================
@ -217,6 +256,9 @@ Note that when adding devices to a specific CPU it is a good idea to
also assign /proc/irq/XX/smp_affinity so that the TX interrupts are bound
to the same CPU. This reduces cache bouncing when freeing skbs.
Plus using the device flag QUEUE_MAP_CPU, which maps the SKB's TX queue
to the running thread's CPU (directly from smp_processor_id()).
Enable IPsec
============
Default IPsec transformation with ESP encapsulation plus transport mode
@ -237,18 +279,19 @@ Current commands and configuration options
start
stop
reset
** Thread commands:
add_device
rem_device_all
max_before_softirq
** Device commands:
count
clone_skb
burst
debug
frags
@ -257,10 +300,17 @@ delay
src_mac_count
dst_mac_count
pkt_size
pkt_size
min_pkt_size
max_pkt_size
queue_map_min
queue_map_max
skb_priority
tos (ipv4)
traffic_class (ipv6)
mpls
udp_src_min
@ -269,6 +319,8 @@ udp_src_max
udp_dst_min
udp_dst_max
node
flag
IPSRC_RND
IPDST_RND
@ -287,6 +339,9 @@ flag
UDPCSUM
IPSEC
NODE_ALLOC
NO_TIMESTAMP
spi (ipsec)
dst_min
dst_max
@ -299,8 +354,10 @@ src_mac
clear_counters
dst6
src6
dst6
dst6_max
dst6_min
flows
flowlen
@ -308,6 +365,17 @@ flowlen
rate
ratep
xmit_mode <start_xmit|netif_receive>
vlan_cfi
vlan_id
vlan_p
svlan_cfi
svlan_id
svlan_p
References:
ftp://robur.slu.se/pub/Linux/net-development/pktgen-testing/
ftp://robur.slu.se/pub/Linux/net-development/pktgen-testing/examples/

@ -1,59 +1,360 @@
Switch (and switch-ish) device drivers HOWTO
===========================
Ethernet switch device driver model (switchdev)
===============================================
Copyright (c) 2014 Jiri Pirko <jiri@resnulli.us>
Copyright (c) 2014-2015 Scott Feldman <sfeldma@gmail.com>
Please note that the word "switch" is here used in very generic meaning.
This includes devices supporting L2/L3 but also various flow offloading chips,
including switches embedded into SR-IOV NICs.
Lets describe a topology a bit. Imagine the following example:
The Ethernet switch device driver model (switchdev) is an in-kernel driver
model for switch devices which offload the forwarding (data) plane from the
kernel.
+----------------------------+ +---------------+
| SOME switch chip | | CPU |
+----------------------------+ +---------------+
port1 port2 port3 port4 MNGMNT | PCI-E |
| | | | | +---------------+
PHY PHY | | | | NIC0 NIC1
| | | | | |
| | +- PCI-E -+ | |
| +------- MII -------+ |
+------------- MII ------------+
Figure 1 is a block diagram showing the components of the switchdev model for
an example setup using a data-center-class switch ASIC chip. Other setups
with SR-IOV or soft switches, such as OVS, are possible.
In this example, there are two independent lines between the switch silicon
and CPU. NIC0 and NIC1 drivers are not aware of a switch presence. They are
separate from the switch driver. SOME switch chip is managed by a driver
via PCI-E device MNGMNT. Note that MNGMNT device, NIC0 and NIC1 may be
connected to some other type of bus.
Now, for the previous example show the representation in kernel:
                             User-space tools

       user space                   |
      +-------------------------------------------------------------------+
       kernel                       | Netlink
                                    |
                     +--------------+-------------------------------+
                     |         Network stack                        |
                     |           (Linux)                            |
                     |                                              |
                     +----------------------------------------------+

                           sw1p2     sw1p4     sw1p6
                      sw1p1  +  sw1p3  +  sw1p5  +           eth1
                        +    |    +    |    +    |             +
                        |    |    |    |    |    |             |
                     +--+----+----+----+----+----+---+   +-----+-----+
                     |         Switch driver         |   |   mgmt    |
                     |        (this document)        |   |  driver   |
                     |                               |   |           |
                     +--------------+----------------+   +-----------+
                                    |
       kernel                       | HW bus (eg PCI)
      +-------------------------------------------------------------------+
       hardware                     |
                     +--------------+---+------------+
                     |        Switch device (sw1)    |
                     |  +----+                       +--------+
                     |  |    v offloaded data path   | mgmt port
                     |  |    |                       |
                     +--|----|----+----+----+----+---+
                        |    |    |    |    |    |
                        +    +    +    +    +    +
                       p1   p2   p3   p4   p5   p6

                             front-panel ports
+----------------------------+ +---------------+
| SOME switch chip | | CPU |
+----------------------------+ +---------------+
sw0p0 sw0p1 sw0p2 sw0p3 MNGMNT | PCI-E |
| | | | | +---------------+
PHY PHY | | | | eth0 eth1
| | | | | |
| | +- PCI-E -+ | |
| +------- MII -------+ |
+------------- MII ------------+
Fig 1.
Lets call the example switch driver for SOME switch chip "SOMEswitch". This
driver takes care of PCI-E device MNGMNT. There is a netdevice instance sw0pX
created for each port of a switch. These netdevices are instances
of "SOMEswitch" driver. sw0pX netdevices serve as a "representation"
of the switch chip. eth0 and eth1 are instances of some other existing driver.
The only difference of the switch-port netdevice from the ordinary netdevice
is that it implements a couple more NDOs:
Include Files
-------------
ndo_switch_parent_id_get - This returns the same ID for two port netdevices
of the same physical switch chip. This is
mandatory to be implemented by all switch drivers
and serves the caller for recognition of a port
netdevice.
ndo_switch_parent_* - Functions that serve for a manipulation of the switch
chip itself (it can be thought of as a "parent" of the
port, therefore the name). They are not port-specific.
Caller might use arbitrary port netdevice of the same
switch and it will make no difference.
ndo_switch_port_* - Functions that serve for a port-specific manipulation.
#include <linux/netdevice.h>
#include <net/switchdev.h>
Configuration
-------------
Use "depends NET_SWITCHDEV" in driver's Kconfig to ensure switchdev model
support is built for driver.
Switch Ports
------------
On switchdev driver initialization, the driver will allocate and register a
struct net_device (using register_netdev()) for each enumerated physical switch
port, called the port netdev. A port netdev is the software representation of
the physical port and provides a conduit for control traffic to/from the
controller (the kernel) and the network, as well as an anchor point for higher
level constructs such as bridges, bonds, VLANs, tunnels, and L3 routers. Using
standard netdev tools (iproute2, ethtool, etc), the port netdev can also
provide to the user access to the physical properties of the switch port such
as PHY link state and I/O statistics.
There is (currently) no higher-level kernel object for the switch beyond the
port netdevs. All of the switchdev driver ops are netdev ops or switchdev ops.
A switch management port is outside the scope of the switchdev driver model.
Typically, the management port is not participating in offloaded data plane and
is loaded with a different driver, such as a NIC driver, on the management port
device.
Port Netdev Naming
^^^^^^^^^^^^^^^^^^
Udev rules should be used for port netdev naming, using some unique attribute
of the port as a key, for example the port MAC address or the port PHYS name.
Hard-coding of kernel netdev names within the driver is discouraged; let the
kernel pick the default netdev name, and let udev set the final name based on a
port attribute.
Using port PHYS name (ndo_get_phys_port_name) for the key is particularly
useful for dynamically-named ports where the device names its ports based on
external configuration. For example, if a physical 40G port is split logically
into 4 10G ports, resulting in 4 port netdevs, the device can give a unique
name for each port using port PHYS name. The udev rule would be:
SUBSYSTEM=="net", ACTION=="add", DRIVER="<driver>", ATTR{phys_port_name}!="", \
NAME="$attr{phys_port_name}"
Suggested naming convention is "swXpYsZ", where X is the switch name or ID, Y
is the port name or ID, and Z is the sub-port name or ID. For example, sw1p1s0
would be sub-port 0 on port 1 on switch 1.
Switch ID
^^^^^^^^^
The switchdev driver must implement the switchdev op switchdev_port_attr_get
for SWITCHDEV_ATTR_PORT_PARENT_ID for each port netdev, returning the same
physical ID for each port of a switch. The ID must be unique between switches
on the same system. The ID does not need to be unique between switches on
different systems.
The switch ID is used to locate ports on a switch and to know if aggregated
ports belong to the same switch.
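
A hedged, in-driver sketch of that op; "struct my_port" and its fields
are hypothetical, and the attr layout assumed here is the one from this
kernel's net/switchdev.h:

/* Sketch: return the same parent (switch) ID for every port netdev of
 * one ASIC, as required for SWITCHDEV_ATTR_PORT_PARENT_ID. */
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/string.h>
#include <net/switchdev.h>

struct my_port {
        u64 switch_id;          /* same value on all ports of the ASIC */
};

static int my_port_attr_get(struct net_device *dev,
                            struct switchdev_attr *attr)
{
        struct my_port *port = netdev_priv(dev);

        switch (attr->id) {
        case SWITCHDEV_ATTR_PORT_PARENT_ID:
                attr->u.ppid.id_len = sizeof(port->switch_id);
                memcpy(attr->u.ppid.id, &port->switch_id,
                       attr->u.ppid.id_len);
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}

static const struct switchdev_ops my_switchdev_ops = {
        .switchdev_port_attr_get = my_port_attr_get,
};
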
Port Features
^^^^^^^^^^^^^
NETIF_F_NETNS_LOCAL
If the switchdev driver (and device) only supports offloading of the default
network namespace (netns), the driver should set this feature flag to prevent
the port netdev from being moved out of the default netns. A netns-aware
driver/device would not set this flag and be responsible for partitioning
hardware to preserve netns containment. This means hardware cannot forward
traffic from a port in one namespace to another port in another namespace.
Port Topology
^^^^^^^^^^^^^
The port netdevs representing the physical switch ports can be organized into
higher-level switching constructs. The default construct is a standalone
router port, used to offload L3 forwarding. Two or more ports can be bonded
together to form a LAG. Two or more ports (or LAGs) can be bridged to bridge
L2 networks. VLANs can be applied to sub-divide L2 networks. L2-over-L3
tunnels can be built on ports. These constructs are built using standard Linux
tools such as the bridge driver, the bonding/team drivers, and netlink-based
tools such as iproute2.
The switchdev driver can know a particular port's position in the topology by
monitoring NETDEV_CHANGEUPPER notifications. For example, a port moved into a
bond will see its upper master change. If that bond is moved into a bridge,
the bond's upper master will change. And so on. The driver will track such
movements to know what position a port is in in the overall topology by
registering for netdevice events and acting on NETDEV_CHANGEUPPER.
L2 Forwarding Offload
---------------------
The idea is to offload the L2 data forwarding (switching) path from the kernel
to the switchdev device by mirroring bridge FDB entries down to the device. An
FDB entry is the {port, MAC, VLAN} tuple forwarding destination.
To offload L2 bridging, the switchdev driver/device should support:
- Static FDB entries installed on a bridge port
- Notification of learned/forgotten src mac/vlans from device
- STP state changes on the port
- VLAN flooding of multicast/broadcast and unknown unicast packets
Static FDB Entries
^^^^^^^^^^^^^^^^^^
The switchdev driver should implement ndo_fdb_add, ndo_fdb_del and ndo_fdb_dump
to support static FDB entries installed to the device. Static bridge FDB
entries are installed, for example, using iproute2 bridge cmd:
bridge fdb add ADDR dev DEV [vlan VID] [self]
The driver should use the helper switchdev_port_fdb_xxx ops for ndo_fdb_xxx
ops, and handle add/delete/dump of SWITCHDEV_OBJ_PORT_FDB object using
switchdev_port_obj_xxx ops.
XXX: what should be done if offloading this rule to hardware fails (for
example, due to full capacity in hardware tables) ?
Note: by default, the bridge does not filter on VLAN and only bridges untagged
traffic. To enable VLAN support, turn on VLAN filtering:
echo 1 >/sys/class/net/<bridge>/bridge/vlan_filtering
Notification of Learned/Forgotten Source MAC/VLANs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The switch device will learn/forget source MAC address/VLAN on ingress packets
and notify the switch driver of the mac/vlan/port tuples. The switch driver,
in turn, will notify the bridge driver using the switchdev notifier call:
err = call_switchdev_notifiers(val, dev, info);
Where val is SWITCHDEV_FDB_ADD when learning and SWITCHDEV_FDB_DEL when
forgetting, and info points to a struct switchdev_notifier_fdb_info. On
SWITCHDEV_FDB_ADD, the bridge driver will install the FDB entry into the
bridge's FDB and mark the entry as NTF_EXT_LEARNED. The iproute2 bridge
command will label these entries "offload":
$ bridge fdb
52:54:00:12:35:01 dev sw1p1 master br0 permanent
00:02:00:00:02:00 dev sw1p1 master br0 offload
00:02:00:00:02:00 dev sw1p1 self
52:54:00:12:35:02 dev sw1p2 master br0 permanent
00:02:00:00:03:00 dev sw1p2 master br0 offload
00:02:00:00:03:00 dev sw1p2 self
33:33:00:00:00:01 dev eth0 self permanent
01:00:5e:00:00:01 dev eth0 self permanent
33:33:ff:00:00:00 dev eth0 self permanent
01:80:c2:00:00:0e dev eth0 self permanent
33:33:00:00:00:01 dev br0 self permanent
01:00:5e:00:00:01 dev br0 self permanent
33:33:ff:12:35:01 dev br0 self permanent
Learning on the port should be disabled on the bridge using the bridge command:
bridge link set dev DEV learning off
Learning on the device port should be enabled, as well as learning_sync:
bridge link set dev DEV learning on self
bridge link set dev DEV learning_sync on self
Learning_sync attribute enables syncing of the learned/forgotten FDB entry to
the bridge's FDB. It's possible, but not optimal, to enable learning on the
device port and on the bridge port, and disable learning_sync.
To support learning and learning_sync port attributes, the driver implements
switchdev op switchdev_port_attr_get/set for SWITCHDEV_ATTR_PORT_BRIDGE_FLAGS.
The driver should initialize the attributes to the hardware defaults.
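
A hedged sketch of the notifier call from a driver's point of view, for
when its hardware reports a newly learned {MAC, VLAN} on a port (the
wrapper function is hypothetical; the field names assume this kernel's
struct switchdev_notifier_fdb_info):

/* Sketch: push a hardware-learned FDB entry up to the bridge via the
 * switchdev notifier chain; SWITCHDEV_FDB_DEL would be used when the
 * entry is forgotten/aged out by the device. */
#include <linux/netdevice.h>
#include <net/switchdev.h>

static int my_port_fdb_learned(struct net_device *dev,
                               const unsigned char *mac, u16 vid)
{
        struct switchdev_notifier_fdb_info info = {
                .addr = mac,
                .vid = vid,
        };

        return call_switchdev_notifiers(SWITCHDEV_FDB_ADD, dev, &info.info);
}
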
FDB Ageing
^^^^^^^^^^
There are two FDB ageing models supported: 1) ageing by the device, and 2)
ageing by the kernel. Ageing by the device is preferred if many FDB entries
are supported. The driver calls call_switchdev_notifiers(SWITCHDEV_FDB_DEL,
...) to age out the FDB entry. In this model, ageing by the kernel should be
turned off. XXX: how to turn off ageing in kernel on a per-port basis or
otherwise prevent the kernel from ageing out the FDB entry?
In the kernel ageing model, the standard bridge ageing mechanism is used to age
out stale FDB entries. To keep an FDB entry "alive", the driver should refresh
the FDB entry by calling call_switchdev_notifiers(SWITCHDEV_FDB_ADD, ...). The
notification will reset the FDB entry's last-used time to now. The driver
should rate limit refresh notifications, for example, no more than once a
second. If the FDB entry expires, fdb_delete is called to remove entry from
the device.
STP State Change on Port
^^^^^^^^^^^^^^^^^^^^^^^^
Internally or with a third-party STP protocol implementation (e.g. mstpd), the
bridge driver maintains the STP state for ports, and will notify the switch
driver of STP state change on a port using the switchdev op
switchdev_attr_port_set for SWITCHDEV_ATTR_PORT_STP_UPDATE.
State is one of BR_STATE_*. The switch driver can use STP state updates to
update ingress packet filter list for the port. For example, if port is
DISABLED, no packets should pass, but if port moves to BLOCKED, then STP BPDUs
and other IEEE 01:80:c2:xx:xx:xx link-local multicast packets can pass.
Note that STP BDPUs are untagged and STP state applies to all VLANs on the port
so packet filters should be applied consistently across untagged and tagged
VLANs on the port.
Flooding L2 domain
^^^^^^^^^^^^^^^^^^
For a given L2 VLAN domain, the switch device should flood multicast/broadcast
and unknown unicast packets to all ports in domain, if allowed by port's
current STP state. The switch driver, knowing which ports are within which
vlan L2 domain, can program the switch device for flooding. The packet should
also be sent to the port netdev for processing by the bridge driver. The
bridge should not reflood the packet to the same ports the device flooded.
XXX: the mechanism to avoid duplicate flood packets is being discussed.
It is possible for the switch device to not handle flooding and push the
packets up to the bridge driver for flooding. This is not ideal as the number
of ports scales in the L2 domain, since the device is much more efficient at
flooding packets than software.
IGMP Snooping
^^^^^^^^^^^^^
XXX: complete this section
L3 Routing Offload
------------------
Offloading L3 routing requires that device be programmed with FIB entries from
the kernel, with the device doing the FIB lookup and forwarding. The device
does a longest prefix match (LPM) on FIB entries matching route prefix and
forwards the packet to the matching FIB entry's nexthop(s) egress ports.
To program the device, the driver implements support for
SWITCHDEV_OBJ_IPV[4|6]_FIB object using switchdev_port_obj_xxx ops.
switchdev_port_obj_add is used for both adding a new FIB entry to the device,
or modifying an existing entry on the device.
XXX: Currently, only SWITCHDEV_OBJ_IPV4_FIB objects are supported.
SWITCHDEV_OBJ_IPV4_FIB object passes:
struct switchdev_obj_ipv4_fib { /* IPV4_FIB */
u32 dst;
int dst_len;
struct fib_info *fi;
u8 tos;
u8 type;
u32 nlflags;
u32 tb_id;
} ipv4_fib;
to add/modify/delete IPv4 dst/dest_len prefix on table tb_id. The *fi
structure holds details on the route and route's nexthops. *dev is one of the
port netdevs mentioned in the route's nexthop list. If the output port netdevs
referenced in the route's nexthop list don't all have the same switch ID, the
driver is not called to add/modify/delete the FIB entry.
Routes offloaded to the device are labeled with "offload" in the ip route
listing:
$ ip route show
default via 192.168.0.2 dev eth0
11.0.0.0/30 dev sw1p1 proto kernel scope link src 11.0.0.2 offload
11.0.0.4/30 via 11.0.0.1 dev sw1p1 proto zebra metric 20 offload
11.0.0.8/30 dev sw1p2 proto kernel scope link src 11.0.0.10 offload
11.0.0.12/30 via 11.0.0.9 dev sw1p2 proto zebra metric 20 offload
12.0.0.2 proto zebra metric 30 offload
nexthop via 11.0.0.1 dev sw1p1 weight 1
nexthop via 11.0.0.9 dev sw1p2 weight 1
12.0.0.3 via 11.0.0.1 dev sw1p1 proto zebra metric 20 offload
12.0.0.4 via 11.0.0.9 dev sw1p2 proto zebra metric 20 offload
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.15
XXX: add/mod/del IPv6 FIB API
Nexthop Resolution
^^^^^^^^^^^^^^^^^^
The FIB entry's nexthop list contains the nexthop tuple (gateway, dev), but for
the switch device to forward the packet with the correct dst mac address, the
nexthop gateways must be resolved to the neighbor's mac address. Neighbor mac
address discovery comes via the ARP (or ND) process and is available via the
arp_tbl neighbor table. To resolve the route's nexthop gateways, the driver
should trigger the kernel's neighbor resolution process. See the rocker
driver's rocker_port_ipv4_resolve() for an example.
The driver can monitor for updates to arp_tbl using the netevent notifier
NETEVENT_NEIGH_UPDATE. The device can be programmed with resolved nexthops
for the routes as arp_tbl updates.
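
A hedged sketch of hooking that notifier; only the registration pattern
and the NETEVENT_NEIGH_UPDATE event come from the text above, the rest
is a placeholder:

/* Sketch: watch arp_tbl updates so resolved neighbours can be pushed
 * down to the device; the actual hardware programming is left out. */
#include <linux/notifier.h>
#include <linux/printk.h>
#include <net/neighbour.h>
#include <net/netevent.h>

static int my_netevent_event(struct notifier_block *nb,
                             unsigned long event, void *ptr)
{
        struct neighbour *n = ptr;

        if (event != NETEVENT_NEIGH_UPDATE)
                return NOTIFY_DONE;

        /* n->ha now holds the (possibly updated) MAC for n->primary_key;
         * a real driver would refresh its nexthop tables here. */
        pr_debug("neigh update for %pM\n", n->ha);
        return NOTIFY_DONE;
}

static struct notifier_block my_netevent_nb = {
        .notifier_call = my_netevent_event,
};

int my_driver_watch_neighbours(void)
{
        return register_netevent_notifier(&my_netevent_nb);
}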

@ -8,14 +8,8 @@ For example if your action queues a packet to be processed later,
or intentionally branches by redirecting a packet, then you need to
clone the packet.
There are certain fields in the skb tc_verd that need to be reset so we
avoid loops, etc. A few are generic enough that skb_act_clone()
resets them for you, so invoke skb_act_clone() rather than skb_clone().
2) If you munge any packet thou shalt call pskb_expand_head in the case
someone else is referencing the skb. After that you "own" the skb.
You must also tell us if it is ok to munge the packet (TC_OK2MUNGE),
this way any action downstream can stomp on the packet.
3) Dropping packets you don't own is a no-no. You simply return
TC_ACT_SHOT to the caller and they will drop it.

@ -122,7 +122,7 @@ This must be done from a context that can sleep.
PHY Management
--------------
The physical link (i2c, ...) management is defined by the following struture:
The physical link (i2c, ...) management is defined by the following structure:
struct nfc_phy_ops {
int (*write)(void *dev_id, struct sk_buff *skb);

@ -1,6 +1,6 @@
IBM s390 QDIO Ethernet Driver
HiperSockets Bridge Port Support
OSA and HiperSockets Bridge Port Support
Uevents
@ -8,7 +8,7 @@ To generate the events the device must be assigned a role of either
a primary or a secondary Bridge Port. For more information, see
"z/VM Connectivity, SC24-6174".
When run on HiperSockets Bridge Capable Port hardware, and the state
When run on an OSA or HiperSockets Bridge Capable Port hardware, and the state
of some configured Bridge Port device on the channel changes, a udev
event with ACTION=CHANGE is emitted on behalf of the corresponding
ccwgroup device. The event has the following attributes:

@ -653,7 +653,6 @@ M: Tom Lendacky <thomas.lendacky@amd.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/amd/xgbe/
F: drivers/net/phy/amd-xgbe-phy.c
AMS (Apple Motion Sensor) DRIVER
M: Michael Hanselmann <linux-kernel@hansmi.ch>
@ -923,6 +922,13 @@ M: Krzysztof Halasa <khalasa@piap.pl>
S: Maintained
F: arch/arm/mach-cns3xxx/
ARM/CAVIUM THUNDER NETWORK DRIVER
M: Sunil Goutham <sgoutham@cavium.com>
M: Robert Richter <rric@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: drivers/net/ethernet/cavium/
ARM/CIRRUS LOGIC CLPS711X ARM ARCHITECTURE
M: Alexander Shiyan <shc_work@mail.ru>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -1873,6 +1879,14 @@ W: http://www.attotech.com
S: Supported
F: drivers/scsi/esas2r
ATUSB IEEE 802.15.4 RADIO DRIVER
M: Stefan Schmidt <stefan@osg.samsung.com>
L: linux-wpan@vger.kernel.org
S: Maintained
F: drivers/net/ieee802154/atusb.c
F: drivers/net/ieee802154/atusb.h
F: drivers/net/ieee802154/at86rf230.h
AUDIT SUBSYSTEM
M: Paul Moore <paul@paul-moore.com>
M: Eric Paris <eparis@redhat.com>
@ -2453,6 +2467,17 @@ S: Maintained
F: drivers/iio/light/cm*
F: Documentation/devicetree/bindings/i2c/trivial-devices.txt
CAVIUM LIQUIDIO NETWORK DRIVER
M: Derek Chickles <derek.chickles@caviumnetworks.com>
M: Satanand Burla <satananda.burla@caviumnetworks.com>
M: Felix Manlunas <felix.manlunas@caviumnetworks.com>
M: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com>
L: netdev@vger.kernel.org
W: http://www.cavium.com
S: Supported
F: drivers/net/ethernet/cavium/
F: drivers/net/ethernet/cavium/liquidio/
CC2520 IEEE-802.15.4 RADIO DRIVER
M: Varka Bhadram <varkabhadram@gmail.com>
L: linux-wpan@vger.kernel.org
@ -6411,6 +6436,12 @@ F: include/uapi/linux/meye.h
F: include/uapi/linux/ivtv*
F: include/uapi/linux/uvcvideo.h
MEDIATEK MT7601U WIRELESS LAN DRIVER
M: Jakub Kicinski <kubakici@wp.pl>
L: linux-wireless@vger.kernel.org
S: Maintained
F: drivers/net/wireless/mediatek/mt7601u/
MEGARAID SCSI/SAS DRIVERS
M: Kashyap Desai <kashyap.desai@avagotech.com>
M: Sumit Saxena <sumit.saxena@avagotech.com>
@ -8197,8 +8228,6 @@ P: rt2x00 project
M: Stanislaw Gruszka <sgruszka@redhat.com>
M: Helmut Schaa <helmut.schaa@googlemail.com>
L: linux-wireless@vger.kernel.org
L: users@rt2x00.serialmonkey.com (moderated for non-subscribers)
W: http://rt2x00.serialmonkey.com/
S: Maintained
F: drivers/net/wireless/rt2x00/

@ -873,6 +873,16 @@ b_epilogue:
off = offsetof(struct sk_buff, queue_mapping);
emit(ARM_LDRH_I(r_A, r_skb, off), ctx);
break;
case BPF_LDX | BPF_W | BPF_ABS:
/*
* load a 32bit word from struct seccomp_data.
* seccomp_check_filter() will already have checked
* that k is 32bit aligned and lies within the
* struct seccomp_data.
*/
ctx->seen |= SEEN_SKB;
emit(ARM_LDR_I(r_A, r_skb, k), ctx);
break;
default:
return -1;
}

@ -28,6 +28,9 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[];
* | old backchain | |
* +---------------+ |
* | r15 - r6 | |
* +---------------+ |
* | 4 byte align | |
* | tail_call_cnt | |
* BFP -> +===============+ |
* | | |
* | BPF stack | |
@ -46,14 +49,17 @@ extern u8 sk_load_word[], sk_load_half[], sk_load_byte[];
* R15 -> +---------------+ + low
*
* We get 160 bytes stack space from calling function, but only use
* 11 * 8 byte (old backchain + r15 - r6) for storing registers.
* 12 * 8 byte for old backchain, r15..r6, and tail_call_cnt.
*/
#define STK_SPACE (MAX_BPF_STACK + 8 + 4 + 4 + 160)
#define STK_160_UNUSED (160 - 11 * 8)
#define STK_160_UNUSED (160 - 12 * 8)
#define STK_OFF (STK_SPACE - STK_160_UNUSED)
#define STK_OFF_TMP 160 /* Offset of tmp buffer on stack */
#define STK_OFF_HLEN 168 /* Offset of SKB header length on stack */
#define STK_OFF_R6 (160 - 11 * 8) /* Offset of r6 on stack */
#define STK_OFF_TCCNT (160 - 12 * 8) /* Offset of tail_call_cnt on stack */
/* Offset to skip condition code check */
#define OFF_OK 4
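
To make the offset arithmetic behind these macros easier to follow, here is a small stand-alone sketch (an editor's addition, not kernel code) that simply evaluates them, assuming MAX_BPF_STACK is 512 as used by this kernel series. Note that STK_OFF_R6 works out to 72, which is exactly the hard-coded offset that the save_regs()/restore_regs() hunk further down replaces.

/* Illustrative only: evaluate the s390 JIT stack macros, assuming
 * MAX_BPF_STACK == 512.
 */
#include <stdio.h>

#define MAX_BPF_STACK	512

#define STK_SPACE	(MAX_BPF_STACK + 8 + 4 + 4 + 160)
#define STK_160_UNUSED	(160 - 12 * 8)
#define STK_OFF		(STK_SPACE - STK_160_UNUSED)
#define STK_OFF_R6	(160 - 11 * 8)
#define STK_OFF_TCCNT	(160 - 12 * 8)

int main(void)
{
	/* Expected output: 688 64 624, then 72 64 */
	printf("STK_SPACE=%d STK_160_UNUSED=%d STK_OFF=%d\n",
	       STK_SPACE, STK_160_UNUSED, STK_OFF);
	printf("STK_OFF_R6=%d STK_OFF_TCCNT=%d\n",
	       STK_OFF_R6, STK_OFF_TCCNT);
	return 0;
}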

View File

@ -21,6 +21,7 @@
#include <linux/netdevice.h>
#include <linux/filter.h>
#include <linux/init.h>
#include <linux/bpf.h>
#include <asm/cacheflush.h>
#include <asm/dis.h>
#include "bpf_jit.h"
@ -40,6 +41,8 @@ struct bpf_jit {
int base_ip; /* Base address for literal pool */
int ret0_ip; /* Address of return 0 */
int exit_ip; /* Address of exit */
int tail_call_start; /* Tail call start offset */
int labels[1]; /* Labels for local jumps */
};
#define BPF_SIZE_MAX 4096 /* Max size for program */
@ -49,6 +52,7 @@ struct bpf_jit {
#define SEEN_RET0 4 /* ret0_ip points to a valid return 0 */
#define SEEN_LITERAL 8 /* code uses literals */
#define SEEN_FUNC 16 /* calls C functions */
#define SEEN_TAIL_CALL 32 /* code uses tail calls */
#define SEEN_STACK (SEEN_FUNC | SEEN_MEM | SEEN_SKB)
/*
@ -60,6 +64,7 @@ struct bpf_jit {
#define REG_L (__MAX_BPF_REG+3) /* Literal pool register */
#define REG_15 (__MAX_BPF_REG+4) /* Register 15 */
#define REG_0 REG_W0 /* Register 0 */
#define REG_1 REG_W1 /* Register 1 */
#define REG_2 BPF_REG_1 /* Register 2 */
#define REG_14 BPF_REG_0 /* Register 14 */
@ -223,6 +228,24 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
REG_SET_SEEN(b3); \
})
#define EMIT6_PCREL_LABEL(op1, op2, b1, b2, label, mask) \
({ \
int rel = (jit->labels[label] - jit->prg) >> 1; \
_EMIT6(op1 | reg(b1, b2) << 16 | (rel & 0xffff), \
op2 | mask << 12); \
REG_SET_SEEN(b1); \
REG_SET_SEEN(b2); \
})
#define EMIT6_PCREL_IMM_LABEL(op1, op2, b1, imm, label, mask) \
({ \
int rel = (jit->labels[label] - jit->prg) >> 1; \
_EMIT6(op1 | (reg_high(b1) | mask) << 16 | \
(rel & 0xffff), op2 | (imm & 0xff) << 8); \
REG_SET_SEEN(b1); \
BUILD_BUG_ON(((unsigned long) imm) > 0xff); \
})
#define EMIT6_PCREL(op1, op2, b1, b2, i, off, mask) \
({ \
/* Branch instruction needs 6 bytes */ \
@ -286,7 +309,7 @@ static void jit_fill_hole(void *area, unsigned int size)
*/
static void save_regs(struct bpf_jit *jit, u32 rs, u32 re)
{
u32 off = 72 + (rs - 6) * 8;
u32 off = STK_OFF_R6 + (rs - 6) * 8;
if (rs == re)
/* stg %rs,off(%r15) */
@ -301,7 +324,7 @@ static void save_regs(struct bpf_jit *jit, u32 rs, u32 re)
*/
static void restore_regs(struct bpf_jit *jit, u32 rs, u32 re)
{
u32 off = 72 + (rs - 6) * 8;
u32 off = STK_OFF_R6 + (rs - 6) * 8;
if (jit->seen & SEEN_STACK)
off += STK_OFF;
@ -374,6 +397,16 @@ static void save_restore_regs(struct bpf_jit *jit, int op)
*/
static void bpf_jit_prologue(struct bpf_jit *jit)
{
if (jit->seen & SEEN_TAIL_CALL) {
/* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */
_EMIT6(0xd703f000 | STK_OFF_TCCNT, 0xf000 | STK_OFF_TCCNT);
} else {
/* j tail_call_start: NOP if no tail calls are used */
EMIT4_PCREL(0xa7f40000, 6);
_EMIT2(0);
}
/* Tail calls have to skip above initialization */
jit->tail_call_start = jit->prg;
/* Save registers */
save_restore_regs(jit, REGS_SAVE);
/* Setup literal pool */
@ -951,6 +984,75 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
EMIT4(0xb9040000, BPF_REG_0, REG_2);
break;
}
case BPF_JMP | BPF_CALL | BPF_X:
/*
* Implicit input:
* B1: pointer to ctx
* B2: pointer to bpf_array
* B3: index in bpf_array
*/
jit->seen |= SEEN_TAIL_CALL;
/*
* if (index >= array->map.max_entries)
* goto out;
*/
/* llgf %w1,map.max_entries(%b2) */
EMIT6_DISP_LH(0xe3000000, 0x0016, REG_W1, REG_0, BPF_REG_2,
offsetof(struct bpf_array, map.max_entries));
/* clgrj %b3,%w1,0xa,label0: if %b3 >= %w1 goto out */
EMIT6_PCREL_LABEL(0xec000000, 0x0065, BPF_REG_3,
REG_W1, 0, 0xa);
/*
* if (tail_call_cnt++ > MAX_TAIL_CALL_CNT)
* goto out;
*/
if (jit->seen & SEEN_STACK)
off = STK_OFF_TCCNT + STK_OFF;
else
off = STK_OFF_TCCNT;
/* lhi %w0,1 */
EMIT4_IMM(0xa7080000, REG_W0, 1);
/* laal %w1,%w0,off(%r15) */
EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W1, REG_W0, REG_15, off);
/* clij %w1,MAX_TAIL_CALL_CNT,0x2,label0 */
EMIT6_PCREL_IMM_LABEL(0xec000000, 0x007f, REG_W1,
MAX_TAIL_CALL_CNT, 0, 0x2);
/*
* prog = array->prog[index];
* if (prog == NULL)
* goto out;
*/
/* sllg %r1,%b3,3: %r1 = index * 8 */
EMIT6_DISP_LH(0xeb000000, 0x000d, REG_1, BPF_REG_3, REG_0, 3);
/* lg %r1,prog(%b2,%r1) */
EMIT6_DISP_LH(0xe3000000, 0x0004, REG_1, BPF_REG_2,
REG_1, offsetof(struct bpf_array, prog));
/* clgij %r1,0,0x8,label0 */
EMIT6_PCREL_IMM_LABEL(0xec000000, 0x007d, REG_1, 0, 0, 0x8);
/*
* Restore registers before calling function
*/
save_restore_regs(jit, REGS_RESTORE);
/*
* goto *(prog->bpf_func + tail_call_start);
*/
/* lg %r1,bpf_func(%r1) */
EMIT6_DISP_LH(0xe3000000, 0x0004, REG_1, REG_1, REG_0,
offsetof(struct bpf_prog, bpf_func));
/* bc 0xf,tail_call_start(%r1) */
_EMIT4(0x47f01000 + jit->tail_call_start);
/* out: */
jit->labels[0] = jit->prg;
break;
case BPF_JMP | BPF_EXIT: /* return b0 */
last = (i == fp->len - 1) ? 1 : 0;
if (last && !(jit->seen & SEEN_RET0))

View File

@ -12,6 +12,7 @@
#include <linux/filter.h>
#include <linux/if_vlan.h>
#include <asm/cacheflush.h>
#include <linux/bpf.h>
int bpf_jit_enable __read_mostly;
@ -37,7 +38,8 @@ static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
return ptr + len;
}
#define EMIT(bytes, len) do { prog = emit_code(prog, bytes, len); } while (0)
#define EMIT(bytes, len) \
do { prog = emit_code(prog, bytes, len); cnt += len; } while (0)
#define EMIT1(b1) EMIT(b1, 1)
#define EMIT2(b1, b2) EMIT((b1) + ((b2) << 8), 2)
@ -186,31 +188,31 @@ struct jit_context {
#define BPF_MAX_INSN_SIZE 128
#define BPF_INSN_SAFETY 64
static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
int oldproglen, struct jit_context *ctx)
#define STACKSIZE \
(MAX_BPF_STACK + \
32 /* space for rbx, r13, r14, r15 */ + \
8 /* space for skb_copy_bits() buffer */)
#define PROLOGUE_SIZE 51
/* emit x64 prologue code for BPF program and check its size.
* bpf_tail_call helper will skip it while jumping into another program
*/
static void emit_prologue(u8 **pprog)
{
struct bpf_insn *insn = bpf_prog->insnsi;
int insn_cnt = bpf_prog->len;
bool seen_ld_abs = ctx->seen_ld_abs | (oldproglen == 0);
bool seen_exit = false;
u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
int i;
int proglen = 0;
u8 *prog = temp;
int stacksize = MAX_BPF_STACK +
32 /* space for rbx, r13, r14, r15 */ +
8 /* space for skb_copy_bits() buffer */;
u8 *prog = *pprog;
int cnt = 0;
EMIT1(0x55); /* push rbp */
EMIT3(0x48, 0x89, 0xE5); /* mov rbp,rsp */
/* sub rsp, stacksize */
EMIT3_off32(0x48, 0x81, 0xEC, stacksize);
/* sub rsp, STACKSIZE */
EMIT3_off32(0x48, 0x81, 0xEC, STACKSIZE);
/* all classic BPF filters use R6(rbx) save it */
/* mov qword ptr [rbp-X],rbx */
EMIT3_off32(0x48, 0x89, 0x9D, -stacksize);
EMIT3_off32(0x48, 0x89, 0x9D, -STACKSIZE);
/* bpf_convert_filter() maps classic BPF register X to R7 and uses R8
* as temporary, so all tcpdump filters need to spill/fill R7(r13) and
@ -221,16 +223,112 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
*/
/* mov qword ptr [rbp-X],r13 */
EMIT3_off32(0x4C, 0x89, 0xAD, -stacksize + 8);
EMIT3_off32(0x4C, 0x89, 0xAD, -STACKSIZE + 8);
/* mov qword ptr [rbp-X],r14 */
EMIT3_off32(0x4C, 0x89, 0xB5, -stacksize + 16);
EMIT3_off32(0x4C, 0x89, 0xB5, -STACKSIZE + 16);
/* mov qword ptr [rbp-X],r15 */
EMIT3_off32(0x4C, 0x89, 0xBD, -stacksize + 24);
EMIT3_off32(0x4C, 0x89, 0xBD, -STACKSIZE + 24);
/* clear A and X registers */
EMIT2(0x31, 0xc0); /* xor eax, eax */
EMIT3(0x4D, 0x31, 0xED); /* xor r13, r13 */
/* clear tail_cnt: mov qword ptr [rbp-X], rax */
EMIT3_off32(0x48, 0x89, 0x85, -STACKSIZE + 32);
BUILD_BUG_ON(cnt != PROLOGUE_SIZE);
*pprog = prog;
}
/* generate the following code:
* ... bpf_tail_call(void *ctx, struct bpf_array *array, u64 index) ...
* if (index >= array->map.max_entries)
* goto out;
* if (++tail_call_cnt > MAX_TAIL_CALL_CNT)
* goto out;
* prog = array->prog[index];
* if (prog == NULL)
* goto out;
* goto *(prog->bpf_func + prologue_size);
* out:
*/
static void emit_bpf_tail_call(u8 **pprog)
{
u8 *prog = *pprog;
int label1, label2, label3;
int cnt = 0;
/* rdi - pointer to ctx
* rsi - pointer to bpf_array
* rdx - index in bpf_array
*/
/* if (index >= array->map.max_entries)
* goto out;
*/
EMIT4(0x48, 0x8B, 0x46, /* mov rax, qword ptr [rsi + 16] */
offsetof(struct bpf_array, map.max_entries));
EMIT3(0x48, 0x39, 0xD0); /* cmp rax, rdx */
#define OFFSET1 44 /* number of bytes to jump */
EMIT2(X86_JBE, OFFSET1); /* jbe out */
label1 = cnt;
/* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
* goto out;
*/
EMIT2_off32(0x8B, 0x85, -STACKSIZE + 36); /* mov eax, dword ptr [rbp - 516] */
EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */
#define OFFSET2 33
EMIT2(X86_JA, OFFSET2); /* ja out */
label2 = cnt;
EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */
EMIT2_off32(0x89, 0x85, -STACKSIZE + 36); /* mov dword ptr [rbp - 516], eax */
/* prog = array->prog[index]; */
EMIT4(0x48, 0x8D, 0x44, 0xD6); /* lea rax, [rsi + rdx * 8 + 0x50] */
EMIT1(offsetof(struct bpf_array, prog));
EMIT3(0x48, 0x8B, 0x00); /* mov rax, qword ptr [rax] */
/* if (prog == NULL)
* goto out;
*/
EMIT4(0x48, 0x83, 0xF8, 0x00); /* cmp rax, 0 */
#define OFFSET3 10
EMIT2(X86_JE, OFFSET3); /* je out */
label3 = cnt;
/* goto *(prog->bpf_func + prologue_size); */
EMIT4(0x48, 0x8B, 0x40, /* mov rax, qword ptr [rax + 32] */
offsetof(struct bpf_prog, bpf_func));
EMIT4(0x48, 0x83, 0xC0, PROLOGUE_SIZE); /* add rax, prologue_size */
/* now we're ready to jump into next BPF program
* rdi == ctx (1st arg)
* rax == prog->bpf_func + prologue_size
*/
EMIT2(0xFF, 0xE0); /* jmp rax */
/* out: */
BUILD_BUG_ON(cnt - label1 != OFFSET1);
BUILD_BUG_ON(cnt - label2 != OFFSET2);
BUILD_BUG_ON(cnt - label3 != OFFSET3);
*pprog = prog;
}
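
As a plain-C restatement of the checks that emit_bpf_tail_call() encodes into machine code above, the following sketch may help. It is an editor's illustration with simplified stand-in types; in the real kernel these steps are performed by the emitted instructions, and the final jump never returns to the caller.

/* Illustrative only: the tail-call sequence expressed in C.
 * The struct definitions are simplified stand-ins, not kernel types.
 */
#include <stddef.h>

#define MAX_TAIL_CALL_CNT 32	/* the limit used by this kernel series */

struct fake_prog {
	void *(*bpf_func)(void *ctx);
};

struct fake_array {
	unsigned int max_entries;
	struct fake_prog *prog[];
};

static void *fake_tail_call(void *ctx, struct fake_array *array,
			    unsigned long index,
			    unsigned int *tail_call_cnt)
{
	struct fake_prog *prog;

	if (index >= array->max_entries)
		return NULL;		/* out: index out of range */
	if ((*tail_call_cnt)++ > MAX_TAIL_CALL_CNT)
		return NULL;		/* out: chain depth limit hit */
	prog = array->prog[index];
	if (!prog)
		return NULL;		/* out: empty slot */
	/* The JIT jumps past the target's prologue; here we just call it. */
	return prog->bpf_func(ctx);
}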
static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
int oldproglen, struct jit_context *ctx)
{
struct bpf_insn *insn = bpf_prog->insnsi;
int insn_cnt = bpf_prog->len;
bool seen_ld_abs = ctx->seen_ld_abs | (oldproglen == 0);
bool seen_exit = false;
u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
int i, cnt = 0;
int proglen = 0;
u8 *prog = temp;
emit_prologue(&prog);
if (seen_ld_abs) {
/* r9d : skb->len - skb->data_len (headlen)
* r10 : skb->data
@ -739,6 +837,10 @@ xadd: if (is_imm8(insn->off))
}
break;
case BPF_JMP | BPF_CALL | BPF_X:
emit_bpf_tail_call(&prog);
break;
/* cond jump */
case BPF_JMP | BPF_JEQ | BPF_X:
case BPF_JMP | BPF_JNE | BPF_X:
@ -891,13 +993,13 @@ common_load:
/* update cleanup_addr */
ctx->cleanup_addr = proglen;
/* mov rbx, qword ptr [rbp-X] */
EMIT3_off32(0x48, 0x8B, 0x9D, -stacksize);
EMIT3_off32(0x48, 0x8B, 0x9D, -STACKSIZE);
/* mov r13, qword ptr [rbp-X] */
EMIT3_off32(0x4C, 0x8B, 0xAD, -stacksize + 8);
EMIT3_off32(0x4C, 0x8B, 0xAD, -STACKSIZE + 8);
/* mov r14, qword ptr [rbp-X] */
EMIT3_off32(0x4C, 0x8B, 0xB5, -stacksize + 16);
EMIT3_off32(0x4C, 0x8B, 0xB5, -STACKSIZE + 16);
/* mov r15, qword ptr [rbp-X] */
EMIT3_off32(0x4C, 0x8B, 0xBD, -stacksize + 24);
EMIT3_off32(0x4C, 0x8B, 0xBD, -STACKSIZE + 24);
EMIT1(0xC9); /* leave */
EMIT1(0xC3); /* ret */

View File

@ -247,7 +247,7 @@ int af_alg_accept(struct sock *sk, struct socket *newsock)
if (!type)
goto unlock;
sk2 = sk_alloc(sock_net(sk), PF_ALG, GFP_KERNEL, &alg_proto);
sk2 = sk_alloc(sock_net(sk), PF_ALG, GFP_KERNEL, &alg_proto, 0);
err = -ENOMEM;
if (!sk2)
goto unlock;
@ -327,7 +327,7 @@ static int alg_create(struct net *net, struct socket *sock, int protocol,
return -EPROTONOSUPPORT;
err = -ENOMEM;
sk = sk_alloc(net, PF_ALG, GFP_KERNEL, &alg_proto);
sk = sk_alloc(net, PF_ALG, GFP_KERNEL, &alg_proto, kern);
if (!sk)
goto out;

View File

@ -116,8 +116,8 @@ static bool disable64;
static short nvpibits = -1;
static short nvcibits = -1;
static short rx_skb_reserve = 16;
static bool irq_coalesce = 1;
static bool sdh = 0;
static bool irq_coalesce = true;
static bool sdh;
/* Read from EEPROM = 0000 0011b */
static unsigned int readtab[] = {

View File

@ -306,14 +306,12 @@ static int idt77105_start(struct atm_dev *dev)
if (start_timer) {
start_timer = 0;
init_timer(&stats_timer);
setup_timer(&stats_timer, idt77105_stats_timer_func, 0UL);
stats_timer.expires = jiffies+IDT77105_STATS_TIMER_PERIOD;
stats_timer.function = idt77105_stats_timer_func;
add_timer(&stats_timer);
init_timer(&restart_timer);
setup_timer(&restart_timer, idt77105_restart_timer_func, 0UL);
restart_timer.expires = jiffies+IDT77105_RESTART_TIMER_PERIOD;
restart_timer.function = idt77105_restart_timer_func;
add_timer(&restart_timer);
}
spin_unlock_irqrestore(&idt77105_priv_lock, flags);

View File

@ -2618,7 +2618,7 @@ static void ia_close(struct atm_vcc *vcc)
if (vcc->qos.txtp.traffic_class != ATM_NONE) {
iadev->close_pending++;
prepare_to_wait(&iadev->timeout_wait, &wait, TASK_UNINTERRUPTIBLE);
schedule_timeout(50);
schedule_timeout(msecs_to_jiffies(500));
finish_wait(&iadev->timeout_wait, &wait);
spin_lock_irqsave(&iadev->tx_lock, flags);
while((skb = skb_dequeue(&iadev->tx_backlog))) {

View File

@ -29,12 +29,6 @@ config BCMA_HOST_PCI
select BCMA_DRIVER_PCI
default y
config BCMA_DRIVER_PCI_HOSTMODE
bool "Driver for PCI core working in hostmode"
depends on BCMA && MIPS && BCMA_HOST_PCI
help
PCI core hostmode operation (external PCI bus).
config BCMA_HOST_SOC
bool "Support for BCMA in a SoC"
depends on BCMA
@ -61,6 +55,12 @@ config BCMA_DRIVER_PCI
This driver is also prerequisite for a hostmode PCIe core
support.
config BCMA_DRIVER_PCI_HOSTMODE
bool "Driver for PCI core working in hostmode"
depends on BCMA && MIPS && BCMA_DRIVER_PCI
help
PCI core hostmode operation (external PCI bus).
config BCMA_DRIVER_MIPS
bool "BCMA Broadcom MIPS core driver"
depends on BCMA && MIPS

View File

@ -226,6 +226,7 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
chip->of_node = cc->core->dev.of_node;
#endif
switch (bus->chipinfo.id) {
case BCMA_CHIP_ID_BCM4707:
case BCMA_CHIP_ID_BCM5357:
case BCMA_CHIP_ID_BCM53572:
chip->ngpio = 32;
@ -235,16 +236,17 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
}
/*
* On MIPS we register GPIO devices (LEDs, buttons) using absolute GPIO
* pin numbers. We don't have Device Tree there and we can't really use
* relative (per chip) numbers.
* So let's use predictable base for BCM47XX and "random" for all other.
* Register SoC GPIO devices with absolute GPIO pin base.
* On MIPS, we don't have Device Tree and we can't use relative (per chip)
* GPIO numbers.
* On some ARM devices, user space may want to access some system GPIO
* pins directly, which is easier to do with a predictable GPIO base.
*/
#if IS_BUILTIN(CONFIG_BCM47XX)
chip->base = bus->num * BCMA_GPIO_MAX_PINS;
#else
chip->base = -1;
#endif
if (IS_BUILTIN(CONFIG_BCM47XX) ||
cc->core->bus->hosttype == BCMA_HOSTTYPE_SOC)
chip->base = bus->num * BCMA_GPIO_MAX_PINS;
else
chip->base = -1;
err = bcma_gpio_irq_domain_init(cc);
if (err)

View File

@ -598,7 +598,7 @@ static struct socket *drbd_try_connect(struct drbd_connection *connection)
memcpy(&peer_in6, &connection->peer_addr, peer_addr_len);
what = "sock_create_kern";
err = sock_create_kern(((struct sockaddr *)&src_in6)->sa_family,
err = sock_create_kern(&init_net, ((struct sockaddr *)&src_in6)->sa_family,
SOCK_STREAM, IPPROTO_TCP, &sock);
if (err < 0) {
sock = NULL;
@ -693,7 +693,7 @@ static int prepare_listen_socket(struct drbd_connection *connection, struct acce
memcpy(&my_addr, &connection->my_addr, my_addr_len);
what = "sock_create_kern";
err = sock_create_kern(((struct sockaddr *)&my_addr)->sa_family,
err = sock_create_kern(&init_net, ((struct sockaddr *)&my_addr)->sa_family,
SOCK_STREAM, IPPROTO_TCP, &s_listen);
if (err) {
s_listen = NULL;

View File

@ -9,6 +9,10 @@ config BT_BCM
tristate
select FW_LOADER
config BT_RTL
tristate
select FW_LOADER
config BT_HCIBTUSB
tristate "HCI USB driver"
depends on USB
@ -32,6 +36,17 @@ config BT_HCIBTUSB_BCM
Say Y here to compile support for Broadcom protocol.
config BT_HCIBTUSB_RTL
bool "Realtek protocol support"
depends on BT_HCIBTUSB
select BT_RTL
default y
help
The Realtek protocol support enables firmware and configuration
download support for Realtek Bluetooth controllers.
Say Y here to compile support for Realtek protocol.
config BT_HCIBTSDIO
tristate "HCI SDIO driver"
depends on MMC

View File

@ -21,6 +21,7 @@ obj-$(CONFIG_BT_MRVL) += btmrvl.o
obj-$(CONFIG_BT_MRVL_SDIO) += btmrvl_sdio.o
obj-$(CONFIG_BT_WILINK) += btwilink.o
obj-$(CONFIG_BT_BCM) += btbcm.o
obj-$(CONFIG_BT_RTL) += btrtl.o
btmrvl-y := btmrvl_main.o
btmrvl-$(CONFIG_DEBUG_FS) += btmrvl_debugfs.o

View File

@ -80,6 +80,7 @@ static const struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x0489, 0xe057) },
{ USB_DEVICE(0x0489, 0xe056) },
{ USB_DEVICE(0x0489, 0xe05f) },
{ USB_DEVICE(0x0489, 0xe076) },
{ USB_DEVICE(0x0489, 0xe078) },
{ USB_DEVICE(0x04c5, 0x1330) },
{ USB_DEVICE(0x04CA, 0x3004) },
@ -88,6 +89,7 @@ static const struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x04CA, 0x3007) },
{ USB_DEVICE(0x04CA, 0x3008) },
{ USB_DEVICE(0x04CA, 0x300b) },
{ USB_DEVICE(0x04CA, 0x300d) },
{ USB_DEVICE(0x04CA, 0x300f) },
{ USB_DEVICE(0x04CA, 0x3010) },
{ USB_DEVICE(0x0930, 0x0219) },
@ -113,6 +115,7 @@ static const struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x13d3, 0x3408) },
{ USB_DEVICE(0x13d3, 0x3423) },
{ USB_DEVICE(0x13d3, 0x3432) },
{ USB_DEVICE(0x13d3, 0x3474) },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE02C) },
@ -137,6 +140,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
@ -145,6 +149,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300b), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300d), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
@ -170,6 +175,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU22 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE036), .driver_info = BTUSB_ATH3012 },

View File

@ -202,9 +202,8 @@ static void bt3c_write_wakeup(struct bt3c_info *info)
/* Send frame */
len = bt3c_write(iobase, 256, skb->data, skb->len);
if (len != skb->len) {
if (len != skb->len)
BT_ERR("Very strange");
}
kfree_skb(skb);

View File

@ -33,6 +33,7 @@
#define VERSION "0.1"
#define BDADDR_BCM20702A0 (&(bdaddr_t) {{0x00, 0xa0, 0x02, 0x70, 0x20, 0x00}})
#define BDADDR_BCM4324B3 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb3, 0x24, 0x43}})
int btbcm_check_bdaddr(struct hci_dev *hdev)
{
@ -55,17 +56,19 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
}
bda = (struct hci_rp_read_bd_addr *)skb->data;
if (bda->status) {
BT_ERR("%s: BCM: Device address result failed (%02x)",
hdev->name, bda->status);
kfree_skb(skb);
return -bt_to_errno(bda->status);
}
/* The address 00:20:70:02:A0:00 indicates a BCM20702A0 controller
/* Check if the address indicates a controller with either an
* invalid or default address. In both cases the device needs
* to be marked as not having a valid address.
*
* The address 00:20:70:02:A0:00 indicates a BCM20702A0 controller
* with no configured address.
*
* The address 43:24:B3:00:00:00 indicates a BCM4324B3 controller
* that is still waiting for its configuration.
*/
if (!bacmp(&bda->bdaddr, BDADDR_BCM20702A0)) {
if (!bacmp(&bda->bdaddr, BDADDR_BCM20702A0) ||
!bacmp(&bda->bdaddr, BDADDR_BCM4324B3)) {
BT_INFO("%s: BCM: Using default device address (%pMR)",
hdev->name, &bda->bdaddr);
set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
@ -95,21 +98,14 @@ int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
}
EXPORT_SYMBOL_GPL(btbcm_set_bdaddr);
int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
int btbcm_patchram(struct hci_dev *hdev, const struct firmware *fw)
{
const struct hci_command_hdr *cmd;
const struct firmware *fw;
const u8 *fw_ptr;
size_t fw_size;
struct sk_buff *skb;
u16 opcode;
int err;
err = request_firmware(&fw, firmware, &hdev->dev);
if (err < 0) {
BT_INFO("%s: BCM: Patch %s not found", hdev->name, firmware);
return err;
}
int err = 0;
/* Start Download */
skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT);
@ -135,8 +131,7 @@ int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
fw_size -= sizeof(*cmd);
if (fw_size < cmd->plen) {
BT_ERR("%s: BCM: Patch %s is corrupted", hdev->name,
firmware);
BT_ERR("%s: BCM: Patch is corrupted", hdev->name);
err = -EINVAL;
goto done;
}
@ -162,7 +157,6 @@ int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
msleep(250);
done:
release_firmware(fw);
return err;
}
EXPORT_SYMBOL(btbcm_patchram);
@ -248,9 +242,101 @@ static const struct {
const char *name;
} bcm_uart_subver_table[] = {
{ 0x410e, "BCM43341B0" }, /* 002.001.014 */
{ 0x4406, "BCM4324B3" }, /* 002.004.006 */
{ 0x610c, "BCM4354" }, /* 003.001.012 */
{ }
};
int btbcm_initialize(struct hci_dev *hdev, char *fw_name, size_t len)
{
u16 subver, rev;
const char *hw_name = NULL;
struct sk_buff *skb;
struct hci_rp_read_local_version *ver;
int i, err;
/* Reset */
err = btbcm_reset(hdev);
if (err)
return err;
/* Read Local Version Info */
skb = btbcm_read_local_version(hdev);
if (IS_ERR(skb))
return PTR_ERR(skb);
ver = (struct hci_rp_read_local_version *)skb->data;
rev = le16_to_cpu(ver->hci_rev);
subver = le16_to_cpu(ver->lmp_subver);
kfree_skb(skb);
/* Read Verbose Config Version Info */
skb = btbcm_read_verbose_config(hdev);
if (IS_ERR(skb))
return PTR_ERR(skb);
BT_INFO("%s: BCM: chip id %u", hdev->name, skb->data[1]);
kfree_skb(skb);
switch ((rev & 0xf000) >> 12) {
case 0:
case 1:
case 3:
for (i = 0; bcm_uart_subver_table[i].name; i++) {
if (subver == bcm_uart_subver_table[i].subver) {
hw_name = bcm_uart_subver_table[i].name;
break;
}
}
snprintf(fw_name, len, "brcm/%s.hcd", hw_name ? : "BCM");
break;
default:
return 0;
}
BT_INFO("%s: %s (%3.3u.%3.3u.%3.3u) build %4.4u", hdev->name,
hw_name ? : "BCM", (subver & 0x7000) >> 13,
(subver & 0x1f00) >> 8, (subver & 0x00ff), rev & 0x0fff);
return 0;
}
EXPORT_SYMBOL_GPL(btbcm_initialize);
int btbcm_finalize(struct hci_dev *hdev)
{
struct sk_buff *skb;
struct hci_rp_read_local_version *ver;
u16 subver, rev;
int err;
/* Reset */
err = btbcm_reset(hdev);
if (err)
return err;
/* Read Local Version Info */
skb = btbcm_read_local_version(hdev);
if (IS_ERR(skb))
return PTR_ERR(skb);
ver = (struct hci_rp_read_local_version *)skb->data;
rev = le16_to_cpu(ver->hci_rev);
subver = le16_to_cpu(ver->lmp_subver);
kfree_skb(skb);
BT_INFO("%s: BCM (%3.3u.%3.3u.%3.3u) build %4.4u", hdev->name,
(subver & 0x7000) >> 13, (subver & 0x1f00) >> 8,
(subver & 0x00ff), rev & 0x0fff);
btbcm_check_bdaddr(hdev);
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
return 0;
}
EXPORT_SYMBOL_GPL(btbcm_finalize);
static const struct {
u16 subver;
const char *name;
@ -271,6 +357,7 @@ static const struct {
int btbcm_setup_patchram(struct hci_dev *hdev)
{
char fw_name[64];
const struct firmware *fw;
u16 subver, rev, pid, vid;
const char *hw_name = NULL;
struct sk_buff *skb;
@ -302,6 +389,7 @@ int btbcm_setup_patchram(struct hci_dev *hdev)
switch ((rev & 0xf000) >> 12) {
case 0:
case 3:
for (i = 0; bcm_uart_subver_table[i].name; i++) {
if (subver == bcm_uart_subver_table[i].subver) {
hw_name = bcm_uart_subver_table[i].name;
@ -341,9 +429,15 @@ int btbcm_setup_patchram(struct hci_dev *hdev)
hw_name ? : "BCM", (subver & 0x7000) >> 13,
(subver & 0x1f00) >> 8, (subver & 0x00ff), rev & 0x0fff);
err = btbcm_patchram(hdev, fw_name);
if (err == -ENOENT)
err = request_firmware(&fw, fw_name, &hdev->dev);
if (err < 0) {
BT_INFO("%s: BCM: Patch %s not found", hdev->name, fw_name);
return 0;
}
btbcm_patchram(hdev, fw);
release_firmware(fw);
/* Reset */
err = btbcm_reset(hdev);

View File

@ -21,15 +21,61 @@
*
*/
#define BCM_UART_CLOCK_48MHZ 0x01
#define BCM_UART_CLOCK_24MHZ 0x02
struct bcm_update_uart_baud_rate {
__le16 zero;
__le32 baud_rate;
} __packed;
struct bcm_write_uart_clock_setting {
__u8 type;
} __packed;
struct bcm_set_sleep_mode {
__u8 sleep_mode;
__u8 idle_host;
__u8 idle_dev;
__u8 bt_wake_active;
__u8 host_wake_active;
__u8 allow_host_sleep;
__u8 combine_modes;
__u8 tristate_control;
__u8 usb_auto_sleep;
__u8 usb_resume_timeout;
__u8 pulsed_host_wake;
__u8 break_to_host;
} __packed;
struct bcm_set_pcm_int_params {
__u8 routing;
__u8 rate;
__u8 frame_sync;
__u8 sync_mode;
__u8 clock_mode;
} __packed;
struct bcm_set_pcm_format_params {
__u8 lsb_first;
__u8 fill_value;
__u8 fill_method;
__u8 fill_num;
__u8 right_justify;
} __packed;
#if IS_ENABLED(CONFIG_BT_BCM)
int btbcm_check_bdaddr(struct hci_dev *hdev);
int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
int btbcm_patchram(struct hci_dev *hdev, const char *firmware);
int btbcm_patchram(struct hci_dev *hdev, const struct firmware *fw);
int btbcm_setup_patchram(struct hci_dev *hdev);
int btbcm_setup_apple(struct hci_dev *hdev);
int btbcm_initialize(struct hci_dev *hdev, char *fw_name, size_t len);
int btbcm_finalize(struct hci_dev *hdev);
#else
static inline int btbcm_check_bdaddr(struct hci_dev *hdev)
@ -42,7 +88,7 @@ static inline int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
return -EOPNOTSUPP;
}
static inline int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
static inline int btbcm_patchram(struct hci_dev *hdev, const struct firmware *fw)
{
return -EOPNOTSUPP;
}
@ -57,4 +103,15 @@ static inline int btbcm_setup_apple(struct hci_dev *hdev)
return 0;
}
static inline int btbcm_initialize(struct hci_dev *hdev, char *fw_name,
size_t len)
{
return 0;
}
static inline int btbcm_finalize(struct hci_dev *hdev)
{
return 0;
}
#endif

View File

@ -53,12 +53,6 @@ int btintel_check_bdaddr(struct hci_dev *hdev)
}
bda = (struct hci_rp_read_bd_addr *)skb->data;
if (bda->status) {
BT_ERR("%s: Intel device address result failed (%02x)",
hdev->name, bda->status);
kfree_skb(skb);
return -bt_to_errno(bda->status);
}
/* For some Intel based controllers, the default Bluetooth device
* address 00:03:19:9E:8B:00 can be found. These controllers are

View File

@ -1217,7 +1217,7 @@ static void btmrvl_sdio_dump_firmware(struct btmrvl_private *priv)
unsigned int reg, reg_start, reg_end;
enum rdwr_status stat;
u8 *dbg_ptr, *end_ptr, *fw_dump_data, *fw_dump_ptr;
u8 dump_num, idx, i, read_reg, doneflag = 0;
u8 dump_num = 0, idx, i, read_reg, doneflag = 0;
u32 memory_size, fw_dump_len = 0;
/* dump sdio register first */

View File

@ -0,0 +1,390 @@
/*
* Bluetooth support for Realtek devices
*
* Copyright (C) 2015 Endless Mobile, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/module.h>
#include <linux/firmware.h>
#include <asm/unaligned.h>
#include <linux/usb.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "btrtl.h"
#define VERSION "0.1"
#define RTL_EPATCH_SIGNATURE "Realtech"
#define RTL_ROM_LMP_3499 0x3499
#define RTL_ROM_LMP_8723A 0x1200
#define RTL_ROM_LMP_8723B 0x8723
#define RTL_ROM_LMP_8821A 0x8821
#define RTL_ROM_LMP_8761A 0x8761
static int rtl_read_rom_version(struct hci_dev *hdev, u8 *version)
{
struct rtl_rom_version_evt *rom_version;
struct sk_buff *skb;
/* Read RTL ROM version command */
skb = __hci_cmd_sync(hdev, 0xfc6d, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: Read ROM version failed (%ld)",
hdev->name, PTR_ERR(skb));
return PTR_ERR(skb);
}
if (skb->len != sizeof(*rom_version)) {
BT_ERR("%s: RTL version event length mismatch", hdev->name);
kfree_skb(skb);
return -EIO;
}
rom_version = (struct rtl_rom_version_evt *)skb->data;
BT_INFO("%s: rom_version status=%x version=%x",
hdev->name, rom_version->status, rom_version->version);
*version = rom_version->version;
kfree_skb(skb);
return 0;
}
static int rtl8723b_parse_firmware(struct hci_dev *hdev, u16 lmp_subver,
const struct firmware *fw,
unsigned char **_buf)
{
const u8 extension_sig[] = { 0x51, 0x04, 0xfd, 0x77 };
struct rtl_epatch_header *epatch_info;
unsigned char *buf;
int i, ret, len;
size_t min_size;
u8 opcode, length, data, rom_version = 0;
int project_id = -1;
const unsigned char *fwptr, *chip_id_base;
const unsigned char *patch_length_base, *patch_offset_base;
u32 patch_offset = 0;
u16 patch_length, num_patches;
const u16 project_id_to_lmp_subver[] = {
RTL_ROM_LMP_8723A,
RTL_ROM_LMP_8723B,
RTL_ROM_LMP_8821A,
RTL_ROM_LMP_8761A
};
ret = rtl_read_rom_version(hdev, &rom_version);
if (ret)
return ret;
min_size = sizeof(struct rtl_epatch_header) + sizeof(extension_sig) + 3;
if (fw->size < min_size)
return -EINVAL;
fwptr = fw->data + fw->size - sizeof(extension_sig);
if (memcmp(fwptr, extension_sig, sizeof(extension_sig)) != 0) {
BT_ERR("%s: extension section signature mismatch", hdev->name);
return -EINVAL;
}
/* Loop from the end of the firmware parsing instructions, until
* we find an instruction that identifies the "project ID" for the
* hardware supported by this firmware file.
* Once we have that, we double-check that that project_id is suitable
* for the hardware we are working with.
*/
while (fwptr >= fw->data + (sizeof(struct rtl_epatch_header) + 3)) {
opcode = *--fwptr;
length = *--fwptr;
data = *--fwptr;
BT_DBG("check op=%x len=%x data=%x", opcode, length, data);
if (opcode == 0xff) /* EOF */
break;
if (length == 0) {
BT_ERR("%s: found instruction with length 0",
hdev->name);
return -EINVAL;
}
if (opcode == 0 && length == 1) {
project_id = data;
break;
}
fwptr -= length;
}
if (project_id < 0) {
BT_ERR("%s: failed to find version instruction", hdev->name);
return -EINVAL;
}
if (project_id >= ARRAY_SIZE(project_id_to_lmp_subver)) {
BT_ERR("%s: unknown project id %d", hdev->name, project_id);
return -EINVAL;
}
if (lmp_subver != project_id_to_lmp_subver[project_id]) {
BT_ERR("%s: firmware is for %x but this is a %x", hdev->name,
project_id_to_lmp_subver[project_id], lmp_subver);
return -EINVAL;
}
epatch_info = (struct rtl_epatch_header *)fw->data;
if (memcmp(epatch_info->signature, RTL_EPATCH_SIGNATURE, 8) != 0) {
BT_ERR("%s: bad EPATCH signature", hdev->name);
return -EINVAL;
}
num_patches = le16_to_cpu(epatch_info->num_patches);
BT_DBG("fw_version=%x, num_patches=%d",
le32_to_cpu(epatch_info->fw_version), num_patches);
/* After the rtl_epatch_header there is a funky patch metadata section.
* Assuming 2 patches, the layout is:
* ChipID1 ChipID2 PatchLength1 PatchLength2 PatchOffset1 PatchOffset2
*
* Find the right patch for this chip.
*/
min_size += 8 * num_patches;
if (fw->size < min_size)
return -EINVAL;
chip_id_base = fw->data + sizeof(struct rtl_epatch_header);
patch_length_base = chip_id_base + (sizeof(u16) * num_patches);
patch_offset_base = patch_length_base + (sizeof(u16) * num_patches);
for (i = 0; i < num_patches; i++) {
u16 chip_id = get_unaligned_le16(chip_id_base +
(i * sizeof(u16)));
if (chip_id == rom_version + 1) {
patch_length = get_unaligned_le16(patch_length_base +
(i * sizeof(u16)));
patch_offset = get_unaligned_le32(patch_offset_base +
(i * sizeof(u32)));
break;
}
}
if (!patch_offset) {
BT_ERR("%s: didn't find patch for chip id %d",
hdev->name, rom_version);
return -EINVAL;
}
BT_DBG("length=%x offset=%x index %d", patch_length, patch_offset, i);
min_size = patch_offset + patch_length;
if (fw->size < min_size)
return -EINVAL;
/* Copy the firmware into a new buffer and write the version at
* the end.
*/
len = patch_length;
buf = kmemdup(fw->data + patch_offset, patch_length, GFP_KERNEL);
if (!buf)
return -ENOMEM;
memcpy(buf + patch_length - 4, &epatch_info->fw_version, 4);
*_buf = buf;
return len;
}
static int rtl_download_firmware(struct hci_dev *hdev,
const unsigned char *data, int fw_len)
{
struct rtl_download_cmd *dl_cmd;
int frag_num = fw_len / RTL_FRAG_LEN + 1;
int frag_len = RTL_FRAG_LEN;
int ret = 0;
int i;
dl_cmd = kmalloc(sizeof(struct rtl_download_cmd), GFP_KERNEL);
if (!dl_cmd)
return -ENOMEM;
for (i = 0; i < frag_num; i++) {
struct sk_buff *skb;
BT_DBG("download fw (%d/%d)", i, frag_num);
dl_cmd->index = i;
if (i == (frag_num - 1)) {
dl_cmd->index |= 0x80; /* data end */
frag_len = fw_len % RTL_FRAG_LEN;
}
memcpy(dl_cmd->data, data, frag_len);
/* Send download command */
skb = __hci_cmd_sync(hdev, 0xfc20, frag_len + 1, dl_cmd,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: download fw command failed (%ld)",
hdev->name, PTR_ERR(skb));
ret = -PTR_ERR(skb);
goto out;
}
if (skb->len != sizeof(struct rtl_download_response)) {
BT_ERR("%s: download fw event length mismatch",
hdev->name);
kfree_skb(skb);
ret = -EIO;
goto out;
}
kfree_skb(skb);
data += RTL_FRAG_LEN;
}
out:
kfree(dl_cmd);
return ret;
}
static int btrtl_setup_rtl8723a(struct hci_dev *hdev)
{
const struct firmware *fw;
int ret;
BT_INFO("%s: rtl: loading rtl_bt/rtl8723a_fw.bin", hdev->name);
ret = request_firmware(&fw, "rtl_bt/rtl8723a_fw.bin", &hdev->dev);
if (ret < 0) {
BT_ERR("%s: Failed to load rtl_bt/rtl8723a_fw.bin", hdev->name);
return ret;
}
if (fw->size < 8) {
ret = -EINVAL;
goto out;
}
/* Check that the firmware doesn't have the epatch signature
* (which is only for RTL8723B and newer).
*/
if (!memcmp(fw->data, RTL_EPATCH_SIGNATURE, 8)) {
BT_ERR("%s: unexpected EPATCH signature!", hdev->name);
ret = -EINVAL;
goto out;
}
ret = rtl_download_firmware(hdev, fw->data, fw->size);
out:
release_firmware(fw);
return ret;
}
static int btrtl_setup_rtl8723b(struct hci_dev *hdev, u16 lmp_subver,
const char *fw_name)
{
unsigned char *fw_data = NULL;
const struct firmware *fw;
int ret;
BT_INFO("%s: rtl: loading %s", hdev->name, fw_name);
ret = request_firmware(&fw, fw_name, &hdev->dev);
if (ret < 0) {
BT_ERR("%s: Failed to load %s", hdev->name, fw_name);
return ret;
}
ret = rtl8723b_parse_firmware(hdev, lmp_subver, fw, &fw_data);
if (ret < 0)
goto out;
ret = rtl_download_firmware(hdev, fw_data, ret);
kfree(fw_data);
if (ret < 0)
goto out;
out:
release_firmware(fw);
return ret;
}
static struct sk_buff *btrtl_read_local_version(struct hci_dev *hdev)
{
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION failed (%ld)",
hdev->name, PTR_ERR(skb));
return skb;
}
if (skb->len != sizeof(struct hci_rp_read_local_version)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION event length mismatch",
hdev->name);
kfree_skb(skb);
return ERR_PTR(-EIO);
}
return skb;
}
int btrtl_setup_realtek(struct hci_dev *hdev)
{
struct sk_buff *skb;
struct hci_rp_read_local_version *resp;
u16 lmp_subver;
skb = btrtl_read_local_version(hdev);
if (IS_ERR(skb))
return -PTR_ERR(skb);
resp = (struct hci_rp_read_local_version *)skb->data;
BT_INFO("%s: rtl: examining hci_ver=%02x hci_rev=%04x lmp_ver=%02x "
"lmp_subver=%04x", hdev->name, resp->hci_ver, resp->hci_rev,
resp->lmp_ver, resp->lmp_subver);
lmp_subver = le16_to_cpu(resp->lmp_subver);
kfree_skb(skb);
/* Match a set of subver values that correspond to stock firmware,
* which is not compatible with standard btusb.
* If matched, upload an alternative firmware that does conform to
* standard btusb. Once that firmware is uploaded, the subver changes
* to a different value.
*/
switch (lmp_subver) {
case RTL_ROM_LMP_8723A:
case RTL_ROM_LMP_3499:
return btrtl_setup_rtl8723a(hdev);
case RTL_ROM_LMP_8723B:
return btrtl_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8723b_fw.bin");
case RTL_ROM_LMP_8821A:
return btrtl_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8821a_fw.bin");
case RTL_ROM_LMP_8761A:
return btrtl_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8761a_fw.bin");
default:
BT_INFO("rtl: assuming no firmware upload needed.");
return 0;
}
}
EXPORT_SYMBOL_GPL(btrtl_setup_realtek);
MODULE_AUTHOR("Daniel Drake <drake@endlessm.com>");
MODULE_DESCRIPTION("Bluetooth support for Realtek devices ver " VERSION);
MODULE_VERSION(VERSION);
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,52 @@
/*
* Bluetooth support for Realtek devices
*
* Copyright (C) 2015 Endless Mobile, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#define RTL_FRAG_LEN 252
struct rtl_download_cmd {
__u8 index;
__u8 data[RTL_FRAG_LEN];
} __packed;
struct rtl_download_response {
__u8 status;
__u8 index;
} __packed;
struct rtl_rom_version_evt {
__u8 status;
__u8 version;
} __packed;
struct rtl_epatch_header {
__u8 signature[8];
__le32 fw_version;
__le16 num_patches;
} __packed;
#if IS_ENABLED(CONFIG_BT_RTL)
int btrtl_setup_realtek(struct hci_dev *hdev);
#else
static inline int btrtl_setup_realtek(struct hci_dev *hdev)
{
return -EOPNOTSUPP;
}
#endif

View File

@ -31,13 +31,14 @@
#include "btintel.h"
#include "btbcm.h"
#include "btrtl.h"
#define VERSION "0.8"
static bool disable_scofix;
static bool force_scofix;
static bool reset = 1;
static bool reset = true;
static struct usb_driver btusb_driver;
@ -178,6 +179,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
@ -186,6 +188,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300b), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300d), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
@ -211,6 +214,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
@ -265,7 +269,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x0e5e, 0x6622), .driver_info = BTUSB_BROKEN_ISOC },
/* Roper Class 1 Bluetooth Dongle (Silicon Wave based) */
{ USB_DEVICE(0x1300, 0x0001), .driver_info = BTUSB_SWAVE },
{ USB_DEVICE(0x1310, 0x0001), .driver_info = BTUSB_SWAVE },
/* Digianswer devices */
{ USB_DEVICE(0x08fd, 0x0001), .driver_info = BTUSB_DIGIANSWER },
@ -330,6 +334,7 @@ static const struct usb_device_id blacklist_table[] = {
#define BTUSB_FIRMWARE_LOADED 7
#define BTUSB_FIRMWARE_FAILED 8
#define BTUSB_BOOTING 9
#define BTUSB_RESET_RESUME 10
struct btusb_data {
struct hci_dev *hdev;
@ -1298,28 +1303,6 @@ static void btusb_waker(struct work_struct *work)
usb_autopm_put_interface(data->intf);
}
static struct sk_buff *btusb_read_local_version(struct hci_dev *hdev)
{
struct sk_buff *skb;
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION failed (%ld)",
hdev->name, PTR_ERR(skb));
return skb;
}
if (skb->len != sizeof(struct hci_rp_read_local_version)) {
BT_ERR("%s: HCI_OP_READ_LOCAL_VERSION event length mismatch",
hdev->name);
kfree_skb(skb);
return ERR_PTR(-EIO);
}
return skb;
}
static int btusb_setup_bcm92035(struct hci_dev *hdev)
{
struct sk_buff *skb;
@ -1340,408 +1323,40 @@ static int btusb_setup_csr(struct hci_dev *hdev)
{
struct hci_rp_read_local_version *rp;
struct sk_buff *skb;
int ret;
BT_DBG("%s", hdev->name);
skb = btusb_read_local_version(hdev);
if (IS_ERR(skb))
return -PTR_ERR(skb);
rp = (struct hci_rp_read_local_version *)skb->data;
if (!rp->status) {
if (le16_to_cpu(rp->manufacturer) != 10) {
/* Clear the reset quirk since this is not an actual
* early Bluetooth 1.1 device from CSR.
*/
clear_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
/* These fake CSR controllers have all a broken
* stored link key handling and so just disable it.
*/
set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY,
&hdev->quirks);
}
}
ret = -bt_to_errno(rp->status);
kfree_skb(skb);
return ret;
}
#define RTL_FRAG_LEN 252
struct rtl_download_cmd {
__u8 index;
__u8 data[RTL_FRAG_LEN];
} __packed;
struct rtl_download_response {
__u8 status;
__u8 index;
} __packed;
struct rtl_rom_version_evt {
__u8 status;
__u8 version;
} __packed;
struct rtl_epatch_header {
__u8 signature[8];
__le32 fw_version;
__le16 num_patches;
} __packed;
#define RTL_EPATCH_SIGNATURE "Realtech"
#define RTL_ROM_LMP_3499 0x3499
#define RTL_ROM_LMP_8723A 0x1200
#define RTL_ROM_LMP_8723B 0x8723
#define RTL_ROM_LMP_8821A 0x8821
#define RTL_ROM_LMP_8761A 0x8761
static int rtl_read_rom_version(struct hci_dev *hdev, u8 *version)
{
struct rtl_rom_version_evt *rom_version;
struct sk_buff *skb;
int ret;
/* Read RTL ROM version command */
skb = __hci_cmd_sync(hdev, 0xfc6d, 0, NULL, HCI_INIT_TIMEOUT);
skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: Read ROM version failed (%ld)",
hdev->name, PTR_ERR(skb));
return PTR_ERR(skb);
int err = PTR_ERR(skb);
BT_ERR("%s: CSR: Local version failed (%d)", hdev->name, err);
return err;
}
if (skb->len != sizeof(*rom_version)) {
BT_ERR("%s: RTL version event length mismatch", hdev->name);
if (skb->len != sizeof(struct hci_rp_read_local_version)) {
BT_ERR("%s: CSR: Local version length mismatch", hdev->name);
kfree_skb(skb);
return -EIO;
}
rom_version = (struct rtl_rom_version_evt *)skb->data;
BT_INFO("%s: rom_version status=%x version=%x",
hdev->name, rom_version->status, rom_version->version);
rp = (struct hci_rp_read_local_version *)skb->data;
ret = rom_version->status;
if (ret == 0)
*version = rom_version->version;
if (le16_to_cpu(rp->manufacturer) != 10) {
/* Clear the reset quirk since this is not an actual
* early Bluetooth 1.1 device from CSR.
*/
clear_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
/* These fake CSR controllers have all a broken
* stored link key handling and so just disable it.
*/
set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
}
kfree_skb(skb);
return ret;
}
static int rtl8723b_parse_firmware(struct hci_dev *hdev, u16 lmp_subver,
const struct firmware *fw,
unsigned char **_buf)
{
const u8 extension_sig[] = { 0x51, 0x04, 0xfd, 0x77 };
struct rtl_epatch_header *epatch_info;
unsigned char *buf;
int i, ret, len;
size_t min_size;
u8 opcode, length, data, rom_version = 0;
int project_id = -1;
const unsigned char *fwptr, *chip_id_base;
const unsigned char *patch_length_base, *patch_offset_base;
u32 patch_offset = 0;
u16 patch_length, num_patches;
const u16 project_id_to_lmp_subver[] = {
RTL_ROM_LMP_8723A,
RTL_ROM_LMP_8723B,
RTL_ROM_LMP_8821A,
RTL_ROM_LMP_8761A
};
ret = rtl_read_rom_version(hdev, &rom_version);
if (ret)
return -bt_to_errno(ret);
min_size = sizeof(struct rtl_epatch_header) + sizeof(extension_sig) + 3;
if (fw->size < min_size)
return -EINVAL;
fwptr = fw->data + fw->size - sizeof(extension_sig);
if (memcmp(fwptr, extension_sig, sizeof(extension_sig)) != 0) {
BT_ERR("%s: extension section signature mismatch", hdev->name);
return -EINVAL;
}
/* Loop from the end of the firmware parsing instructions, until
* we find an instruction that identifies the "project ID" for the
* hardware supported by this firmware file.
* Once we have that, we double-check that that project_id is suitable
* for the hardware we are working with.
*/
while (fwptr >= fw->data + (sizeof(struct rtl_epatch_header) + 3)) {
opcode = *--fwptr;
length = *--fwptr;
data = *--fwptr;
BT_DBG("check op=%x len=%x data=%x", opcode, length, data);
if (opcode == 0xff) /* EOF */
break;
if (length == 0) {
BT_ERR("%s: found instruction with length 0",
hdev->name);
return -EINVAL;
}
if (opcode == 0 && length == 1) {
project_id = data;
break;
}
fwptr -= length;
}
if (project_id < 0) {
BT_ERR("%s: failed to find version instruction", hdev->name);
return -EINVAL;
}
if (project_id >= ARRAY_SIZE(project_id_to_lmp_subver)) {
BT_ERR("%s: unknown project id %d", hdev->name, project_id);
return -EINVAL;
}
if (lmp_subver != project_id_to_lmp_subver[project_id]) {
BT_ERR("%s: firmware is for %x but this is a %x", hdev->name,
project_id_to_lmp_subver[project_id], lmp_subver);
return -EINVAL;
}
epatch_info = (struct rtl_epatch_header *)fw->data;
if (memcmp(epatch_info->signature, RTL_EPATCH_SIGNATURE, 8) != 0) {
BT_ERR("%s: bad EPATCH signature", hdev->name);
return -EINVAL;
}
num_patches = le16_to_cpu(epatch_info->num_patches);
BT_DBG("fw_version=%x, num_patches=%d",
le32_to_cpu(epatch_info->fw_version), num_patches);
/* After the rtl_epatch_header there is a funky patch metadata section.
* Assuming 2 patches, the layout is:
* ChipID1 ChipID2 PatchLength1 PatchLength2 PatchOffset1 PatchOffset2
*
* Find the right patch for this chip.
*/
min_size += 8 * num_patches;
if (fw->size < min_size)
return -EINVAL;
chip_id_base = fw->data + sizeof(struct rtl_epatch_header);
patch_length_base = chip_id_base + (sizeof(u16) * num_patches);
patch_offset_base = patch_length_base + (sizeof(u16) * num_patches);
for (i = 0; i < num_patches; i++) {
u16 chip_id = get_unaligned_le16(chip_id_base +
(i * sizeof(u16)));
if (chip_id == rom_version + 1) {
patch_length = get_unaligned_le16(patch_length_base +
(i * sizeof(u16)));
patch_offset = get_unaligned_le32(patch_offset_base +
(i * sizeof(u32)));
break;
}
}
if (!patch_offset) {
BT_ERR("%s: didn't find patch for chip id %d",
hdev->name, rom_version);
return -EINVAL;
}
BT_DBG("length=%x offset=%x index %d", patch_length, patch_offset, i);
min_size = patch_offset + patch_length;
if (fw->size < min_size)
return -EINVAL;
/* Copy the firmware into a new buffer and write the version at
* the end.
*/
len = patch_length;
buf = kmemdup(fw->data + patch_offset, patch_length, GFP_KERNEL);
if (!buf)
return -ENOMEM;
memcpy(buf + patch_length - 4, &epatch_info->fw_version, 4);
*_buf = buf;
return len;
}
static int rtl_download_firmware(struct hci_dev *hdev,
const unsigned char *data, int fw_len)
{
struct rtl_download_cmd *dl_cmd;
int frag_num = fw_len / RTL_FRAG_LEN + 1;
int frag_len = RTL_FRAG_LEN;
int ret = 0;
int i;
dl_cmd = kmalloc(sizeof(struct rtl_download_cmd), GFP_KERNEL);
if (!dl_cmd)
return -ENOMEM;
for (i = 0; i < frag_num; i++) {
struct rtl_download_response *dl_resp;
struct sk_buff *skb;
BT_DBG("download fw (%d/%d)", i, frag_num);
dl_cmd->index = i;
if (i == (frag_num - 1)) {
dl_cmd->index |= 0x80; /* data end */
frag_len = fw_len % RTL_FRAG_LEN;
}
memcpy(dl_cmd->data, data, frag_len);
/* Send download command */
skb = __hci_cmd_sync(hdev, 0xfc20, frag_len + 1, dl_cmd,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: download fw command failed (%ld)",
hdev->name, PTR_ERR(skb));
ret = -PTR_ERR(skb);
goto out;
}
if (skb->len != sizeof(*dl_resp)) {
BT_ERR("%s: download fw event length mismatch",
hdev->name);
kfree_skb(skb);
ret = -EIO;
goto out;
}
dl_resp = (struct rtl_download_response *)skb->data;
if (dl_resp->status != 0) {
kfree_skb(skb);
ret = bt_to_errno(dl_resp->status);
goto out;
}
kfree_skb(skb);
data += RTL_FRAG_LEN;
}
out:
kfree(dl_cmd);
return ret;
}
static int btusb_setup_rtl8723a(struct hci_dev *hdev)
{
struct btusb_data *data = dev_get_drvdata(&hdev->dev);
struct usb_device *udev = interface_to_usbdev(data->intf);
const struct firmware *fw;
int ret;
BT_INFO("%s: rtl: loading rtl_bt/rtl8723a_fw.bin", hdev->name);
ret = request_firmware(&fw, "rtl_bt/rtl8723a_fw.bin", &udev->dev);
if (ret < 0) {
BT_ERR("%s: Failed to load rtl_bt/rtl8723a_fw.bin", hdev->name);
return ret;
}
if (fw->size < 8) {
ret = -EINVAL;
goto out;
}
/* Check that the firmware doesn't have the epatch signature
* (which is only for RTL8723B and newer).
*/
if (!memcmp(fw->data, RTL_EPATCH_SIGNATURE, 8)) {
BT_ERR("%s: unexpected EPATCH signature!", hdev->name);
ret = -EINVAL;
goto out;
}
ret = rtl_download_firmware(hdev, fw->data, fw->size);
out:
release_firmware(fw);
return ret;
}
static int btusb_setup_rtl8723b(struct hci_dev *hdev, u16 lmp_subver,
const char *fw_name)
{
struct btusb_data *data = dev_get_drvdata(&hdev->dev);
struct usb_device *udev = interface_to_usbdev(data->intf);
unsigned char *fw_data = NULL;
const struct firmware *fw;
int ret;
BT_INFO("%s: rtl: loading %s", hdev->name, fw_name);
ret = request_firmware(&fw, fw_name, &udev->dev);
if (ret < 0) {
BT_ERR("%s: Failed to load %s", hdev->name, fw_name);
return ret;
}
ret = rtl8723b_parse_firmware(hdev, lmp_subver, fw, &fw_data);
if (ret < 0)
goto out;
ret = rtl_download_firmware(hdev, fw_data, ret);
kfree(fw_data);
if (ret < 0)
goto out;
out:
release_firmware(fw);
return ret;
}
static int btusb_setup_realtek(struct hci_dev *hdev)
{
struct sk_buff *skb;
struct hci_rp_read_local_version *resp;
u16 lmp_subver;
skb = btusb_read_local_version(hdev);
if (IS_ERR(skb))
return -PTR_ERR(skb);
resp = (struct hci_rp_read_local_version *)skb->data;
BT_INFO("%s: rtl: examining hci_ver=%02x hci_rev=%04x lmp_ver=%02x "
"lmp_subver=%04x", hdev->name, resp->hci_ver, resp->hci_rev,
resp->lmp_ver, resp->lmp_subver);
lmp_subver = le16_to_cpu(resp->lmp_subver);
kfree_skb(skb);
/* Match a set of subver values that correspond to stock firmware,
* which is not compatible with standard btusb.
* If matched, upload an alternative firmware that does conform to
* standard btusb. Once that firmware is uploaded, the subver changes
* to a different value.
*/
switch (lmp_subver) {
case RTL_ROM_LMP_8723A:
case RTL_ROM_LMP_3499:
return btusb_setup_rtl8723a(hdev);
case RTL_ROM_LMP_8723B:
return btusb_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8723b_fw.bin");
case RTL_ROM_LMP_8821A:
return btusb_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8821a_fw.bin");
case RTL_ROM_LMP_8761A:
return btusb_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8761a_fw.bin");
default:
BT_INFO("rtl: assuming no firmware upload needed.");
return 0;
}
return 0;
}
static const struct firmware *btusb_setup_intel_get_fw(struct hci_dev *hdev,
@ -1951,12 +1566,6 @@ static int btusb_setup_intel(struct hci_dev *hdev)
}
ver = (struct intel_version *)skb->data;
if (ver->status) {
BT_ERR("%s Intel fw version event failed (%02x)", hdev->name,
ver->status);
kfree_skb(skb);
return -bt_to_errno(ver->status);
}
BT_INFO("%s: read Intel version: %02x%02x%02x%02x%02x%02x%02x%02x%02x",
hdev->name, ver->hw_platform, ver->hw_variant,
@ -1990,6 +1599,8 @@ static int btusb_setup_intel(struct hci_dev *hdev)
}
fw_ptr = fw->data;
kfree_skb(skb);
/* This Intel specific command enables the manufacturer mode of the
* controller.
*
@ -2004,15 +1615,6 @@ static int btusb_setup_intel(struct hci_dev *hdev)
return PTR_ERR(skb);
}
if (skb->data[0]) {
u8 evt_status = skb->data[0];
BT_ERR("%s enable Intel manufacturer mode event failed (%02x)",
hdev->name, evt_status);
kfree_skb(skb);
release_firmware(fw);
return -bt_to_errno(evt_status);
}
kfree_skb(skb);
disable_patch = 1;
@ -2331,6 +1933,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
struct intel_boot_params *params;
const struct firmware *fw;
const u8 *fw_ptr;
u32 frag_len;
char fwname[64];
ktime_t calltime, delta, rettime;
unsigned long long duration;
@ -2358,13 +1961,6 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
}
ver = (struct intel_version *)skb->data;
if (ver->status) {
BT_ERR("%s: Intel version command failure (%02x)",
hdev->name, ver->status);
err = -bt_to_errno(ver->status);
kfree_skb(skb);
return err;
}
/* The hardware platform number has a fixed value of 0x37 and
* for now only accept this single value.
@ -2439,13 +2035,6 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
}
params = (struct intel_boot_params *)skb->data;
if (params->status) {
BT_ERR("%s: Intel boot parameters command failure (%02x)",
hdev->name, params->status);
err = -bt_to_errno(params->status);
kfree_skb(skb);
return err;
}
BT_INFO("%s: Device revision is %u", hdev->name,
le16_to_cpu(params->dev_revid));
@ -2495,6 +2084,12 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
BT_INFO("%s: Found device firmware: %s", hdev->name, fwname);
/* Save the DDC file name for later use to apply once the firmware
* downloading is done.
*/
snprintf(fwname, sizeof(fwname), "intel/ibt-11-%u.ddc",
le16_to_cpu(params->dev_revid));
kfree_skb(skb);
if (fw->size < 644) {
@ -2537,24 +2132,33 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
}
fw_ptr = fw->data + 644;
frag_len = 0;
while (fw_ptr - fw->data < fw->size) {
struct hci_command_hdr *cmd = (void *)fw_ptr;
u8 cmd_len;
struct hci_command_hdr *cmd = (void *)(fw_ptr + frag_len);
cmd_len = sizeof(*cmd) + cmd->plen;
frag_len += sizeof(*cmd) + cmd->plen;
/* Send each command from the firmware data buffer as
* a single Data fragment.
/* The parameter length of the secure send command requires
* a 4 byte alignment. As it happens, the firmware file
* contains proper Intel_NOP commands to align the fragments
* as needed.
*
* Send a set of commands with 4 byte alignment from the
* firmware data buffer as a single Data fragment.
*/
err = btusb_intel_secure_send(hdev, 0x01, cmd_len, fw_ptr);
if (err < 0) {
BT_ERR("%s: Failed to send firmware data (%d)",
hdev->name, err);
goto done;
}
if (!(frag_len % 4)) {
err = btusb_intel_secure_send(hdev, 0x01, frag_len,
fw_ptr);
if (err < 0) {
BT_ERR("%s: Failed to send firmware data (%d)",
hdev->name, err);
goto done;
}
fw_ptr += cmd_len;
fw_ptr += frag_len;
frag_len = 0;
}
}
set_bit(BTUSB_FIRMWARE_LOADED, &data->flags);
@ -2647,6 +2251,43 @@ done:
clear_bit(BTUSB_BOOTLOADER, &data->flags);
/* Once the device is running in operational mode, it needs to apply
* the device configuration (DDC) parameters.
*
* The device can work without DDC parameters, so even if it fails
* to load the file, no need to fail the setup.
*/
err = request_firmware_direct(&fw, fwname, &hdev->dev);
if (err < 0)
return 0;
BT_INFO("%s: Found Intel DDC parameters: %s", hdev->name, fwname);
fw_ptr = fw->data;
/* DDC file contains one or more DDC structures, each of which has
* Length (1 byte), DDC ID (2 bytes), and DDC value (Length - 2).
*/
while (fw->size > fw_ptr - fw->data) {
u8 cmd_plen = fw_ptr[0] + sizeof(u8);
skb = __hci_cmd_sync(hdev, 0xfc8b, cmd_plen, fw_ptr,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: Failed to send Intel_Write_DDC (%ld)",
hdev->name, PTR_ERR(skb));
release_firmware(fw);
return PTR_ERR(skb);
}
fw_ptr += cmd_plen;
kfree_skb(skb);
}
release_firmware(fw);
BT_INFO("%s: Applying Intel DDC parameters completed", hdev->name);
return 0;
}
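
The DDC layout described in the comment above (a Length byte, a 2-byte DDC ID, and Length - 2 value bytes) can also be illustrated with a short stand-alone walker. This is an editor's sketch only; the helper name is hypothetical and the little-endian ID decoding is an assumption, since the kernel code simply forwards each structure without decoding it.

/* Illustrative only: walk a DDC blob using the layout described above.
 * Hypothetical helper; little-endian DDC ID is an assumption.
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

static void ddc_dump(const uint8_t *data, size_t size)
{
	size_t off = 0;

	while (off < size) {
		uint8_t len = data[off];	/* Length byte */
		unsigned int id;

		if (len < 2 || off + 1 + len > size)
			break;			/* truncated entry */

		id = data[off + 1] | (data[off + 2] << 8);
		printf("DDC id 0x%04x, %d value byte(s)\n", id, len - 2);

		off += 1 + len;			/* next structure */
	}
}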
@ -2678,13 +2319,6 @@ static void btusb_hw_error_intel(struct hci_dev *hdev, u8 code)
return;
}
if (skb->data[0] != 0x00) {
BT_ERR("%s: Exception info command failure (%02x)",
hdev->name, skb->data[0]);
kfree_skb(skb);
return;
}
BT_ERR("%s: Exception info %s", hdev->name, (char *)(skb->data + 1));
kfree_skb(skb);
@ -2792,6 +2426,7 @@ struct qca_device_info {
static const struct qca_device_info qca_devices_table[] = {
{ 0x00000100, 20, 4, 10 }, /* Rome 1.0 */
{ 0x00000101, 20, 4, 10 }, /* Rome 1.1 */
{ 0x00000200, 28, 4, 18 }, /* Rome 2.0 */
{ 0x00000201, 28, 4, 18 }, /* Rome 2.1 */
{ 0x00000300, 28, 4, 18 }, /* Rome 3.0 */
{ 0x00000302, 28, 4, 18 }, /* Rome 3.2 */
@ -3175,8 +2810,17 @@ static int btusb_probe(struct usb_interface *intf,
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
}
if (id->driver_info & BTUSB_REALTEK)
hdev->setup = btusb_setup_realtek;
#ifdef CONFIG_BT_HCIBTUSB_RTL
if (id->driver_info & BTUSB_REALTEK) {
hdev->setup = btrtl_setup_realtek;
/* Realtek devices lose their updated firmware over suspend,
* but the USB hub doesn't notice any status change.
* Explicitly request a device reset on resume.
*/
set_bit(BTUSB_RESET_RESUME, &data->flags);
}
#endif
if (id->driver_info & BTUSB_AMP) {
/* AMP controllers do not support SCO packets */
@ -3308,6 +2952,14 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
btusb_stop_traffic(data);
usb_kill_anchored_urbs(&data->tx_anchor);
/* Optionally request a device reset on resume, but only when
* wakeups are disabled. If wakeups are enabled we assume the
* device will stay powered up throughout suspend.
*/
if (test_bit(BTUSB_RESET_RESUME, &data->flags) &&
!device_may_wakeup(&data->udev->dev))
data->udev->reset_resume = 1;
return 0;
}

View File

@ -22,7 +22,7 @@
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#define DEBUG
#include <linux/platform_device.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>

View File

@ -192,6 +192,7 @@ static int ath_recv(struct hci_uart *hu, const void *data, int count)
if (IS_ERR(ath->rx_skb)) {
int err = PTR_ERR(ath->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
ath->rx_skb = NULL;
return err;
}

View File

@ -24,6 +24,7 @@
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/firmware.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@ -36,6 +37,55 @@ struct bcm_data {
struct sk_buff_head txq;
};
static int bcm_set_baudrate(struct hci_uart *hu, unsigned int speed)
{
struct hci_dev *hdev = hu->hdev;
struct sk_buff *skb;
struct bcm_update_uart_baud_rate param;
if (speed > 3000000) {
struct bcm_write_uart_clock_setting clock;
clock.type = BCM_UART_CLOCK_48MHZ;
BT_DBG("%s: Set Controller clock (%d)", hdev->name, clock.type);
/* This Broadcom specific command changes the UART's controller
* clock for baud rate > 3000000.
*/
skb = __hci_cmd_sync(hdev, 0xfc45, 1, &clock, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
int err = PTR_ERR(skb);
BT_ERR("%s: BCM: failed to write clock command (%d)",
hdev->name, err);
return err;
}
kfree_skb(skb);
}
BT_DBG("%s: Set Controller UART speed to %d bit/s", hdev->name, speed);
param.zero = cpu_to_le16(0);
param.baud_rate = cpu_to_le32(speed);
/* This Broadcom specific command changes the UART's controller baud
* rate.
*/
skb = __hci_cmd_sync(hdev, 0xfc18, sizeof(param), &param,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
int err = PTR_ERR(skb);
BT_ERR("%s: BCM: failed to write update baudrate command (%d)",
hdev->name, err);
return err;
}
kfree_skb(skb);
return 0;
}
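As a worked example of the two vendor commands above (assuming the parameter structs are packed to their wire size, which is expected but not shown in this hunk): requesting 4000000 bit/s first triggers the 0xfc45 write with the single byte BCM_UART_CLOCK_48MHZ, because 4000000 > 3000000; the 0xfc18 command then carries a 6-byte payload of 00 00 (the zero field) followed by 00 09 3d 00, i.e. 0x003d0900 = 4,000,000 in little-endian order.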
static int bcm_open(struct hci_uart *hu)
{
struct bcm_data *bcm;
@ -79,11 +129,62 @@ static int bcm_flush(struct hci_uart *hu)
static int bcm_setup(struct hci_uart *hu)
{
char fw_name[64];
const struct firmware *fw;
unsigned int speed;
int err;
BT_DBG("hu %p", hu);
hu->hdev->set_bdaddr = btbcm_set_bdaddr;
return btbcm_setup_patchram(hu->hdev);
err = btbcm_initialize(hu->hdev, fw_name, sizeof(fw_name));
if (err)
return err;
err = request_firmware(&fw, fw_name, &hu->hdev->dev);
if (err < 0) {
BT_INFO("%s: BCM: Patch %s not found", hu->hdev->name, fw_name);
return 0;
}
err = btbcm_patchram(hu->hdev, fw);
if (err) {
BT_INFO("%s: BCM: Patch failed (%d)", hu->hdev->name, err);
goto finalize;
}
/* Init speed if any */
if (hu->init_speed)
speed = hu->init_speed;
else if (hu->proto->init_speed)
speed = hu->proto->init_speed;
else
speed = 0;
if (speed)
hci_uart_set_baudrate(hu, speed);
/* Operational speed if any */
if (hu->oper_speed)
speed = hu->oper_speed;
else if (hu->proto->oper_speed)
speed = hu->proto->oper_speed;
else
speed = 0;
if (speed) {
err = bcm_set_baudrate(hu, speed);
if (!err)
hci_uart_set_baudrate(hu, speed);
}
finalize:
release_firmware(fw);
err = btbcm_finalize(hu->hdev);
return err;
}
static const struct h4_recv_pkt bcm_recv_pkts[] = {
@ -104,6 +205,7 @@ static int bcm_recv(struct hci_uart *hu, const void *data, int count)
if (IS_ERR(bcm->rx_skb)) {
int err = PTR_ERR(bcm->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
bcm->rx_skb = NULL;
return err;
}
@ -133,10 +235,13 @@ static struct sk_buff *bcm_dequeue(struct hci_uart *hu)
static const struct hci_uart_proto bcm_proto = {
.id = HCI_UART_BCM,
.name = "BCM",
.init_speed = 115200,
.oper_speed = 4000000,
.open = bcm_open,
.close = bcm_close,
.flush = bcm_flush,
.setup = bcm_setup,
.set_baudrate = bcm_set_baudrate,
.recv = bcm_recv,
.enqueue = bcm_enqueue,
.dequeue = bcm_dequeue,

View File

@ -47,8 +47,8 @@
#include "hci_uart.h"
static bool txcrc = 1;
static bool hciextn = 1;
static bool txcrc = true;
static bool hciextn = true;
#define BCSP_TXWINSIZE 4
@ -436,7 +436,7 @@ static inline void bcsp_unslip_one_byte(struct bcsp_struct *bcsp, unsigned char
break;
default:
memcpy(skb_put(bcsp->rx_skb, 1), &byte, 1);
if ((bcsp->rx_skb-> data[0] & 0x40) != 0 &&
if ((bcsp->rx_skb->data[0] & 0x40) != 0 &&
bcsp->rx_state != BCSP_W4_CRC)
bcsp_crc_update(&bcsp->message_crc, byte);
bcsp->rx_count--;
@ -447,24 +447,24 @@ static inline void bcsp_unslip_one_byte(struct bcsp_struct *bcsp, unsigned char
switch (byte) {
case 0xdc:
memcpy(skb_put(bcsp->rx_skb, 1), &c0, 1);
if ((bcsp->rx_skb-> data[0] & 0x40) != 0 &&
if ((bcsp->rx_skb->data[0] & 0x40) != 0 &&
bcsp->rx_state != BCSP_W4_CRC)
bcsp_crc_update(&bcsp-> message_crc, 0xc0);
bcsp_crc_update(&bcsp->message_crc, 0xc0);
bcsp->rx_esc_state = BCSP_ESCSTATE_NOESC;
bcsp->rx_count--;
break;
case 0xdd:
memcpy(skb_put(bcsp->rx_skb, 1), &db, 1);
if ((bcsp->rx_skb-> data[0] & 0x40) != 0 &&
if ((bcsp->rx_skb->data[0] & 0x40) != 0 &&
bcsp->rx_state != BCSP_W4_CRC)
bcsp_crc_update(&bcsp-> message_crc, 0xdb);
bcsp_crc_update(&bcsp->message_crc, 0xdb);
bcsp->rx_esc_state = BCSP_ESCSTATE_NOESC;
bcsp->rx_count--;
break;
default:
BT_ERR ("Invalid byte %02x after esc byte", byte);
BT_ERR("Invalid byte %02x after esc byte", byte);
kfree_skb(bcsp->rx_skb);
bcsp->rx_skb = NULL;
bcsp->rx_state = BCSP_W4_PKT_DELIMITER;
@ -527,7 +527,7 @@ static void bcsp_complete_rx_pkt(struct hci_uart *hu)
hci_recv_frame(hu->hdev, bcsp->rx_skb);
} else {
BT_ERR ("Packet for unknown channel (%u %s)",
BT_ERR("Packet for unknown channel (%u %s)",
bcsp->rx_skb->data[1] & 0x0f,
bcsp->rx_skb->data[0] & 0x80 ?
"reliable" : "unreliable");
@ -587,7 +587,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
}
if (bcsp->rx_skb->data[0] & 0x80 /* reliable pkt */
&& (bcsp->rx_skb->data[0] & 0x07) != bcsp->rxseq_txack) {
BT_ERR ("Out-of-order packet arrived, got %u expected %u",
BT_ERR("Out-of-order packet arrived, got %u expected %u",
bcsp->rx_skb->data[0] & 0x07, bcsp->rxseq_txack);
kfree_skb(bcsp->rx_skb);

View File

@ -133,6 +133,7 @@ static int h4_recv(struct hci_uart *hu, const void *data, int count)
if (IS_ERR(h4->rx_skb)) {
int err = PTR_ERR(h4->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
h4->rx_skb = NULL;
return err;
}

View File

@ -40,6 +40,7 @@
#include <linux/signal.h>
#include <linux/ioctl.h>
#include <linux/skbuff.h>
#include <linux/firmware.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@ -265,11 +266,133 @@ static int hci_uart_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
return 0;
}
/* Flow control or un-flow control the device */
void hci_uart_set_flow_control(struct hci_uart *hu, bool enable)
{
struct tty_struct *tty = hu->tty;
struct ktermios ktermios;
int status;
unsigned int set = 0;
unsigned int clear = 0;
if (enable) {
/* Disable hardware flow control */
ktermios = tty->termios;
ktermios.c_cflag &= ~CRTSCTS;
status = tty_set_termios(tty, &ktermios);
BT_DBG("Disabling hardware flow control: %s",
status ? "failed" : "success");
/* Clear RTS to prevent the device from sending */
/* Most UARTs need OUT2 to enable interrupts */
status = tty->driver->ops->tiocmget(tty);
BT_DBG("Current tiocm 0x%x", status);
set &= ~(TIOCM_OUT2 | TIOCM_RTS);
clear = ~set;
set &= TIOCM_DTR | TIOCM_RTS | TIOCM_OUT1 |
TIOCM_OUT2 | TIOCM_LOOP;
clear &= TIOCM_DTR | TIOCM_RTS | TIOCM_OUT1 |
TIOCM_OUT2 | TIOCM_LOOP;
status = tty->driver->ops->tiocmset(tty, set, clear);
BT_DBG("Clearing RTS: %s", status ? "failed" : "success");
} else {
/* Set RTS to allow the device to send again */
status = tty->driver->ops->tiocmget(tty);
BT_DBG("Current tiocm 0x%x", status);
set |= (TIOCM_OUT2 | TIOCM_RTS);
clear = ~set;
set &= TIOCM_DTR | TIOCM_RTS | TIOCM_OUT1 |
TIOCM_OUT2 | TIOCM_LOOP;
clear &= TIOCM_DTR | TIOCM_RTS | TIOCM_OUT1 |
TIOCM_OUT2 | TIOCM_LOOP;
status = tty->driver->ops->tiocmset(tty, set, clear);
BT_DBG("Setting RTS: %s", status ? "failed" : "success");
/* Re-enable hardware flow control */
ktermios = tty->termios;
ktermios.c_cflag |= CRTSCTS;
status = tty_set_termios(tty, &ktermios);
BT_DBG("Enabling hardware flow control: %s",
status ? "failed" : "success");
}
}
void hci_uart_set_speeds(struct hci_uart *hu, unsigned int init_speed,
unsigned int oper_speed)
{
hu->init_speed = init_speed;
hu->oper_speed = oper_speed;
}
void hci_uart_init_tty(struct hci_uart *hu)
{
struct tty_struct *tty = hu->tty;
struct ktermios ktermios;
/* Bring the UART into a known 8 bits no parity hw fc state */
ktermios = tty->termios;
ktermios.c_iflag &= ~(IGNBRK | BRKINT | PARMRK | ISTRIP |
INLCR | IGNCR | ICRNL | IXON);
ktermios.c_oflag &= ~OPOST;
ktermios.c_lflag &= ~(ECHO | ECHONL | ICANON | ISIG | IEXTEN);
ktermios.c_cflag &= ~(CSIZE | PARENB);
ktermios.c_cflag |= CS8;
ktermios.c_cflag |= CRTSCTS;
/* tty_set_termios() return not checked as it is always 0 */
tty_set_termios(tty, &ktermios);
}
void hci_uart_set_baudrate(struct hci_uart *hu, unsigned int speed)
{
struct tty_struct *tty = hu->tty;
struct ktermios ktermios;
ktermios = tty->termios;
ktermios.c_cflag &= ~CBAUD;
tty_termios_encode_baud_rate(&ktermios, speed, speed);
/* tty_set_termios() return not checked as it is always 0 */
tty_set_termios(tty, &ktermios);
BT_DBG("%s: New tty speeds: %d/%d", hu->hdev->name,
tty->termios.c_ispeed, tty->termios.c_ospeed);
}
static int hci_uart_setup(struct hci_dev *hdev)
{
struct hci_uart *hu = hci_get_drvdata(hdev);
struct hci_rp_read_local_version *ver;
struct sk_buff *skb;
unsigned int speed;
int err;
/* Init speed if any */
if (hu->init_speed)
speed = hu->init_speed;
else if (hu->proto->init_speed)
speed = hu->proto->init_speed;
else
speed = 0;
if (speed)
hci_uart_set_baudrate(hu, speed);
/* Operational speed if any */
if (hu->oper_speed)
speed = hu->oper_speed;
else if (hu->proto->oper_speed)
speed = hu->proto->oper_speed;
else
speed = 0;
if (hu->proto->set_baudrate && speed) {
err = hu->proto->set_baudrate(hu, speed);
if (!err)
hci_uart_set_baudrate(hu, speed);
}
if (hu->proto->setup)
return hu->proto->setup(hu);
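Read together with the BCM proto earlier in this series, the ordering here is the important part: the tty is first tuned to the 115200 bit/s init_speed, the protocol's set_baudrate() callback (e.g. bcm_set_baudrate()) tells the controller to switch over that slow link, and only if that vendor command succeeds is hci_uart_set_baudrate() called to move the local tty to the 4000000 bit/s oper_speed, so host and controller never end up listening at different rates.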

View File

@ -58,10 +58,13 @@ struct hci_uart;
struct hci_uart_proto {
unsigned int id;
const char *name;
unsigned int init_speed;
unsigned int oper_speed;
int (*open)(struct hci_uart *hu);
int (*close)(struct hci_uart *hu);
int (*flush)(struct hci_uart *hu);
int (*setup)(struct hci_uart *hu);
int (*set_baudrate)(struct hci_uart *hu, unsigned int speed);
int (*recv)(struct hci_uart *hu, const void *data, int len);
int (*enqueue)(struct hci_uart *hu, struct sk_buff *skb);
struct sk_buff *(*dequeue)(struct hci_uart *hu);
@ -82,6 +85,9 @@ struct hci_uart {
struct sk_buff *tx_skb;
unsigned long tx_state;
spinlock_t rx_lock;
unsigned int init_speed;
unsigned int oper_speed;
};
/* HCI_UART proto flag bits */
@ -96,6 +102,11 @@ int hci_uart_register_proto(const struct hci_uart_proto *p);
int hci_uart_unregister_proto(const struct hci_uart_proto *p);
int hci_uart_tx_wakeup(struct hci_uart *hu);
int hci_uart_init_ready(struct hci_uart *hu);
void hci_uart_init_tty(struct hci_uart *hu);
void hci_uart_set_baudrate(struct hci_uart *hu, unsigned int speed);
void hci_uart_set_flow_control(struct hci_uart *hu, bool enable);
void hci_uart_set_speeds(struct hci_uart *hu, unsigned int init_speed,
unsigned int oper_speed);
#ifdef CONFIG_BT_HCIUART_H4
int h4_init(void);

View File

@ -366,7 +366,7 @@ static const struct file_operations vhci_fops = {
.llseek = no_llseek,
};
static struct miscdevice vhci_miscdev= {
static struct miscdevice vhci_miscdev = {
.name = "vhci",
.fops = &vhci_fops,
.minor = VHCI_MINOR,

View File

@ -140,12 +140,47 @@ static struct clk_regmap pll14_vote = {
},
};
#define NSS_PLL_RATE(f, _l, _m, _n, i) \
{ \
.freq = f, \
.l = _l, \
.m = _m, \
.n = _n, \
.ibits = i, \
}
static struct pll_freq_tbl pll18_freq_tbl[] = {
NSS_PLL_RATE(550000000, 44, 0, 1, 0x01495625),
NSS_PLL_RATE(733000000, 58, 16, 25, 0x014b5625),
};
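A rough sanity check on the L/M/N values, assuming the 25 MHz PXO reference and the PLL18 post divider programmed to 2 (both assumptions, not stated in this hunk): 25 MHz * 44 / 2 = 550 MHz, and 25 MHz * (58 + 16/25) / 2 = 25 MHz * 58.64 / 2 = 733 MHz, matching the two table rates.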
static struct clk_pll pll18 = {
.l_reg = 0x31a4,
.m_reg = 0x31a8,
.n_reg = 0x31ac,
.config_reg = 0x31b4,
.mode_reg = 0x31a0,
.status_reg = 0x31b8,
.status_bit = 16,
.post_div_shift = 16,
.post_div_width = 1,
.freq_tbl = pll18_freq_tbl,
.clkr.hw.init = &(struct clk_init_data){
.name = "pll18",
.parent_names = (const char *[]){ "pxo" },
.num_parents = 1,
.ops = &clk_pll_ops,
},
};
enum {
P_PXO,
P_PLL8,
P_PLL3,
P_PLL0,
P_CXO,
P_PLL14,
P_PLL18,
};
static const struct parent_map gcc_pxo_pll8_map[] = {
@ -197,6 +232,22 @@ static const char *gcc_pxo_pll8_pll0_map[] = {
"pll0_vote",
};
static const struct parent_map gcc_pxo_pll8_pll14_pll18_pll0_map[] = {
{ P_PXO, 0 },
{ P_PLL8, 4 },
{ P_PLL0, 2 },
{ P_PLL14, 5 },
{ P_PLL18, 1 }
};
static const char *gcc_pxo_pll8_pll14_pll18_pll0[] = {
"pxo",
"pll8_vote",
"pll0_vote",
"pll14",
"pll18",
};
static struct freq_tbl clk_tbl_gsbi_uart[] = {
{ 1843200, P_PLL8, 2, 6, 625 },
{ 3686400, P_PLL8, 2, 12, 625 },
@ -2202,6 +2253,472 @@ static struct clk_branch ebi2_aon_clk = {
},
};
static const struct freq_tbl clk_tbl_gmac[] = {
{ 133000000, P_PLL0, 1, 50, 301 },
{ 266000000, P_PLL0, 1, 127, 382 },
{ }
};
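Assuming PLL0 runs at its customary 800 MHz on this SoC (an assumption, not shown in this diff), the M/N pairs line up with the advertised rates: 800 MHz * 50 / 301 ≈ 132.9 MHz and 800 MHz * 127 / 382 ≈ 266.0 MHz, both with a pre divider of 1.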
static struct clk_dyn_rcg gmac_core1_src = {
.ns_reg[0] = 0x3cac,
.ns_reg[1] = 0x3cb0,
.md_reg[0] = 0x3ca4,
.md_reg[1] = 0x3ca8,
.bank_reg = 0x3ca0,
.mn[0] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.mn[1] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.s[0] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.s[1] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.p[0] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.p[1] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.mux_sel_bit = 0,
.freq_tbl = clk_tbl_gmac,
.clkr = {
.enable_reg = 0x3ca0,
.enable_mask = BIT(1),
.hw.init = &(struct clk_init_data){
.name = "gmac_core1_src",
.parent_names = gcc_pxo_pll8_pll14_pll18_pll0,
.num_parents = 5,
.ops = &clk_dyn_rcg_ops,
},
},
};
static struct clk_branch gmac_core1_clk = {
.halt_reg = 0x3c20,
.halt_bit = 4,
.hwcg_reg = 0x3cb4,
.hwcg_bit = 6,
.clkr = {
.enable_reg = 0x3cb4,
.enable_mask = BIT(4),
.hw.init = &(struct clk_init_data){
.name = "gmac_core1_clk",
.parent_names = (const char *[]){
"gmac_core1_src",
},
.num_parents = 1,
.ops = &clk_branch_ops,
.flags = CLK_SET_RATE_PARENT,
},
},
};
static struct clk_dyn_rcg gmac_core2_src = {
.ns_reg[0] = 0x3ccc,
.ns_reg[1] = 0x3cd0,
.md_reg[0] = 0x3cc4,
.md_reg[1] = 0x3cc8,
.bank_reg = 0x3ca0,
.mn[0] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.mn[1] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.s[0] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.s[1] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.p[0] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.p[1] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.mux_sel_bit = 0,
.freq_tbl = clk_tbl_gmac,
.clkr = {
.enable_reg = 0x3cc0,
.enable_mask = BIT(1),
.hw.init = &(struct clk_init_data){
.name = "gmac_core2_src",
.parent_names = gcc_pxo_pll8_pll14_pll18_pll0,
.num_parents = 5,
.ops = &clk_dyn_rcg_ops,
},
},
};
static struct clk_branch gmac_core2_clk = {
.halt_reg = 0x3c20,
.halt_bit = 5,
.hwcg_reg = 0x3cd4,
.hwcg_bit = 6,
.clkr = {
.enable_reg = 0x3cd4,
.enable_mask = BIT(4),
.hw.init = &(struct clk_init_data){
.name = "gmac_core2_clk",
.parent_names = (const char *[]){
"gmac_core2_src",
},
.num_parents = 1,
.ops = &clk_branch_ops,
.flags = CLK_SET_RATE_PARENT,
},
},
};
static struct clk_dyn_rcg gmac_core3_src = {
.ns_reg[0] = 0x3cec,
.ns_reg[1] = 0x3cf0,
.md_reg[0] = 0x3ce4,
.md_reg[1] = 0x3ce8,
.bank_reg = 0x3ce0,
.mn[0] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.mn[1] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.s[0] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.s[1] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.p[0] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.p[1] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.mux_sel_bit = 0,
.freq_tbl = clk_tbl_gmac,
.clkr = {
.enable_reg = 0x3ce0,
.enable_mask = BIT(1),
.hw.init = &(struct clk_init_data){
.name = "gmac_core3_src",
.parent_names = gcc_pxo_pll8_pll14_pll18_pll0,
.num_parents = 5,
.ops = &clk_dyn_rcg_ops,
},
},
};
static struct clk_branch gmac_core3_clk = {
.halt_reg = 0x3c20,
.halt_bit = 6,
.hwcg_reg = 0x3cf4,
.hwcg_bit = 6,
.clkr = {
.enable_reg = 0x3cf4,
.enable_mask = BIT(4),
.hw.init = &(struct clk_init_data){
.name = "gmac_core3_clk",
.parent_names = (const char *[]){
"gmac_core3_src",
},
.num_parents = 1,
.ops = &clk_branch_ops,
.flags = CLK_SET_RATE_PARENT,
},
},
};
static struct clk_dyn_rcg gmac_core4_src = {
.ns_reg[0] = 0x3d0c,
.ns_reg[1] = 0x3d10,
.md_reg[0] = 0x3d04,
.md_reg[1] = 0x3d08,
.bank_reg = 0x3d00,
.mn[0] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.mn[1] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.s[0] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.s[1] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.p[0] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.p[1] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.mux_sel_bit = 0,
.freq_tbl = clk_tbl_gmac,
.clkr = {
.enable_reg = 0x3d00,
.enable_mask = BIT(1),
.hw.init = &(struct clk_init_data){
.name = "gmac_core4_src",
.parent_names = gcc_pxo_pll8_pll14_pll18_pll0,
.num_parents = 5,
.ops = &clk_dyn_rcg_ops,
},
},
};
static struct clk_branch gmac_core4_clk = {
.halt_reg = 0x3c20,
.halt_bit = 7,
.hwcg_reg = 0x3d14,
.hwcg_bit = 6,
.clkr = {
.enable_reg = 0x3d14,
.enable_mask = BIT(4),
.hw.init = &(struct clk_init_data){
.name = "gmac_core4_clk",
.parent_names = (const char *[]){
"gmac_core4_src",
},
.num_parents = 1,
.ops = &clk_branch_ops,
.flags = CLK_SET_RATE_PARENT,
},
},
};
static const struct freq_tbl clk_tbl_nss_tcm[] = {
{ 266000000, P_PLL0, 3, 0, 0 },
{ 400000000, P_PLL0, 2, 0, 0 },
{ }
};
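Under the same assumed 800 MHz PLL0, these entries are pure pre-divider settings (M = N = 0 bypasses the MN counter): 800 MHz / 3 ≈ 266.7 MHz and 800 MHz / 2 = 400 MHz.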
static struct clk_dyn_rcg nss_tcm_src = {
.ns_reg[0] = 0x3dc4,
.ns_reg[1] = 0x3dc8,
.bank_reg = 0x3dc0,
.s[0] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.s[1] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.p[0] = {
.pre_div_shift = 3,
.pre_div_width = 4,
},
.p[1] = {
.pre_div_shift = 3,
.pre_div_width = 4,
},
.mux_sel_bit = 0,
.freq_tbl = clk_tbl_nss_tcm,
.clkr = {
.enable_reg = 0x3dc0,
.enable_mask = BIT(1),
.hw.init = &(struct clk_init_data){
.name = "nss_tcm_src",
.parent_names = gcc_pxo_pll8_pll14_pll18_pll0,
.num_parents = 5,
.ops = &clk_dyn_rcg_ops,
},
},
};
static struct clk_branch nss_tcm_clk = {
.halt_reg = 0x3c20,
.halt_bit = 14,
.clkr = {
.enable_reg = 0x3dd0,
.enable_mask = BIT(6) | BIT(4),
.hw.init = &(struct clk_init_data){
.name = "nss_tcm_clk",
.parent_names = (const char *[]){
"nss_tcm_src",
},
.num_parents = 1,
.ops = &clk_branch_ops,
.flags = CLK_SET_RATE_PARENT,
},
},
};
static const struct freq_tbl clk_tbl_nss[] = {
{ 110000000, P_PLL18, 1, 1, 5 },
{ 275000000, P_PLL18, 2, 0, 0 },
{ 550000000, P_PLL18, 1, 0, 0 },
{ 733000000, P_PLL18, 1, 0, 0 },
{ }
};
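Taking PLL18 at its 550 MHz setting from pll18_freq_tbl above, the first three rows follow directly: 550 MHz * 1/5 = 110 MHz, 550 MHz / 2 = 275 MHz and 550 MHz / 1 = 550 MHz; the 733 MHz row only works once PLL18 itself is reprogrammed to 733 MHz, which is what the CLK_SET_RATE_PARENT flag on the ubi32 source clocks below allows.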
static struct clk_dyn_rcg ubi32_core1_src_clk = {
.ns_reg[0] = 0x3d2c,
.ns_reg[1] = 0x3d30,
.md_reg[0] = 0x3d24,
.md_reg[1] = 0x3d28,
.bank_reg = 0x3d20,
.mn[0] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.mn[1] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.s[0] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.s[1] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.p[0] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.p[1] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.mux_sel_bit = 0,
.freq_tbl = clk_tbl_nss,
.clkr = {
.enable_reg = 0x3d20,
.enable_mask = BIT(1),
.hw.init = &(struct clk_init_data){
.name = "ubi32_core1_src_clk",
.parent_names = gcc_pxo_pll8_pll14_pll18_pll0,
.num_parents = 5,
.ops = &clk_dyn_rcg_ops,
.flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE,
},
},
};
static struct clk_dyn_rcg ubi32_core2_src_clk = {
.ns_reg[0] = 0x3d4c,
.ns_reg[1] = 0x3d50,
.md_reg[0] = 0x3d44,
.md_reg[1] = 0x3d48,
.bank_reg = 0x3d40,
.mn[0] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.mn[1] = {
.mnctr_en_bit = 8,
.mnctr_reset_bit = 7,
.mnctr_mode_shift = 5,
.n_val_shift = 16,
.m_val_shift = 16,
.width = 8,
},
.s[0] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.s[1] = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_pll14_pll18_pll0_map,
},
.p[0] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.p[1] = {
.pre_div_shift = 3,
.pre_div_width = 2,
},
.mux_sel_bit = 0,
.freq_tbl = clk_tbl_nss,
.clkr = {
.enable_reg = 0x3d40,
.enable_mask = BIT(1),
.hw.init = &(struct clk_init_data){
.name = "ubi32_core2_src_clk",
.parent_names = gcc_pxo_pll8_pll14_pll18_pll0,
.num_parents = 5,
.ops = &clk_dyn_rcg_ops,
.flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE,
},
},
};
static struct clk_regmap *gcc_ipq806x_clks[] = {
[PLL0] = &pll0.clkr,
[PLL0_VOTE] = &pll0_vote,
@ -2211,6 +2728,7 @@ static struct clk_regmap *gcc_ipq806x_clks[] = {
[PLL8_VOTE] = &pll8_vote,
[PLL14] = &pll14.clkr,
[PLL14_VOTE] = &pll14_vote,
[PLL18] = &pll18.clkr,
[GSBI1_UART_SRC] = &gsbi1_uart_src.clkr,
[GSBI1_UART_CLK] = &gsbi1_uart_clk.clkr,
[GSBI2_UART_SRC] = &gsbi2_uart_src.clkr,
@ -2307,6 +2825,18 @@ static struct clk_regmap *gcc_ipq806x_clks[] = {
[USB_FS1_SYSTEM_CLK] = &usb_fs1_sys_clk.clkr,
[EBI2_CLK] = &ebi2_clk.clkr,
[EBI2_AON_CLK] = &ebi2_aon_clk.clkr,
[GMAC_CORE1_CLK_SRC] = &gmac_core1_src.clkr,
[GMAC_CORE1_CLK] = &gmac_core1_clk.clkr,
[GMAC_CORE2_CLK_SRC] = &gmac_core2_src.clkr,
[GMAC_CORE2_CLK] = &gmac_core2_clk.clkr,
[GMAC_CORE3_CLK_SRC] = &gmac_core3_src.clkr,
[GMAC_CORE3_CLK] = &gmac_core3_clk.clkr,
[GMAC_CORE4_CLK_SRC] = &gmac_core4_src.clkr,
[GMAC_CORE4_CLK] = &gmac_core4_clk.clkr,
[UBI32_CORE1_CLK_SRC] = &ubi32_core1_src_clk.clkr,
[UBI32_CORE2_CLK_SRC] = &ubi32_core2_src_clk.clkr,
[NSSTCM_CLK_SRC] = &nss_tcm_src.clkr,
[NSSTCM_CLK] = &nss_tcm_clk.clkr,
};
static const struct qcom_reset_map gcc_ipq806x_resets[] = {
@ -2425,6 +2955,48 @@ static const struct qcom_reset_map gcc_ipq806x_resets[] = {
[USB30_1_PHY_RESET] = { 0x3b58, 0 },
[NSSFB0_RESET] = { 0x3b60, 6 },
[NSSFB1_RESET] = { 0x3b60, 7 },
[UBI32_CORE1_CLKRST_CLAMP_RESET] = { 0x3d3c, 3},
[UBI32_CORE1_CLAMP_RESET] = { 0x3d3c, 2 },
[UBI32_CORE1_AHB_RESET] = { 0x3d3c, 1 },
[UBI32_CORE1_AXI_RESET] = { 0x3d3c, 0 },
[UBI32_CORE2_CLKRST_CLAMP_RESET] = { 0x3d5c, 3 },
[UBI32_CORE2_CLAMP_RESET] = { 0x3d5c, 2 },
[UBI32_CORE2_AHB_RESET] = { 0x3d5c, 1 },
[UBI32_CORE2_AXI_RESET] = { 0x3d5c, 0 },
[GMAC_CORE1_RESET] = { 0x3cbc, 0 },
[GMAC_CORE2_RESET] = { 0x3cdc, 0 },
[GMAC_CORE3_RESET] = { 0x3cfc, 0 },
[GMAC_CORE4_RESET] = { 0x3d1c, 0 },
[GMAC_AHB_RESET] = { 0x3e24, 0 },
[NSS_CH0_RST_RX_CLK_N_RESET] = { 0x3b60, 0 },
[NSS_CH0_RST_TX_CLK_N_RESET] = { 0x3b60, 1 },
[NSS_CH0_RST_RX_125M_N_RESET] = { 0x3b60, 2 },
[NSS_CH0_HW_RST_RX_125M_N_RESET] = { 0x3b60, 3 },
[NSS_CH0_RST_TX_125M_N_RESET] = { 0x3b60, 4 },
[NSS_CH1_RST_RX_CLK_N_RESET] = { 0x3b60, 5 },
[NSS_CH1_RST_TX_CLK_N_RESET] = { 0x3b60, 6 },
[NSS_CH1_RST_RX_125M_N_RESET] = { 0x3b60, 7 },
[NSS_CH1_HW_RST_RX_125M_N_RESET] = { 0x3b60, 8 },
[NSS_CH1_RST_TX_125M_N_RESET] = { 0x3b60, 9 },
[NSS_CH2_RST_RX_CLK_N_RESET] = { 0x3b60, 10 },
[NSS_CH2_RST_TX_CLK_N_RESET] = { 0x3b60, 11 },
[NSS_CH2_RST_RX_125M_N_RESET] = { 0x3b60, 12 },
[NSS_CH2_HW_RST_RX_125M_N_RESET] = { 0x3b60, 13 },
[NSS_CH2_RST_TX_125M_N_RESET] = { 0x3b60, 14 },
[NSS_CH3_RST_RX_CLK_N_RESET] = { 0x3b60, 15 },
[NSS_CH3_RST_TX_CLK_N_RESET] = { 0x3b60, 16 },
[NSS_CH3_RST_RX_125M_N_RESET] = { 0x3b60, 17 },
[NSS_CH3_HW_RST_RX_125M_N_RESET] = { 0x3b60, 18 },
[NSS_CH3_RST_TX_125M_N_RESET] = { 0x3b60, 19 },
[NSS_RST_RX_250M_125M_N_RESET] = { 0x3b60, 20 },
[NSS_RST_TX_250M_125M_N_RESET] = { 0x3b60, 21 },
[NSS_QSGMII_TXPI_RST_N_RESET] = { 0x3b60, 22 },
[NSS_QSGMII_CDR_RST_N_RESET] = { 0x3b60, 23 },
[NSS_SGMII2_CDR_RST_N_RESET] = { 0x3b60, 24 },
[NSS_SGMII3_CDR_RST_N_RESET] = { 0x3b60, 25 },
[NSS_CAL_PRBS_RST_N_RESET] = { 0x3b60, 26 },
[NSS_LCKDT_RST_N_RESET] = { 0x3b60, 27 },
[NSS_SRDS_N_RESET] = { 0x3b60, 28 },
};
static const struct regmap_config gcc_ipq806x_regmap_config = {
@ -2453,6 +3025,8 @@ static int gcc_ipq806x_probe(struct platform_device *pdev)
{
struct clk *clk;
struct device *dev = &pdev->dev;
struct regmap *regmap;
int ret;
/* Temporary until RPM clocks supported */
clk = clk_register_fixed_rate(dev, "cxo", NULL, CLK_IS_ROOT, 25000000);
@ -2463,7 +3037,25 @@ static int gcc_ipq806x_probe(struct platform_device *pdev)
if (IS_ERR(clk))
return PTR_ERR(clk);
return qcom_cc_probe(pdev, &gcc_ipq806x_desc);
ret = qcom_cc_probe(pdev, &gcc_ipq806x_desc);
if (ret)
return ret;
regmap = dev_get_regmap(dev, NULL);
if (!regmap)
return -ENODEV;
/* Setup PLL18 static bits */
regmap_update_bits(regmap, 0x31a4, 0xffffffc0, 0x40000400);
regmap_write(regmap, 0x31b0, 0x3080);
/* Set GMAC footswitch sleep/wakeup values */
regmap_write(regmap, 0x3cb8, 8);
regmap_write(regmap, 0x3cd8, 8);
regmap_write(regmap, 0x3cf8, 8);
regmap_write(regmap, 0x3d18, 8);
return 0;
}
static int gcc_ipq806x_remove(struct platform_device *pdev)

View File

@ -453,10 +453,10 @@ static int c4iw_get_mib(struct ib_device *ibdev,
cxgb4_get_tcp_stats(c4iw_dev->rdev.lldi.pdev, &v4, &v6);
memset(stats, 0, sizeof *stats);
stats->iw.tcpInSegs = v4.tcpInSegs + v6.tcpInSegs;
stats->iw.tcpOutSegs = v4.tcpOutSegs + v6.tcpOutSegs;
stats->iw.tcpRetransSegs = v4.tcpRetransSegs + v6.tcpRetransSegs;
stats->iw.tcpOutRsts = v4.tcpOutRsts + v6.tcpOutSegs;
stats->iw.tcpInSegs = v4.tcp_in_segs + v6.tcp_in_segs;
stats->iw.tcpOutSegs = v4.tcp_out_segs + v6.tcp_out_segs;
stats->iw.tcpRetransSegs = v4.tcp_retrans_segs + v6.tcp_retrans_segs;
stats->iw.tcpOutRsts = v4.tcp_out_rsts + v6.tcp_out_rsts;
return 0;
}

View File

@ -189,7 +189,7 @@ void mlx4_ib_notify_slaves_on_guid_change(struct mlx4_ib_dev *dev,
{
int i;
u64 guid_indexes;
int slave_id;
int slave_id, slave_port;
enum slave_port_state new_state;
enum slave_port_state prev_state;
__be64 tmp_cur_ag, form_cache_ag;
@ -217,6 +217,11 @@ void mlx4_ib_notify_slaves_on_guid_change(struct mlx4_ib_dev *dev,
slave_id = (block_num * NUM_ALIAS_GUID_IN_REC) + i ;
if (slave_id >= dev->dev->persist->num_vfs + 1)
return;
slave_port = mlx4_phys_to_slave_port(dev->dev, slave_id, port_num);
if (slave_port < 0) /* this port isn't available for the VF */
continue;
tmp_cur_ag = *(__be64 *)&p_data[i * GUID_REC_SIZE];
form_cache_ag = get_cached_alias_guid(dev, port_num,
(NUM_ALIAS_GUID_IN_REC * block_num) + i);

View File

@ -64,14 +64,6 @@ enum {
#define GUID_TBL_BLK_NUM_ENTRIES 8
#define GUID_TBL_BLK_SIZE (GUID_TBL_ENTRY_SIZE * GUID_TBL_BLK_NUM_ENTRIES)
/* Counters should saturate once they reach their maximum value */
#define ASSIGN_32BIT_COUNTER(counter, value) do {\
if ((value) > U32_MAX) \
counter = cpu_to_be32(U32_MAX); \
else \
counter = cpu_to_be32(value); \
} while (0)
struct mlx4_mad_rcv_buf {
struct ib_grh grh;
u8 payload[256];
@ -830,31 +822,25 @@ static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
const struct ib_mad *in_mad, struct ib_mad *out_mad)
{
struct mlx4_cmd_mailbox *mailbox;
struct mlx4_counter counter_stats;
struct mlx4_ib_dev *dev = to_mdev(ibdev);
int err;
u32 inmod = dev->counters[port_num - 1] & 0xffff;
u8 mode;
if (in_mad->mad_hdr.mgmt_class != IB_MGMT_CLASS_PERF_MGMT)
return -EINVAL;
mailbox = mlx4_alloc_cmd_mailbox(dev->dev);
if (IS_ERR(mailbox))
return IB_MAD_RESULT_FAILURE;
err = mlx4_cmd_box(dev->dev, 0, mailbox->dma, inmod, 0,
MLX4_CMD_QUERY_IF_STAT, MLX4_CMD_TIME_CLASS_C,
MLX4_CMD_WRAPPED);
memset(&counter_stats, 0, sizeof(counter_stats));
err = mlx4_get_counter_stats(dev->dev,
dev->counters[port_num - 1].index,
&counter_stats, 0);
if (err)
err = IB_MAD_RESULT_FAILURE;
else {
memset(out_mad->data, 0, sizeof out_mad->data);
mode = ((struct mlx4_counter *)mailbox->buf)->counter_mode;
switch (mode & 0xf) {
switch (counter_stats.counter_mode & 0xf) {
case 0:
edit_counter(mailbox->buf,
(void *)(out_mad->data + 40));
edit_counter(&counter_stats,
(void *)(out_mad->data + 40));
err = IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
break;
default:
@ -862,8 +848,6 @@ static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
}
}
mlx4_free_cmd_mailbox(dev->dev, mailbox);
return err;
}
@ -873,6 +857,7 @@ int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
struct ib_mad_hdr *out, size_t *out_mad_size,
u16 *out_mad_pkey_index)
{
struct mlx4_ib_dev *dev = to_mdev(ibdev);
const struct ib_mad *in_mad = (const struct ib_mad *)in;
struct ib_mad *out_mad = (struct ib_mad *)out;
@ -881,8 +866,9 @@ int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
switch (rdma_port_get_link_layer(ibdev, port_num)) {
case IB_LINK_LAYER_INFINIBAND:
return ib_process_mad(ibdev, mad_flags, port_num, in_wc,
in_grh, in_mad, out_mad);
if (!mlx4_is_slave(dev->dev))
return ib_process_mad(ibdev, mad_flags, port_num, in_wc,
in_grh, in_mad, out_mad);
case IB_LINK_LAYER_ETHERNET:
return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
in_grh, in_mad, out_mad);
@ -1375,14 +1361,17 @@ static void mlx4_ib_multiplex_mad(struct mlx4_ib_demux_pv_ctx *ctx, struct ib_wc
* standard address handle by decoding the tunnelled mlx4_ah fields */
memcpy(&ah.av, &tunnel->hdr.av, sizeof (struct mlx4_av));
ah.ibah.device = ctx->ib_dev;
port = be32_to_cpu(ah.av.ib.port_pd) >> 24;
port = mlx4_slave_convert_port(dev->dev, slave, port);
if (port < 0)
return;
ah.av.ib.port_pd = cpu_to_be32(port << 24 | (be32_to_cpu(ah.av.ib.port_pd) & 0xffffff));
mlx4_ib_query_ah(&ah.ibah, &ah_attr);
if (ah_attr.ah_flags & IB_AH_GRH)
fill_in_real_sgid_index(dev, slave, ctx->port, &ah_attr);
port = mlx4_slave_convert_port(dev->dev, slave, ah_attr.port_num);
if (port < 0)
return;
ah_attr.port_num = port;
memcpy(ah_attr.dmac, tunnel->hdr.mac, 6);
ah_attr.vlan_id = be16_to_cpu(tunnel->hdr.vlan);
/* if slave have default vlan use it */

View File

@ -1146,7 +1146,7 @@ static int __mlx4_ib_create_flow(struct ib_qp *qp, struct ib_flow_attr *flow_att
ret = mlx4_cmd_imm(mdev->dev, mailbox->dma, reg_id, size >> 2, 0,
MLX4_QP_FLOW_STEERING_ATTACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
MLX4_CMD_WRAPPED);
if (ret == -ENOMEM)
pr_err("mcg table is full. Fail to register network rule.\n");
else if (ret == -ENXIO)
@ -1163,7 +1163,7 @@ static int __mlx4_ib_destroy_flow(struct mlx4_dev *dev, u64 reg_id)
int err;
err = mlx4_cmd(dev, reg_id, 0, 0,
MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
MLX4_CMD_WRAPPED);
if (err)
pr_err("Fail to detach network rule. registration id = 0x%llx\n",
reg_id);
@ -2098,77 +2098,52 @@ static void init_pkeys(struct mlx4_ib_dev *ibdev)
static void mlx4_ib_alloc_eqs(struct mlx4_dev *dev, struct mlx4_ib_dev *ibdev)
{
char name[80];
int eq_per_port = 0;
int added_eqs = 0;
int total_eqs = 0;
int i, j, eq;
int i, j, eq = 0, total_eqs = 0;
/* Legacy mode or comp_pool is not large enough */
if (dev->caps.comp_pool == 0 ||
dev->caps.num_ports > dev->caps.comp_pool)
return;
eq_per_port = dev->caps.comp_pool / dev->caps.num_ports;
/* Init eq table */
added_eqs = 0;
mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB)
added_eqs += eq_per_port;
total_eqs = dev->caps.num_comp_vectors + added_eqs;
ibdev->eq_table = kzalloc(total_eqs * sizeof(int), GFP_KERNEL);
ibdev->eq_table = kcalloc(dev->caps.num_comp_vectors,
sizeof(ibdev->eq_table[0]), GFP_KERNEL);
if (!ibdev->eq_table)
return;
ibdev->eq_added = added_eqs;
eq = 0;
mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB) {
for (j = 0; j < eq_per_port; j++) {
snprintf(name, sizeof(name), "mlx4-ib-%d-%d@%s",
i, j, dev->persist->pdev->bus->name);
/* Set IRQ for specific name (per ring) */
if (mlx4_assign_eq(dev, name, NULL,
&ibdev->eq_table[eq])) {
/* Use legacy (same as mlx4_en driver) */
pr_warn("Can't allocate EQ %d; reverting to legacy\n", eq);
ibdev->eq_table[eq] =
(eq % dev->caps.num_comp_vectors);
}
eq++;
for (i = 1; i <= dev->caps.num_ports; i++) {
for (j = 0; j < mlx4_get_eqs_per_port(dev, i);
j++, total_eqs++) {
if (i > 1 && mlx4_is_eq_shared(dev, total_eqs))
continue;
ibdev->eq_table[eq] = total_eqs;
if (!mlx4_assign_eq(dev, i,
&ibdev->eq_table[eq]))
eq++;
else
ibdev->eq_table[eq] = -1;
}
}
/* Fill the rest of the vector with legacy EQ */
for (i = 0, eq = added_eqs; i < dev->caps.num_comp_vectors; i++)
ibdev->eq_table[eq++] = i;
for (i = eq; i < dev->caps.num_comp_vectors;
ibdev->eq_table[i++] = -1)
;
/* Advertise the new number of EQs to clients */
ibdev->ib_dev.num_comp_vectors = total_eqs;
ibdev->ib_dev.num_comp_vectors = eq;
}
static void mlx4_ib_free_eqs(struct mlx4_dev *dev, struct mlx4_ib_dev *ibdev)
{
int i;
int total_eqs = ibdev->ib_dev.num_comp_vectors;
/* no additional eqs were added */
/* no eqs were allocated */
if (!ibdev->eq_table)
return;
/* Reset the advertised EQ number */
ibdev->ib_dev.num_comp_vectors = dev->caps.num_comp_vectors;
ibdev->ib_dev.num_comp_vectors = 0;
/* Free only the added eqs */
for (i = 0; i < ibdev->eq_added; i++) {
/* Don't free legacy eqs if used */
if (ibdev->eq_table[i] <= dev->caps.num_comp_vectors)
continue;
for (i = 0; i < total_eqs; i++)
mlx4_release_eq(dev, ibdev->eq_table[i]);
}
kfree(ibdev->eq_table);
ibdev->eq_table = NULL;
}
static int mlx4_port_immutable(struct ib_device *ibdev, u8 port_num,
@ -2203,6 +2178,8 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
struct mlx4_ib_iboe *iboe;
int ib_num_ports = 0;
int num_req_counters;
int allocated;
u32 counter_index;
pr_info_once("%s", mlx4_ib_version);
@ -2373,19 +2350,31 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
num_req_counters = mlx4_is_bonded(dev) ? 1 : ibdev->num_ports;
for (i = 0; i < num_req_counters; ++i) {
mutex_init(&ibdev->qp1_proxy_lock[i]);
allocated = 0;
if (mlx4_ib_port_link_layer(&ibdev->ib_dev, i + 1) ==
IB_LINK_LAYER_ETHERNET) {
err = mlx4_counter_alloc(ibdev->dev, &ibdev->counters[i]);
err = mlx4_counter_alloc(ibdev->dev, &counter_index);
/* if failed to allocate a new counter, use default */
if (err)
ibdev->counters[i] = -1;
} else {
ibdev->counters[i] = -1;
counter_index =
mlx4_get_default_counter_index(dev,
i + 1);
else
allocated = 1;
} else { /* IB_LINK_LAYER_INFINIBAND use the default counter */
counter_index = mlx4_get_default_counter_index(dev,
i + 1);
}
ibdev->counters[i].index = counter_index;
ibdev->counters[i].allocated = allocated;
pr_info("counter index %d for port %d allocated %d\n",
counter_index, i + 1, allocated);
}
if (mlx4_is_bonded(dev))
for (i = 1; i < ibdev->num_ports ; ++i)
ibdev->counters[i] = ibdev->counters[0];
for (i = 1; i < ibdev->num_ports ; ++i) {
ibdev->counters[i].index = ibdev->counters[0].index;
ibdev->counters[i].allocated = 0;
}
mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB)
ib_num_ports++;
@ -2525,10 +2514,12 @@ err_steer_qp_release:
mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
ibdev->steer_qpn_count);
err_counter:
for (; i; --i)
if (ibdev->counters[i - 1] != -1)
mlx4_counter_free(ibdev->dev, ibdev->counters[i - 1]);
for (i = 0; i < ibdev->num_ports; ++i) {
if (ibdev->counters[i].index != -1 &&
ibdev->counters[i].allocated)
mlx4_counter_free(ibdev->dev,
ibdev->counters[i].index);
}
err_map:
iounmap(ibdev->uar_map);
@ -2645,8 +2636,9 @@ static void mlx4_ib_remove(struct mlx4_dev *dev, void *ibdev_ptr)
iounmap(ibdev->uar_map);
for (p = 0; p < ibdev->num_ports; ++p)
if (ibdev->counters[p] != -1)
mlx4_counter_free(ibdev->dev, ibdev->counters[p]);
if (ibdev->counters[p].index != -1 &&
ibdev->counters[p].allocated)
mlx4_counter_free(ibdev->dev, ibdev->counters[p].index);
mlx4_foreach_port(p, dev, MLX4_PORT_TYPE_IB)
mlx4_CLOSE_PORT(dev, p);

View File

@ -504,6 +504,11 @@ struct mlx4_ib_iov_port {
struct mlx4_ib_iov_sysfs_attr mcg_dentry;
};
struct counter_index {
u32 index;
u8 allocated;
};
struct mlx4_ib_dev {
struct ib_device ib_dev;
struct mlx4_dev *dev;
@ -522,9 +527,8 @@ struct mlx4_ib_dev {
struct mutex cap_mask_mutex;
bool ib_active;
struct mlx4_ib_iboe iboe;
int counters[MLX4_MAX_PORTS];
struct counter_index counters[MLX4_MAX_PORTS];
int *eq_table;
int eq_added;
struct kobject *iov_parent;
struct kobject *ports_parent;
struct kobject *dev_ports_parent[MLX4_MFUNC_MAX];

View File

@ -1539,12 +1539,13 @@ static int __mlx4_ib_modify_qp(struct ib_qp *ibqp,
}
if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) {
if (dev->counters[qp->port - 1] != -1) {
if (dev->counters[qp->port - 1].index != -1) {
context->pri_path.counter_index =
dev->counters[qp->port - 1];
dev->counters[qp->port - 1].index;
optpar |= MLX4_QP_OPTPAR_COUNTER_INDEX;
} else
context->pri_path.counter_index = 0xff;
context->pri_path.counter_index =
MLX4_SINK_COUNTER_INDEX(dev->dev);
if (qp->flags & MLX4_IB_QP_NETIF) {
mlx4_ib_steer_qp_reg(dev, qp, 1);

View File

@ -1,8 +1,6 @@
config MLX5_INFINIBAND
tristate "Mellanox Connect-IB HCA support"
depends on NETDEVICES && ETHERNET && PCI
select NET_VENDOR_MELLANOX
select MLX5_CORE
depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
---help---
This driver provides low-level InfiniBand support for
Mellanox Connect-IB PCI Express host channel adapters (HCAs).

View File

@ -590,8 +590,7 @@ static int alloc_cq_buf(struct mlx5_ib_dev *dev, struct mlx5_ib_cq_buf *buf,
{
int err;
err = mlx5_buf_alloc(dev->mdev, nent * cqe_size,
PAGE_SIZE * 2, &buf->buf);
err = mlx5_buf_alloc(dev->mdev, nent * cqe_size, &buf->buf);
if (err)
return err;
@ -760,7 +759,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev,
return ERR_PTR(-EINVAL);
entries = roundup_pow_of_two(entries + 1);
if (entries > dev->mdev->caps.gen.max_cqes)
if (entries > (1 << MLX5_CAP_GEN(dev->mdev, log_max_cq_sz)))
return ERR_PTR(-EINVAL);
cq = kzalloc(sizeof(*cq), GFP_KERNEL);
@ -927,7 +926,7 @@ int mlx5_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
int err;
u32 fsel;
if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_CQ_MODER))
if (!MLX5_CAP_GEN(dev->mdev, cq_moderation))
return -ENOSYS;
in = kzalloc(sizeof(*in), GFP_KERNEL);
@ -1082,7 +1081,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
int uninitialized_var(cqe_size);
unsigned long flags;
if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_RESIZE_CQ)) {
if (!MLX5_CAP_GEN(dev->mdev, cq_resize)) {
pr_info("Firmware does not support resize CQ\n");
return -ENOSYS;
}
@ -1091,7 +1090,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
return -EINVAL;
entries = roundup_pow_of_two(entries + 1);
if (entries > dev->mdev->caps.gen.max_cqes + 1)
if (entries > (1 << MLX5_CAP_GEN(dev->mdev, log_max_cq_sz)) + 1)
return -EINVAL;
if (entries == ibcq->cqe + 1)

View File

@ -136,7 +136,7 @@ int mlx5_query_ext_port_caps(struct mlx5_ib_dev *dev, u8 port)
packet_error = be16_to_cpu(out_mad->status);
dev->mdev->caps.gen.ext_port_cap[port - 1] = (!err && !packet_error) ?
dev->mdev->port_caps[port - 1].ext_port_cap = (!err && !packet_error) ?
MLX_EXT_PORT_CAP_FLAG_EXTENDED_PORT_INFO : 0;
out:
@ -144,3 +144,300 @@ out:
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_smp_attr_node_info(struct ib_device *ibdev,
struct ib_smp *out_mad)
{
struct ib_smp *in_mad = NULL;
int err = -ENOMEM;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
if (!in_mad)
return -ENOMEM;
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_NODE_INFO;
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, 1, NULL, NULL, in_mad,
out_mad);
kfree(in_mad);
return err;
}
int mlx5_query_mad_ifc_system_image_guid(struct ib_device *ibdev,
__be64 *sys_image_guid)
{
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!out_mad)
return -ENOMEM;
err = mlx5_query_mad_ifc_smp_attr_node_info(ibdev, out_mad);
if (err)
goto out;
memcpy(sys_image_guid, out_mad->data + 4, 8);
out:
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_max_pkeys(struct ib_device *ibdev,
u16 *max_pkeys)
{
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!out_mad)
return -ENOMEM;
err = mlx5_query_mad_ifc_smp_attr_node_info(ibdev, out_mad);
if (err)
goto out;
*max_pkeys = be16_to_cpup((__be16 *)(out_mad->data + 28));
out:
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_vendor_id(struct ib_device *ibdev,
u32 *vendor_id)
{
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!out_mad)
return -ENOMEM;
err = mlx5_query_mad_ifc_smp_attr_node_info(ibdev, out_mad);
if (err)
goto out;
*vendor_id = be32_to_cpup((__be32 *)(out_mad->data + 36)) & 0xffff;
out:
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_node_desc(struct mlx5_ib_dev *dev, char *node_desc)
{
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_NODE_DESC;
err = mlx5_MAD_IFC(dev, 1, 1, 1, NULL, NULL, in_mad, out_mad);
if (err)
goto out;
memcpy(node_desc, out_mad->data, 64);
out:
kfree(in_mad);
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_node_guid(struct mlx5_ib_dev *dev, __be64 *node_guid)
{
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_NODE_INFO;
err = mlx5_MAD_IFC(dev, 1, 1, 1, NULL, NULL, in_mad, out_mad);
if (err)
goto out;
memcpy(node_guid, out_mad->data + 12, 8);
out:
kfree(in_mad);
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_pkey(struct ib_device *ibdev, u8 port, u16 index,
u16 *pkey)
{
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_PKEY_TABLE;
in_mad->attr_mod = cpu_to_be32(index / 32);
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, port, NULL, NULL, in_mad,
out_mad);
if (err)
goto out;
*pkey = be16_to_cpu(((__be16 *)out_mad->data)[index % 32]);
out:
kfree(in_mad);
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_gids(struct ib_device *ibdev, u8 port, int index,
union ib_gid *gid)
{
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_PORT_INFO;
in_mad->attr_mod = cpu_to_be32(port);
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, port, NULL, NULL, in_mad,
out_mad);
if (err)
goto out;
memcpy(gid->raw, out_mad->data + 8, 8);
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_GUID_INFO;
in_mad->attr_mod = cpu_to_be32(index / 8);
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, port, NULL, NULL, in_mad,
out_mad);
if (err)
goto out;
memcpy(gid->raw + 8, out_mad->data + (index % 8) * 8, 8);
out:
kfree(in_mad);
kfree(out_mad);
return err;
}
int mlx5_query_mad_ifc_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int ext_active_speed;
int err = -ENOMEM;
if (port < 1 || port > MLX5_CAP_GEN(mdev, num_ports)) {
mlx5_ib_warn(dev, "invalid port number %d\n", port);
return -EINVAL;
}
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
memset(props, 0, sizeof(*props));
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_PORT_INFO;
in_mad->attr_mod = cpu_to_be32(port);
err = mlx5_MAD_IFC(dev, 1, 1, port, NULL, NULL, in_mad, out_mad);
if (err) {
mlx5_ib_warn(dev, "err %d\n", err);
goto out;
}
props->lid = be16_to_cpup((__be16 *)(out_mad->data + 16));
props->lmc = out_mad->data[34] & 0x7;
props->sm_lid = be16_to_cpup((__be16 *)(out_mad->data + 18));
props->sm_sl = out_mad->data[36] & 0xf;
props->state = out_mad->data[32] & 0xf;
props->phys_state = out_mad->data[33] >> 4;
props->port_cap_flags = be32_to_cpup((__be32 *)(out_mad->data + 20));
props->gid_tbl_len = out_mad->data[50];
props->max_msg_sz = 1 << MLX5_CAP_GEN(mdev, log_max_msg);
props->pkey_tbl_len = mdev->port_caps[port - 1].pkey_table_len;
props->bad_pkey_cntr = be16_to_cpup((__be16 *)(out_mad->data + 46));
props->qkey_viol_cntr = be16_to_cpup((__be16 *)(out_mad->data + 48));
props->active_width = out_mad->data[31] & 0xf;
props->active_speed = out_mad->data[35] >> 4;
props->max_mtu = out_mad->data[41] & 0xf;
props->active_mtu = out_mad->data[36] >> 4;
props->subnet_timeout = out_mad->data[51] & 0x1f;
props->max_vl_num = out_mad->data[37] >> 4;
props->init_type_reply = out_mad->data[41] >> 4;
/* Check if extended speeds (EDR/FDR/...) are supported */
if (props->port_cap_flags & IB_PORT_EXTENDED_SPEEDS_SUP) {
ext_active_speed = out_mad->data[62] >> 4;
switch (ext_active_speed) {
case 1:
props->active_speed = 16; /* FDR */
break;
case 2:
props->active_speed = 32; /* EDR */
break;
}
}
/* If reported active speed is QDR, check if is FDR-10 */
if (props->active_speed == 4) {
if (mdev->port_caps[port - 1].ext_port_cap &
MLX_EXT_PORT_CAP_FLAG_EXTENDED_PORT_INFO) {
init_query_mad(in_mad);
in_mad->attr_id = MLX5_ATTR_EXTENDED_PORT_INFO;
in_mad->attr_mod = cpu_to_be32(port);
err = mlx5_MAD_IFC(dev, 1, 1, port,
NULL, NULL, in_mad, out_mad);
if (err)
goto out;
/* Checking LinkSpeedActive for FDR-10 */
if (out_mad->data[15] & 0x1)
props->active_speed = 8;
}
}
out:
kfree(in_mad);
kfree(out_mad);
return err;
}

View File

@ -40,6 +40,7 @@
#include <linux/io-mapping.h>
#include <linux/sched.h>
#include <rdma/ib_user_verbs.h>
#include <linux/mlx5/vport.h>
#include <rdma/ib_smi.h>
#include <rdma/ib_umem.h>
#include "user.h"
@ -62,36 +63,172 @@ static char mlx5_version[] =
DRIVER_NAME ": Mellanox Connect-IB Infiniband driver v"
DRIVER_VERSION " (" DRIVER_RELDATE ")\n";
static enum rdma_link_layer
mlx5_ib_port_link_layer(struct ib_device *device)
{
struct mlx5_ib_dev *dev = to_mdev(device);
switch (MLX5_CAP_GEN(dev->mdev, port_type)) {
case MLX5_CAP_PORT_TYPE_IB:
return IB_LINK_LAYER_INFINIBAND;
case MLX5_CAP_PORT_TYPE_ETH:
return IB_LINK_LAYER_ETHERNET;
default:
return IB_LINK_LAYER_UNSPECIFIED;
}
}
static int mlx5_use_mad_ifc(struct mlx5_ib_dev *dev)
{
return !dev->mdev->issi;
}
enum {
MLX5_VPORT_ACCESS_METHOD_MAD,
MLX5_VPORT_ACCESS_METHOD_HCA,
MLX5_VPORT_ACCESS_METHOD_NIC,
};
static int mlx5_get_vport_access_method(struct ib_device *ibdev)
{
if (mlx5_use_mad_ifc(to_mdev(ibdev)))
return MLX5_VPORT_ACCESS_METHOD_MAD;
if (mlx5_ib_port_link_layer(ibdev) ==
IB_LINK_LAYER_ETHERNET)
return MLX5_VPORT_ACCESS_METHOD_NIC;
return MLX5_VPORT_ACCESS_METHOD_HCA;
}
static int mlx5_query_system_image_guid(struct ib_device *ibdev,
__be64 *sys_image_guid)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
u64 tmp;
int err;
switch (mlx5_get_vport_access_method(ibdev)) {
case MLX5_VPORT_ACCESS_METHOD_MAD:
return mlx5_query_mad_ifc_system_image_guid(ibdev,
sys_image_guid);
case MLX5_VPORT_ACCESS_METHOD_HCA:
err = mlx5_query_hca_vport_system_image_guid(mdev, &tmp);
if (!err)
*sys_image_guid = cpu_to_be64(tmp);
return err;
default:
return -EINVAL;
}
}
static int mlx5_query_max_pkeys(struct ib_device *ibdev,
u16 *max_pkeys)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
switch (mlx5_get_vport_access_method(ibdev)) {
case MLX5_VPORT_ACCESS_METHOD_MAD:
return mlx5_query_mad_ifc_max_pkeys(ibdev, max_pkeys);
case MLX5_VPORT_ACCESS_METHOD_HCA:
case MLX5_VPORT_ACCESS_METHOD_NIC:
*max_pkeys = mlx5_to_sw_pkey_sz(MLX5_CAP_GEN(mdev,
pkey_table_size));
return 0;
default:
return -EINVAL;
}
}
static int mlx5_query_vendor_id(struct ib_device *ibdev,
u32 *vendor_id)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
switch (mlx5_get_vport_access_method(ibdev)) {
case MLX5_VPORT_ACCESS_METHOD_MAD:
return mlx5_query_mad_ifc_vendor_id(ibdev, vendor_id);
case MLX5_VPORT_ACCESS_METHOD_HCA:
case MLX5_VPORT_ACCESS_METHOD_NIC:
return mlx5_core_query_vendor_id(dev->mdev, vendor_id);
default:
return -EINVAL;
}
}
static int mlx5_query_node_guid(struct mlx5_ib_dev *dev,
__be64 *node_guid)
{
u64 tmp;
int err;
switch (mlx5_get_vport_access_method(&dev->ib_dev)) {
case MLX5_VPORT_ACCESS_METHOD_MAD:
return mlx5_query_mad_ifc_node_guid(dev, node_guid);
case MLX5_VPORT_ACCESS_METHOD_HCA:
err = mlx5_query_hca_vport_node_guid(dev->mdev, &tmp);
if (!err)
*node_guid = cpu_to_be64(tmp);
return err;
default:
return -EINVAL;
}
}
struct mlx5_reg_node_desc {
u8 desc[64];
};
static int mlx5_query_node_desc(struct mlx5_ib_dev *dev, char *node_desc)
{
struct mlx5_reg_node_desc in;
if (mlx5_use_mad_ifc(dev))
return mlx5_query_mad_ifc_node_desc(dev, node_desc);
memset(&in, 0, sizeof(in));
return mlx5_core_access_reg(dev->mdev, &in, sizeof(in), node_desc,
sizeof(struct mlx5_reg_node_desc),
MLX5_REG_NODE_DESC, 0, 0);
}
static int mlx5_ib_query_device(struct ib_device *ibdev,
struct ib_device_attr *props,
struct ib_udata *uhw)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
struct mlx5_general_caps *gen;
struct mlx5_core_dev *mdev = dev->mdev;
int err = -ENOMEM;
int max_rq_sg;
int max_sq_sg;
u64 flags;
if (uhw->inlen || uhw->outlen)
return -EINVAL;
gen = &dev->mdev->caps.gen;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_NODE_INFO;
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, 1, NULL, NULL, in_mad, out_mad);
if (err)
goto out;
memset(props, 0, sizeof(*props));
err = mlx5_query_system_image_guid(ibdev,
&props->sys_image_guid);
if (err)
return err;
err = mlx5_query_max_pkeys(ibdev, &props->max_pkeys);
if (err)
return err;
err = mlx5_query_vendor_id(ibdev, &props->vendor_id);
if (err)
return err;
props->fw_ver = ((u64)fw_rev_maj(dev->mdev) << 32) |
(fw_rev_min(dev->mdev) << 16) |
@ -100,18 +237,18 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
IB_DEVICE_PORT_ACTIVE_EVENT |
IB_DEVICE_SYS_IMAGE_GUID |
IB_DEVICE_RC_RNR_NAK_GEN;
flags = gen->flags;
if (flags & MLX5_DEV_CAP_FLAG_BAD_PKEY_CNTR)
if (MLX5_CAP_GEN(mdev, pkv))
props->device_cap_flags |= IB_DEVICE_BAD_PKEY_CNTR;
if (flags & MLX5_DEV_CAP_FLAG_BAD_QKEY_CNTR)
if (MLX5_CAP_GEN(mdev, qkv))
props->device_cap_flags |= IB_DEVICE_BAD_QKEY_CNTR;
if (flags & MLX5_DEV_CAP_FLAG_APM)
if (MLX5_CAP_GEN(mdev, apm))
props->device_cap_flags |= IB_DEVICE_AUTO_PATH_MIG;
props->device_cap_flags |= IB_DEVICE_LOCAL_DMA_LKEY;
if (flags & MLX5_DEV_CAP_FLAG_XRC)
if (MLX5_CAP_GEN(mdev, xrc))
props->device_cap_flags |= IB_DEVICE_XRC;
props->device_cap_flags |= IB_DEVICE_MEM_MGT_EXTENSIONS;
if (flags & MLX5_DEV_CAP_FLAG_SIG_HAND_OVER) {
if (MLX5_CAP_GEN(mdev, sho)) {
props->device_cap_flags |= IB_DEVICE_SIGNATURE_HANDOVER;
/* At this stage no support for signature handover */
props->sig_prot_cap = IB_PROT_T10DIF_TYPE_1 |
@ -120,222 +257,271 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
props->sig_guard_cap = IB_GUARD_T10DIF_CRC |
IB_GUARD_T10DIF_CSUM;
}
if (flags & MLX5_DEV_CAP_FLAG_BLOCK_MCAST)
if (MLX5_CAP_GEN(mdev, block_lb_mc))
props->device_cap_flags |= IB_DEVICE_BLOCK_MULTICAST_LOOPBACK;
props->vendor_id = be32_to_cpup((__be32 *)(out_mad->data + 36)) &
0xffffff;
props->vendor_part_id = be16_to_cpup((__be16 *)(out_mad->data + 30));
props->hw_ver = be32_to_cpup((__be32 *)(out_mad->data + 32));
memcpy(&props->sys_image_guid, out_mad->data + 4, 8);
props->vendor_part_id = mdev->pdev->device;
props->hw_ver = mdev->pdev->revision;
props->max_mr_size = ~0ull;
props->page_size_cap = gen->min_page_sz;
props->max_qp = 1 << gen->log_max_qp;
props->max_qp_wr = gen->max_wqes;
max_rq_sg = gen->max_rq_desc_sz / sizeof(struct mlx5_wqe_data_seg);
max_sq_sg = (gen->max_sq_desc_sz - sizeof(struct mlx5_wqe_ctrl_seg)) /
sizeof(struct mlx5_wqe_data_seg);
props->page_size_cap = 1ull << MLX5_CAP_GEN(mdev, log_pg_sz);
props->max_qp = 1 << MLX5_CAP_GEN(mdev, log_max_qp);
props->max_qp_wr = 1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
max_rq_sg = MLX5_CAP_GEN(mdev, max_wqe_sz_rq) /
sizeof(struct mlx5_wqe_data_seg);
max_sq_sg = (MLX5_CAP_GEN(mdev, max_wqe_sz_sq) -
sizeof(struct mlx5_wqe_ctrl_seg)) /
sizeof(struct mlx5_wqe_data_seg);
props->max_sge = min(max_rq_sg, max_sq_sg);
props->max_cq = 1 << gen->log_max_cq;
props->max_cqe = gen->max_cqes - 1;
props->max_mr = 1 << gen->log_max_mkey;
props->max_pd = 1 << gen->log_max_pd;
props->max_qp_rd_atom = 1 << gen->log_max_ra_req_qp;
props->max_qp_init_rd_atom = 1 << gen->log_max_ra_res_qp;
props->max_srq = 1 << gen->log_max_srq;
props->max_srq_wr = gen->max_srq_wqes - 1;
props->local_ca_ack_delay = gen->local_ca_ack_delay;
props->max_cq = 1 << MLX5_CAP_GEN(mdev, log_max_cq);
props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_eq_sz)) - 1;
props->max_mr = 1 << MLX5_CAP_GEN(mdev, log_max_mkey);
props->max_pd = 1 << MLX5_CAP_GEN(mdev, log_max_pd);
props->max_qp_rd_atom = 1 << MLX5_CAP_GEN(mdev, log_max_ra_req_qp);
props->max_qp_init_rd_atom = 1 << MLX5_CAP_GEN(mdev, log_max_ra_res_qp);
props->max_srq = 1 << MLX5_CAP_GEN(mdev, log_max_srq);
props->max_srq_wr = (1 << MLX5_CAP_GEN(mdev, log_max_srq_sz)) - 1;
props->local_ca_ack_delay = MLX5_CAP_GEN(mdev, local_ca_ack_delay);
props->max_res_rd_atom = props->max_qp_rd_atom * props->max_qp;
props->max_srq_sge = max_rq_sg - 1;
props->max_fast_reg_page_list_len = (unsigned int)-1;
props->local_ca_ack_delay = gen->local_ca_ack_delay;
props->atomic_cap = IB_ATOMIC_NONE;
props->masked_atomic_cap = IB_ATOMIC_NONE;
props->max_pkeys = be16_to_cpup((__be16 *)(out_mad->data + 28));
props->max_mcast_grp = 1 << gen->log_max_mcg;
props->max_mcast_qp_attach = gen->max_qp_mcg;
props->max_mcast_grp = 1 << MLX5_CAP_GEN(mdev, log_max_mcg);
props->max_mcast_qp_attach = MLX5_CAP_GEN(mdev, max_qp_mcg);
props->max_total_mcast_qp_attach = props->max_mcast_qp_attach *
props->max_mcast_grp;
props->max_map_per_fmr = INT_MAX; /* no limit in ConnectIB */
#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
if (dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_ON_DMND_PG)
if (MLX5_CAP_GEN(mdev, pg))
props->device_cap_flags |= IB_DEVICE_ON_DEMAND_PAGING;
props->odp_caps = dev->odp_caps;
#endif
out:
kfree(in_mad);
kfree(out_mad);
return 0;
}
enum mlx5_ib_width {
MLX5_IB_WIDTH_1X = 1 << 0,
MLX5_IB_WIDTH_2X = 1 << 1,
MLX5_IB_WIDTH_4X = 1 << 2,
MLX5_IB_WIDTH_8X = 1 << 3,
MLX5_IB_WIDTH_12X = 1 << 4
};
static int translate_active_width(struct ib_device *ibdev, u8 active_width,
u8 *ib_width)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
int err = 0;
if (active_width & MLX5_IB_WIDTH_1X) {
*ib_width = IB_WIDTH_1X;
} else if (active_width & MLX5_IB_WIDTH_2X) {
mlx5_ib_dbg(dev, "active_width %d is not supported by IB spec\n",
(int)active_width);
err = -EINVAL;
} else if (active_width & MLX5_IB_WIDTH_4X) {
*ib_width = IB_WIDTH_4X;
} else if (active_width & MLX5_IB_WIDTH_8X) {
*ib_width = IB_WIDTH_8X;
} else if (active_width & MLX5_IB_WIDTH_12X) {
*ib_width = IB_WIDTH_12X;
} else {
mlx5_ib_dbg(dev, "Invalid active_width %d\n",
(int)active_width);
err = -EINVAL;
}
return err;
}
static int mlx5_mtu_to_ib_mtu(int mtu)
{
switch (mtu) {
case 256: return 1;
case 512: return 2;
case 1024: return 3;
case 2048: return 4;
case 4096: return 5;
default:
pr_warn("invalid mtu\n");
return -1;
}
}
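The helper above maps the byte MTU reported by the firmware onto the IB MTU codes 1..5 (256..4096 bytes, each code doubling the size). A minimal userspace sketch of the same table and its inverse, for illustration only; bytes_to_ib_mtu() and ib_mtu_to_bytes() are hypothetical names, not kernel APIs:
#include <stdio.h>

/* hypothetical stand-ins, mirroring mlx5_mtu_to_ib_mtu() above */
static int bytes_to_ib_mtu(int mtu)
{
	switch (mtu) {
	case 256:  return 1;
	case 512:  return 2;
	case 1024: return 3;
	case 2048: return 4;
	case 4096: return 5;
	default:   return -1;
	}
}

static int ib_mtu_to_bytes(int code)
{
	/* codes 1..5 correspond to 256 << (code - 1) bytes */
	return (code >= 1 && code <= 5) ? 256 << (code - 1) : -1;
}

int main(void)
{
	int mtu;

	for (mtu = 256; mtu <= 4096; mtu *= 2)
		printf("%4d bytes <-> code %d <-> %4d bytes\n",
		       mtu, bytes_to_ib_mtu(mtu),
		       ib_mtu_to_bytes(bytes_to_ib_mtu(mtu)));
	return 0;
}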
enum ib_max_vl_num {
__IB_MAX_VL_0 = 1,
__IB_MAX_VL_0_1 = 2,
__IB_MAX_VL_0_3 = 3,
__IB_MAX_VL_0_7 = 4,
__IB_MAX_VL_0_14 = 5,
};
enum mlx5_vl_hw_cap {
MLX5_VL_HW_0 = 1,
MLX5_VL_HW_0_1 = 2,
MLX5_VL_HW_0_2 = 3,
MLX5_VL_HW_0_3 = 4,
MLX5_VL_HW_0_4 = 5,
MLX5_VL_HW_0_5 = 6,
MLX5_VL_HW_0_6 = 7,
MLX5_VL_HW_0_7 = 8,
MLX5_VL_HW_0_14 = 15
};
static int translate_max_vl_num(struct ib_device *ibdev, u8 vl_hw_cap,
u8 *max_vl_num)
{
switch (vl_hw_cap) {
case MLX5_VL_HW_0:
*max_vl_num = __IB_MAX_VL_0;
break;
case MLX5_VL_HW_0_1:
*max_vl_num = __IB_MAX_VL_0_1;
break;
case MLX5_VL_HW_0_3:
*max_vl_num = __IB_MAX_VL_0_3;
break;
case MLX5_VL_HW_0_7:
*max_vl_num = __IB_MAX_VL_0_7;
break;
case MLX5_VL_HW_0_14:
*max_vl_num = __IB_MAX_VL_0_14;
break;
default:
return -EINVAL;
}
return 0;
}
static int mlx5_query_hca_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_hca_vport_context *rep;
int max_mtu;
int oper_mtu;
int err;
u8 ib_link_width_oper;
u8 vl_hw_cap;
rep = kzalloc(sizeof(*rep), GFP_KERNEL);
if (!rep) {
err = -ENOMEM;
goto out;
}
memset(props, 0, sizeof(*props));
err = mlx5_query_hca_vport_context(mdev, 0, port, 0, rep);
if (err)
goto out;
props->lid = rep->lid;
props->lmc = rep->lmc;
props->sm_lid = rep->sm_lid;
props->sm_sl = rep->sm_sl;
props->state = rep->vport_state;
props->phys_state = rep->port_physical_state;
props->port_cap_flags = rep->cap_mask1;
props->gid_tbl_len = mlx5_get_gid_table_len(MLX5_CAP_GEN(mdev, gid_table_size));
props->max_msg_sz = 1 << MLX5_CAP_GEN(mdev, log_max_msg);
props->pkey_tbl_len = mlx5_to_sw_pkey_sz(MLX5_CAP_GEN(mdev, pkey_table_size));
props->bad_pkey_cntr = rep->pkey_violation_counter;
props->qkey_viol_cntr = rep->qkey_violation_counter;
props->subnet_timeout = rep->subnet_timeout;
props->init_type_reply = rep->init_type_reply;
err = mlx5_query_port_link_width_oper(mdev, &ib_link_width_oper, port);
if (err)
goto out;
err = translate_active_width(ibdev, ib_link_width_oper,
&props->active_width);
if (err)
goto out;
err = mlx5_query_port_proto_oper(mdev, &props->active_speed, MLX5_PTYS_IB,
port);
if (err)
goto out;
mlx5_query_port_max_mtu(mdev, &max_mtu, port);
props->max_mtu = mlx5_mtu_to_ib_mtu(max_mtu);
mlx5_query_port_oper_mtu(mdev, &oper_mtu, port);
props->active_mtu = mlx5_mtu_to_ib_mtu(oper_mtu);
err = mlx5_query_port_vl_hw_cap(mdev, &vl_hw_cap, port);
if (err)
goto out;
err = translate_max_vl_num(ibdev, vl_hw_cap,
&props->max_vl_num);
out:
kfree(rep);
return err;
}
int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
struct mlx5_general_caps *gen;
int ext_active_speed;
int err = -ENOMEM;
switch (mlx5_get_vport_access_method(ibdev)) {
case MLX5_VPORT_ACCESS_METHOD_MAD:
return mlx5_query_mad_ifc_port(ibdev, port, props);
gen = &dev->mdev->caps.gen;
if (port < 1 || port > gen->num_ports) {
mlx5_ib_warn(dev, "invalid port number %d\n", port);
case MLX5_VPORT_ACCESS_METHOD_HCA:
return mlx5_query_hca_port(ibdev, port, props);
default:
return -EINVAL;
}
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
memset(props, 0, sizeof(*props));
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_PORT_INFO;
in_mad->attr_mod = cpu_to_be32(port);
err = mlx5_MAD_IFC(dev, 1, 1, port, NULL, NULL, in_mad, out_mad);
if (err) {
mlx5_ib_warn(dev, "err %d\n", err);
goto out;
}
props->lid = be16_to_cpup((__be16 *)(out_mad->data + 16));
props->lmc = out_mad->data[34] & 0x7;
props->sm_lid = be16_to_cpup((__be16 *)(out_mad->data + 18));
props->sm_sl = out_mad->data[36] & 0xf;
props->state = out_mad->data[32] & 0xf;
props->phys_state = out_mad->data[33] >> 4;
props->port_cap_flags = be32_to_cpup((__be32 *)(out_mad->data + 20));
props->gid_tbl_len = out_mad->data[50];
props->max_msg_sz = 1 << gen->log_max_msg;
props->pkey_tbl_len = gen->port[port - 1].pkey_table_len;
props->bad_pkey_cntr = be16_to_cpup((__be16 *)(out_mad->data + 46));
props->qkey_viol_cntr = be16_to_cpup((__be16 *)(out_mad->data + 48));
props->active_width = out_mad->data[31] & 0xf;
props->active_speed = out_mad->data[35] >> 4;
props->max_mtu = out_mad->data[41] & 0xf;
props->active_mtu = out_mad->data[36] >> 4;
props->subnet_timeout = out_mad->data[51] & 0x1f;
props->max_vl_num = out_mad->data[37] >> 4;
props->init_type_reply = out_mad->data[41] >> 4;
/* Check if extended speeds (EDR/FDR/...) are supported */
if (props->port_cap_flags & IB_PORT_EXTENDED_SPEEDS_SUP) {
ext_active_speed = out_mad->data[62] >> 4;
switch (ext_active_speed) {
case 1:
props->active_speed = 16; /* FDR */
break;
case 2:
props->active_speed = 32; /* EDR */
break;
}
}
/* If reported active speed is QDR, check if is FDR-10 */
if (props->active_speed == 4) {
if (gen->ext_port_cap[port - 1] &
MLX_EXT_PORT_CAP_FLAG_EXTENDED_PORT_INFO) {
init_query_mad(in_mad);
in_mad->attr_id = MLX5_ATTR_EXTENDED_PORT_INFO;
in_mad->attr_mod = cpu_to_be32(port);
err = mlx5_MAD_IFC(dev, 1, 1, port,
NULL, NULL, in_mad, out_mad);
if (err)
goto out;
/* Checking LinkSpeedActive for FDR-10 */
if (out_mad->data[15] & 0x1)
props->active_speed = 8;
}
}
out:
kfree(in_mad);
kfree(out_mad);
return err;
}
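When the MAD path is taken, the port attributes above are decoded straight out of the PortInfo payload at fixed byte offsets (LID at byte 16, SM LID at byte 18, LMC in the low bits of byte 34), with multi-byte fields in network byte order. A small standalone sketch of that decode against a made-up buffer; the offsets are taken from the code above, everything else is hypothetical:
#include <stdio.h>
#include <stdint.h>

static uint16_t get_be16(const uint8_t *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);	/* network byte order */
}

int main(void)
{
	uint8_t data[64] = { 0 };		/* pretend PortInfo payload */

	data[16] = 0x00; data[17] = 0x07;	/* LID 7 */
	data[18] = 0x00; data[19] = 0x01;	/* SM LID 1 */
	data[34] = 0x02;			/* LMC 2 in the low 3 bits */

	printf("lid=%u sm_lid=%u lmc=%u\n",
	       get_be16(data + 16), get_be16(data + 18), data[34] & 0x7);
	return 0;
}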
static int mlx5_ib_query_gid(struct ib_device *ibdev, u8 port, int index,
union ib_gid *gid)
{
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
switch (mlx5_get_vport_access_method(ibdev)) {
case MLX5_VPORT_ACCESS_METHOD_MAD:
return mlx5_query_mad_ifc_gids(ibdev, port, index, gid);
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_PORT_INFO;
in_mad->attr_mod = cpu_to_be32(port);
case MLX5_VPORT_ACCESS_METHOD_HCA:
return mlx5_query_hca_vport_gid(mdev, 0, port, 0, index, gid);
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, port, NULL, NULL, in_mad, out_mad);
if (err)
goto out;
default:
return -EINVAL;
}
memcpy(gid->raw, out_mad->data + 8, 8);
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_GUID_INFO;
in_mad->attr_mod = cpu_to_be32(index / 8);
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, port, NULL, NULL, in_mad, out_mad);
if (err)
goto out;
memcpy(gid->raw + 8, out_mad->data + (index % 8) * 8, 8);
out:
kfree(in_mad);
kfree(out_mad);
return err;
}
static int mlx5_ib_query_pkey(struct ib_device *ibdev, u8 port, u16 index,
u16 *pkey)
{
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_core_dev *mdev = dev->mdev;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
switch (mlx5_get_vport_access_method(ibdev)) {
case MLX5_VPORT_ACCESS_METHOD_MAD:
return mlx5_query_mad_ifc_pkey(ibdev, port, index, pkey);
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_PKEY_TABLE;
in_mad->attr_mod = cpu_to_be32(index / 32);
err = mlx5_MAD_IFC(to_mdev(ibdev), 1, 1, port, NULL, NULL, in_mad, out_mad);
if (err)
goto out;
*pkey = be16_to_cpu(((__be16 *)out_mad->data)[index % 32]);
out:
kfree(in_mad);
kfree(out_mad);
return err;
case MLX5_VPORT_ACCESS_METHOD_HCA:
case MLX5_VPORT_ACCESS_METHOD_NIC:
return mlx5_query_hca_vport_pkey(mdev, 0, port, 0, index,
pkey);
default:
return -EINVAL;
}
}
struct mlx5_reg_node_desc {
u8 desc[64];
};
static int mlx5_ib_modify_device(struct ib_device *ibdev, int mask,
struct ib_device_modify *props)
{
@ -396,7 +582,6 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
struct mlx5_ib_alloc_ucontext_req_v2 req;
struct mlx5_ib_alloc_ucontext_resp resp;
struct mlx5_ib_ucontext *context;
struct mlx5_general_caps *gen;
struct mlx5_uuar_info *uuari;
struct mlx5_uar *uars;
int gross_uuars;
@ -407,7 +592,6 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
int i;
size_t reqlen;
gen = &dev->mdev->caps.gen;
if (!dev->ib_active)
return ERR_PTR(-EAGAIN);
@ -440,14 +624,14 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
num_uars = req.total_num_uuars / MLX5_NON_FP_BF_REGS_PER_PAGE;
gross_uuars = num_uars * MLX5_BF_REGS_PER_PAGE;
resp.qp_tab_size = 1 << gen->log_max_qp;
resp.bf_reg_size = gen->bf_reg_size;
resp.cache_line_size = L1_CACHE_BYTES;
resp.max_sq_desc_sz = gen->max_sq_desc_sz;
resp.max_rq_desc_sz = gen->max_rq_desc_sz;
resp.max_send_wqebb = gen->max_wqes;
resp.max_recv_wr = gen->max_wqes;
resp.max_srq_recv_wr = gen->max_srq_wqes;
resp.qp_tab_size = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp);
resp.bf_reg_size = 1 << MLX5_CAP_GEN(dev->mdev, log_bf_reg_size);
resp.cache_line_size = L1_CACHE_BYTES;
resp.max_sq_desc_sz = MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq);
resp.max_rq_desc_sz = MLX5_CAP_GEN(dev->mdev, max_wqe_sz_rq);
resp.max_send_wqebb = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz);
resp.max_recv_wr = 1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz);
resp.max_srq_recv_wr = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
context = kzalloc(sizeof(*context), GFP_KERNEL);
if (!context)
@ -497,7 +681,7 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
mutex_init(&context->db_page_mutex);
resp.tot_uuars = req.total_num_uuars;
resp.num_ports = gen->num_ports;
resp.num_ports = MLX5_CAP_GEN(dev->mdev, num_ports);
err = ib_copy_to_udata(udata, &resp,
sizeof(resp) - sizeof(resp.reserved));
if (err)
@ -735,37 +919,15 @@ static int mlx5_ib_mcg_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
static int init_node_data(struct mlx5_ib_dev *dev)
{
struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL;
int err = -ENOMEM;
int err;
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad)
goto out;
init_query_mad(in_mad);
in_mad->attr_id = IB_SMP_ATTR_NODE_DESC;
err = mlx5_MAD_IFC(dev, 1, 1, 1, NULL, NULL, in_mad, out_mad);
err = mlx5_query_node_desc(dev, dev->ib_dev.node_desc);
if (err)
goto out;
return err;
memcpy(dev->ib_dev.node_desc, out_mad->data, 64);
dev->mdev->rev_id = dev->mdev->pdev->revision;
in_mad->attr_id = IB_SMP_ATTR_NODE_INFO;
err = mlx5_MAD_IFC(dev, 1, 1, 1, NULL, NULL, in_mad, out_mad);
if (err)
goto out;
dev->mdev->rev_id = be32_to_cpup((__be32 *)(out_mad->data + 32));
memcpy(&dev->ib_dev.node_guid, out_mad->data + 12, 8);
out:
kfree(in_mad);
kfree(out_mad);
return err;
return mlx5_query_node_guid(dev, &dev->ib_dev.node_guid);
}
static ssize_t show_fw_pages(struct device *device, struct device_attribute *attr,
@ -899,11 +1061,9 @@ static void mlx5_ib_event(struct mlx5_core_dev *dev, void *context,
static void get_ext_port_caps(struct mlx5_ib_dev *dev)
{
struct mlx5_general_caps *gen;
int port;
gen = &dev->mdev->caps.gen;
for (port = 1; port <= gen->num_ports; port++)
for (port = 1; port <= MLX5_CAP_GEN(dev->mdev, num_ports); port++)
mlx5_query_ext_port_caps(dev, port);
}
@ -911,12 +1071,10 @@ static int get_port_caps(struct mlx5_ib_dev *dev)
{
struct ib_device_attr *dprops = NULL;
struct ib_port_attr *pprops = NULL;
struct mlx5_general_caps *gen;
int err = -ENOMEM;
int port;
struct ib_udata uhw = {.inlen = 0, .outlen = 0};
gen = &dev->mdev->caps.gen;
pprops = kmalloc(sizeof(*pprops), GFP_KERNEL);
if (!pprops)
goto out;
@ -931,14 +1089,17 @@ static int get_port_caps(struct mlx5_ib_dev *dev)
goto out;
}
for (port = 1; port <= gen->num_ports; port++) {
for (port = 1; port <= MLX5_CAP_GEN(dev->mdev, num_ports); port++) {
err = mlx5_ib_query_port(&dev->ib_dev, port, pprops);
if (err) {
mlx5_ib_warn(dev, "query_port %d failed %d\n", port, err);
mlx5_ib_warn(dev, "query_port %d failed %d\n",
port, err);
break;
}
gen->port[port - 1].pkey_table_len = dprops->max_pkeys;
gen->port[port - 1].gid_table_len = pprops->gid_tbl_len;
dev->mdev->port_caps[port - 1].pkey_table_len =
dprops->max_pkeys;
dev->mdev->port_caps[port - 1].gid_table_len =
pprops->gid_tbl_len;
mlx5_ib_dbg(dev, "pkey_table_len %d, gid_table_len %d\n",
dprops->max_pkeys, pprops->gid_tbl_len);
}
@ -1167,8 +1328,29 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
atomic_inc(&devr->p0->usecnt);
atomic_set(&devr->s0->usecnt, 0);
memset(&attr, 0, sizeof(attr));
attr.attr.max_sge = 1;
attr.attr.max_wr = 1;
attr.srq_type = IB_SRQT_BASIC;
devr->s1 = mlx5_ib_create_srq(devr->p0, &attr, NULL);
if (IS_ERR(devr->s1)) {
ret = PTR_ERR(devr->s1);
goto error5;
}
devr->s1->device = &dev->ib_dev;
devr->s1->pd = devr->p0;
devr->s1->uobject = NULL;
devr->s1->event_handler = NULL;
devr->s1->srq_context = NULL;
devr->s1->srq_type = IB_SRQT_BASIC;
devr->s1->ext.xrc.cq = devr->c0;
atomic_inc(&devr->p0->usecnt);
atomic_set(&devr->s0->usecnt, 0);
return 0;
error5:
mlx5_ib_destroy_srq(devr->s0);
error4:
mlx5_ib_dealloc_xrcd(devr->x1);
error3:
@ -1183,6 +1365,7 @@ error0:
static void destroy_dev_resources(struct mlx5_ib_resources *devr)
{
mlx5_ib_destroy_srq(devr->s1);
mlx5_ib_destroy_srq(devr->s0);
mlx5_ib_dealloc_xrcd(devr->x0);
mlx5_ib_dealloc_xrcd(devr->x1);
@ -1214,6 +1397,10 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
int err;
int i;
/* don't create IB instance over Eth ports, no RoCE yet! */
if (MLX5_CAP_GEN(mdev, port_type) == MLX5_CAP_PORT_TYPE_ETH)
return NULL;
printk_once(KERN_INFO "%s", mlx5_version);
dev = (struct mlx5_ib_dev *)ib_alloc_device(sizeof(*dev));
@ -1226,15 +1413,16 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
if (err)
goto err_dealloc;
get_ext_port_caps(dev);
if (mlx5_use_mad_ifc(dev))
get_ext_port_caps(dev);
MLX5_INIT_DOORBELL_LOCK(&dev->uar_lock);
strlcpy(dev->ib_dev.name, "mlx5_%d", IB_DEVICE_NAME_MAX);
dev->ib_dev.owner = THIS_MODULE;
dev->ib_dev.node_type = RDMA_NODE_IB_CA;
dev->ib_dev.local_dma_lkey = mdev->caps.gen.reserved_lkey;
dev->num_ports = mdev->caps.gen.num_ports;
dev->ib_dev.local_dma_lkey = 0 /* not supported for now */;
dev->num_ports = MLX5_CAP_GEN(mdev, num_ports);
dev->ib_dev.phys_port_cnt = dev->num_ports;
dev->ib_dev.num_comp_vectors =
dev->mdev->priv.eq_table.num_comp_vectors;
@ -1313,9 +1501,9 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
dev->ib_dev.check_mr_status = mlx5_ib_check_mr_status;
dev->ib_dev.get_port_immutable = mlx5_port_immutable;
mlx5_ib_internal_query_odp_caps(dev);
mlx5_ib_internal_fill_odp_caps(dev);
if (mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_XRC) {
if (MLX5_CAP_GEN(mdev, xrc)) {
dev->ib_dev.alloc_xrcd = mlx5_ib_alloc_xrcd;
dev->ib_dev.dealloc_xrcd = mlx5_ib_dealloc_xrcd;
dev->ib_dev.uverbs_cmd_mask |=

View File

@ -415,6 +415,7 @@ struct mlx5_ib_resources {
struct ib_xrcd *x1;
struct ib_pd *p0;
struct ib_srq *s0;
struct ib_srq *s1;
};
struct mlx5_ib_dev {
@ -597,6 +598,22 @@ struct ib_xrcd *mlx5_ib_alloc_xrcd(struct ib_device *ibdev,
int mlx5_ib_dealloc_xrcd(struct ib_xrcd *xrcd);
int mlx5_ib_get_buf_offset(u64 addr, int page_shift, u32 *offset);
int mlx5_query_ext_port_caps(struct mlx5_ib_dev *dev, u8 port);
int mlx5_query_mad_ifc_smp_attr_node_info(struct ib_device *ibdev,
struct ib_smp *out_mad);
int mlx5_query_mad_ifc_system_image_guid(struct ib_device *ibdev,
__be64 *sys_image_guid);
int mlx5_query_mad_ifc_max_pkeys(struct ib_device *ibdev,
u16 *max_pkeys);
int mlx5_query_mad_ifc_vendor_id(struct ib_device *ibdev,
u32 *vendor_id);
int mlx5_query_mad_ifc_node_desc(struct mlx5_ib_dev *dev, char *node_desc);
int mlx5_query_mad_ifc_node_guid(struct mlx5_ib_dev *dev, __be64 *node_guid);
int mlx5_query_mad_ifc_pkey(struct ib_device *ibdev, u8 port, u16 index,
u16 *pkey);
int mlx5_query_mad_ifc_gids(struct ib_device *ibdev, u8 port, int index,
union ib_gid *gid);
int mlx5_query_mad_ifc_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props);
int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props);
int mlx5_ib_init_fmr(struct mlx5_ib_dev *dev);
@ -620,7 +637,7 @@ int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask,
#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
extern struct workqueue_struct *mlx5_ib_page_fault_wq;
int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev);
void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev);
void mlx5_ib_mr_pfault_handler(struct mlx5_ib_qp *qp,
struct mlx5_ib_pfault *pfault);
void mlx5_ib_odp_create_qp(struct mlx5_ib_qp *qp);
@ -634,9 +651,9 @@ void mlx5_ib_invalidate_range(struct ib_umem *umem, unsigned long start,
unsigned long end);
#else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
static inline int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev)
static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
{
return 0;
return;
}
static inline void mlx5_ib_odp_create_qp(struct mlx5_ib_qp *qp) {}

View File

@ -975,8 +975,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, u64 virt_addr,
struct mlx5_ib_mr *mr;
int inlen;
int err;
bool pg_cap = !!(dev->mdev->caps.gen.flags &
MLX5_DEV_CAP_FLAG_ON_DMND_PG);
bool pg_cap = !!(MLX5_CAP_GEN(dev->mdev, pg));
mr = kzalloc(sizeof(*mr), GFP_KERNEL);
if (!mr)

View File

@ -109,40 +109,33 @@ void mlx5_ib_invalidate_range(struct ib_umem *umem, unsigned long start,
ib_umem_odp_unmap_dma_pages(umem, start, end);
}
#define COPY_ODP_BIT_MLX_TO_IB(reg, ib_caps, field_name, bit_name) do { \
if (be32_to_cpu(reg.field_name) & MLX5_ODP_SUPPORT_##bit_name) \
ib_caps->field_name |= IB_ODP_SUPPORT_##bit_name; \
} while (0)
int mlx5_ib_internal_query_odp_caps(struct mlx5_ib_dev *dev)
void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
{
int err;
struct mlx5_odp_caps hw_caps;
struct ib_odp_caps *caps = &dev->odp_caps;
memset(caps, 0, sizeof(*caps));
if (!(dev->mdev->caps.gen.flags & MLX5_DEV_CAP_FLAG_ON_DMND_PG))
return 0;
err = mlx5_query_odp_caps(dev->mdev, &hw_caps);
if (err)
goto out;
if (!MLX5_CAP_GEN(dev->mdev, pg))
return;
caps->general_caps = IB_ODP_SUPPORT;
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.ud_odp_caps,
SEND);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
SEND);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
RECV);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
WRITE);
COPY_ODP_BIT_MLX_TO_IB(hw_caps, caps, per_transport_caps.rc_odp_caps,
READ);
out:
return err;
if (MLX5_CAP_ODP(dev->mdev, ud_odp_caps.send))
caps->per_transport_caps.ud_odp_caps |= IB_ODP_SUPPORT_SEND;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.send))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_SEND;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.receive))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_RECV;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.write))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_WRITE;
if (MLX5_CAP_ODP(dev->mdev, rc_odp_caps.read))
caps->per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_READ;
return;
}
static struct mlx5_ib_mr *mlx5_ib_odp_find_mr_lkey(struct mlx5_ib_dev *dev,

View File

@ -220,13 +220,11 @@ static void mlx5_ib_qp_event(struct mlx5_core_qp *qp, int type)
static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
int has_rq, struct mlx5_ib_qp *qp, struct mlx5_ib_create_qp *ucmd)
{
struct mlx5_general_caps *gen;
int wqe_size;
int wq_size;
gen = &dev->mdev->caps.gen;
/* Sanity check RQ size before proceeding */
if (cap->max_recv_wr > gen->max_wqes)
if (cap->max_recv_wr > (1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz)))
return -EINVAL;
if (!has_rq) {
@ -246,10 +244,11 @@ static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
wq_size = roundup_pow_of_two(cap->max_recv_wr) * wqe_size;
wq_size = max_t(int, wq_size, MLX5_SEND_WQE_BB);
qp->rq.wqe_cnt = wq_size / wqe_size;
if (wqe_size > gen->max_rq_desc_sz) {
if (wqe_size > MLX5_CAP_GEN(dev->mdev, max_wqe_sz_rq)) {
mlx5_ib_dbg(dev, "wqe_size %d, max %d\n",
wqe_size,
gen->max_rq_desc_sz);
MLX5_CAP_GEN(dev->mdev,
max_wqe_sz_rq));
return -EINVAL;
}
qp->rq.wqe_shift = ilog2(wqe_size);
@ -330,11 +329,9 @@ static int calc_send_wqe(struct ib_qp_init_attr *attr)
static int calc_sq_size(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
struct mlx5_ib_qp *qp)
{
struct mlx5_general_caps *gen;
int wqe_size;
int wq_size;
gen = &dev->mdev->caps.gen;
if (!attr->cap.max_send_wr)
return 0;
@ -343,9 +340,9 @@ static int calc_sq_size(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
if (wqe_size < 0)
return wqe_size;
if (wqe_size > gen->max_sq_desc_sz) {
if (wqe_size > MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq)) {
mlx5_ib_dbg(dev, "wqe_size(%d) > max_sq_desc_sz(%d)\n",
wqe_size, gen->max_sq_desc_sz);
wqe_size, MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq));
return -EINVAL;
}
@ -358,9 +355,10 @@ static int calc_sq_size(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
wq_size = roundup_pow_of_two(attr->cap.max_send_wr * wqe_size);
qp->sq.wqe_cnt = wq_size / MLX5_SEND_WQE_BB;
if (qp->sq.wqe_cnt > gen->max_wqes) {
if (qp->sq.wqe_cnt > (1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz))) {
mlx5_ib_dbg(dev, "wqe count(%d) exceeds limits(%d)\n",
qp->sq.wqe_cnt, gen->max_wqes);
qp->sq.wqe_cnt,
1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz));
return -ENOMEM;
}
qp->sq.wqe_shift = ilog2(MLX5_SEND_WQE_BB);
@ -375,13 +373,11 @@ static int set_user_buf_size(struct mlx5_ib_dev *dev,
struct mlx5_ib_qp *qp,
struct mlx5_ib_create_qp *ucmd)
{
struct mlx5_general_caps *gen;
int desc_sz = 1 << qp->sq.wqe_shift;
gen = &dev->mdev->caps.gen;
if (desc_sz > gen->max_sq_desc_sz) {
if (desc_sz > MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq)) {
mlx5_ib_warn(dev, "desc_sz %d, max_sq_desc_sz %d\n",
desc_sz, gen->max_sq_desc_sz);
desc_sz, MLX5_CAP_GEN(dev->mdev, max_wqe_sz_sq));
return -EINVAL;
}
@ -393,9 +389,10 @@ static int set_user_buf_size(struct mlx5_ib_dev *dev,
qp->sq.wqe_cnt = ucmd->sq_wqe_count;
if (qp->sq.wqe_cnt > gen->max_wqes) {
if (qp->sq.wqe_cnt > (1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz))) {
mlx5_ib_warn(dev, "wqe_cnt %d, max_wqes %d\n",
qp->sq.wqe_cnt, gen->max_wqes);
qp->sq.wqe_cnt,
1 << MLX5_CAP_GEN(dev->mdev, log_max_qp_sz));
return -EINVAL;
}
@ -768,7 +765,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
qp->sq.offset = qp->rq.wqe_cnt << qp->rq.wqe_shift;
qp->buf_size = err + (qp->rq.wqe_cnt << qp->rq.wqe_shift);
err = mlx5_buf_alloc(dev->mdev, qp->buf_size, PAGE_SIZE * 2, &qp->buf);
err = mlx5_buf_alloc(dev->mdev, qp->buf_size, &qp->buf);
if (err) {
mlx5_ib_dbg(dev, "err %d\n", err);
goto err_uuar;
@ -866,22 +863,21 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
struct ib_udata *udata, struct mlx5_ib_qp *qp)
{
struct mlx5_ib_resources *devr = &dev->devr;
struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_ib_create_qp_resp resp;
struct mlx5_create_qp_mbox_in *in;
struct mlx5_general_caps *gen;
struct mlx5_ib_create_qp ucmd;
int inlen = sizeof(*in);
int err;
mlx5_ib_odp_create_qp(qp);
gen = &dev->mdev->caps.gen;
mutex_init(&qp->mutex);
spin_lock_init(&qp->sq.lock);
spin_lock_init(&qp->rq.lock);
if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
if (!(gen->flags & MLX5_DEV_CAP_FLAG_BLOCK_MCAST)) {
if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
return -EINVAL;
} else {
@ -914,15 +910,17 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
if (pd) {
if (pd->uobject) {
__u32 max_wqes =
1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count);
if (ucmd.rq_wqe_shift != qp->rq.wqe_shift ||
ucmd.rq_wqe_count != qp->rq.wqe_cnt) {
mlx5_ib_dbg(dev, "invalid rq params\n");
return -EINVAL;
}
if (ucmd.sq_wqe_count > gen->max_wqes) {
if (ucmd.sq_wqe_count > max_wqes) {
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
ucmd.sq_wqe_count, gen->max_wqes);
ucmd.sq_wqe_count, max_wqes);
return -EINVAL;
}
err = create_user_qp(dev, pd, qp, udata, &in, &resp, &inlen);
@ -1014,7 +1012,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
in->ctx.rq_type_srqn |= cpu_to_be32(to_msrq(init_attr->srq)->msrq.srqn);
} else {
in->ctx.xrcd = cpu_to_be32(to_mxrcd(devr->x1)->xrcdn);
in->ctx.rq_type_srqn |= cpu_to_be32(to_msrq(devr->s0)->msrq.srqn);
in->ctx.rq_type_srqn |=
cpu_to_be32(to_msrq(devr->s1)->msrq.srqn);
}
}
@ -1226,7 +1225,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata)
{
struct mlx5_general_caps *gen;
struct mlx5_ib_dev *dev;
struct mlx5_ib_qp *qp;
u16 xrcdn = 0;
@ -1244,12 +1242,11 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
}
dev = to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
}
gen = &dev->mdev->caps.gen;
switch (init_attr->qp_type) {
case IB_QPT_XRC_TGT:
case IB_QPT_XRC_INI:
if (!(gen->flags & MLX5_DEV_CAP_FLAG_XRC)) {
if (!MLX5_CAP_GEN(dev->mdev, xrc)) {
mlx5_ib_dbg(dev, "XRC not supported\n");
return ERR_PTR(-ENOSYS);
}
@ -1356,9 +1353,6 @@ enum {
static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
{
struct mlx5_general_caps *gen;
gen = &dev->mdev->caps.gen;
if (rate == IB_RATE_PORT_CURRENT) {
return 0;
} else if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) {
@ -1366,7 +1360,7 @@ static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
} else {
while (rate != IB_RATE_2_5_GBPS &&
!(1 << (rate + MLX5_STAT_RATE_OFFSET) &
gen->stat_rate_support))
MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
--rate;
}
@ -1377,10 +1371,8 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, const struct ib_ah_attr *ah,
struct mlx5_qp_path *path, u8 port, int attr_mask,
u32 path_flags, const struct ib_qp_attr *attr)
{
struct mlx5_general_caps *gen;
int err;
gen = &dev->mdev->caps.gen;
path->fl = (path_flags & MLX5_PATH_FLAG_FL) ? 0x80 : 0;
path->free_ar = (path_flags & MLX5_PATH_FLAG_FREE_AR) ? 0x80 : 0;
@ -1391,9 +1383,11 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, const struct ib_ah_attr *ah,
path->rlid = cpu_to_be16(ah->dlid);
if (ah->ah_flags & IB_AH_GRH) {
if (ah->grh.sgid_index >= gen->port[port - 1].gid_table_len) {
if (ah->grh.sgid_index >=
dev->mdev->port_caps[port - 1].gid_table_len) {
pr_err("sgid_index (%u) too large. max is %d\n",
ah->grh.sgid_index, gen->port[port - 1].gid_table_len);
ah->grh.sgid_index,
dev->mdev->port_caps[port - 1].gid_table_len);
return -EINVAL;
}
path->grh_mlid |= 1 << 7;
@ -1570,7 +1564,6 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
struct mlx5_ib_qp *qp = to_mqp(ibqp);
struct mlx5_ib_cq *send_cq, *recv_cq;
struct mlx5_qp_context *context;
struct mlx5_general_caps *gen;
struct mlx5_modify_qp_mbox_in *in;
struct mlx5_ib_pd *pd;
enum mlx5_qp_state mlx5_cur, mlx5_new;
@ -1579,7 +1572,6 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
int mlx5_st;
int err;
gen = &dev->mdev->caps.gen;
in = kzalloc(sizeof(*in), GFP_KERNEL);
if (!in)
return -ENOMEM;
@ -1619,7 +1611,8 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
err = -EINVAL;
goto out;
}
context->mtu_msgmax = (attr->path_mtu << 5) | gen->log_max_msg;
context->mtu_msgmax = (attr->path_mtu << 5) |
(u8)MLX5_CAP_GEN(dev->mdev, log_max_msg);
}
if (attr_mask & IB_QP_DEST_QPN)
@ -1777,11 +1770,9 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
struct mlx5_ib_dev *dev = to_mdev(ibqp->device);
struct mlx5_ib_qp *qp = to_mqp(ibqp);
enum ib_qp_state cur_state, new_state;
struct mlx5_general_caps *gen;
int err = -EINVAL;
int port;
gen = &dev->mdev->caps.gen;
mutex_lock(&qp->mutex);
cur_state = attr_mask & IB_QP_CUR_STATE ? attr->cur_qp_state : qp->state;
@ -1793,21 +1784,25 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
goto out;
if ((attr_mask & IB_QP_PORT) &&
(attr->port_num == 0 || attr->port_num > gen->num_ports))
(attr->port_num == 0 ||
attr->port_num > MLX5_CAP_GEN(dev->mdev, num_ports)))
goto out;
if (attr_mask & IB_QP_PKEY_INDEX) {
port = attr_mask & IB_QP_PORT ? attr->port_num : qp->port;
if (attr->pkey_index >= gen->port[port - 1].pkey_table_len)
if (attr->pkey_index >=
dev->mdev->port_caps[port - 1].pkey_table_len)
goto out;
}
if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
attr->max_rd_atomic > (1 << gen->log_max_ra_res_qp))
attr->max_rd_atomic >
(1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_res_qp)))
goto out;
if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
attr->max_dest_rd_atomic > (1 << gen->log_max_ra_req_qp))
attr->max_dest_rd_atomic >
(1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_req_qp)))
goto out;
if (cur_state == new_state && cur_state == IB_QPS_RESET) {
@ -3009,7 +3004,7 @@ static void to_ib_ah_attr(struct mlx5_ib_dev *ibdev, struct ib_ah_attr *ib_ah_at
ib_ah_attr->port_num = path->port;
if (ib_ah_attr->port_num == 0 ||
ib_ah_attr->port_num > dev->caps.gen.num_ports)
ib_ah_attr->port_num > MLX5_CAP_GEN(dev, num_ports))
return;
ib_ah_attr->sl = path->sl & 0xf;
@ -3135,12 +3130,10 @@ struct ib_xrcd *mlx5_ib_alloc_xrcd(struct ib_device *ibdev,
struct ib_udata *udata)
{
struct mlx5_ib_dev *dev = to_mdev(ibdev);
struct mlx5_general_caps *gen;
struct mlx5_ib_xrcd *xrcd;
int err;
gen = &dev->mdev->caps.gen;
if (!(gen->flags & MLX5_DEV_CAP_FLAG_XRC))
if (!MLX5_CAP_GEN(dev->mdev, xrc))
return ERR_PTR(-ENOSYS);
xrcd = kmalloc(sizeof(*xrcd), GFP_KERNEL);

View File

@ -165,7 +165,7 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
return err;
}
if (mlx5_buf_alloc(dev->mdev, buf_size, PAGE_SIZE * 2, &srq->buf)) {
if (mlx5_buf_alloc(dev->mdev, buf_size, &srq->buf)) {
mlx5_ib_dbg(dev, "buf alloc failed\n");
err = -ENOMEM;
goto err_db;
@ -236,7 +236,6 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
struct ib_udata *udata)
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
struct mlx5_general_caps *gen;
struct mlx5_ib_srq *srq;
int desc_size;
int buf_size;
@ -245,13 +244,13 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
int uninitialized_var(inlen);
int is_xrc;
u32 flgs, xrcdn;
__u32 max_srq_wqes = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
gen = &dev->mdev->caps.gen;
/* Sanity check SRQ size before proceeding */
if (init_attr->attr.max_wr >= gen->max_srq_wqes) {
if (init_attr->attr.max_wr >= max_srq_wqes) {
mlx5_ib_dbg(dev, "max_wr %d, cap %d\n",
init_attr->attr.max_wr,
gen->max_srq_wqes);
max_srq_wqes);
return ERR_PTR(-EINVAL);
}
@ -303,7 +302,7 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
in->ctx.pd = cpu_to_be32(to_mpd(pd)->pdn);
in->ctx.db_record = cpu_to_be64(srq->db.dma);
err = mlx5_core_create_srq(dev->mdev, &srq->msrq, in, inlen);
err = mlx5_core_create_srq(dev->mdev, &srq->msrq, in, inlen, is_xrc);
kvfree(in);
if (err) {
mlx5_ib_dbg(dev, "create SRQ failed, err %d\n", err);

View File

@ -2264,7 +2264,7 @@ static int capidrv_addcontr(u16 contr, struct capi_profile *profp)
return -1;
}
card->owner = THIS_MODULE;
init_timer(&card->listentimer);
setup_timer(&card->listentimer, listentimerfunc, (unsigned long)card);
strcpy(card->name, id);
card->contrnr = contr;
card->nbchan = profp->nbchannel;
@ -2331,8 +2331,6 @@ static int capidrv_addcontr(u16 contr, struct capi_profile *profp)
card->cipmask = 0x1FFF03FF; /* any */
card->cipmask2 = 0;
card->listentimer.data = (unsigned long)card;
card->listentimer.function = listentimerfunc;
send_listen(card);
mod_timer(&card->listentimer, jiffies + 60 * HZ);
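The change above collapses init_timer() plus the manual .function/.data assignments (removed further down in the same hunk) into a single setup_timer() call. A minimal, self-contained module sketch of that pattern against the timer API of this kernel generation; all demo_* names are hypothetical, not part of capidrv:
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list demo_timer;

static void demo_timer_fn(unsigned long data)
{
	pr_info("demo timer fired, data=%lu\n", data);
}

static int __init demo_init(void)
{
	/* one call instead of init_timer() + .function/.data assignments */
	setup_timer(&demo_timer, demo_timer_fn, 0UL);
	mod_timer(&demo_timer, jiffies + 60 * HZ);	/* as in capidrv above */
	return 0;
}

static void __exit demo_exit(void)
{
	del_timer_sync(&demo_timer);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");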

View File

@ -237,7 +237,7 @@ config HISAX_MIC
config HISAX_NETJET
bool "NETjet card"
depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN) || MICROBLAZE))
depends on VIRT_TO_BUS
help
This enables HiSax support for the NetJet from Traverse
@ -249,7 +249,7 @@ config HISAX_NETJET
config HISAX_NETJET_U
bool "NETspider U card"
depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN) || MICROBLAZE))
depends on VIRT_TO_BUS
help
This enables HiSax support for the Netspider U interface ISDN card

View File

@ -267,8 +267,8 @@ int st5481_setup_usb(struct st5481_adapter *adapter)
}
// The descriptor is wrong for some early samples of the ST5481 chip
altsetting->endpoint[3].desc.wMaxPacketSize = __constant_cpu_to_le16(32);
altsetting->endpoint[4].desc.wMaxPacketSize = __constant_cpu_to_le16(32);
altsetting->endpoint[3].desc.wMaxPacketSize = cpu_to_le16(32);
altsetting->endpoint[4].desc.wMaxPacketSize = cpu_to_le16(32);
// Use alternative setting 3 on interface 0 to have 2B+D
if ((status = usb_set_interface(dev, 0, 3)) < 0) {

View File

@ -601,14 +601,14 @@ static const struct proto_ops data_sock_ops = {
};
static int
data_sock_create(struct net *net, struct socket *sock, int protocol)
data_sock_create(struct net *net, struct socket *sock, int protocol, int kern)
{
struct sock *sk;
if (sock->type != SOCK_DGRAM)
return -ESOCKTNOSUPPORT;
sk = sk_alloc(net, PF_ISDN, GFP_KERNEL, &mISDN_proto);
sk = sk_alloc(net, PF_ISDN, GFP_KERNEL, &mISDN_proto, kern);
if (!sk)
return -ENOMEM;
@ -756,14 +756,14 @@ static const struct proto_ops base_sock_ops = {
static int
base_sock_create(struct net *net, struct socket *sock, int protocol)
base_sock_create(struct net *net, struct socket *sock, int protocol, int kern)
{
struct sock *sk;
if (sock->type != SOCK_RAW)
return -ESOCKTNOSUPPORT;
sk = sk_alloc(net, PF_ISDN, GFP_KERNEL, &mISDN_proto);
sk = sk_alloc(net, PF_ISDN, GFP_KERNEL, &mISDN_proto, kern);
if (!sk)
return -ENOMEM;
@ -785,7 +785,7 @@ mISDN_sock_create(struct net *net, struct socket *sock, int proto, int kern)
switch (proto) {
case ISDN_P_BASE:
err = base_sock_create(net, sock, proto);
err = base_sock_create(net, sock, proto, kern);
break;
case ISDN_P_TE_S0:
case ISDN_P_NT_S0:
@ -799,7 +799,7 @@ mISDN_sock_create(struct net *net, struct socket *sock, int proto, int kern)
case ISDN_P_B_L2DTMF:
case ISDN_P_B_L2DSP:
case ISDN_P_B_L2DSPHDLC:
err = data_sock_create(net, sock, proto);
err = data_sock_create(net, sock, proto, kern);
break;
default:
return err;

View File

@ -267,6 +267,10 @@ static void cmodio_pci_remove(struct pci_dev *dev)
static const struct pci_device_id cmodio_pci_ids[] = {
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9030, PCI_VENDOR_ID_JANZ, 0x0101 },
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, PCI_VENDOR_ID_JANZ, 0x0100 },
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9030, PCI_VENDOR_ID_JANZ, 0x0201 },
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9030, PCI_VENDOR_ID_JANZ, 0x0202 },
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, PCI_VENDOR_ID_JANZ, 0x0201 },
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, PCI_VENDOR_ID_JANZ, 0x0202 },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, cmodio_pci_ids);

View File

@ -179,6 +179,20 @@ config VXLAN
To compile this driver as a module, choose M here: the module
will be called vxlan.
config GENEVE
tristate "Generic Network Virtualization Encapsulation netdev"
depends on INET && GENEVE_CORE
select NET_IP_TUNNEL
---help---
This allows one to create geneve virtual interfaces that provide
Layer 2 Networks over Layer 3 Networks. GENEVE is often used
to tunnel virtual network infrastructure in virtualized environments.
For more information see:
http://tools.ietf.org/html/draft-gross-geneve-02
To compile this driver as a module, choose M here: the module
will be called geneve.
config NETCONSOLE
tristate "Network console logging support"
---help---

View File

@ -23,6 +23,7 @@ obj-$(CONFIG_TUN) += tun.o
obj-$(CONFIG_VETH) += veth.o
obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
obj-$(CONFIG_VXLAN) += vxlan.o
obj-$(CONFIG_GENEVE) += geneve.o
obj-$(CONFIG_NLMON) += nlmon.o
#

View File

@ -15,10 +15,6 @@ menuconfig ARCNET
COM90xx type card, so say Y (or M) to "ARCnet COM90xx chipset
support" below.
You might also want to have a look at the Ethernet-HOWTO, available
from <http://www.tldp.org/docs.html#howto>(even though ARCnet
is not really Ethernet).
To compile this driver as a module, choose M here. The module will
be called arcnet.

View File

@ -75,10 +75,10 @@
/* Port Key definitions
* key is determined according to the link speed, duplex and
* user key (which is yet not supported)
* --------------------------------------------------------------
* Port key : | User key | Speed | Duplex |
* --------------------------------------------------------------
* 16 6 1 0
* --------------------------------------------------------------
* Port key | User key (10 bits) | Speed (5 bits) | Duplex|
* --------------------------------------------------------------
* |15 6|5 1|0
*/
#define AD_DUPLEX_KEY_MASKS 0x1
#define AD_SPEED_KEY_MASKS 0x3E
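With the new layout described in the comment above (duplex in bit 0, the 5-bit speed code in bits 1-5, the 10-bit user key in bits 6-15), an admin port key is packed exactly the way bond_3ad_bind_slave() does it later in this diff. A standalone sketch of the packing with made-up inputs; only the two masks shown above are real, pack_port_key() is hypothetical:
#include <stdio.h>
#include <stdint.h>

#define AD_DUPLEX_KEY_MASKS	0x1
#define AD_SPEED_KEY_MASKS	0x3E

/* user_key: 10 bits, speed: 5-bit code, duplex: 0 or 1 */
static uint16_t pack_port_key(uint16_t user_key, uint8_t speed, uint8_t duplex)
{
	uint16_t key = (uint16_t)(user_key << 6);

	key |= duplex & AD_DUPLEX_KEY_MASKS;
	key |= ((uint16_t)speed << 1) & AD_SPEED_KEY_MASKS;
	return key;
}

int main(void)
{
	/* e.g. user key 3, speed code 4, full duplex */
	printf("port key = 0x%04x\n", pack_port_key(3, 4, 1));
	return 0;
}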
@ -1908,8 +1908,14 @@ void bond_3ad_initialize(struct bonding *bond, u16 tick_resolution)
BOND_AD_INFO(bond).aggregator_identifier = 0;
BOND_AD_INFO(bond).system.sys_priority = 0xFFFF;
BOND_AD_INFO(bond).system.sys_mac_addr = *((struct mac_addr *)bond->dev->dev_addr);
BOND_AD_INFO(bond).system.sys_priority =
bond->params.ad_actor_sys_prio;
if (is_zero_ether_addr(bond->params.ad_actor_system))
BOND_AD_INFO(bond).system.sys_mac_addr =
*((struct mac_addr *)bond->dev->dev_addr);
else
BOND_AD_INFO(bond).system.sys_mac_addr =
*((struct mac_addr *)bond->params.ad_actor_system);
/* initialize how many times this module is called in one
* second (should be about every 100ms)
@ -1945,10 +1951,10 @@ void bond_3ad_bind_slave(struct slave *slave)
port->slave = slave;
port->actor_port_number = SLAVE_AD_INFO(slave)->id;
/* key is determined according to the link speed, duplex and user key(which
* is yet not supported)
/* key is determined according to the link speed, duplex and
* user key
*/
port->actor_admin_port_key = 0;
port->actor_admin_port_key = bond->params.ad_user_port_key << 6;
port->actor_admin_port_key |= __get_duplex(port);
port->actor_admin_port_key |= (__get_link_speed(port) << 1);
port->actor_oper_port_key = port->actor_admin_port_key;
@ -1959,6 +1965,8 @@ void bond_3ad_bind_slave(struct slave *slave)
port->sm_vars &= ~AD_PORT_LACP_ENABLED;
/* actor system is the bond's system */
port->actor_system = BOND_AD_INFO(bond).system.sys_mac_addr;
port->actor_system_priority =
BOND_AD_INFO(bond).system.sys_priority;
/* tx timer(to verify that no more than MAX_TX_IN_SECOND
* lacpdu's are sent in one second)
*/

View File

@ -76,7 +76,7 @@
#include <net/netns/generic.h>
#include <net/pkt_sched.h>
#include <linux/rculist.h>
#include <net/flow_keys.h>
#include <net/flow_dissector.h>
#include <net/switchdev.h>
#include <net/bonding.h>
#include <net/bond_3ad.h>
@ -1015,10 +1015,7 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
netdev_features_t mask;
struct slave *slave;
/* If any slave has the offload feature flag set,
* set the offload flag on the bond.
*/
mask = features | NETIF_F_HW_SWITCH_OFFLOAD;
mask = features;
features &= ~NETIF_F_ONE_FOR_ALL;
features |= NETIF_F_ALL_FOR_ALL;
@ -3054,16 +3051,15 @@ static bool bond_flow_dissect(struct bonding *bond, struct sk_buff *skb,
int noff, proto = -1;
if (bond->params.xmit_policy > BOND_XMIT_POLICY_LAYER23)
return skb_flow_dissect(skb, fk);
return skb_flow_dissect_flow_keys(skb, fk);
fk->ports = 0;
fk->ports.ports = 0;
noff = skb_network_offset(skb);
if (skb->protocol == htons(ETH_P_IP)) {
if (unlikely(!pskb_may_pull(skb, noff + sizeof(*iph))))
return false;
iph = ip_hdr(skb);
fk->src = iph->saddr;
fk->dst = iph->daddr;
iph_to_flow_copy_v4addrs(fk, iph);
noff += iph->ihl << 2;
if (!ip_is_fragment(iph))
proto = iph->protocol;
@ -3071,15 +3067,14 @@ static bool bond_flow_dissect(struct bonding *bond, struct sk_buff *skb,
if (unlikely(!pskb_may_pull(skb, noff + sizeof(*iph6))))
return false;
iph6 = ipv6_hdr(skb);
fk->src = (__force __be32)ipv6_addr_hash(&iph6->saddr);
fk->dst = (__force __be32)ipv6_addr_hash(&iph6->daddr);
iph_to_flow_copy_v6addrs(fk, iph6);
noff += sizeof(*iph6);
proto = iph6->nexthdr;
} else {
return false;
}
if (bond->params.xmit_policy == BOND_XMIT_POLICY_LAYER34 && proto >= 0)
fk->ports = skb_flow_get_ports(skb, noff, proto);
fk->ports.ports = skb_flow_get_ports(skb, noff, proto);
return true;
}
@ -3105,8 +3100,9 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
bond->params.xmit_policy == BOND_XMIT_POLICY_ENCAP23)
hash = bond_eth_hash(skb);
else
hash = (__force u32)flow.ports;
hash ^= (__force u32)flow.dst ^ (__force u32)flow.src;
hash = (__force u32)flow.ports.ports;
hash ^= (__force u32)flow_get_u32_dst(&flow) ^
(__force u32)flow_get_u32_src(&flow);
hash ^= (hash >> 16);
hash ^= (hash >> 8);
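The rewritten hash now folds the packed L4 ports together with the 32-bit source and destination values obtained from the flow dissector, then mixes the upper bits down. A minimal userspace sketch of that fold, with made-up values standing in for flow.ports.ports, flow_get_u32_src() and flow_get_u32_dst():
#include <stdio.h>
#include <stdint.h>

static uint32_t xmit_hash_fold(uint32_t ports, uint32_t src, uint32_t dst)
{
	uint32_t hash = ports;

	hash ^= dst ^ src;	/* same fold as bond_xmit_hash() above */
	hash ^= hash >> 16;
	hash ^= hash >> 8;
	return hash;
}

int main(void)
{
	uint32_t ports = (40000u << 16) | 443u;	/* pretend sport/dport */

	printf("hash = 0x%08x\n",
	       xmit_hash_fold(ports, 0xc0a80001u, 0x08080808u));
	return 0;
}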
@ -4039,8 +4035,12 @@ static const struct net_device_ops bond_netdev_ops = {
.ndo_add_slave = bond_enslave,
.ndo_del_slave = bond_release,
.ndo_fix_features = bond_fix_features,
.ndo_bridge_setlink = ndo_dflt_netdev_switch_port_bridge_setlink,
.ndo_bridge_dellink = ndo_dflt_netdev_switch_port_bridge_dellink,
.ndo_bridge_setlink = switchdev_port_bridge_setlink,
.ndo_bridge_getlink = switchdev_port_bridge_getlink,
.ndo_bridge_dellink = switchdev_port_bridge_dellink,
.ndo_fdb_add = switchdev_port_fdb_add,
.ndo_fdb_del = switchdev_port_fdb_del,
.ndo_fdb_dump = switchdev_port_fdb_dump,
.ndo_features_check = passthru_features_check,
};
@ -4140,6 +4140,8 @@ static int bond_check_params(struct bond_params *params)
struct bond_opt_value newval;
const struct bond_opt_value *valptr;
int arp_all_targets_value;
u16 ad_actor_sys_prio = 0;
u16 ad_user_port_key = 0;
/* Convert string parameters. */
if (mode) {
@ -4434,6 +4436,24 @@ static int bond_check_params(struct bond_params *params)
fail_over_mac_value = BOND_FOM_NONE;
}
bond_opt_initstr(&newval, "default");
valptr = bond_opt_parse(
bond_opt_get(BOND_OPT_AD_ACTOR_SYS_PRIO),
&newval);
if (!valptr) {
pr_err("Error: No ad_actor_sys_prio default value");
return -EINVAL;
}
ad_actor_sys_prio = valptr->value;
valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_USER_PORT_KEY),
&newval);
if (!valptr) {
pr_err("Error: No ad_user_port_key default value");
return -EINVAL;
}
ad_user_port_key = valptr->value;
if (lp_interval == 0) {
pr_warn("Warning: ip_interval must be between 1 and %d, so it was reset to %d\n",
INT_MAX, BOND_ALB_DEFAULT_LP_INTERVAL);
@ -4462,6 +4482,9 @@ static int bond_check_params(struct bond_params *params)
params->lp_interval = lp_interval;
params->packets_per_slave = packets_per_slave;
params->tlb_dynamic_lb = 1; /* Default value */
params->ad_actor_sys_prio = ad_actor_sys_prio;
eth_zero_addr(params->ad_actor_system);
params->ad_user_port_key = ad_user_port_key;
if (packets_per_slave > 0) {
params->reciprocal_packets_per_slave =
reciprocal_value(packets_per_slave);

View File

@ -28,6 +28,8 @@ static size_t bond_get_slave_size(const struct net_device *bond_dev,
nla_total_size(MAX_ADDR_LEN) + /* IFLA_BOND_SLAVE_PERM_HWADDR */
nla_total_size(sizeof(u16)) + /* IFLA_BOND_SLAVE_QUEUE_ID */
nla_total_size(sizeof(u16)) + /* IFLA_BOND_SLAVE_AD_AGGREGATOR_ID */
nla_total_size(sizeof(u8)) + /* IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE */
nla_total_size(sizeof(u16)) + /* IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE */
0;
}
@ -56,12 +58,23 @@ static int bond_fill_slave_info(struct sk_buff *skb,
if (BOND_MODE(slave->bond) == BOND_MODE_8023AD) {
const struct aggregator *agg;
const struct port *ad_port;
ad_port = &SLAVE_AD_INFO(slave)->port;
agg = SLAVE_AD_INFO(slave)->port.aggregator;
if (agg)
if (agg) {
if (nla_put_u16(skb, IFLA_BOND_SLAVE_AD_AGGREGATOR_ID,
agg->aggregator_identifier))
goto nla_put_failure;
if (nla_put_u8(skb,
IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE,
ad_port->actor_oper_port_state))
goto nla_put_failure;
if (nla_put_u16(skb,
IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE,
ad_port->partner_oper.port_state))
goto nla_put_failure;
}
}
return 0;
@ -94,6 +107,10 @@ static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
[IFLA_BOND_AD_LACP_RATE] = { .type = NLA_U8 },
[IFLA_BOND_AD_SELECT] = { .type = NLA_U8 },
[IFLA_BOND_AD_INFO] = { .type = NLA_NESTED },
[IFLA_BOND_AD_ACTOR_SYS_PRIO] = { .type = NLA_U16 },
[IFLA_BOND_AD_USER_PORT_KEY] = { .type = NLA_U16 },
[IFLA_BOND_AD_ACTOR_SYSTEM] = { .type = NLA_BINARY,
.len = ETH_ALEN },
};
static const struct nla_policy bond_slave_policy[IFLA_BOND_SLAVE_MAX + 1] = {
@ -379,6 +396,36 @@ static int bond_changelink(struct net_device *bond_dev,
if (err)
return err;
}
if (data[IFLA_BOND_AD_ACTOR_SYS_PRIO]) {
int actor_sys_prio =
nla_get_u16(data[IFLA_BOND_AD_ACTOR_SYS_PRIO]);
bond_opt_initval(&newval, actor_sys_prio);
err = __bond_opt_set(bond, BOND_OPT_AD_ACTOR_SYS_PRIO, &newval);
if (err)
return err;
}
if (data[IFLA_BOND_AD_USER_PORT_KEY]) {
int port_key =
nla_get_u16(data[IFLA_BOND_AD_USER_PORT_KEY]);
bond_opt_initval(&newval, port_key);
err = __bond_opt_set(bond, BOND_OPT_AD_USER_PORT_KEY, &newval);
if (err)
return err;
}
if (data[IFLA_BOND_AD_ACTOR_SYSTEM]) {
if (nla_len(data[IFLA_BOND_AD_ACTOR_SYSTEM]) != ETH_ALEN)
return -EINVAL;
bond_opt_initval(&newval,
nla_get_be64(data[IFLA_BOND_AD_ACTOR_SYSTEM]));
err = __bond_opt_set(bond, BOND_OPT_AD_ACTOR_SYSTEM, &newval);
if (err)
return err;
}
return 0;
}
@ -426,6 +473,9 @@ static size_t bond_get_size(const struct net_device *bond_dev)
nla_total_size(sizeof(u16)) + /* IFLA_BOND_AD_INFO_ACTOR_KEY */
nla_total_size(sizeof(u16)) + /* IFLA_BOND_AD_INFO_PARTNER_KEY*/
nla_total_size(ETH_ALEN) + /* IFLA_BOND_AD_INFO_PARTNER_MAC*/
nla_total_size(sizeof(u16)) + /* IFLA_BOND_AD_ACTOR_SYS_PRIO */
nla_total_size(sizeof(u16)) + /* IFLA_BOND_AD_USER_PORT_KEY */
nla_total_size(ETH_ALEN) + /* IFLA_BOND_AD_ACTOR_SYSTEM */
0;
}
@ -551,6 +601,20 @@ static int bond_fill_info(struct sk_buff *skb,
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
struct ad_info info;
if (capable(CAP_NET_ADMIN)) {
if (nla_put_u16(skb, IFLA_BOND_AD_ACTOR_SYS_PRIO,
bond->params.ad_actor_sys_prio))
goto nla_put_failure;
if (nla_put_u16(skb, IFLA_BOND_AD_USER_PORT_KEY,
bond->params.ad_user_port_key))
goto nla_put_failure;
if (nla_put(skb, IFLA_BOND_AD_ACTOR_SYSTEM,
sizeof(bond->params.ad_actor_system),
&bond->params.ad_actor_system))
goto nla_put_failure;
}
if (!bond_3ad_get_active_agg_info(bond, &info)) {
struct nlattr *nest;

View File

@ -70,6 +70,12 @@ static int bond_option_slaves_set(struct bonding *bond,
const struct bond_opt_value *newval);
static int bond_option_tlb_dynamic_lb_set(struct bonding *bond,
const struct bond_opt_value *newval);
static int bond_option_ad_actor_sys_prio_set(struct bonding *bond,
const struct bond_opt_value *newval);
static int bond_option_ad_actor_system_set(struct bonding *bond,
const struct bond_opt_value *newval);
static int bond_option_ad_user_port_key_set(struct bonding *bond,
const struct bond_opt_value *newval);
static const struct bond_opt_value bond_mode_tbl[] = {
@ -186,6 +192,18 @@ static const struct bond_opt_value bond_tlb_dynamic_lb_tbl[] = {
{ NULL, -1, 0}
};
static const struct bond_opt_value bond_ad_actor_sys_prio_tbl[] = {
{ "minval", 1, BOND_VALFLAG_MIN},
{ "maxval", 65535, BOND_VALFLAG_MAX | BOND_VALFLAG_DEFAULT},
{ NULL, -1, 0},
};
static const struct bond_opt_value bond_ad_user_port_key_tbl[] = {
{ "minval", 0, BOND_VALFLAG_MIN | BOND_VALFLAG_DEFAULT},
{ "maxval", 1023, BOND_VALFLAG_MAX},
{ NULL, -1, 0},
};
static const struct bond_option bond_opts[BOND_OPT_LAST] = {
[BOND_OPT_MODE] = {
.id = BOND_OPT_MODE,
@ -379,6 +397,29 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
.values = bond_tlb_dynamic_lb_tbl,
.flags = BOND_OPTFLAG_IFDOWN,
.set = bond_option_tlb_dynamic_lb_set,
},
[BOND_OPT_AD_ACTOR_SYS_PRIO] = {
.id = BOND_OPT_AD_ACTOR_SYS_PRIO,
.name = "ad_actor_sys_prio",
.unsuppmodes = BOND_MODE_ALL_EX(BIT(BOND_MODE_8023AD)),
.flags = BOND_OPTFLAG_IFDOWN,
.values = bond_ad_actor_sys_prio_tbl,
.set = bond_option_ad_actor_sys_prio_set,
},
[BOND_OPT_AD_ACTOR_SYSTEM] = {
.id = BOND_OPT_AD_ACTOR_SYSTEM,
.name = "ad_actor_system",
.unsuppmodes = BOND_MODE_ALL_EX(BIT(BOND_MODE_8023AD)),
.flags = BOND_OPTFLAG_RAWVAL | BOND_OPTFLAG_IFDOWN,
.set = bond_option_ad_actor_system_set,
},
[BOND_OPT_AD_USER_PORT_KEY] = {
.id = BOND_OPT_AD_USER_PORT_KEY,
.name = "ad_user_port_key",
.unsuppmodes = BOND_MODE_ALL_EX(BIT(BOND_MODE_8023AD)),
.flags = BOND_OPTFLAG_IFDOWN,
.values = bond_ad_user_port_key_tbl,
.set = bond_option_ad_user_port_key_set,
}
};
@ -1349,3 +1390,53 @@ static int bond_option_tlb_dynamic_lb_set(struct bonding *bond,
return 0;
}
static int bond_option_ad_actor_sys_prio_set(struct bonding *bond,
const struct bond_opt_value *newval)
{
netdev_info(bond->dev, "Setting ad_actor_sys_prio to %llu\n",
newval->value);
bond->params.ad_actor_sys_prio = newval->value;
return 0;
}
static int bond_option_ad_actor_system_set(struct bonding *bond,
const struct bond_opt_value *newval)
{
u8 macaddr[ETH_ALEN];
u8 *mac;
int i;
if (newval->string) {
i = sscanf(newval->string, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx",
&macaddr[0], &macaddr[1], &macaddr[2],
&macaddr[3], &macaddr[4], &macaddr[5]);
if (i != ETH_ALEN)
goto err;
mac = macaddr;
} else {
mac = (u8 *)&newval->value;
}
if (!is_valid_ether_addr(mac))
goto err;
netdev_info(bond->dev, "Setting ad_actor_system to %pM\n", mac);
ether_addr_copy(bond->params.ad_actor_system, mac);
return 0;
err:
netdev_err(bond->dev, "Invalid MAC address.\n");
return -EINVAL;
}
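The setter above accepts the actor system MAC either as a colon-separated string or as a raw option value, and rejects anything is_valid_ether_addr() would refuse (multicast or all-zero). A userspace sketch of the same parse-and-validate step; the validity test is open-coded here because is_valid_ether_addr() is kernel-only, and all names are hypothetical:
#include <stdio.h>
#include <stdint.h>

static int parse_mac(const char *s, uint8_t mac[6])
{
	return sscanf(s, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx",
		      &mac[0], &mac[1], &mac[2],
		      &mac[3], &mac[4], &mac[5]) == 6;
}

/* open-coded equivalent of is_valid_ether_addr(): not multicast, not all-zero */
static int mac_is_valid(const uint8_t mac[6])
{
	int i, nonzero = 0;

	if (mac[0] & 0x01)		/* multicast bit set */
		return 0;
	for (i = 0; i < 6; i++)
		if (mac[i])
			nonzero = 1;
	return nonzero;
}

int main(void)
{
	uint8_t mac[6];
	const char *s = "02:11:22:33:44:55";

	if (parse_mac(s, mac) && mac_is_valid(mac))
		printf("would set ad_actor_system to %s\n", s);
	return 0;
}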
static int bond_option_ad_user_port_key_set(struct bonding *bond,
const struct bond_opt_value *newval)
{
netdev_info(bond->dev, "Setting ad_user_port_key to %llu\n",
newval->value);
bond->params.ad_user_port_key = newval->value;
return 0;
}

View File

@ -135,23 +135,30 @@ static void bond_info_show_master(struct seq_file *seq)
bond->params.ad_select);
seq_printf(seq, "Aggregator selection policy (ad_select): %s\n",
optval->string);
if (capable(CAP_NET_ADMIN)) {
seq_printf(seq, "System priority: %d\n",
BOND_AD_INFO(bond).system.sys_priority);
seq_printf(seq, "System MAC address: %pM\n",
&BOND_AD_INFO(bond).system.sys_mac_addr);
if (__bond_3ad_get_active_agg_info(bond, &ad_info)) {
seq_printf(seq, "bond %s has no active aggregator\n",
bond->dev->name);
} else {
seq_printf(seq, "Active Aggregator Info:\n");
if (__bond_3ad_get_active_agg_info(bond, &ad_info)) {
seq_printf(seq,
"bond %s has no active aggregator\n",
bond->dev->name);
} else {
seq_printf(seq, "Active Aggregator Info:\n");
seq_printf(seq, "\tAggregator ID: %d\n",
ad_info.aggregator_id);
seq_printf(seq, "\tNumber of ports: %d\n",
ad_info.ports);
seq_printf(seq, "\tActor Key: %d\n",
ad_info.actor_key);
seq_printf(seq, "\tPartner Key: %d\n",
ad_info.partner_key);
seq_printf(seq, "\tPartner Mac Address: %pM\n",
ad_info.partner_system);
seq_printf(seq, "\tAggregator ID: %d\n",
ad_info.aggregator_id);
seq_printf(seq, "\tNumber of ports: %d\n",
ad_info.ports);
seq_printf(seq, "\tActor Key: %d\n",
ad_info.actor_key);
seq_printf(seq, "\tPartner Key: %d\n",
ad_info.partner_key);
seq_printf(seq, "\tPartner Mac Address: %pM\n",
ad_info.partner_system);
}
}
}
}
@ -195,29 +202,35 @@ static void bond_info_show_slave(struct seq_file *seq,
seq_printf(seq, "Partner Churned Count: %d\n",
port->churn_partner_count);
seq_puts(seq, "details actor lacp pdu:\n");
seq_printf(seq, " system priority: %d\n",
port->actor_system_priority);
seq_printf(seq, " port key: %d\n",
port->actor_oper_port_key);
seq_printf(seq, " port priority: %d\n",
port->actor_port_priority);
seq_printf(seq, " port number: %d\n",
port->actor_port_number);
seq_printf(seq, " port state: %d\n",
port->actor_oper_port_state);
if (capable(CAP_NET_ADMIN)) {
seq_puts(seq, "details actor lacp pdu:\n");
seq_printf(seq, " system priority: %d\n",
port->actor_system_priority);
seq_printf(seq, " system mac address: %pM\n",
&port->actor_system);
seq_printf(seq, " port key: %d\n",
port->actor_oper_port_key);
seq_printf(seq, " port priority: %d\n",
port->actor_port_priority);
seq_printf(seq, " port number: %d\n",
port->actor_port_number);
seq_printf(seq, " port state: %d\n",
port->actor_oper_port_state);
seq_puts(seq, "details partner lacp pdu:\n");
seq_printf(seq, " system priority: %d\n",
port->partner_oper.system_priority);
seq_printf(seq, " oper key: %d\n",
port->partner_oper.key);
seq_printf(seq, " port priority: %d\n",
port->partner_oper.port_priority);
seq_printf(seq, " port number: %d\n",
port->partner_oper.port_number);
seq_printf(seq, " port state: %d\n",
port->partner_oper.port_state);
seq_puts(seq, "details partner lacp pdu:\n");
seq_printf(seq, " system priority: %d\n",
port->partner_oper.system_priority);
seq_printf(seq, " system mac address: %pM\n",
&port->partner_oper.system);
seq_printf(seq, " oper key: %d\n",
port->partner_oper.key);
seq_printf(seq, " port priority: %d\n",
port->partner_oper.port_priority);
seq_printf(seq, " port number: %d\n",
port->partner_oper.port_number);
seq_printf(seq, " port state: %d\n",
port->partner_oper.port_state);
}
} else {
seq_puts(seq, "Aggregator ID: N/A\n");
}

View File

@ -549,7 +549,7 @@ static ssize_t bonding_show_ad_actor_key(struct device *d,
int count = 0;
struct bonding *bond = to_bond(d);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
if (BOND_MODE(bond) == BOND_MODE_8023AD && capable(CAP_NET_ADMIN)) {
struct ad_info ad_info;
count = sprintf(buf, "%d\n",
bond_3ad_get_active_agg_info(bond, &ad_info)
@ -569,7 +569,7 @@ static ssize_t bonding_show_ad_partner_key(struct device *d,
int count = 0;
struct bonding *bond = to_bond(d);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
if (BOND_MODE(bond) == BOND_MODE_8023AD && capable(CAP_NET_ADMIN)) {
struct ad_info ad_info;
count = sprintf(buf, "%d\n",
bond_3ad_get_active_agg_info(bond, &ad_info)
@ -589,7 +589,7 @@ static ssize_t bonding_show_ad_partner_mac(struct device *d,
int count = 0;
struct bonding *bond = to_bond(d);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
if (BOND_MODE(bond) == BOND_MODE_8023AD && capable(CAP_NET_ADMIN)) {
struct ad_info ad_info;
if (!bond_3ad_get_active_agg_info(bond, &ad_info))
count = sprintf(buf, "%pM\n", ad_info.partner_system);
@ -692,6 +692,49 @@ static ssize_t bonding_show_packets_per_slave(struct device *d,
static DEVICE_ATTR(packets_per_slave, S_IRUGO | S_IWUSR,
bonding_show_packets_per_slave, bonding_sysfs_store_option);
static ssize_t bonding_show_ad_actor_sys_prio(struct device *d,
struct device_attribute *attr,
char *buf)
{
struct bonding *bond = to_bond(d);
if (BOND_MODE(bond) == BOND_MODE_8023AD && capable(CAP_NET_ADMIN))
return sprintf(buf, "%hu\n", bond->params.ad_actor_sys_prio);
return 0;
}
static DEVICE_ATTR(ad_actor_sys_prio, S_IRUGO | S_IWUSR,
bonding_show_ad_actor_sys_prio, bonding_sysfs_store_option);
static ssize_t bonding_show_ad_actor_system(struct device *d,
struct device_attribute *attr,
char *buf)
{
struct bonding *bond = to_bond(d);
if (BOND_MODE(bond) == BOND_MODE_8023AD && capable(CAP_NET_ADMIN))
return sprintf(buf, "%pM\n", bond->params.ad_actor_system);
return 0;
}
static DEVICE_ATTR(ad_actor_system, S_IRUGO | S_IWUSR,
bonding_show_ad_actor_system, bonding_sysfs_store_option);
static ssize_t bonding_show_ad_user_port_key(struct device *d,
struct device_attribute *attr,
char *buf)
{
struct bonding *bond = to_bond(d);
if (BOND_MODE(bond) == BOND_MODE_8023AD && capable(CAP_NET_ADMIN))
return sprintf(buf, "%hu\n", bond->params.ad_user_port_key);
return 0;
}
static DEVICE_ATTR(ad_user_port_key, S_IRUGO | S_IWUSR,
bonding_show_ad_user_port_key, bonding_sysfs_store_option);
static struct attribute *per_bond_attrs[] = {
&dev_attr_slaves.attr,
&dev_attr_mode.attr,
@ -725,6 +768,9 @@ static struct attribute *per_bond_attrs[] = {
&dev_attr_lp_interval.attr,
&dev_attr_packets_per_slave.attr,
&dev_attr_tlb_dynamic_lb.attr,
&dev_attr_ad_actor_sys_prio.attr,
&dev_attr_ad_actor_system.attr,
&dev_attr_ad_user_port_key.attr,
NULL,
};

View File

@ -80,6 +80,36 @@ static ssize_t ad_aggregator_id_show(struct slave *slave, char *buf)
}
static SLAVE_ATTR_RO(ad_aggregator_id);
static ssize_t ad_actor_oper_port_state_show(struct slave *slave, char *buf)
{
const struct port *ad_port;
if (BOND_MODE(slave->bond) == BOND_MODE_8023AD) {
ad_port = &SLAVE_AD_INFO(slave)->port;
if (ad_port->aggregator)
return sprintf(buf, "%u\n",
ad_port->actor_oper_port_state);
}
return sprintf(buf, "N/A\n");
}
static SLAVE_ATTR_RO(ad_actor_oper_port_state);
static ssize_t ad_partner_oper_port_state_show(struct slave *slave, char *buf)
{
const struct port *ad_port;
if (BOND_MODE(slave->bond) == BOND_MODE_8023AD) {
ad_port = &SLAVE_AD_INFO(slave)->port;
if (ad_port->aggregator)
return sprintf(buf, "%u\n",
ad_port->partner_oper.port_state);
}
return sprintf(buf, "N/A\n");
}
static SLAVE_ATTR_RO(ad_partner_oper_port_state);
static const struct slave_attribute *slave_attrs[] = {
&slave_attr_state,
&slave_attr_mii_status,
@ -87,6 +117,8 @@ static const struct slave_attribute *slave_attrs[] = {
&slave_attr_perm_hwaddr,
&slave_attr_queue_id,
&slave_attr_ad_aggregator_id,
&slave_attr_ad_actor_oper_port_state,
&slave_attr_ad_partner_oper_port_state,
NULL
};
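The two new slave attributes above print the raw 802.3ad port_state byte. As a rough aid (not part of the patch), the sketch below decodes that byte using the standard IEEE 802.1AX/802.3ad state-bit layout; the bit names come from the spec, not from this driver, and the usual /sys/class/net/<slave>/bonding_slave/ location is assumed.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    static const char * const bits[8] = {
        "LACP_Activity", "LACP_Timeout", "Aggregation", "Synchronization",
        "Collecting", "Distributing", "Defaulted", "Expired",
    };
    unsigned long state;
    int i;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <port_state value>\n", argv[0]);
        return 1;
    }
    state = strtoul(argv[1], NULL, 0);
    for (i = 0; i < 8; i++)
        if (state & (1UL << i))
            printf("%s\n", bits[i]);
    return 0;
}

Fed with the value read from ad_actor_oper_port_state or ad_partner_oper_port_state, it lists which LACP state flags are currently set.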

View File

@ -440,6 +440,9 @@ unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
struct can_frame *cf = (struct can_frame *)skb->data;
u8 dlc = cf->can_dlc;
if (!(skb->tstamp.tv64))
__net_timestamp(skb);
netif_rx(priv->echo_skb[idx]);
priv->echo_skb[idx] = NULL;
@ -575,6 +578,7 @@ struct sk_buff *alloc_can_skb(struct net_device *dev, struct can_frame **cf)
if (unlikely(!skb))
return NULL;
__net_timestamp(skb);
skb->protocol = htons(ETH_P_CAN);
skb->pkt_type = PACKET_BROADCAST;
skb->ip_summed = CHECKSUM_UNNECESSARY;
@ -603,6 +607,7 @@ struct sk_buff *alloc_canfd_skb(struct net_device *dev,
if (unlikely(!skb))
return NULL;
__net_timestamp(skb);
skb->protocol = htons(ETH_P_CANFD);
skb->pkt_type = PACKET_BROADCAST;
skb->ip_summed = CHECKSUM_UNNECESSARY;
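These hunks timestamp locally generated and echoed CAN skbs at allocation time, so receivers see a meaningful timestamp instead of an empty one. A small userspace sketch (an illustration only, using the standard SO_TIMESTAMP socket option and a hypothetical vcan0 interface) shows where that timestamp becomes visible:

#include <linux/can.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_can addr = { .can_family = AF_CAN };
    struct can_frame cf;
    struct iovec iov = { .iov_base = &cf, .iov_len = sizeof(cf) };
    char cbuf[CMSG_SPACE(sizeof(struct timeval))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    struct cmsghdr *c;
    int s, on = 1;

    s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    addr.can_ifindex = if_nametoindex("vcan0");  /* hypothetical interface */
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    /* request a SCM_TIMESTAMP cmsg; it carries the skb timestamp set above */
    setsockopt(s, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

    if (recvmsg(s, &msg, 0) > 0) {
        for (c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP) {
                struct timeval tv;

                memcpy(&tv, CMSG_DATA(c), sizeof(tv));
                printf("can_id 0x%x rx at %ld.%06ld\n", cf.can_id,
                       (long)tv.tv_sec, (long)tv.tv_usec);
            }
        }
    }
    close(s);
    return 0;
}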

View File

@ -93,13 +93,13 @@
(FLEXCAN_CTRL_ERR_BUS | FLEXCAN_CTRL_ERR_STATE)
/* FLEXCAN control register 2 (CTRL2) bits */
#define FLEXCAN_CRL2_ECRWRE BIT(29)
#define FLEXCAN_CRL2_WRMFRZ BIT(28)
#define FLEXCAN_CRL2_RFFN(x) (((x) & 0x0f) << 24)
#define FLEXCAN_CRL2_TASD(x) (((x) & 0x1f) << 19)
#define FLEXCAN_CRL2_MRP BIT(18)
#define FLEXCAN_CRL2_RRS BIT(17)
#define FLEXCAN_CRL2_EACEN BIT(16)
#define FLEXCAN_CTRL2_ECRWRE BIT(29)
#define FLEXCAN_CTRL2_WRMFRZ BIT(28)
#define FLEXCAN_CTRL2_RFFN(x) (((x) & 0x0f) << 24)
#define FLEXCAN_CTRL2_TASD(x) (((x) & 0x1f) << 19)
#define FLEXCAN_CTRL2_MRP BIT(18)
#define FLEXCAN_CTRL2_RRS BIT(17)
#define FLEXCAN_CTRL2_EACEN BIT(16)
/* FLEXCAN memory error control register (MECR) bits */
#define FLEXCAN_MECR_ECRWRDIS BIT(31)
@ -158,7 +158,6 @@
FLEXCAN_IFLAG_BUF(FLEXCAN_TX_BUF_ID))
/* FLEXCAN message buffers */
#define FLEXCAN_MB_CNT_CODE(x) (((x) & 0xf) << 24)
#define FLEXCAN_MB_CODE_RX_INACTIVE (0x0 << 24)
#define FLEXCAN_MB_CODE_RX_EMPTY (0x4 << 24)
#define FLEXCAN_MB_CODE_RX_FULL (0x2 << 24)
@ -184,14 +183,14 @@
* FLEXCAN hardware feature flags
*
* Below is some version info we got:
* SOC Version IP-Version Glitch- [TR]WRN_INT Memory err
* Filter? connected? detection
* MX25 FlexCAN2 03.00.00.00 no no no
* MX28 FlexCAN2 03.00.04.00 yes yes no
* MX35 FlexCAN2 03.00.00.00 no no no
* MX53 FlexCAN2 03.00.00.00 yes no no
* MX6s FlexCAN3 10.00.12.00 yes yes no
* VF610 FlexCAN3 ? no yes yes
* SOC Version IP-Version Glitch- [TR]WRN_INT Memory err RTR re-
* Filter? connected? detection ception in MB
* MX25 FlexCAN2 03.00.00.00 no no no no
* MX28 FlexCAN2 03.00.04.00 yes yes no no
* MX35 FlexCAN2 03.00.00.00 no no no no
* MX53 FlexCAN2 03.00.00.00 yes no no no
* MX6s FlexCAN3 10.00.12.00 yes yes no yes
* VF610 FlexCAN3 ? no yes yes yes?
*
* Some SOCs do not have the RX_WARN & TX_WARN interrupt line connected.
*/
@ -221,7 +220,7 @@ struct flexcan_regs {
u32 imask1; /* 0x28 */
u32 iflag2; /* 0x2c */
u32 iflag1; /* 0x30 */
u32 crl2; /* 0x34 */
u32 ctrl2; /* 0x34 */
u32 esr2; /* 0x38 */
u32 imeur; /* 0x3c */
u32 lrfr; /* 0x40 */
@ -230,6 +229,16 @@ struct flexcan_regs {
u32 rxfir; /* 0x4c */
u32 _reserved3[12]; /* 0x50 */
struct flexcan_mb cantxfg[64]; /* 0x80 */
/* FIFO-mode:
* MB
* 0x080...0x08f 0 RX message buffer
0x090...0x0df 1-5 reserved
* 0x0e0...0x0ff 6-7 8 entry ID table
* (mx25, mx28, mx35, mx53)
* 0x0e0...0x2df 6-7..37 8..128 entry ID table
* size conf'ed via ctrl2::RFFN
* (mx6, vf610)
*/
u32 _reserved4[408];
u32 mecr; /* 0xae0 */
u32 erriar; /* 0xae4 */
@ -468,7 +477,7 @@ static int flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct flexcan_regs __iomem *regs = priv->base;
struct can_frame *cf = (struct can_frame *)skb->data;
u32 can_id;
u32 ctrl = FLEXCAN_MB_CNT_CODE(0xc) | (cf->can_dlc << 16);
u32 ctrl = FLEXCAN_MB_CODE_TX_DATA | (cf->can_dlc << 16);
if (can_dropped_invalid_skb(dev, skb))
return NETDEV_TX_OK;
@ -815,7 +824,7 @@ static int flexcan_chip_start(struct net_device *dev)
{
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->base;
u32 reg_mcr, reg_ctrl, reg_crl2, reg_mecr;
u32 reg_mcr, reg_ctrl, reg_ctrl2, reg_mecr;
int err, i;
/* enable module */
@ -918,9 +927,9 @@ static int flexcan_chip_start(struct net_device *dev)
* and Correction of Memory Errors" to write to
* MECR register
*/
reg_crl2 = flexcan_read(&regs->crl2);
reg_crl2 |= FLEXCAN_CRL2_ECRWRE;
flexcan_write(reg_crl2, &regs->crl2);
reg_ctrl2 = flexcan_read(&regs->ctrl2);
reg_ctrl2 |= FLEXCAN_CTRL2_ECRWRE;
flexcan_write(reg_ctrl2, &regs->ctrl2);
reg_mecr = flexcan_read(&regs->mecr);
reg_mecr &= ~FLEXCAN_MECR_ECRWRDIS;
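Related to the FIFO-mode comment earlier in this file's diff: the ID-table size is selected by the RFFN field in CTRL2 (the register this series renames from "crl2"). As a back-of-the-envelope sketch, assuming RFFN selects 8 * (RFFN + 1) filter IDs (a reading of the "8..128 entry ID table" note and FlexCAN documentation, not something stated verbatim in this diff), the layout works out as follows:

#include <stdio.h>

int main(void)
{
    unsigned int rffn;

    for (rffn = 0; rffn <= 0xf; rffn++) {
        /* assumption: RFFN selects 8 * (RFFN + 1) RX FIFO filter IDs;
         * at 4 bytes per ID and 16 bytes per message buffer, the ID
         * table spans 2 * (RFFN + 1) buffers starting at MB 6
         */
        unsigned int ids = 8 * (rffn + 1);
        unsigned int last_mb = 6 + 2 * (rffn + 1) - 1;

        printf("RFFN=%2u -> %3u filter IDs, ID table in MB 6..%u\n",
               rffn, ids, last_mb);
    }
    return 0;
}

For RFFN = 0 this reproduces the 8-entry table in MB 6-7 listed for mx25/mx28/mx35/mx53, and for RFFN = 0xf the 128-entry table ending at MB 37 listed for mx6/vf610.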

View File

@ -40,6 +40,7 @@
#define MSYNC_PEER 0x00 /* ICAN only */
#define MSYNC_LOCL 0x01 /* host only */
#define TARGET_RUNNING 0x02
#define FIRMWARE_STAMP 0x60 /* big endian firmware stamp */
#define MSYNC_RB0 0x01
#define MSYNC_RB1 0x02
@ -83,6 +84,7 @@
#define MSG_COFFREQ 0x42
#define MSG_CONREQ 0x43
#define MSG_CCONFREQ 0x47
#define MSG_LMTS 0xb4
/*
* Janz ICAN3 CAN Inquiry Message Types
@ -165,6 +167,12 @@
/* SJA1000 Clock Input */
#define ICAN3_CAN_CLOCK 8000000
/* Janz ICAN3 firmware types */
enum ican3_fwtype {
ICAN3_FWTYPE_ICANOS,
ICAN3_FWTYPE_CAL_CANOPEN,
};
/* Driver Name */
#define DRV_NAME "janz-ican3"
@ -215,6 +223,10 @@ struct ican3_dev {
struct completion buserror_comp;
struct can_berr_counter bec;
/* firmware type */
enum ican3_fwtype fwtype;
char fwinfo[32];
/* old and new style host interface */
unsigned int iftype;
@ -750,13 +762,61 @@ static int ican3_set_id_filter(struct ican3_dev *mod, bool accept)
*/
static int ican3_set_bus_state(struct ican3_dev *mod, bool on)
{
struct can_bittiming *bt = &mod->can.bittiming;
struct ican3_msg msg;
u8 btr0, btr1;
int res;
memset(&msg, 0, sizeof(msg));
msg.spec = on ? MSG_CONREQ : MSG_COFFREQ;
msg.len = cpu_to_le16(0);
/* This algorithm was stolen from drivers/net/can/sja1000/sja1000.c
*
* The bittiming register command for the ICAN3 just sets the bit timing
* registers on the SJA1000 chip directly
*/
btr0 = ((bt->brp - 1) & 0x3f) | (((bt->sjw - 1) & 0x3) << 6);
btr1 = ((bt->prop_seg + bt->phase_seg1 - 1) & 0xf) |
(((bt->phase_seg2 - 1) & 0x7) << 4);
if (mod->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
btr1 |= 0x80;
return ican3_send_msg(mod, &msg);
if (mod->fwtype == ICAN3_FWTYPE_ICANOS) {
if (on) {
/* set bittiming */
memset(&msg, 0, sizeof(msg));
msg.spec = MSG_CBTRREQ;
msg.len = cpu_to_le16(4);
msg.data[0] = 0x00;
msg.data[1] = 0x00;
msg.data[2] = btr0;
msg.data[3] = btr1;
res = ican3_send_msg(mod, &msg);
if (res)
return res;
}
/* can-on/off request */
memset(&msg, 0, sizeof(msg));
msg.spec = on ? MSG_CONREQ : MSG_COFFREQ;
msg.len = cpu_to_le16(0);
return ican3_send_msg(mod, &msg);
} else if (mod->fwtype == ICAN3_FWTYPE_CAL_CANOPEN) {
memset(&msg, 0, sizeof(msg));
msg.spec = MSG_LMTS;
if (on) {
msg.len = cpu_to_le16(4);
msg.data[0] = 0;
msg.data[1] = 0;
msg.data[2] = btr0;
msg.data[3] = btr1;
} else {
msg.len = cpu_to_le16(2);
msg.data[0] = 1;
msg.data[1] = 0;
}
return ican3_send_msg(mod, &msg);
}
return -ENOTSUPP;
}
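For reference, the BTR0/BTR1 packing that ican3_set_bus_state() now performs inline can be exercised in isolation. The sketch below mirrors the same bit layout; the timing numbers in main() are example values for 500 kbit/s from the 8 MHz ICAN3_CAN_CLOCK, not values taken from the driver.

#include <stdint.h>
#include <stdio.h>

struct bittiming {
    unsigned int brp, sjw, prop_seg, phase_seg1, phase_seg2;
    int triple_sampling;
};

static void pack_btr(const struct bittiming *bt, uint8_t *btr0, uint8_t *btr1)
{
    /* same packing as the SJA1000-style code above */
    *btr0 = ((bt->brp - 1) & 0x3f) | (((bt->sjw - 1) & 0x3) << 6);
    *btr1 = ((bt->prop_seg + bt->phase_seg1 - 1) & 0xf) |
            (((bt->phase_seg2 - 1) & 0x7) << 4);
    if (bt->triple_sampling)
        *btr1 |= 0x80;
}

int main(void)
{
    /* example: 500 kbit/s from an 8 MHz CAN clock (brp=1, 16 tq per bit) */
    struct bittiming bt = { .brp = 1, .sjw = 1, .prop_seg = 6,
                            .phase_seg1 = 7, .phase_seg2 = 2 };
    uint8_t btr0, btr1;

    pack_btr(&bt, &btr0, &btr1);
    printf("BTR0=0x%02x BTR1=0x%02x\n", btr0, btr1);
    return 0;
}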
static int ican3_set_termination(struct ican3_dev *mod, bool on)
@ -1402,7 +1462,7 @@ static int ican3_reset_module(struct ican3_dev *mod)
return 0;
msleep(10);
} while (time_before(jiffies, start + HZ / 4));
} while (time_before(jiffies, start + HZ / 2));
netdev_err(mod->ndev, "failed to reset CAN module\n");
return -ETIMEDOUT;
@ -1427,6 +1487,17 @@ static int ican3_startup_module(struct ican3_dev *mod)
return ret;
}
/* detect firmware */
memcpy_fromio(mod->fwinfo, mod->dpm + FIRMWARE_STAMP, sizeof(mod->fwinfo) - 1);
if (strncmp(mod->fwinfo, "JANZ-ICAN3", 10)) {
netdev_err(mod->ndev, "ICAN3 not detected (found %s)\n", mod->fwinfo);
return -ENODEV;
}
if (strstr(mod->fwinfo, "CAL/CANopen"))
mod->fwtype = ICAN3_FWTYPE_CAL_CANOPEN;
else
mod->fwtype = ICAN3_FWTYPE_ICANOS;
/* re-enable interrupts so we can send messages */
iowrite8(1 << mod->num, &mod->ctrl->int_enable);
@ -1615,36 +1686,6 @@ static const struct can_bittiming_const ican3_bittiming_const = {
.brp_inc = 1,
};
/*
* This routine was stolen from drivers/net/can/sja1000/sja1000.c
*
* The bittiming register command for the ICAN3 just sets the bit timing
* registers on the SJA1000 chip directly
*/
static int ican3_set_bittiming(struct net_device *ndev)
{
struct ican3_dev *mod = netdev_priv(ndev);
struct can_bittiming *bt = &mod->can.bittiming;
struct ican3_msg msg;
u8 btr0, btr1;
btr0 = ((bt->brp - 1) & 0x3f) | (((bt->sjw - 1) & 0x3) << 6);
btr1 = ((bt->prop_seg + bt->phase_seg1 - 1) & 0xf) |
(((bt->phase_seg2 - 1) & 0x7) << 4);
if (mod->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
btr1 |= 0x80;
memset(&msg, 0, sizeof(msg));
msg.spec = MSG_CBTRREQ;
msg.len = cpu_to_le16(4);
msg.data[0] = 0x00;
msg.data[1] = 0x00;
msg.data[2] = btr0;
msg.data[3] = btr1;
return ican3_send_msg(mod, &msg);
}
static int ican3_set_mode(struct net_device *ndev, enum can_mode mode)
{
struct ican3_dev *mod = netdev_priv(ndev);
@ -1730,11 +1771,22 @@ static ssize_t ican3_sysfs_set_term(struct device *dev,
return count;
}
static ssize_t ican3_sysfs_show_fwinfo(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct ican3_dev *mod = netdev_priv(to_net_dev(dev));
return scnprintf(buf, PAGE_SIZE, "%s\n", mod->fwinfo);
}
static DEVICE_ATTR(termination, S_IWUSR | S_IRUGO, ican3_sysfs_show_term,
ican3_sysfs_set_term);
static DEVICE_ATTR(fwinfo, S_IRUSR | S_IRUGO, ican3_sysfs_show_fwinfo, NULL);
static struct attribute *ican3_sysfs_attrs[] = {
&dev_attr_termination.attr,
&dev_attr_fwinfo.attr,
NULL,
};
@ -1794,7 +1846,6 @@ static int ican3_probe(struct platform_device *pdev)
mod->can.clock.freq = ICAN3_CAN_CLOCK;
mod->can.bittiming_const = &ican3_bittiming_const;
mod->can.do_set_bittiming = ican3_set_bittiming;
mod->can.do_set_mode = ican3_set_mode;
mod->can.do_get_berr_counter = ican3_get_berr_counter;
mod->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES
@ -1866,7 +1917,7 @@ static int ican3_probe(struct platform_device *pdev)
goto out_free_irq;
}
dev_info(dev, "module %d: registered CAN device\n", pdata->modno);
netdev_info(mod->ndev, "module %d: registered CAN device\n", pdata->modno);
return 0;
out_free_irq:

View File

@ -207,6 +207,7 @@ static void slc_bump(struct slcan *sl)
if (!skb)
return;
__net_timestamp(skb);
skb->dev = sl->dev;
skb->protocol = htons(ETH_P_CAN);
skb->pkt_type = PACKET_BROADCAST;

View File

@ -190,10 +190,11 @@
#define RXBEID0_OFF 4
#define RXBDLC_OFF 5
#define RXBDAT_OFF 6
#define RXFSIDH(n) ((n) * 4)
#define RXFSIDL(n) ((n) * 4 + 1)
#define RXFEID8(n) ((n) * 4 + 2)
#define RXFEID0(n) ((n) * 4 + 3)
#define RXFSID(n) ((n < 3) ? 0 : 4)
#define RXFSIDH(n) ((n) * 4 + RXFSID(n))
#define RXFSIDL(n) ((n) * 4 + 1 + RXFSID(n))
#define RXFEID8(n) ((n) * 4 + 2 + RXFSID(n))
#define RXFEID0(n) ((n) * 4 + 3 + RXFSID(n))
#define RXMSIDH(n) ((n) * 4 + 0x20)
#define RXMSIDL(n) ((n) * 4 + 0x21)
#define RXMEID8(n) ((n) * 4 + 0x22)
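The RXFSID(n) helper encodes a gap in the MCP2515 register map: per the datasheet, acceptance filters 0-2 sit at 0x00/0x04/0x08 while filters 3-5 start at 0x10. A quick standalone check, mirroring the corrected macros above:

#include <stdio.h>

#define RXFSID(n)   (((n) < 3) ? 0 : 4)
#define RXFSIDH(n)  ((n) * 4 + RXFSID(n))

int main(void)
{
    int n;

    for (n = 0; n < 6; n++)
        printf("RXF%dSIDH at 0x%02x\n", n, RXFSIDH(n));
    return 0;
}

This prints 0x00, 0x04, 0x08, 0x10, 0x14, 0x18, matching the register addresses the old contiguous macros got wrong for filters 3-5.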

View File

@ -78,6 +78,9 @@ static void vcan_rx(struct sk_buff *skb, struct net_device *dev)
skb->dev = dev;
skb->ip_summed = CHECKSUM_UNNECESSARY;
if (!(skb->tstamp.tv64))
__net_timestamp(skb);
netif_rx_ni(skb);
}

View File

@ -37,22 +37,22 @@ config NET_DSA_MV88E6123_61_65
ethernet switch chips.
config NET_DSA_MV88E6171
tristate "Marvell 88E6171/6172 ethernet switch chip support"
tristate "Marvell 88E6171/6175/6350/6351 ethernet switch chip support"
depends on NET_DSA
select NET_DSA_MV88E6XXX
select NET_DSA_TAG_EDSA
---help---
This enables support for the Marvell 88E6171/6172 ethernet switch
chips.
This enables support for the Marvell 88E6171/6175/6350/6351
ethernet switch chips.
config NET_DSA_MV88E6352
tristate "Marvell 88E6176/88E6352 ethernet switch chip support"
tristate "Marvell 88E6172/88E6176/88E6352 ethernet switch chip support"
depends on NET_DSA
select NET_DSA_MV88E6XXX
select NET_DSA_TAG_EDSA
---help---
This enables support for the Marvell 88E6176 and 88E6352 ethernet
switch chips.
This enables support for the Marvell 88E6172, 88E6176 and 88E6352
ethernet switch chips.
config NET_DSA_BCM_SF2
tristate "Broadcom Starfighter 2 Ethernet switch support"

View File

@ -24,6 +24,7 @@
#include <net/dsa.h>
#include <linux/ethtool.h>
#include <linux/if_bridge.h>
#include <linux/brcmphy.h>
#include "bcm_sf2.h"
#include "bcm_sf2_regs.h"
@ -697,7 +698,7 @@ static int bcm_sf2_sw_setup(struct dsa_switch *ds)
/* Include the pseudo-PHY address and the broadcast PHY address to
* divert reads towards our workaround
*/
ds->phys_mii_mask |= ((1 << 30) | (1 << 0));
ds->phys_mii_mask |= ((1 << BRCM_PSEUDO_PHY_ADDR) | (1 << 0));
rev = reg_readl(priv, REG_SWITCH_REVISION);
priv->hw_params.top_rev = (rev >> SWITCH_TOP_REV_SHIFT) &
@ -782,7 +783,7 @@ static int bcm_sf2_sw_phy_read(struct dsa_switch *ds, int addr, int regnum)
*/
switch (addr) {
case 0:
case 30:
case BRCM_PSEUDO_PHY_ADDR:
return bcm_sf2_sw_indir_rw(ds, 1, addr, regnum, 0);
default:
return 0xffff;
@ -797,7 +798,7 @@ static int bcm_sf2_sw_phy_write(struct dsa_switch *ds, int addr, int regnum,
*/
switch (addr) {
case 0:
case 30:
case BRCM_PSEUDO_PHY_ADDR:
bcm_sf2_sw_indir_rw(ds, 0, addr, regnum, val);
break;
}
@ -911,6 +912,13 @@ static void bcm_sf2_sw_fixed_link_update(struct dsa_switch *ds, int port,
*/
if (port == 7) {
status->link = priv->port_sts[port].link;
/* For MoCA interfaces, also force a link down notification
* since some versions of the user-space daemon (mocad) use
* cmd->autoneg to force the link, which messes up the PHY
* state machine and makes it go into the PHY_FORCING state instead.
*/
if (!status->link)
netif_carrier_off(ds->ports[port]);
status->duplex = 1;
} else {
status->link = 1;

View File

@ -54,192 +54,40 @@ static char *mv88e6123_61_65_probe(struct device *host_dev, int sw_addr)
static int mv88e6123_61_65_setup_global(struct dsa_switch *ds)
{
u32 upstream_port = dsa_upstream_port(ds);
int ret;
int i;
u32 reg;
ret = mv88e6xxx_setup_global(ds);
if (ret)
return ret;
/* Disable the PHY polling unit (since there won't be any
* external PHYs to poll), don't discard packets with
* excessive collisions, and mask all interrupt sources.
*/
REG_WRITE(REG_GLOBAL, 0x04, 0x0000);
/* Set the default address aging time to 5 minutes, and
* enable address learn messages to be sent to all message
* ports.
*/
REG_WRITE(REG_GLOBAL, 0x0a, 0x0148);
/* Configure the priority mapping registers. */
ret = mv88e6xxx_config_prio(ds);
if (ret < 0)
return ret;
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL, 0x0000);
/* Configure the upstream port, and configure the upstream
* port as the port to which ingress and egress monitor frames
* are to be sent.
*/
REG_WRITE(REG_GLOBAL, 0x1a, (dsa_upstream_port(ds) * 0x1110));
reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
/* Disable remote management for now, and set the switch's
* DSA device number.
*/
REG_WRITE(REG_GLOBAL, 0x1c, ds->index & 0x1f);
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:2x to the CPU port.
*/
REG_WRITE(REG_GLOBAL2, 0x02, 0xffff);
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:0x to the CPU port.
*/
REG_WRITE(REG_GLOBAL2, 0x03, 0xffff);
/* Disable the loopback filter, disable flow control
* messages, disable flood broadcast override, disable
* removing of provider tags, disable ATU age violation
* interrupts, disable tag flow control, force flow
* control priority to the highest, and send all special
* multicast frames to the CPU at the highest priority.
*/
REG_WRITE(REG_GLOBAL2, 0x05, 0x00ff);
/* Program the DSA routing table. */
for (i = 0; i < 32; i++) {
int nexthop;
nexthop = 0x1f;
if (i != ds->index && i < ds->dst->pd->nr_chips)
nexthop = ds->pd->rtable[i] & 0x1f;
REG_WRITE(REG_GLOBAL2, 0x06, 0x8000 | (i << 8) | nexthop);
}
/* Clear all trunk masks. */
for (i = 0; i < 8; i++)
REG_WRITE(REG_GLOBAL2, 0x07, 0x8000 | (i << 12) | 0xff);
/* Clear all trunk mappings. */
for (i = 0; i < 16; i++)
REG_WRITE(REG_GLOBAL2, 0x08, 0x8000 | (i << 11));
/* Disable ingress rate limiting by resetting all ingress
* rate limit registers to their initial state.
*/
for (i = 0; i < 6; i++)
REG_WRITE(REG_GLOBAL2, 0x09, 0x9000 | (i << 8));
/* Initialise cross-chip port VLAN table to reset defaults. */
REG_WRITE(REG_GLOBAL2, 0x0b, 0x9000);
/* Clear the priority override table. */
for (i = 0; i < 16; i++)
REG_WRITE(REG_GLOBAL2, 0x0f, 0x8000 | (i << 8));
/* @@@ initialise AVB (22/23) watchdog (27) sdet (29) registers */
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2, ds->index & 0x1f);
return 0;
}
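A side note on the GLOBAL_MONITOR_CONTROL rewrite above: the old code multiplied the upstream port number by 0x1110, which is simply the ingress/egress/ARP fields all filled with the same port. The sketch below checks that equivalence; the shift values 12/8/4 are an assumption inferred from the 0x1110 form, not quoted from mv88e6xxx.h.

#include <assert.h>
#include <stdio.h>

#define GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT 12
#define GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT   8
#define GLOBAL_MONITOR_CONTROL_ARP_SHIFT      4

int main(void)
{
    unsigned int upstream_port;

    for (upstream_port = 0; upstream_port < 11; upstream_port++) {
        unsigned int reg =
            upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
            upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
            upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;

        /* the explicit shifts reproduce the old "port * 0x1110" shortcut */
        assert(reg == upstream_port * 0x1110);
        printf("upstream port %2u -> monitor control 0x%04x\n",
               upstream_port, reg);
    }
    return 0;
}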
static int mv88e6123_61_65_setup_port(struct dsa_switch *ds, int p)
{
int addr = REG_PORT(p);
u16 val;
/* MAC Forcing register: don't force link, speed, duplex
* or flow control state to any particular values on physical
* ports, but force the CPU port and all DSA ports to 1000 Mb/s
* full duplex.
*/
if (dsa_is_cpu_port(ds, p) || ds->dsa_port_mask & (1 << p))
REG_WRITE(addr, 0x01, 0x003e);
else
REG_WRITE(addr, 0x01, 0x0003);
/* Do not limit the period of time that this port can be
* paused for by the remote end or the period of time that
* this port can pause the remote end.
*/
REG_WRITE(addr, 0x02, 0x0000);
/* Port Control: disable Drop-on-Unlock, disable Drop-on-Lock,
* disable Header mode, enable IGMP/MLD snooping, disable VLAN
* tunneling, determine priority by looking at 802.1p and IP
* priority fields (IP prio has precedence), and set STP state
* to Forwarding.
*
* If this is the CPU link, use DSA or EDSA tagging depending
* on which tagging mode was configured.
*
* If this is a link to another switch, use DSA tagging mode.
*
* If this is the upstream port for this switch, enable
* forwarding of unknown unicasts and multicasts.
*/
val = 0x0433;
if (dsa_is_cpu_port(ds, p)) {
if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
val |= 0x3300;
else
val |= 0x0100;
}
if (ds->dsa_port_mask & (1 << p))
val |= 0x0100;
if (p == dsa_upstream_port(ds))
val |= 0x000c;
REG_WRITE(addr, 0x04, val);
/* Port Control 2: don't force a good FCS, set the maximum
* frame size to 10240 bytes, don't let the switch add or
* strip 802.1q tags, don't discard tagged or untagged frames
* on this port, do a destination address lookup on all
* received packets as usual, disable ARP mirroring and don't
* send a copy of all transmitted/received frames on this port
* to the CPU.
*/
REG_WRITE(addr, 0x08, 0x2080);
/* Egress rate control: disable egress rate control. */
REG_WRITE(addr, 0x09, 0x0001);
/* Egress rate control 2: disable egress rate control. */
REG_WRITE(addr, 0x0a, 0x0000);
/* Port Association Vector: when learning source addresses
* of packets, add the address to the address database using
* a port bitmap that has only the bit for this port set and
* the other bits clear.
*/
REG_WRITE(addr, 0x0b, 1 << p);
/* Port ATU control: disable limiting the number of address
* database entries that this port is allowed to use.
*/
REG_WRITE(addr, 0x0c, 0x0000);
/* Priority Override: disable DA, SA and VTU priority override. */
REG_WRITE(addr, 0x0d, 0x0000);
/* Port Ethertype: use the Ethertype DSA Ethertype value. */
REG_WRITE(addr, 0x0f, ETH_P_EDSA);
/* Tag Remap: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x18, 0x3210);
/* Tag Remap 2: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x19, 0x7654);
return mv88e6xxx_setup_port_common(ds, p);
}
static int mv88e6123_61_65_setup(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int i;
int ret;
ret = mv88e6xxx_setup_common(ds);
@ -262,19 +110,11 @@ static int mv88e6123_61_65_setup(struct dsa_switch *ds)
if (ret < 0)
return ret;
/* @@@ initialise vtu and atu */
ret = mv88e6123_61_65_setup_global(ds);
if (ret < 0)
return ret;
for (i = 0; i < ps->num_ports; i++) {
ret = mv88e6123_61_65_setup_port(ds, i);
if (ret < 0)
return ret;
}
return 0;
return mv88e6xxx_setup_ports(ds);
}
struct dsa_switch_driver mv88e6123_61_65_switch_driver = {

View File

@ -37,6 +37,8 @@ static char *mv88e6131_probe(struct device *host_dev, int sw_addr)
return "Marvell 88E6131 (B2)";
if (ret_masked == PORT_SWITCH_ID_6131)
return "Marvell 88E6131";
if (ret_masked == PORT_SWITCH_ID_6185)
return "Marvell 88E6185";
}
return NULL;
@ -44,186 +46,62 @@ static char *mv88e6131_probe(struct device *host_dev, int sw_addr)
static int mv88e6131_setup_global(struct dsa_switch *ds)
{
u32 upstream_port = dsa_upstream_port(ds);
int ret;
int i;
u32 reg;
ret = mv88e6xxx_setup_global(ds);
if (ret)
return ret;
/* Enable the PHY polling unit, don't discard packets with
* excessive collisions, use a weighted fair queueing scheme
* to arbitrate between packet queues, set the maximum frame
* size to 1632, and mask all interrupt sources.
*/
REG_WRITE(REG_GLOBAL, 0x04, 0x4400);
/* Set the default address aging time to 5 minutes, and
* enable address learn messages to be sent to all message
* ports.
*/
REG_WRITE(REG_GLOBAL, 0x0a, 0x0148);
/* Configure the priority mapping registers. */
ret = mv88e6xxx_config_prio(ds);
if (ret < 0)
return ret;
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL,
GLOBAL_CONTROL_PPU_ENABLE | GLOBAL_CONTROL_MAX_FRAME_1632);
/* Set the VLAN ethertype to 0x8100. */
REG_WRITE(REG_GLOBAL, 0x19, 0x8100);
REG_WRITE(REG_GLOBAL, GLOBAL_CORE_TAG_TYPE, 0x8100);
/* Disable ARP mirroring, and configure the upstream port as
* the port to which ingress and egress monitor frames are to
* be sent.
*/
REG_WRITE(REG_GLOBAL, 0x1a, (dsa_upstream_port(ds) * 0x1100) | 0x00f0);
reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
GLOBAL_MONITOR_CONTROL_ARP_DISABLED;
REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
/* Disable cascade port functionality unless this device
* is used in a cascade configuration, and set the switch's
* DSA device number.
*/
if (ds->dst->pd->nr_chips > 1)
REG_WRITE(REG_GLOBAL, 0x1c, 0xf000 | (ds->index & 0x1f));
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2,
GLOBAL_CONTROL_2_MULTIPLE_CASCADE |
(ds->index & 0x1f));
else
REG_WRITE(REG_GLOBAL, 0x1c, 0xe000 | (ds->index & 0x1f));
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:0x to the CPU port.
*/
REG_WRITE(REG_GLOBAL2, 0x03, 0xffff);
/* Ignore removed tag data on doubly tagged packets, disable
* flow control messages, force flow control priority to the
* highest, and send all special multicast frames to the CPU
* port at the highest priority.
*/
REG_WRITE(REG_GLOBAL2, 0x05, 0x00ff);
/* Program the DSA routing table. */
for (i = 0; i < 32; i++) {
int nexthop;
nexthop = 0x1f;
if (ds->pd->rtable &&
i != ds->index && i < ds->dst->pd->nr_chips)
nexthop = ds->pd->rtable[i] & 0x1f;
REG_WRITE(REG_GLOBAL2, 0x06, 0x8000 | (i << 8) | nexthop);
}
/* Clear all trunk masks. */
for (i = 0; i < 8; i++)
REG_WRITE(REG_GLOBAL2, 0x07, 0x8000 | (i << 12) | 0x7ff);
/* Clear all trunk mappings. */
for (i = 0; i < 16; i++)
REG_WRITE(REG_GLOBAL2, 0x08, 0x8000 | (i << 11));
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2,
GLOBAL_CONTROL_2_NO_CASCADE |
(ds->index & 0x1f));
/* Force the priority of IGMP/MLD snoop frames and ARP frames
* to the highest setting.
*/
REG_WRITE(REG_GLOBAL2, 0x0f, 0x00ff);
REG_WRITE(REG_GLOBAL2, GLOBAL2_PRIO_OVERRIDE,
GLOBAL2_PRIO_OVERRIDE_FORCE_SNOOP |
7 << GLOBAL2_PRIO_OVERRIDE_SNOOP_SHIFT |
GLOBAL2_PRIO_OVERRIDE_FORCE_ARP |
7 << GLOBAL2_PRIO_OVERRIDE_ARP_SHIFT);
return 0;
}
static int mv88e6131_setup_port(struct dsa_switch *ds, int p)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int addr = REG_PORT(p);
u16 val;
/* MAC Forcing register: don't force link, speed, duplex
* or flow control state to any particular values on physical
* ports, but force the CPU port and all DSA ports to 1000 Mb/s
* (100 Mb/s on 6085) full duplex.
*/
if (dsa_is_cpu_port(ds, p) || ds->dsa_port_mask & (1 << p))
if (ps->id == PORT_SWITCH_ID_6085)
REG_WRITE(addr, 0x01, 0x003d); /* 100 Mb/s */
else
REG_WRITE(addr, 0x01, 0x003e); /* 1000 Mb/s */
else
REG_WRITE(addr, 0x01, 0x0003);
/* Port Control: disable Core Tag, disable Drop-on-Lock,
* transmit frames unmodified, disable Header mode,
* enable IGMP/MLD snoop, disable DoubleTag, disable VLAN
* tunneling, determine priority by looking at 802.1p and
* IP priority fields (IP prio has precedence), and set STP
* state to Forwarding.
*
* If this is the upstream port for this switch, enable
* forwarding of unknown unicasts, and enable DSA tagging
* mode.
*
* If this is the link to another switch, use DSA tagging
* mode, but do not enable forwarding of unknown unicasts.
*/
val = 0x0433;
if (p == dsa_upstream_port(ds)) {
val |= 0x0104;
/* On 6085, unknown multicast forward is controlled
* here rather than in Port Control 2 register.
*/
if (ps->id == PORT_SWITCH_ID_6085)
val |= 0x0008;
}
if (ds->dsa_port_mask & (1 << p))
val |= 0x0100;
REG_WRITE(addr, 0x04, val);
/* Port Control 2: don't force a good FCS, don't use
* VLAN-based, source address-based or destination
* address-based priority overrides, don't let the switch
* add or strip 802.1q tags, don't discard tagged or
* untagged frames on this port, do a destination address
* lookup on received packets as usual, don't send a copy
* of all transmitted/received frames on this port to the
* CPU, and configure the upstream port number.
*
* If this is the upstream port for this switch, enable
* forwarding of unknown multicast addresses.
*/
if (ps->id == PORT_SWITCH_ID_6085)
/* on 6085, bits 3:0 are reserved, bit 6 control ARP
* mirroring, and multicast forward is handled in
* Port Control register.
*/
REG_WRITE(addr, 0x08, 0x0080);
else {
val = 0x0080 | dsa_upstream_port(ds);
if (p == dsa_upstream_port(ds))
val |= 0x0040;
REG_WRITE(addr, 0x08, val);
}
/* Rate Control: disable ingress rate limiting. */
REG_WRITE(addr, 0x09, 0x0000);
/* Rate Control 2: disable egress rate limiting. */
REG_WRITE(addr, 0x0a, 0x0000);
/* Port Association Vector: when learning source addresses
* of packets, add the address to the address database using
* a port bitmap that has only the bit for this port set and
* the other bits clear.
*/
REG_WRITE(addr, 0x0b, 1 << p);
/* Tag Remap: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x18, 0x3210);
/* Tag Remap 2: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x19, 0x7654);
return mv88e6xxx_setup_port_common(ds, p);
}
static int mv88e6131_setup(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int i;
int ret;
ret = mv88e6xxx_setup_common(ds);
@ -234,6 +112,7 @@ static int mv88e6131_setup(struct dsa_switch *ds)
switch (ps->id) {
case PORT_SWITCH_ID_6085:
case PORT_SWITCH_ID_6185:
ps->num_ports = 10;
break;
case PORT_SWITCH_ID_6095:
@ -251,19 +130,11 @@ static int mv88e6131_setup(struct dsa_switch *ds)
if (ret < 0)
return ret;
/* @@@ initialise vtu and atu */
ret = mv88e6131_setup_global(ds);
if (ret < 0)
return ret;
for (i = 0; i < ps->num_ports; i++) {
ret = mv88e6131_setup_port(ds, i);
if (ret < 0)
return ret;
}
return 0;
return mv88e6xxx_setup_ports(ds);
}
static int mv88e6131_port_to_phy_addr(struct dsa_switch *ds, int port)

View File

@ -1,4 +1,4 @@
/* net/dsa/mv88e6171.c - Marvell 88e6171/8826172 switch chip support
/* net/dsa/mv88e6171.c - Marvell 88e6171 switch chip support
* Copyright (c) 2008-2009 Marvell Semiconductor
* Copyright (c) 2014 Claudio Leite <leitec@staticky.com>
*
@ -29,8 +29,12 @@ static char *mv88e6171_probe(struct device *host_dev, int sw_addr)
if (ret >= 0) {
if ((ret & 0xfff0) == PORT_SWITCH_ID_6171)
return "Marvell 88E6171";
if ((ret & 0xfff0) == PORT_SWITCH_ID_6172)
return "Marvell 88E6172";
if ((ret & 0xfff0) == PORT_SWITCH_ID_6175)
return "Marvell 88E6175";
if ((ret & 0xfff0) == PORT_SWITCH_ID_6350)
return "Marvell 88E6350";
if ((ret & 0xfff0) == PORT_SWITCH_ID_6351)
return "Marvell 88E6351";
}
return NULL;
@ -38,196 +42,41 @@ static char *mv88e6171_probe(struct device *host_dev, int sw_addr)
static int mv88e6171_setup_global(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u32 upstream_port = dsa_upstream_port(ds);
int ret;
int i;
u32 reg;
ret = mv88e6xxx_setup_global(ds);
if (ret)
return ret;
/* Discard packets with excessive collisions, mask all
* interrupt sources, enable PPU.
*/
REG_WRITE(REG_GLOBAL, 0x04, 0x6000);
/* Set the default address aging time to 5 minutes, and
* enable address learn messages to be sent to all message
* ports.
*/
REG_WRITE(REG_GLOBAL, 0x0a, 0x0148);
/* Configure the priority mapping registers. */
ret = mv88e6xxx_config_prio(ds);
if (ret < 0)
return ret;
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL,
GLOBAL_CONTROL_PPU_ENABLE | GLOBAL_CONTROL_DISCARD_EXCESS);
/* Configure the upstream port, and configure the upstream
* port as the port to which ingress and egress monitor frames
* are to be sent.
*/
if (REG_READ(REG_PORT(0), 0x03) == 0x1710)
REG_WRITE(REG_GLOBAL, 0x1a, (dsa_upstream_port(ds) * 0x1111));
else
REG_WRITE(REG_GLOBAL, 0x1a, (dsa_upstream_port(ds) * 0x1110));
reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_MIRROR_SHIFT;
REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
/* Disable remote management for now, and set the switch's
* DSA device number.
*/
REG_WRITE(REG_GLOBAL, 0x1c, ds->index & 0x1f);
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:2x to the CPU port.
*/
REG_WRITE(REG_GLOBAL2, 0x02, 0xffff);
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:0x to the CPU port.
*/
REG_WRITE(REG_GLOBAL2, 0x03, 0xffff);
/* Disable the loopback filter, disable flow control
* messages, disable flood broadcast override, disable
* removing of provider tags, disable ATU age violation
* interrupts, disable tag flow control, force flow
* control priority to the highest, and send all special
* multicast frames to the CPU at the highest priority.
*/
REG_WRITE(REG_GLOBAL2, 0x05, 0x00ff);
/* Program the DSA routing table. */
for (i = 0; i < 32; i++) {
int nexthop;
nexthop = 0x1f;
if (i != ds->index && i < ds->dst->pd->nr_chips)
nexthop = ds->pd->rtable[i] & 0x1f;
REG_WRITE(REG_GLOBAL2, 0x06, 0x8000 | (i << 8) | nexthop);
}
/* Clear all trunk masks. */
for (i = 0; i < ps->num_ports; i++)
REG_WRITE(REG_GLOBAL2, 0x07, 0x8000 | (i << 12) | 0xff);
/* Clear all trunk mappings. */
for (i = 0; i < 16; i++)
REG_WRITE(REG_GLOBAL2, 0x08, 0x8000 | (i << 11));
/* Disable ingress rate limiting by resetting all ingress
* rate limit registers to their initial state.
*/
for (i = 0; i < 6; i++)
REG_WRITE(REG_GLOBAL2, 0x09, 0x9000 | (i << 8));
/* Initialise cross-chip port VLAN table to reset defaults. */
REG_WRITE(REG_GLOBAL2, 0x0b, 0x9000);
/* Clear the priority override table. */
for (i = 0; i < 16; i++)
REG_WRITE(REG_GLOBAL2, 0x0f, 0x8000 | (i << 8));
/* @@@ initialise AVB (22/23) watchdog (27) sdet (29) registers */
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2, ds->index & 0x1f);
return 0;
}
static int mv88e6171_setup_port(struct dsa_switch *ds, int p)
{
int addr = REG_PORT(p);
u16 val;
/* MAC Forcing register: don't force link, speed, duplex
* or flow control state to any particular values on physical
* ports, but force the CPU port and all DSA ports to 1000 Mb/s
* full duplex.
*/
val = REG_READ(addr, 0x01);
if (dsa_is_cpu_port(ds, p) || ds->dsa_port_mask & (1 << p))
REG_WRITE(addr, 0x01, val | 0x003e);
else
REG_WRITE(addr, 0x01, val | 0x0003);
/* Do not limit the period of time that this port can be
* paused for by the remote end or the period of time that
* this port can pause the remote end.
*/
REG_WRITE(addr, 0x02, 0x0000);
/* Port Control: disable Drop-on-Unlock, disable Drop-on-Lock,
* disable Header mode, enable IGMP/MLD snooping, disable VLAN
* tunneling, determine priority by looking at 802.1p and IP
* priority fields (IP prio has precedence), and set STP state
* to Forwarding.
*
* If this is the CPU link, use DSA or EDSA tagging depending
* on which tagging mode was configured.
*
* If this is a link to another switch, use DSA tagging mode.
*
* If this is the upstream port for this switch, enable
* forwarding of unknown unicasts and multicasts.
*/
val = 0x0433;
if (dsa_is_cpu_port(ds, p)) {
if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
val |= 0x3300;
else
val |= 0x0100;
}
if (ds->dsa_port_mask & (1 << p))
val |= 0x0100;
if (p == dsa_upstream_port(ds))
val |= 0x000c;
REG_WRITE(addr, 0x04, val);
/* Port Control 2: don't force a good FCS, set the maximum
* frame size to 10240 bytes, don't let the switch add or
* strip 802.1q tags, don't discard tagged or untagged frames
* on this port, do a destination address lookup on all
* received packets as usual, disable ARP mirroring and don't
* send a copy of all transmitted/received frames on this port
* to the CPU.
*/
REG_WRITE(addr, 0x08, 0x2080);
/* Egress rate control: disable egress rate control. */
REG_WRITE(addr, 0x09, 0x0001);
/* Egress rate control 2: disable egress rate control. */
REG_WRITE(addr, 0x0a, 0x0000);
/* Port Association Vector: when learning source addresses
* of packets, add the address to the address database using
* a port bitmap that has only the bit for this port set and
* the other bits clear.
*/
REG_WRITE(addr, 0x0b, 1 << p);
/* Port ATU control: disable limiting the number of address
* database entries that this port is allowed to use.
*/
REG_WRITE(addr, 0x0c, 0x0000);
/* Priority Override: disable DA, SA and VTU priority override. */
REG_WRITE(addr, 0x0d, 0x0000);
/* Port Ethertype: use the Ethertype DSA Ethertype value. */
REG_WRITE(addr, 0x0f, ETH_P_EDSA);
/* Tag Remap: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x18, 0x3210);
/* Tag Remap 2: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x19, 0x7654);
return mv88e6xxx_setup_port_common(ds, p);
}
static int mv88e6171_setup(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int i;
int ret;
ret = mv88e6xxx_setup_common(ds);
@ -240,44 +89,11 @@ static int mv88e6171_setup(struct dsa_switch *ds)
if (ret < 0)
return ret;
/* @@@ initialise vtu and atu */
ret = mv88e6171_setup_global(ds);
if (ret < 0)
return ret;
for (i = 0; i < ps->num_ports; i++) {
if (!(dsa_is_cpu_port(ds, i) || ds->phys_port_mask & (1 << i)))
continue;
ret = mv88e6171_setup_port(ds, i);
if (ret < 0)
return ret;
}
return 0;
}
static int mv88e6171_get_eee(struct dsa_switch *ds, int port,
struct ethtool_eee *e)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
if (ps->id == PORT_SWITCH_ID_6172)
return mv88e6xxx_get_eee(ds, port, e);
return -EOPNOTSUPP;
}
static int mv88e6171_set_eee(struct dsa_switch *ds, int port,
struct phy_device *phydev, struct ethtool_eee *e)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
if (ps->id == PORT_SWITCH_ID_6172)
return mv88e6xxx_set_eee(ds, port, phydev, e);
return -EOPNOTSUPP;
return mv88e6xxx_setup_ports(ds);
}
struct dsa_switch_driver mv88e6171_switch_driver = {
@ -292,8 +108,6 @@ struct dsa_switch_driver mv88e6171_switch_driver = {
.get_strings = mv88e6xxx_get_strings,
.get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
.get_sset_count = mv88e6xxx_get_sset_count,
.set_eee = mv88e6171_set_eee,
.get_eee = mv88e6171_get_eee,
#ifdef CONFIG_NET_DSA_HWMON
.get_temp = mv88e6xxx_get_temp,
#endif
@ -308,4 +122,6 @@ struct dsa_switch_driver mv88e6171_switch_driver = {
};
MODULE_ALIAS("platform:mv88e6171");
MODULE_ALIAS("platform:mv88e6172");
MODULE_ALIAS("platform:mv88e6175");
MODULE_ALIAS("platform:mv88e6350");
MODULE_ALIAS("platform:mv88e6351");

View File

@ -32,6 +32,8 @@ static char *mv88e6352_probe(struct device *host_dev, int sw_addr)
ret = __mv88e6xxx_reg_read(bus, sw_addr, REG_PORT(0), PORT_SWITCH_ID);
if (ret >= 0) {
if ((ret & 0xfff0) == PORT_SWITCH_ID_6172)
return "Marvell 88E6172";
if ((ret & 0xfff0) == PORT_SWITCH_ID_6176)
return "Marvell 88E6176";
if (ret == PORT_SWITCH_ID_6352_A0)
@ -47,187 +49,37 @@ static char *mv88e6352_probe(struct device *host_dev, int sw_addr)
static int mv88e6352_setup_global(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u32 upstream_port = dsa_upstream_port(ds);
int ret;
int i;
u32 reg;
ret = mv88e6xxx_setup_global(ds);
if (ret)
return ret;
/* Discard packets with excessive collisions,
* mask all interrupt sources, enable PPU (bit 14, undocumented).
*/
REG_WRITE(REG_GLOBAL, 0x04, 0x6000);
/* Set the default address aging time to 5 minutes, and
* enable address learn messages to be sent to all message
* ports.
*/
REG_WRITE(REG_GLOBAL, 0x0a, 0x0148);
/* Configure the priority mapping registers. */
ret = mv88e6xxx_config_prio(ds);
if (ret < 0)
return ret;
REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL,
GLOBAL_CONTROL_PPU_ENABLE | GLOBAL_CONTROL_DISCARD_EXCESS);
/* Configure the upstream port, and configure the upstream
* port as the port to which ingress and egress monitor frames
* are to be sent.
*/
REG_WRITE(REG_GLOBAL, 0x1a, (dsa_upstream_port(ds) * 0x1110));
reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
/* Disable remote management for now, and set the switch's
* DSA device number.
*/
REG_WRITE(REG_GLOBAL, 0x1c, ds->index & 0x1f);
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:2x to the CPU port.
*/
REG_WRITE(REG_GLOBAL2, 0x02, 0xffff);
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:0x to the CPU port.
*/
REG_WRITE(REG_GLOBAL2, 0x03, 0xffff);
/* Disable the loopback filter, disable flow control
* messages, disable flood broadcast override, disable
* removing of provider tags, disable ATU age violation
* interrupts, disable tag flow control, force flow
* control priority to the highest, and send all special
* multicast frames to the CPU at the highest priority.
*/
REG_WRITE(REG_GLOBAL2, 0x05, 0x00ff);
/* Program the DSA routing table. */
for (i = 0; i < 32; i++) {
int nexthop = 0x1f;
if (i != ds->index && i < ds->dst->pd->nr_chips)
nexthop = ds->pd->rtable[i] & 0x1f;
REG_WRITE(REG_GLOBAL2, 0x06, 0x8000 | (i << 8) | nexthop);
}
/* Clear all trunk masks. */
for (i = 0; i < 8; i++)
REG_WRITE(REG_GLOBAL2, 0x07, 0x8000 | (i << 12) | 0x7f);
/* Clear all trunk mappings. */
for (i = 0; i < 16; i++)
REG_WRITE(REG_GLOBAL2, 0x08, 0x8000 | (i << 11));
/* Disable ingress rate limiting by resetting all ingress
* rate limit registers to their initial state.
*/
for (i = 0; i < ps->num_ports; i++)
REG_WRITE(REG_GLOBAL2, 0x09, 0x9000 | (i << 8));
/* Initialise cross-chip port VLAN table to reset defaults. */
REG_WRITE(REG_GLOBAL2, 0x0b, 0x9000);
/* Clear the priority override table. */
for (i = 0; i < 16; i++)
REG_WRITE(REG_GLOBAL2, 0x0f, 0x8000 | (i << 8));
/* @@@ initialise AVB (22/23) watchdog (27) sdet (29) registers */
return 0;
}
static int mv88e6352_setup_port(struct dsa_switch *ds, int p)
{
int addr = REG_PORT(p);
u16 val;
/* MAC Forcing register: don't force link, speed, duplex
* or flow control state to any particular values on physical
* ports, but force the CPU port and all DSA ports to 1000 Mb/s
* full duplex.
*/
if (dsa_is_cpu_port(ds, p) || ds->dsa_port_mask & (1 << p))
REG_WRITE(addr, 0x01, 0x003e);
else
REG_WRITE(addr, 0x01, 0x0003);
/* Do not limit the period of time that this port can be
* paused for by the remote end or the period of time that
* this port can pause the remote end.
*/
REG_WRITE(addr, 0x02, 0x0000);
/* Port Control: disable Drop-on-Unlock, disable Drop-on-Lock,
* disable Header mode, enable IGMP/MLD snooping, disable VLAN
* tunneling, determine priority by looking at 802.1p and IP
* priority fields (IP prio has precedence), and set STP state
* to Forwarding.
*
* If this is the CPU link, use DSA or EDSA tagging depending
* on which tagging mode was configured.
*
* If this is a link to another switch, use DSA tagging mode.
*
* If this is the upstream port for this switch, enable
* forwarding of unknown unicasts and multicasts.
*/
val = 0x0433;
if (dsa_is_cpu_port(ds, p)) {
if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
val |= 0x3300;
else
val |= 0x0100;
}
if (ds->dsa_port_mask & (1 << p))
val |= 0x0100;
if (p == dsa_upstream_port(ds))
val |= 0x000c;
REG_WRITE(addr, 0x04, val);
/* Port Control 2: don't force a good FCS, set the maximum
* frame size to 10240 bytes, don't let the switch add or
* strip 802.1q tags, don't discard tagged or untagged frames
* on this port, do a destination address lookup on all
* received packets as usual, disable ARP mirroring and don't
* send a copy of all transmitted/received frames on this port
* to the CPU.
*/
REG_WRITE(addr, 0x08, 0x2080);
/* Egress rate control: disable egress rate control. */
REG_WRITE(addr, 0x09, 0x0001);
/* Egress rate control 2: disable egress rate control. */
REG_WRITE(addr, 0x0a, 0x0000);
/* Port Association Vector: when learning source addresses
* of packets, add the address to the address database using
* a port bitmap that has only the bit for this port set and
* the other bits clear.
*/
REG_WRITE(addr, 0x0b, 1 << p);
/* Port ATU control: disable limiting the number of address
* database entries that this port is allowed to use.
*/
REG_WRITE(addr, 0x0c, 0x0000);
/* Priority Override: disable DA, SA and VTU priority override. */
REG_WRITE(addr, 0x0d, 0x0000);
/* Port Ethertype: use the Ethertype DSA Ethertype value. */
REG_WRITE(addr, 0x0f, ETH_P_EDSA);
/* Tag Remap: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x18, 0x3210);
/* Tag Remap 2: use an identity 802.1p prio -> switch prio
* mapping.
*/
REG_WRITE(addr, 0x19, 0x7654);
return mv88e6xxx_setup_port_common(ds, p);
}
#ifdef CONFIG_NET_DSA_HWMON
static int mv88e6352_get_temp(struct dsa_switch *ds, int *temp)
@ -292,7 +144,6 @@ static int mv88e6352_setup(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int ret;
int i;
ret = mv88e6xxx_setup_common(ds);
if (ret < 0)
@ -306,19 +157,11 @@ static int mv88e6352_setup(struct dsa_switch *ds)
if (ret < 0)
return ret;
/* @@@ initialise vtu and atu */
ret = mv88e6352_setup_global(ds);
if (ret < 0)
return ret;
for (i = 0; i < ps->num_ports; i++) {
ret = mv88e6352_setup_port(ds, i);
if (ret < 0)
return ret;
}
return 0;
return mv88e6xxx_setup_ports(ds);
}
static int mv88e6352_read_eeprom_word(struct dsa_switch *ds, int addr)
@ -552,3 +395,4 @@ struct dsa_switch_driver mv88e6352_switch_driver = {
};
MODULE_ALIAS("platform:mv88e6352");
MODULE_ALIAS("platform:mv88e6172");

Some files were not shown because too many files have changed in this diff.