
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:
 "Yeah I should have sent a pull request last week, so there is a lot
  more here than usual:

   1) Fix memory leak in ebtables compat code, from Wenwen Wang.

   2) Several kTLS bug fixes from Jakub Kicinski (circular close on
      disconnect etc.)

   3) Force slave speed check on link state recovery in bonding 802.3ad
      mode, from Thomas Falcon.

   4) Clear RX descriptor bits before assigning buffers to them in
      stmmac, from Jose Abreu.

   5) Several missing of_node_put() calls, mostly wrt. for_each_*() OF
      loops, from Nishka Dasgupta.

   6) Double kfree_skb() in peak_usb can driver, from Stephane Grosjean.

   7) Need to hold sock across skb->destructor invocation, from Cong
      Wang.

   8) IP header length needs to be validated in ipip tunnel xmit, from
      Haishuang Yan.

   9) Use after free in ip6 tunnel driver, also from Haishuang Yan.

  10) Do not use MSI interrupts on r8169 chips before RTL8168d, from
      Heiner Kallweit.

  11) Upon bridge device init failure, we need to delete the local fdb.
      From Nikolay Aleksandrov.

  12) Handle errors from of_get_mac_address() properly in stmmac, from
      Martin Blumenstingl.

  13) Handle concurrent rename vs. dump in netfilter ipset, from Jozsef
      Kadlecsik.

  14) Setting NETIF_F_LLTX on mac80211 causes complete breakage with
      some devices, so revert. From Johannes Berg.

  15) Fix deadlock in rxrpc, from David Howells.

  16) Fix Kconfig deps of enetc driver, we must have PHYLIB. From Yue
      Haibing.

  17) Fix mvpp2 crash on module removal, from Matteo Croce.

  18) Fix race in genphy_update_link, from Heiner Kallweit.

  19) bpf_xdp_adjust_head() stopped working with generic XDP when we
      fixed generic XDP to support stacked devices properly, fix from
      Jesper Dangaard Brouer.

  20) Unbalanced RCU locking in rt6_update_exception_stamp_rt(), from
      David Ahern.

  21) Several memory leaks in new sja1105 driver, from Vladimir Oltean"

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (214 commits)
  net: dsa: sja1105: Fix memory leak on meta state machine error path
  net: dsa: sja1105: Fix memory leak on meta state machine normal path
  net: dsa: sja1105: Really fix panic on unregistering PTP clock
  net: dsa: sja1105: Use the LOCKEDS bit for SJA1105 E/T as well
  net: dsa: sja1105: Fix broken learning with vlan_filtering disabled
  net: dsa: qca8k: Add of_node_put() in qca8k_setup_mdio_bus()
  net: sched: sample: allow accessing psample_group with rtnl
  net: sched: police: allow accessing police->params with rtnl
  net: hisilicon: Fix dma_map_single failed on arm64
  net: hisilicon: fix hip04-xmit never return TX_BUSY
  net: hisilicon: make hip04_tx_reclaim non-reentrant
  tc-testing: updated vlan action tests with batch create/delete
  net sched: update vlan action for batched events operations
  net: stmmac: tc: Do not return a fragment entry
  net: stmmac: Fix issues when number of Queues >= 4
  net: stmmac: xgmac: Fix XGMAC selftests
  be2net: disable bh with spin_lock in be_process_mcc
  net: cxgb3_main: Fix a resource leak in a error path in 'init_one()'
  net: ethernet: sun4i-emac: Support phy-handle property for finding PHYs
  net: bridge: move default pvid init/deinit to NETDEV_REGISTER/UNREGISTER
  ...
Linus Torvalds 2019-08-06 17:11:59 -07:00
commit 33920f1ec5
225 changed files with 2403 additions and 1275 deletions


@ -424,13 +424,24 @@ Statistics
Following minimum set of TLS-related statistics should be reported
by the driver:
* ``rx_tls_decrypted`` - number of successfully decrypted TLS segments
* ``tx_tls_encrypted`` - number of in-order TLS segments passed to device
for encryption
* ``rx_tls_decrypted_packets`` - number of successfully decrypted RX packets
which were part of a TLS stream.
* ``rx_tls_decrypted_bytes`` - number of TLS payload bytes in RX packets
which were successfully decrypted.
* ``tx_tls_encrypted_packets`` - number of TX packets passed to the device
for encryption of their TLS payload.
* ``tx_tls_encrypted_bytes`` - number of TLS payload bytes in TX packets
passed to the device for encryption.
* ``tx_tls_ctx`` - number of TLS TX HW offload contexts added to device for
encryption.
* ``tx_tls_ooo`` - number of TX packets which were part of a TLS stream
but did not arrive in the expected order
* ``tx_tls_drop_no_sync_data`` - number of TX packets dropped because
they arrived out of order and associated record could not be found
but did not arrive in the expected order.
* ``tx_tls_drop_no_sync_data`` - number of TX packets which were part of
a TLS stream dropped, because they arrived out of order and associated
record could not be found.
* ``tx_tls_drop_bypass_req`` - number of TX packets which were part of a TLS
stream dropped, because they contain both data that has been encrypted by
software and data that expects hardware crypto offload.
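
For illustration, a minimal sketch (hypothetical struct and function names, not taken from this patch set or any in-tree driver) of how a TX path could account the statistics above:

/* Hypothetical per-port counters mirroring the statistics above. */
struct tls_tx_stats {
	u64 tx_tls_encrypted_packets;
	u64 tx_tls_encrypted_bytes;
	u64 tx_tls_ooo;
	u64 tx_tls_drop_no_sync_data;
};

/* Called for each TX packet that is part of an offloaded TLS stream. */
static void tls_tx_account(struct tls_tx_stats *s, unsigned int tls_payload,
			   bool in_order, bool record_found)
{
	if (in_order) {
		s->tx_tls_encrypted_packets++;
		s->tx_tls_encrypted_bytes += tls_payload;
		return;
	}
	s->tx_tls_ooo++;
	if (!record_found)
		s->tx_tls_drop_no_sync_data++;	/* the packet is dropped */
}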
Notable corner cases, exceptions and additional requirements
============================================================


@ -6827,13 +6827,6 @@ F: Documentation/filesystems/gfs2*.txt
F: fs/gfs2/
F: include/uapi/linux/gfs2_ondisk.h
GIGASET ISDN DRIVERS
M: Paul Bolle <pebolle@tiscali.nl>
L: gigaset307x-common@lists.sourceforge.net
W: http://gigaset307x.sourceforge.net/
S: Odd Fixes
F: drivers/staging/isdn/gigaset/
GNSS SUBSYSTEM
M: Johan Hovold <johan@kernel.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/johan/gnss.git
@ -11149,6 +11142,7 @@ L: netdev@vger.kernel.org
S: Maintained
W: https://fedorahosted.org/dropwatch/
F: net/core/drop_monitor.c
F: include/uapi/linux/net_dropmon.h
NETWORKING DRIVERS
M: "David S. Miller" <davem@davemloft.net>
@ -11287,6 +11281,7 @@ M: Aviad Yehezkel <aviadye@mellanox.com>
M: Dave Watson <davejwatson@fb.com>
M: John Fastabend <john.fastabend@gmail.com>
M: Daniel Borkmann <daniel@iogearbox.net>
M: Jakub Kicinski <jakub.kicinski@netronome.com>
L: netdev@vger.kernel.org
S: Maintained
F: net/tls/*
@ -17565,7 +17560,6 @@ M: Jakub Kicinski <jakub.kicinski@netronome.com>
M: Jesper Dangaard Brouer <hawk@kernel.org>
M: John Fastabend <john.fastabend@gmail.com>
L: netdev@vger.kernel.org
L: xdp-newbies@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: net/core/xdp.c


@ -63,6 +63,7 @@
#include <asm/byteorder.h>
#include <linux/vmalloc.h>
#include <linux/jiffies.h>
#include <linux/nospec.h>
#include "iphase.h"
#include "suni.h"
#define swap_byte_order(x) (((x & 0xff) << 8) | ((x & 0xff00) >> 8))
@ -2760,8 +2761,11 @@ static int ia_ioctl(struct atm_dev *dev, unsigned int cmd, void __user *arg)
}
if (copy_from_user(&ia_cmds, arg, sizeof ia_cmds)) return -EFAULT;
board = ia_cmds.status;
if ((board < 0) || (board > iadev_count))
board = 0;
if ((board < 0) || (board > iadev_count))
board = 0;
board = array_index_nospec(board, iadev_count + 1);
iadev = ia_dev[board];
switch (ia_cmds.cmd) {
case MEMDUMP:
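
The array_index_nospec() call added above (from <linux/nospec.h>) clamps the user-controlled board index so that a mispredicted bounds check cannot speculatively read past ia_dev[] (Spectre v1). A rough user-space sketch of the call pattern; note the real kernel macro clamps with a data-dependent mask rather than a branch, so this stand-in only shows the intent:

#include <stdio.h>
#include <stddef.h>

/* Stand-in for the kernel's array_index_nospec(): return the index
 * if it is in bounds, else 0. The real macro is branchless. */
static size_t index_nospec(size_t index, size_t size)
{
	return index < size ? index : 0;
}

int main(void)
{
	int ia_dev[4] = { 1, 2, 3, 4 };
	size_t board = 7;		/* untrusted, e.g. copied from user space */

	if (board > 3)			/* bounds check, as in ia_ioctl() */
		board = 0;
	board = index_nospec(board, 4);	/* clamp before the indexed load */
	printf("%d\n", ia_dev[board]);
	return 0;
}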


@ -1394,6 +1394,7 @@ start_isoc_chain(struct usb_fifo *fifo, int num_packets_per_urb,
printk(KERN_DEBUG
"%s: %s: alloc urb for fifo %i failed",
hw->name, __func__, fifo->fifonum);
continue;
}
fifo->iso[i].owner_fifo = (struct usb_fifo *) fifo;
fifo->iso[i].indx = i;
@ -1692,13 +1693,23 @@ hfcsusb_stop_endpoint(struct hfcsusb *hw, int channel)
static int
setup_hfcsusb(struct hfcsusb *hw)
{
void *dmabuf = kmalloc(sizeof(u_char), GFP_KERNEL);
u_char b;
int ret;
if (debug & DBG_HFC_CALL_TRACE)
printk(KERN_DEBUG "%s: %s\n", hw->name, __func__);
if (!dmabuf)
return -ENOMEM;
ret = read_reg_atomic(hw, HFCUSB_CHIP_ID, dmabuf);
memcpy(&b, dmabuf, sizeof(u_char));
kfree(dmabuf);
/* check the chip id */
if (read_reg_atomic(hw, HFCUSB_CHIP_ID, &b) != 1) {
if (ret != 1) {
printk(KERN_DEBUG "%s: %s: cannot read chip id\n",
hw->name, __func__);
return 1;


@ -363,10 +363,13 @@ static int __init arcrimi_setup(char *s)
switch (ints[0]) {
default: /* ERROR */
pr_err("Too many arguments\n");
/* Fall through */
case 3: /* Node ID */
node = ints[3];
/* Fall through */
case 2: /* IRQ */
irq = ints[2];
/* Fall through */
case 1: /* IO address */
io = ints[1];
}
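
All four arcnet setup parsers below use the same deliberately falling-through switch: ints[0] carries the argument count, so supplying N arguments initializes exactly the first N parameters. The added comments document that intent and silence -Wimplicit-fallthrough. A self-contained sketch of the pattern:

#include <stdio.h>

/* ints[0] is the argument count; each case deliberately falls
 * through, so N arguments set the first N parameters. */
static void parse_args(const int ints[4])
{
	int io = 0, irq = 0, node = 0;

	switch (ints[0]) {
	default:	/* ERROR */
		printf("Too many arguments\n");
		/* Fall through */
	case 3:		/* Node ID */
		node = ints[3];
		/* Fall through */
	case 2:		/* IRQ */
		irq = ints[2];
		/* Fall through */
	case 1:		/* IO address */
		io = ints[1];
	}
	printf("io=0x%x irq=%d node=%d\n", io, irq, node);
}

int main(void)
{
	int ints[4] = { 2, 0x300, 5, 0 };	/* two arguments given */

	parse_args(ints);	/* sets io and irq, leaves node at 0 */
	return 0;
}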


@ -197,16 +197,22 @@ static int __init com20020isa_setup(char *s)
switch (ints[0]) {
default: /* ERROR */
pr_info("Too many arguments\n");
/* Fall through */
case 6: /* Timeout */
timeout = ints[6];
/* Fall through */
case 5: /* CKP value */
clockp = ints[5];
/* Fall through */
case 4: /* Backplane flag */
backplane = ints[4];
/* Fall through */
case 3: /* Node ID */
node = ints[3];
/* Fall through */
case 2: /* IRQ */
irq = ints[2];
/* Fall through */
case 1: /* IO address */
io = ints[1];
}


@ -363,8 +363,10 @@ static int __init com90io_setup(char *s)
switch (ints[0]) {
default: /* ERROR */
pr_err("Too many arguments\n");
/* Fall through */
case 2: /* IRQ */
irq = ints[2];
/* Fall through */
case 1: /* IO address */
io = ints[1];
}


@ -693,10 +693,13 @@ static int __init com90xx_setup(char *s)
switch (ints[0]) {
default: /* ERROR */
pr_err("Too many arguments\n");
/* Fall through */
case 3: /* Mem address */
shmem = ints[3];
/* Fall through */
case 2: /* IRQ */
irq = ints[2];
/* Fall through */
case 1: /* IO address */
io = ints[1];
}


@ -2196,6 +2196,15 @@ static void bond_miimon_commit(struct bonding *bond)
bond_for_each_slave(bond, slave, iter) {
switch (slave->new_link) {
case BOND_LINK_NOCHANGE:
/* For 802.3ad mode, check current slave speed and
* duplex again in case its port was disabled after
* invalid speed/duplex reporting but recovered before
* link monitoring could make a decision on the actual
* link status
*/
if (BOND_MODE(bond) == BOND_MODE_8023AD &&
slave->link == BOND_LINK_UP)
bond_3ad_adapter_speed_duplex_changed(slave);
continue;
case BOND_LINK_UP:


@ -1249,6 +1249,8 @@ int register_candev(struct net_device *dev)
return -EINVAL;
dev->rtnl_link_ops = &can_link_ops;
netif_carrier_off(dev);
return register_netdev(dev);
}
EXPORT_SYMBOL_GPL(register_candev);


@ -400,9 +400,10 @@ static void flexcan_enable_wakeup_irq(struct flexcan_priv *priv, bool enable)
priv->write(reg_mcr, &regs->mcr);
}
static inline void flexcan_enter_stop_mode(struct flexcan_priv *priv)
static inline int flexcan_enter_stop_mode(struct flexcan_priv *priv)
{
struct flexcan_regs __iomem *regs = priv->regs;
unsigned int ackval;
u32 reg_mcr;
reg_mcr = priv->read(&regs->mcr);
@ -412,20 +413,37 @@ static inline void flexcan_enter_stop_mode(struct flexcan_priv *priv)
/* enable stop request */
regmap_update_bits(priv->stm.gpr, priv->stm.req_gpr,
1 << priv->stm.req_bit, 1 << priv->stm.req_bit);
/* get stop acknowledgment */
if (regmap_read_poll_timeout(priv->stm.gpr, priv->stm.ack_gpr,
ackval, ackval & (1 << priv->stm.ack_bit),
0, FLEXCAN_TIMEOUT_US))
return -ETIMEDOUT;
return 0;
}
static inline void flexcan_exit_stop_mode(struct flexcan_priv *priv)
static inline int flexcan_exit_stop_mode(struct flexcan_priv *priv)
{
struct flexcan_regs __iomem *regs = priv->regs;
unsigned int ackval;
u32 reg_mcr;
/* remove stop request */
regmap_update_bits(priv->stm.gpr, priv->stm.req_gpr,
1 << priv->stm.req_bit, 0);
/* get stop acknowledgment */
if (regmap_read_poll_timeout(priv->stm.gpr, priv->stm.ack_gpr,
ackval, !(ackval & (1 << priv->stm.ack_bit)),
0, FLEXCAN_TIMEOUT_US))
return -ETIMEDOUT;
reg_mcr = priv->read(&regs->mcr);
reg_mcr &= ~FLEXCAN_MCR_SLF_WAK;
priv->write(reg_mcr, &regs->mcr);
return 0;
}
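
flexcan_enter_stop_mode() and flexcan_exit_stop_mode() now propagate -ETIMEDOUT when the stop-mode acknowledgment never arrives, via regmap_read_poll_timeout(). A minimal sketch of that poll-until-condition-or-timeout shape, using a hypothetical register variable rather than the regmap API:

#include <errno.h>
#include <stdio.h>

static unsigned int ack_reg;	/* stand-in for the hardware ack register */

/* Poll until `bit` is set in ack_reg or timeout_us elapses -- the
 * shape of the regmap_read_poll_timeout() calls above. */
static int wait_for_ack(unsigned int bit, unsigned int timeout_us)
{
	unsigned int waited_us = 0;

	while (!(ack_reg & bit)) {
		if (waited_us >= timeout_us)
			return -ETIMEDOUT;
		/* a real driver would sleep or delay here */
		waited_us += 10;
	}
	return 0;
}

int main(void)
{
	printf("%d\n", wait_for_ack(0x1, 100));	/* -ETIMEDOUT: bit never set */
	ack_reg = 0x1;
	printf("%d\n", wait_for_ack(0x1, 100));	/* 0: acknowledged */
	return 0;
}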
static inline void flexcan_error_irq_enable(const struct flexcan_priv *priv)
@ -1437,10 +1455,10 @@ static int flexcan_setup_stop_mode(struct platform_device *pdev)
priv = netdev_priv(dev);
priv->stm.gpr = syscon_node_to_regmap(gpr_np);
of_node_put(gpr_np);
if (IS_ERR(priv->stm.gpr)) {
dev_dbg(&pdev->dev, "could not find gpr regmap\n");
return PTR_ERR(priv->stm.gpr);
ret = PTR_ERR(priv->stm.gpr);
goto out_put_node;
}
priv->stm.req_gpr = out_val[1];
@ -1455,7 +1473,9 @@ static int flexcan_setup_stop_mode(struct platform_device *pdev)
device_set_wakeup_capable(&pdev->dev, true);
return 0;
out_put_node:
of_node_put(gpr_np);
return ret;
}
static const struct of_device_id flexcan_of_match[] = {
@ -1612,7 +1632,9 @@ static int __maybe_unused flexcan_suspend(struct device *device)
*/
if (device_may_wakeup(device)) {
enable_irq_wake(dev->irq);
flexcan_enter_stop_mode(priv);
err = flexcan_enter_stop_mode(priv);
if (err)
return err;
} else {
err = flexcan_chip_disable(priv);
if (err)
@ -1662,10 +1684,13 @@ static int __maybe_unused flexcan_noirq_resume(struct device *device)
{
struct net_device *dev = dev_get_drvdata(device);
struct flexcan_priv *priv = netdev_priv(dev);
int err;
if (netif_running(dev) && device_may_wakeup(device)) {
flexcan_enable_wakeup_irq(priv, false);
flexcan_exit_stop_mode(priv);
err = flexcan_exit_stop_mode(priv);
if (err)
return err;
}
return 0;


@ -1508,10 +1508,11 @@ static int rcar_canfd_rx_poll(struct napi_struct *napi, int quota)
/* All packets processed */
if (num_pkts < quota) {
napi_complete_done(napi, num_pkts);
/* Enable Rx FIFO interrupts */
rcar_canfd_set_bit(priv->base, RCANFD_RFCC(ridx),
RCANFD_RFCC_RFIE);
if (napi_complete_done(napi, num_pkts)) {
/* Enable Rx FIFO interrupts */
rcar_canfd_set_bit(priv->base, RCANFD_RFCC(ridx),
RCANFD_RFCC_RFIE);
}
}
return num_pkts;
}


@ -479,7 +479,7 @@ static void pcan_free_channels(struct pcan_pccard *card)
if (!netdev)
continue;
strncpy(name, netdev->name, IFNAMSIZ);
strlcpy(name, netdev->name, IFNAMSIZ);
unregister_sja1000dev(netdev);
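
The strncpy() to strlcpy() switch matters because strncpy() does not NUL-terminate when the source fills the buffer, and the saved name is used for logging after the netdev is gone. A sketch with a local fallback, since strlcpy() is a BSD/kernel API rather than ISO C:

#include <stdio.h>
#include <string.h>

/* Local fallback for strlcpy(): copy with truncation, always
 * NUL-terminate, return the full source length. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = len < size - 1 ? len : size - 1;

		memcpy(dst, src, n);
		dst[n] = '\0';	/* always terminated, unlike strncpy() */
	}
	return len;
}

int main(void)
{
	char name[8];

	strncpy(name, "can0-verylongname", sizeof(name));
	/* name is NOT NUL-terminated here -- the bug being fixed */
	my_strlcpy(name, "can0-verylongname", sizeof(name));
	printf("%s\n", name);	/* safe, truncated: "can0-ve" */
	return 0;
}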


@ -664,17 +664,6 @@ static int mcp251x_power_enable(struct regulator *reg, int enable)
return regulator_disable(reg);
}
static void mcp251x_open_clean(struct net_device *net)
{
struct mcp251x_priv *priv = netdev_priv(net);
struct spi_device *spi = priv->spi;
free_irq(spi->irq, priv);
mcp251x_hw_sleep(spi);
mcp251x_power_enable(priv->transceiver, 0);
close_candev(net);
}
static int mcp251x_stop(struct net_device *net)
{
struct mcp251x_priv *priv = netdev_priv(net);
@ -941,37 +930,43 @@ static int mcp251x_open(struct net_device *net)
flags | IRQF_ONESHOT, DEVICE_NAME, priv);
if (ret) {
dev_err(&spi->dev, "failed to acquire irq %d\n", spi->irq);
mcp251x_power_enable(priv->transceiver, 0);
close_candev(net);
goto open_unlock;
goto out_close;
}
priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
0);
if (!priv->wq) {
ret = -ENOMEM;
goto out_clean;
}
INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
ret = mcp251x_hw_reset(spi);
if (ret) {
mcp251x_open_clean(net);
goto open_unlock;
}
if (ret)
goto out_free_wq;
ret = mcp251x_setup(net, spi);
if (ret) {
mcp251x_open_clean(net);
goto open_unlock;
}
if (ret)
goto out_free_wq;
ret = mcp251x_set_normal_mode(spi);
if (ret) {
mcp251x_open_clean(net);
goto open_unlock;
}
if (ret)
goto out_free_wq;
can_led_event(net, CAN_LED_EVENT_OPEN);
netif_wake_queue(net);
mutex_unlock(&priv->mcp_lock);
open_unlock:
return 0;
out_free_wq:
destroy_workqueue(priv->wq);
out_clean:
free_irq(spi->irq, priv);
mcp251x_hw_sleep(spi);
out_close:
mcp251x_power_enable(priv->transceiver, 0);
close_candev(net);
mutex_unlock(&priv->mcp_lock);
return ret;
}
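
The rewrite drops mcp251x_open_clean() in favor of ordered unwind labels, so each failure point releases exactly what was already acquired (including the newly added workqueue). The idiom, sketched with hypothetical acquire/release pairs:

#include <errno.h>
#include <stdio.h>

/* Hypothetical resources illustrating the ordered goto-unwind
 * idiom used in mcp251x_open() above. */
static int grab_irq(void)       { return 0; }
static void release_irq(void)   { }
static int make_workqueue(void) { return 0; }
static void destroy_wq(void)    { }
static int reset_hw(void)       { return -EIO; }	/* pretend failure */

static int open_device(void)
{
	int ret;

	ret = grab_irq();
	if (ret)
		goto out;
	ret = make_workqueue();
	if (ret)
		goto out_free_irq;
	ret = reset_hw();
	if (ret)
		goto out_free_wq;	/* unwinds the wq, then the irq */
	return 0;

out_free_wq:
	destroy_wq();
out_free_irq:
	release_irq();
out:
	return ret;
}

int main(void)
{
	printf("%d\n", open_device());	/* -EIO, fully unwound */
	return 0;
}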


@ -568,16 +568,16 @@ static int peak_usb_ndo_stop(struct net_device *netdev)
dev->state &= ~PCAN_USB_STATE_STARTED;
netif_stop_queue(netdev);
close_candev(netdev);
dev->can.state = CAN_STATE_STOPPED;
/* unlink all pending urbs and free used memory */
peak_usb_unlink_all_urbs(dev);
if (dev->adapter->dev_stop)
dev->adapter->dev_stop(dev);
close_candev(netdev);
dev->can.state = CAN_STATE_STOPPED;
/* can set bus off now */
if (dev->adapter->dev_set_bus) {
int err = dev->adapter->dev_set_bus(dev, 0);
@ -855,7 +855,7 @@ static void peak_usb_disconnect(struct usb_interface *intf)
dev_prev_siblings = dev->prev_siblings;
dev->state &= ~PCAN_USB_STATE_CONNECTED;
strncpy(name, netdev->name, IFNAMSIZ);
strlcpy(name, netdev->name, IFNAMSIZ);
unregister_netdev(netdev);


@ -841,7 +841,7 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
goto err_out;
/* allocate command buffer once for all for the interface */
pdev->cmd_buffer_addr = kmalloc(PCAN_UFD_CMD_BUFFER_SIZE,
pdev->cmd_buffer_addr = kzalloc(PCAN_UFD_CMD_BUFFER_SIZE,
GFP_KERNEL);
if (!pdev->cmd_buffer_addr)
goto err_out_1;


@ -494,7 +494,7 @@ static int pcan_usb_pro_drv_loaded(struct peak_usb_device *dev, int loaded)
u8 *buffer;
int err;
buffer = kmalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL);
buffer = kzalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL);
if (!buffer)
return -ENOMEM;


@ -27,7 +27,6 @@
#include <linux/platform_data/mv88e6xxx.h>
#include <linux/netdevice.h>
#include <linux/gpio/consumer.h>
#include <linux/phy.h>
#include <linux/phylink.h>
#include <net/dsa.h>
@ -430,7 +429,7 @@ int mv88e6xxx_port_setup_mac(struct mv88e6xxx_chip *chip, int port, int link,
return 0;
/* Port's MAC control must not be changed unless the link is down */
err = chip->info->ops->port_set_link(chip, port, 0);
err = chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN);
if (err)
return err;
@ -482,30 +481,6 @@ static int mv88e6xxx_phy_is_internal(struct dsa_switch *ds, int port)
return port < chip->info->num_internal_phys;
}
/* We expect the switch to perform auto negotiation if there is a real
* phy. However, in the case of a fixed link phy, we force the port
* settings from the fixed link settings.
*/
static void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
struct phy_device *phydev)
{
struct mv88e6xxx_chip *chip = ds->priv;
int err;
if (!phy_is_pseudo_fixed_link(phydev) &&
mv88e6xxx_phy_is_internal(ds, port))
return;
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_setup_mac(chip, port, phydev->link, phydev->speed,
phydev->duplex, phydev->pause,
phydev->interface);
mv88e6xxx_reg_unlock(chip);
if (err && err != -EOPNOTSUPP)
dev_err(ds->dev, "p%d: failed to configure MAC\n", port);
}
static void mv88e6065_phylink_validate(struct mv88e6xxx_chip *chip, int port,
unsigned long *mask,
struct phylink_link_state *state)
@ -2721,6 +2696,7 @@ static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip,
err = mv88e6xxx_mdio_register(chip, child, true);
if (err) {
mv88e6xxx_mdios_unregister(chip);
of_node_put(child);
return err;
}
}
@ -4638,7 +4614,6 @@ static int mv88e6xxx_port_egress_floods(struct dsa_switch *ds, int port,
static const struct dsa_switch_ops mv88e6xxx_switch_ops = {
.get_tag_protocol = mv88e6xxx_get_tag_protocol,
.setup = mv88e6xxx_setup,
.adjust_link = mv88e6xxx_adjust_link,
.phylink_validate = mv88e6xxx_validate,
.phylink_mac_link_state = mv88e6xxx_link_state,
.phylink_mac_config = mv88e6xxx_mac_config,


@ -2,7 +2,7 @@
/*
* Copyright (C) 2009 Felix Fietkau <nbd@nbd.name>
* Copyright (C) 2011-2012 Gabor Juhos <juhosg@openwrt.org>
* Copyright (c) 2015, The Linux Foundation. All rights reserved.
* Copyright (c) 2015, 2019, The Linux Foundation. All rights reserved.
* Copyright (c) 2016 John Crispin <john@phrozen.org>
*/
@ -583,8 +583,11 @@ qca8k_setup_mdio_bus(struct qca8k_priv *priv)
for_each_available_child_of_node(ports, port) {
err = of_property_read_u32(port, "reg", &reg);
if (err)
if (err) {
of_node_put(port);
of_node_put(ports);
return err;
}
if (!dsa_is_user_port(priv->ds, reg))
continue;
@ -595,6 +598,7 @@ qca8k_setup_mdio_bus(struct qca8k_priv *priv)
internal_mdio_mask |= BIT(reg);
}
of_node_put(ports);
if (!external_mdio_mask && !internal_mdio_mask) {
dev_err(priv->dev, "no PHYs are defined.\n");
return -EINVAL;
@ -935,6 +939,8 @@ qca8k_port_enable(struct dsa_switch *ds, int port,
qca8k_port_set_status(priv, port, 1);
priv->port_sts[port].enabled = 1;
phy_support_asym_pause(phy);
return 0;
}


@ -277,6 +277,18 @@ sja1105et_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
SJA1105ET_SIZE_L2_LOOKUP_ENTRY, op);
}
static size_t sja1105et_dyn_l2_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
struct sja1105_l2_lookup_entry *entry = entry_ptr;
u8 *cmd = buf + SJA1105ET_SIZE_L2_LOOKUP_ENTRY;
const int size = SJA1105_SIZE_DYN_CMD;
sja1105_packing(cmd, &entry->lockeds, 28, 28, size, op);
return sja1105et_l2_lookup_entry_packing(buf, entry_ptr, op);
}
static void
sja1105et_mgmt_route_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op)
@ -477,7 +489,7 @@ sja1105et_general_params_entry_packing(void *buf, void *entry_ptr,
/* SJA1105E/T: First generation */
struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_L2_LOOKUP] = {
.entry_packing = sja1105et_l2_lookup_entry_packing,
.entry_packing = sja1105et_dyn_l2_lookup_entry_packing,
.cmd_packing = sja1105et_l2_lookup_cmd_packing,
.access = (OP_READ | OP_WRITE | OP_DEL),
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,


@ -218,7 +218,7 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
/* This selects between Independent VLAN Learning (IVL) and
* Shared VLAN Learning (SVL)
*/
.shared_learn = false,
.shared_learn = true,
/* Don't discard management traffic based on ENFPORT -
* we don't perform SMAC port enforcement anyway, so
* what we are setting here doesn't matter.
@ -625,6 +625,7 @@ static int sja1105_parse_ports_node(struct sja1105_private *priv,
if (of_property_read_u32(child, "reg", &index) < 0) {
dev_err(dev, "Port number not defined in device tree "
"(property \"reg\")\n");
of_node_put(child);
return -ENODEV;
}
@ -634,6 +635,7 @@ static int sja1105_parse_ports_node(struct sja1105_private *priv,
dev_err(dev, "Failed to read phy-mode or "
"phy-interface-type property for port %d\n",
index);
of_node_put(child);
return -ENODEV;
}
ports[index].phy_mode = phy_mode;
@ -643,6 +645,7 @@ static int sja1105_parse_ports_node(struct sja1105_private *priv,
if (!of_phy_is_fixed_link(child)) {
dev_err(dev, "phy-handle or fixed-link "
"properties missing!\n");
of_node_put(child);
return -ENODEV;
}
/* phy-handle is missing, but fixed-link isn't.
@ -1089,8 +1092,13 @@ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
if (dsa_port_is_vlan_filtering(&ds->ports[port])) {
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
l2_lookup.mask_vlanid = 0;
l2_lookup.mask_iotag = 0;
}
l2_lookup.destports = BIT(port);
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
@ -1147,8 +1155,13 @@ int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
if (dsa_port_is_vlan_filtering(&ds->ports[port])) {
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
l2_lookup.mask_vlanid = 0;
l2_lookup.mask_iotag = 0;
}
l2_lookup.destports = BIT(port);
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
@ -1178,60 +1191,31 @@ static int sja1105_fdb_add(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid)
{
struct sja1105_private *priv = ds->priv;
u16 rx_vid, tx_vid;
int rc, i;
if (dsa_port_is_vlan_filtering(&ds->ports[port]))
return priv->info->fdb_add_cmd(ds, port, addr, vid);
/* Since we make use of VLANs even when the bridge core doesn't tell us
* to, translate these FDB entries into the correct dsa_8021q ones.
* The basic idea (also repeats for removal below) is:
* - Each of the other front-panel ports needs to be able to forward a
* pvid-tagged (aka tagged with their rx_vid) frame that matches this
* DMAC.
* - The CPU port (aka the tx_vid of this port) needs to be able to
* send a frame matching this DMAC to the specified port.
* For a better picture see net/dsa/tag_8021q.c.
/* dsa_8021q is in effect when the bridge's vlan_filtering isn't,
* so the switch still does some VLAN processing internally.
* But Shared VLAN Learning (SVL) is also active, and it will take
* care of autonomous forwarding between the unique pvid's of each
* port. Here we just make sure that users can't add duplicate FDB
* entries when in this mode - the actual VID doesn't matter except
* for what gets printed in 'bridge fdb show'. In the case of zero,
* no VID gets printed at all.
*/
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
if (i == port)
continue;
if (i == dsa_upstream_port(priv->ds, port))
continue;
if (!dsa_port_is_vlan_filtering(&ds->ports[port]))
vid = 0;
rx_vid = dsa_8021q_rx_vid(ds, i);
rc = priv->info->fdb_add_cmd(ds, port, addr, rx_vid);
if (rc < 0)
return rc;
}
tx_vid = dsa_8021q_tx_vid(ds, port);
return priv->info->fdb_add_cmd(ds, port, addr, tx_vid);
return priv->info->fdb_add_cmd(ds, port, addr, vid);
}
static int sja1105_fdb_del(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid)
{
struct sja1105_private *priv = ds->priv;
u16 rx_vid, tx_vid;
int rc, i;
if (dsa_port_is_vlan_filtering(&ds->ports[port]))
return priv->info->fdb_del_cmd(ds, port, addr, vid);
if (!dsa_port_is_vlan_filtering(&ds->ports[port]))
vid = 0;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
if (i == port)
continue;
if (i == dsa_upstream_port(priv->ds, port))
continue;
rx_vid = dsa_8021q_rx_vid(ds, i);
rc = priv->info->fdb_del_cmd(ds, port, addr, rx_vid);
if (rc < 0)
return rc;
}
tx_vid = dsa_8021q_tx_vid(ds, port);
return priv->info->fdb_del_cmd(ds, port, addr, tx_vid);
return priv->info->fdb_del_cmd(ds, port, addr, vid);
}
static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
@ -1270,39 +1254,9 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
continue;
u64_to_ether_addr(l2_lookup.macaddr, macaddr);
/* On SJA1105 E/T, the switch doesn't implement the LOCKEDS
* bit, so it doesn't tell us whether a FDB entry is static
* or not.
* But, of course, we can find out - we're the ones who added
* it in the first place.
*/
if (priv->info->device_id == SJA1105E_DEVICE_ID ||
priv->info->device_id == SJA1105T_DEVICE_ID) {
int match;
match = sja1105_find_static_fdb_entry(priv, port,
&l2_lookup);
l2_lookup.lockeds = (match >= 0);
}
/* We need to hide the dsa_8021q VLANs from the user. This
* basically means hiding the duplicates and only showing
* the pvid that is supposed to be active in standalone and
* non-vlan_filtering modes (aka 1).
* - For statically added FDB entries (bridge fdb add), we
* can convert the TX VID (coming from the CPU port) into the
* pvid and ignore the RX VIDs of the other ports.
* - For dynamically learned FDB entries, a single entry with
* no duplicates is learned - that which has the real port's
* pvid, aka RX VID.
*/
if (!dsa_port_is_vlan_filtering(&ds->ports[port])) {
if (l2_lookup.vlanid == tx_vid ||
l2_lookup.vlanid == rx_vid)
l2_lookup.vlanid = 1;
else
continue;
}
/* We need to hide the dsa_8021q VLANs from the user. */
if (!dsa_port_is_vlan_filtering(&ds->ports[port]))
l2_lookup.vlanid = 0;
cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
}
return 0;
@ -1594,6 +1548,7 @@ static int sja1105_vlan_prepare(struct dsa_switch *ds, int port,
*/
static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
{
struct sja1105_l2_lookup_params_entry *l2_lookup_params;
struct sja1105_general_params_entry *general_params;
struct sja1105_private *priv = ds->priv;
struct sja1105_table *table;
@ -1622,6 +1577,28 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
general_params->incl_srcpt1 = enabled;
general_params->incl_srcpt0 = enabled;
/* VLAN filtering => independent VLAN learning.
* No VLAN filtering => shared VLAN learning.
*
* In shared VLAN learning mode, untagged traffic still gets
* pvid-tagged, and the FDB table gets populated with entries
* containing the "real" (pvid or from VLAN tag) VLAN ID.
* However the switch performs a masked L2 lookup in the FDB,
* effectively only looking up a frame's DMAC (and not VID) for the
* forwarding decision.
*
* This is extremely convenient for us, because in modes with
* vlan_filtering=0, dsa_8021q actually installs unique pvid's into
* each front panel port. This is good for identification but breaks
* learning badly - the VID of the learnt FDB entry is unique, aka
* no frames coming from any other port are going to have it. So
* for forwarding purposes, this is as though learning was broken
* (all frames get flooded).
*/
table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
l2_lookup_params = table->entries;
l2_lookup_params->shared_learn = !enabled;
rc = sja1105_static_config_reload(priv);
if (rc)
dev_err(ds->dev, "Failed to change VLAN Ethertype\n");
@ -1751,6 +1728,8 @@ static void sja1105_teardown(struct dsa_switch *ds)
cancel_work_sync(&priv->tagger_data.rxtstamp_work);
skb_queue_purge(&priv->tagger_data.skb_rxtstamp_queue);
sja1105_ptp_clock_unregister(priv);
sja1105_static_config_free(&priv->static_config);
}
static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot,
@ -2208,9 +2187,7 @@ static int sja1105_remove(struct spi_device *spi)
{
struct sja1105_private *priv = spi_get_drvdata(spi);
sja1105_ptp_clock_unregister(priv);
dsa_unregister_switch(priv->ds);
sja1105_static_config_free(&priv->static_config);
return 0;
}


@ -369,16 +369,15 @@ int sja1105_ptp_clock_register(struct sja1105_private *priv)
.mult = SJA1105_CC_MULT,
};
mutex_init(&priv->ptp_lock);
INIT_DELAYED_WORK(&priv->refresh_work, sja1105_ptp_overflow_check);
schedule_delayed_work(&priv->refresh_work, SJA1105_REFRESH_INTERVAL);
priv->ptp_caps = sja1105_ptp_caps;
priv->clock = ptp_clock_register(&priv->ptp_caps, ds->dev);
if (IS_ERR_OR_NULL(priv->clock))
return PTR_ERR(priv->clock);
INIT_DELAYED_WORK(&priv->refresh_work, sja1105_ptp_overflow_check);
schedule_delayed_work(&priv->refresh_work, SJA1105_REFRESH_INTERVAL);
return sja1105_ptp_reset(priv);
}


@ -12,8 +12,8 @@ config NET_VENDOR_8390
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about Western Digital cards. If you say Y, you will be
asked for your specific card in the following questions.
the questions about National Semiconductor 8390 cards. If you say Y,
you will be asked for your specific card in the following questions.
if NET_VENDOR_8390


@ -2362,7 +2362,7 @@ static int et131x_tx_dma_memory_alloc(struct et131x_adapter *adapter)
/* Allocate memory for the TCB's (Transmit Control Block) */
tx_ring->tcb_ring = kcalloc(NUM_TCB, sizeof(struct tcb),
GFP_ATOMIC | GFP_DMA);
GFP_KERNEL | GFP_DMA);
if (!tx_ring->tcb_ring)
return -ENOMEM;


@ -860,7 +860,9 @@ static int emac_probe(struct platform_device *pdev)
goto out_clk_disable_unprepare;
}
db->phy_node = of_parse_phandle(np, "phy", 0);
db->phy_node = of_parse_phandle(np, "phy-handle", 0);
if (!db->phy_node)
db->phy_node = of_parse_phandle(np, "phy", 0);
if (!db->phy_node) {
dev_err(&pdev->dev, "no associated PHY\n");
ret = -ENODEV;


@ -14,7 +14,7 @@ config NET_VENDOR_AMD
say Y.
Note that the answer to this question does not directly affect
the kernel: saying N will just case the configurator to skip all
the kernel: saying N will just cause the configurator to skip all
the questions regarding AMD chipsets. If you say Y, you will be asked
for your specific chipset/driver in the following questions.


@ -11,8 +11,8 @@ config NET_VENDOR_APPLE
If you have a network (Ethernet) card belonging to this class, say Y.
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about IBM devices. If you say Y, you will be asked for
kernel: saying N will just cause the configurator to skip all the
questions about Apple devices. If you say Y, you will be asked for
your specific card in the following questions.
if NET_VENDOR_APPLE


@ -1141,7 +1141,7 @@ static int ag71xx_rings_init(struct ag71xx *ag)
tx->descs_cpu = dma_alloc_coherent(&ag->pdev->dev,
ring_size * AG71XX_DESC_SIZE,
&tx->descs_dma, GFP_ATOMIC);
&tx->descs_dma, GFP_KERNEL);
if (!tx->descs_cpu) {
kfree(tx->buf);
tx->buf = NULL;


@ -14,9 +14,9 @@ config NET_VENDOR_BROADCOM
say Y.
Note that the answer to this question does not directly affect
the kernel: saying N will just case the configurator to skip all
the questions regarding AMD chipsets. If you say Y, you will be asked
for your specific chipset/driver in the following questions.
the kernel: saying N will just cause the configurator to skip all
the questions regarding Broadcom chipsets. If you say Y, you will
be asked for your specific chipset/driver in the following questions.
if NET_VENDOR_BROADCOM


@ -992,7 +992,7 @@ static int bcm_sysport_poll(struct napi_struct *napi, int budget)
{
struct bcm_sysport_priv *priv =
container_of(napi, struct bcm_sysport_priv, napi);
struct dim_sample dim_sample;
struct dim_sample dim_sample = {};
unsigned int work_done = 0;
work_done = bcm_sysport_desc_rx(priv, budget);


@ -1934,8 +1934,7 @@ u16 bnx2x_select_queue(struct net_device *dev, struct sk_buff *skb,
}
/* select a non-FCoE queue */
return netdev_pick_tx(dev, skb, NULL) %
(BNX2X_NUM_ETH_QUEUES(bp) * bp->max_cos);
return netdev_pick_tx(dev, skb, NULL) % (BNX2X_NUM_ETH_QUEUES(bp));
}
void bnx2x_set_num_queues(struct bnx2x *bp)


@ -2136,7 +2136,7 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
}
}
if (bp->flags & BNXT_FLAG_DIM) {
struct dim_sample dim_sample;
struct dim_sample dim_sample = {};
dim_update_sample(cpr->event_ctr,
cpr->rx_packets,


@ -1895,7 +1895,7 @@ static int bcmgenet_rx_poll(struct napi_struct *napi, int budget)
{
struct bcmgenet_rx_ring *ring = container_of(napi,
struct bcmgenet_rx_ring, napi);
struct dim_sample dim_sample;
struct dim_sample dim_sample = {};
unsigned int work_done;
work_done = bcmgenet_desc_rx(ring, budget);


@ -1381,24 +1381,18 @@ static int acpi_get_mac_address(struct device *dev, struct acpi_device *adev,
u8 *dst)
{
u8 mac[ETH_ALEN];
int ret;
u8 *addr;
ret = fwnode_property_read_u8_array(acpi_fwnode_handle(adev),
"mac-address", mac, ETH_ALEN);
if (ret)
goto out;
if (!is_valid_ether_addr(mac)) {
addr = fwnode_get_mac_address(acpi_fwnode_handle(adev), mac, ETH_ALEN);
if (!addr) {
dev_err(dev, "MAC address invalid: %pM\n", mac);
ret = -EINVAL;
goto out;
return -EINVAL;
}
dev_info(dev, "MAC address set to: %pM\n", mac);
memcpy(dst, mac, ETH_ALEN);
out:
return ret;
ether_addr_copy(dst, mac);
return 0;
}
/* Currently only sets the MAC address. */


@ -3269,7 +3269,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (!adapter->regs) {
dev_err(&pdev->dev, "cannot map device registers\n");
err = -ENOMEM;
goto out_free_adapter;
goto out_free_adapter_nofail;
}
adapter->pdev = pdev;
@ -3397,6 +3397,9 @@ out_free_dev:
if (adapter->port[i])
free_netdev(adapter->port[i]);
out_free_adapter_nofail:
kfree_skb(adapter->nofail_skb);
out_free_adapter:
kfree(adapter);


@ -550,7 +550,7 @@ int be_process_mcc(struct be_adapter *adapter)
int num = 0, status = 0;
struct be_mcc_obj *mcc_obj = &adapter->mcc_obj;
spin_lock(&adapter->mcc_cq_lock);
spin_lock_bh(&adapter->mcc_cq_lock);
while ((compl = be_mcc_compl_get(adapter))) {
if (compl->flags & CQE_FLAGS_ASYNC_MASK) {
@ -566,7 +566,7 @@ int be_process_mcc(struct be_adapter *adapter)
if (num)
be_cq_notify(adapter, mcc_obj->cq.id, mcc_obj->rearm_cq, num);
spin_unlock(&adapter->mcc_cq_lock);
spin_unlock_bh(&adapter->mcc_cq_lock);
return status;
}
@ -581,9 +581,7 @@ static int be_mcc_wait_compl(struct be_adapter *adapter)
if (be_check_error(adapter, BE_ERROR_ANY))
return -EIO;
local_bh_disable();
status = be_process_mcc(adapter);
local_bh_enable();
if (atomic_read(&mcc_obj->q.used) == 0)
break;


@ -5630,9 +5630,7 @@ static void be_worker(struct work_struct *work)
* mcc completions
*/
if (!netif_running(adapter->netdev)) {
local_bh_disable();
be_process_mcc(adapter);
local_bh_enable();
goto reschedule;
}


@ -2,6 +2,7 @@
config FSL_ENETC
tristate "ENETC PF driver"
depends on PCI && PCI_MSI && (ARCH_LAYERSCAPE || COMPILE_TEST)
select PHYLIB
help
This driver supports NXP ENETC gigabit ethernet controller PCIe
physical function (PF) devices, managing ENETC Ports at a privileged
@ -12,6 +13,7 @@ config FSL_ENETC
config FSL_ENETC_VF
tristate "ENETC VF driver"
depends on PCI && PCI_MSI && (ARCH_LAYERSCAPE || COMPILE_TEST)
select PHYLIB
help
This driver supports NXP ENETC gigabit ethernet controller PCIe
virtual function (VF) devices enabled by the ENETC PF driver.


@ -2439,9 +2439,6 @@ MODULE_PARM_DESC(fsl_fm_rx_extra_headroom, "Extra headroom for Rx buffers");
* buffers when not using jumbo frames.
* Must be large enough to accommodate the network MTU, but small enough
* to avoid wasting skb memory.
*
* Could be overridden once, at boot-time, via the
* fm_set_max_frm() callback.
*/
static int fsl_fm_max_frm = FSL_FM_MAX_FRAME_SIZE;
module_param(fsl_fm_max_frm, int, 0);


@ -31,9 +31,6 @@
struct gve_rx_desc_queue {
struct gve_rx_desc *desc_ring; /* the descriptor ring */
dma_addr_t bus; /* the bus for the desc_ring */
u32 cnt; /* free-running total number of completed packets */
u32 fill_cnt; /* free-running total number of descriptors posted */
u32 mask; /* masks the cnt to the size of the ring */
u8 seqno; /* the next expected seqno for this desc*/
};
@ -60,8 +57,6 @@ struct gve_rx_data_queue {
dma_addr_t data_bus; /* dma mapping of the slots */
struct gve_rx_slot_page_info *page_info; /* page info of the buffers */
struct gve_queue_page_list *qpl; /* qpl assigned to this queue */
u32 mask; /* masks the cnt to the size of the ring */
u32 cnt; /* free-running total number of completed packets */
};
struct gve_priv;
@ -73,6 +68,9 @@ struct gve_rx_ring {
struct gve_rx_data_queue data;
u64 rbytes; /* free-running bytes received */
u64 rpackets; /* free-running packets received */
u32 cnt; /* free-running total number of completed packets */
u32 fill_cnt; /* free-running total number of descs and buffs posted */
u32 mask; /* masks the cnt and fill_cnt to the size of the ring */
u32 q_num; /* queue index */
u32 ntfy_id; /* notification block index */
struct gve_queue_resources *q_resources; /* head and tail pointer idx */
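
Moving cnt, fill_cnt and mask up into struct gve_rx_ring works because the descriptor and data rings share one size: both counters run freely (they only ever increase) and are masked down to a ring index at the point of use. The arithmetic, assuming a power-of-two ring:

#include <stdio.h>
#include <stdint.h>

#define RING_SIZE 8	/* must be a power of two */

/* Free-running counters, as in struct gve_rx_ring above. */
struct rx_ring {
	uint32_t cnt;		/* completed descriptors */
	uint32_t fill_cnt;	/* posted descriptors/buffers */
	uint32_t mask;		/* RING_SIZE - 1 */
};

int main(void)
{
	struct rx_ring rx = { .cnt = 13, .fill_cnt = 17, .mask = RING_SIZE - 1 };

	/* outstanding buffers and the next index to process */
	printf("in flight: %u\n", rx.fill_cnt - rx.cnt);	/* 4 */
	printf("next idx:  %u\n", rx.cnt & rx.mask);		/* 5 */
	return 0;
}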


@ -138,8 +138,8 @@ gve_get_ethtool_stats(struct net_device *netdev,
for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
struct gve_rx_ring *rx = &priv->rx[ring];
data[i++] = rx->desc.cnt;
data[i++] = rx->desc.fill_cnt;
data[i++] = rx->cnt;
data[i++] = rx->fill_cnt;
}
} else {
i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;


@ -37,7 +37,7 @@ static void gve_rx_free_ring(struct gve_priv *priv, int idx)
rx->data.qpl = NULL;
kvfree(rx->data.page_info);
slots = rx->data.mask + 1;
slots = rx->mask + 1;
bytes = sizeof(*rx->data.data_ring) * slots;
dma_free_coherent(dev, bytes, rx->data.data_ring,
rx->data.data_bus);
@ -64,7 +64,7 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
/* Allocate one page per Rx queue slot. Each page is split into two
* packet buffers, when possible we "page flip" between the two.
*/
slots = rx->data.mask + 1;
slots = rx->mask + 1;
rx->data.page_info = kvzalloc(slots *
sizeof(*rx->data.page_info), GFP_KERNEL);
@ -111,7 +111,7 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx)
rx->q_num = idx;
slots = priv->rx_pages_per_qpl;
rx->data.mask = slots - 1;
rx->mask = slots - 1;
/* alloc rx data ring */
bytes = sizeof(*rx->data.data_ring) * slots;
@ -125,7 +125,7 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx)
err = -ENOMEM;
goto abort_with_slots;
}
rx->desc.fill_cnt = filled_pages;
rx->fill_cnt = filled_pages;
/* Ensure data ring slots (packet buffers) are visible. */
dma_wmb();
@ -156,8 +156,8 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx)
err = -ENOMEM;
goto abort_with_q_resources;
}
rx->desc.mask = slots - 1;
rx->desc.cnt = 0;
rx->mask = slots - 1;
rx->cnt = 0;
rx->desc.seqno = 1;
gve_rx_add_to_block(priv, idx);
@ -213,7 +213,7 @@ void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx)
{
u32 db_idx = be32_to_cpu(rx->q_resources->db_index);
iowrite32be(rx->desc.fill_cnt, &priv->db_bar2[db_idx]);
iowrite32be(rx->fill_cnt, &priv->db_bar2[db_idx]);
}
static enum pkt_hash_types gve_rss_type(__be16 pkt_flags)
@ -273,7 +273,7 @@ static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info,
}
static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
netdev_features_t feat)
netdev_features_t feat, u32 idx)
{
struct gve_rx_slot_page_info *page_info;
struct gve_priv *priv = rx->gve;
@ -282,14 +282,12 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
struct sk_buff *skb;
int pagecount;
u16 len;
u32 idx;
/* drop this packet */
if (unlikely(rx_desc->flags_seq & GVE_RXF_ERR))
return true;
len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD;
idx = rx->data.cnt & rx->data.mask;
page_info = &rx->data.page_info[idx];
/* gvnic can only receive into registered segments. If the buffer
@ -340,8 +338,6 @@ have_skb:
if (!skb)
return true;
rx->data.cnt++;
if (likely(feat & NETIF_F_RXCSUM)) {
/* NIC passes up the partial sum */
if (rx_desc->csum)
@ -370,7 +366,7 @@ static bool gve_rx_work_pending(struct gve_rx_ring *rx)
__be16 flags_seq;
u32 next_idx;
next_idx = rx->desc.cnt & rx->desc.mask;
next_idx = rx->cnt & rx->mask;
desc = rx->desc.desc_ring + next_idx;
flags_seq = desc->flags_seq;
@ -385,8 +381,8 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
{
struct gve_priv *priv = rx->gve;
struct gve_rx_desc *desc;
u32 cnt = rx->desc.cnt;
u32 idx = cnt & rx->desc.mask;
u32 cnt = rx->cnt;
u32 idx = cnt & rx->mask;
u32 work_done = 0;
u64 bytes = 0;
@ -401,10 +397,10 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
rx->q_num, GVE_SEQNO(desc->flags_seq),
rx->desc.seqno);
bytes += be16_to_cpu(desc->len) - GVE_RX_PAD;
if (!gve_rx(rx, desc, feat))
if (!gve_rx(rx, desc, feat, idx))
gve_schedule_reset(priv);
cnt++;
idx = cnt & rx->desc.mask;
idx = cnt & rx->mask;
desc = rx->desc.desc_ring + idx;
rx->desc.seqno = gve_next_seqno(rx->desc.seqno);
work_done++;
@ -417,8 +413,8 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
rx->rpackets += work_done;
rx->rbytes += bytes;
u64_stats_update_end(&rx->statss);
rx->desc.cnt = cnt;
rx->desc.fill_cnt += work_done;
rx->cnt = cnt;
rx->fill_cnt += work_done;
/* restock desc ring slots */
dma_wmb(); /* Ensure descs are visible before ringing doorbell */


@ -220,6 +220,7 @@ struct hip04_priv {
unsigned int reg_inten;
struct napi_struct napi;
struct device *dev;
struct net_device *ndev;
struct tx_desc *tx_desc;
@ -248,7 +249,7 @@ struct hip04_priv {
static inline unsigned int tx_count(unsigned int head, unsigned int tail)
{
return (head - tail) % (TX_DESC_NUM - 1);
return (head - tail) % TX_DESC_NUM;
}
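
The fix replaces "% (TX_DESC_NUM - 1)" with "% TX_DESC_NUM": with unsigned indices that wrap at the ring size, occupancy must be taken modulo the ring size itself, or the count goes wrong once head wraps past tail. For example:

#include <stdio.h>

#define TX_DESC_NUM 64

/* Occupancy of a ring whose head/tail indices wrap at TX_DESC_NUM. */
static unsigned int tx_count(unsigned int head, unsigned int tail)
{
	return (head - tail) % TX_DESC_NUM;
}

int main(void)
{
	/* head wrapped past tail (63 -> 0 -> 1): 2 descriptors in flight.
	 * (1u - 63u) mod 2^32, then mod 64, gives 2; the old mod 63
	 * would have yielded 5. */
	printf("%u\n", tx_count(1, 63));
	return 0;
}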
static void hip04_config_port(struct net_device *ndev, u32 speed, u32 duplex)
@ -465,7 +466,7 @@ static int hip04_tx_reclaim(struct net_device *ndev, bool force)
}
if (priv->tx_phys[tx_tail]) {
dma_unmap_single(&ndev->dev, priv->tx_phys[tx_tail],
dma_unmap_single(priv->dev, priv->tx_phys[tx_tail],
priv->tx_skb[tx_tail]->len,
DMA_TO_DEVICE);
priv->tx_phys[tx_tail] = 0;
@ -516,8 +517,8 @@ hip04_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
return NETDEV_TX_BUSY;
}
phys = dma_map_single(&ndev->dev, skb->data, skb->len, DMA_TO_DEVICE);
if (dma_mapping_error(&ndev->dev, phys)) {
phys = dma_map_single(priv->dev, skb->data, skb->len, DMA_TO_DEVICE);
if (dma_mapping_error(priv->dev, phys)) {
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
@ -585,6 +586,9 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
u16 len;
u32 err;
/* clean up tx descriptors */
tx_remaining = hip04_tx_reclaim(ndev, false);
while (cnt && !last) {
buf = priv->rx_buf[priv->rx_head];
skb = build_skb(buf, priv->rx_buf_size);
@ -593,7 +597,7 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
goto refill;
}
dma_unmap_single(&ndev->dev, priv->rx_phys[priv->rx_head],
dma_unmap_single(priv->dev, priv->rx_phys[priv->rx_head],
RX_BUF_SIZE, DMA_FROM_DEVICE);
priv->rx_phys[priv->rx_head] = 0;
@ -622,9 +626,9 @@ refill:
buf = netdev_alloc_frag(priv->rx_buf_size);
if (!buf)
goto done;
phys = dma_map_single(&ndev->dev, buf,
phys = dma_map_single(priv->dev, buf,
RX_BUF_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(&ndev->dev, phys))
if (dma_mapping_error(priv->dev, phys))
goto done;
priv->rx_buf[priv->rx_head] = buf;
priv->rx_phys[priv->rx_head] = phys;
@ -645,8 +649,7 @@ refill:
}
napi_complete_done(napi, rx);
done:
/* clean up tx descriptors and start a new timer if necessary */
tx_remaining = hip04_tx_reclaim(ndev, false);
/* start a new timer if necessary */
if (rx < budget && tx_remaining)
hip04_start_tx_timer(priv);
@ -728,9 +731,9 @@ static int hip04_mac_open(struct net_device *ndev)
for (i = 0; i < RX_DESC_NUM; i++) {
dma_addr_t phys;
phys = dma_map_single(&ndev->dev, priv->rx_buf[i],
phys = dma_map_single(priv->dev, priv->rx_buf[i],
RX_BUF_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(&ndev->dev, phys))
if (dma_mapping_error(priv->dev, phys))
return -EIO;
priv->rx_phys[i] = phys;
@ -764,7 +767,7 @@ static int hip04_mac_stop(struct net_device *ndev)
for (i = 0; i < RX_DESC_NUM; i++) {
if (priv->rx_phys[i]) {
dma_unmap_single(&ndev->dev, priv->rx_phys[i],
dma_unmap_single(priv->dev, priv->rx_phys[i],
RX_BUF_SIZE, DMA_FROM_DEVICE);
priv->rx_phys[i] = 0;
}
@ -907,6 +910,7 @@ static int hip04_mac_probe(struct platform_device *pdev)
return -ENOMEM;
priv = netdev_priv(ndev);
priv->dev = d;
priv->ndev = ndev;
platform_set_drvdata(pdev, ndev);
SET_NETDEV_DEV(ndev, &pdev->dev);


@ -3251,7 +3251,7 @@ static int ehea_mem_notifier(struct notifier_block *nb,
switch (action) {
case MEM_CANCEL_OFFLINE:
pr_info("memory offlining canceled");
/* Fall through: re-add canceled memory block */
/* Fall through - re-add canceled memory block */
case MEM_ONLINE:
pr_info("memory is going online");


@ -319,20 +319,33 @@ static int orion_mdio_probe(struct platform_device *pdev)
init_waitqueue_head(&dev->smi_busy_wait);
for (i = 0; i < ARRAY_SIZE(dev->clk); i++) {
dev->clk[i] = of_clk_get(pdev->dev.of_node, i);
if (PTR_ERR(dev->clk[i]) == -EPROBE_DEFER) {
if (pdev->dev.of_node) {
for (i = 0; i < ARRAY_SIZE(dev->clk); i++) {
dev->clk[i] = of_clk_get(pdev->dev.of_node, i);
if (PTR_ERR(dev->clk[i]) == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto out_clk;
}
if (IS_ERR(dev->clk[i]))
break;
clk_prepare_enable(dev->clk[i]);
}
if (!IS_ERR(of_clk_get(pdev->dev.of_node,
ARRAY_SIZE(dev->clk))))
dev_warn(&pdev->dev,
"unsupported number of clocks, limiting to the first "
__stringify(ARRAY_SIZE(dev->clk)) "\n");
} else {
dev->clk[0] = clk_get(&pdev->dev, NULL);
if (PTR_ERR(dev->clk[0]) == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto out_clk;
}
if (IS_ERR(dev->clk[i]))
break;
clk_prepare_enable(dev->clk[i]);
if (!IS_ERR(dev->clk[0]))
clk_prepare_enable(dev->clk[0]);
}
if (!IS_ERR(of_clk_get(pdev->dev.of_node, ARRAY_SIZE(dev->clk))))
dev_warn(&pdev->dev, "unsupported number of clocks, limiting to the first "
__stringify(ARRAY_SIZE(dev->clk)) "\n");
dev->err_interrupt = platform_get_irq(pdev, 0);
if (dev->err_interrupt > 0 &&


@ -811,6 +811,26 @@ static int mvpp2_swf_bm_pool_init(struct mvpp2_port *port)
return 0;
}
static void mvpp2_set_hw_csum(struct mvpp2_port *port,
enum mvpp2_bm_pool_log_num new_long_pool)
{
const netdev_features_t csums = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
/* Update L4 checksum when jumbo enable/disable on port.
* Only port 0 supports hardware checksum offload due to
* the Tx FIFO size limitation.
* Also, don't set NETIF_F_HW_CSUM because L3_offset in TX descriptor
* has 7 bits, so the maximum L3 offset is 128.
*/
if (new_long_pool == MVPP2_BM_JUMBO && port->id != 0) {
port->dev->features &= ~csums;
port->dev->hw_features &= ~csums;
} else {
port->dev->features |= csums;
port->dev->hw_features |= csums;
}
}
static int mvpp2_bm_update_mtu(struct net_device *dev, int mtu)
{
struct mvpp2_port *port = netdev_priv(dev);
@ -843,15 +863,7 @@ static int mvpp2_bm_update_mtu(struct net_device *dev, int mtu)
/* Add port to new short & long pool */
mvpp2_swf_bm_pool_init(port);
/* Update L4 checksum when jumbo enable/disable on port */
if (new_long_pool == MVPP2_BM_JUMBO && port->id != 0) {
dev->features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
dev->hw_features &= ~(NETIF_F_IP_CSUM |
NETIF_F_IPV6_CSUM);
} else {
dev->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
dev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
}
mvpp2_set_hw_csum(port, new_long_pool);
}
dev->mtu = mtu;
@ -3700,6 +3712,7 @@ static int mvpp2_set_mac_address(struct net_device *dev, void *p)
static int mvpp2_change_mtu(struct net_device *dev, int mtu)
{
struct mvpp2_port *port = netdev_priv(dev);
bool running = netif_running(dev);
int err;
if (!IS_ALIGNED(MVPP2_RX_PKT_SIZE(mtu), 8)) {
@ -3708,40 +3721,24 @@ static int mvpp2_change_mtu(struct net_device *dev, int mtu)
mtu = ALIGN(MVPP2_RX_PKT_SIZE(mtu), 8);
}
if (!netif_running(dev)) {
err = mvpp2_bm_update_mtu(dev, mtu);
if (!err) {
port->pkt_size = MVPP2_RX_PKT_SIZE(mtu);
return 0;
}
/* Reconfigure BM to the original MTU */
err = mvpp2_bm_update_mtu(dev, dev->mtu);
if (err)
goto log_error;
}
mvpp2_stop_dev(port);
if (running)
mvpp2_stop_dev(port);
err = mvpp2_bm_update_mtu(dev, mtu);
if (!err) {
if (err) {
netdev_err(dev, "failed to change MTU\n");
/* Reconfigure BM to the original MTU */
mvpp2_bm_update_mtu(dev, dev->mtu);
} else {
port->pkt_size = MVPP2_RX_PKT_SIZE(mtu);
goto out_start;
}
/* Reconfigure BM to the original MTU */
err = mvpp2_bm_update_mtu(dev, dev->mtu);
if (err)
goto log_error;
if (running) {
mvpp2_start_dev(port);
mvpp2_egress_enable(port);
mvpp2_ingress_enable(port);
}
out_start:
mvpp2_start_dev(port);
mvpp2_egress_enable(port);
mvpp2_ingress_enable(port);
return 0;
log_error:
netdev_err(dev, "failed to change MTU\n");
return err;
}
@ -4739,9 +4736,9 @@ static void mvpp2_xlg_config(struct mvpp2_port *port, unsigned int mode,
else
ctrl0 &= ~MVPP22_XLG_CTRL0_RX_FLOW_CTRL_EN;
ctrl4 &= ~MVPP22_XLG_CTRL4_MACMODSELECT_GMAC;
ctrl4 |= MVPP22_XLG_CTRL4_FWD_FC | MVPP22_XLG_CTRL4_FWD_PFC |
MVPP22_XLG_CTRL4_EN_IDLE_CHECK;
ctrl4 &= ~(MVPP22_XLG_CTRL4_MACMODSELECT_GMAC |
MVPP22_XLG_CTRL4_EN_IDLE_CHECK);
ctrl4 |= MVPP22_XLG_CTRL4_FWD_FC | MVPP22_XLG_CTRL4_FWD_PFC;
if (old_ctrl0 != ctrl0)
writel(ctrl0, port->base + MVPP22_XLG_CTRL0_REG);
@ -5208,10 +5205,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
dev->features |= NETIF_F_NTUPLE;
}
if (port->pool_long->id == MVPP2_BM_JUMBO && port->id != 0) {
dev->features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
dev->hw_features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
}
mvpp2_set_hw_csum(port, port->pool_long->id);
dev->vlan_features |= features;
dev->gso_max_segs = MVPP2_MAX_TSO_SEGS;
@ -5759,9 +5753,6 @@ static int mvpp2_remove(struct platform_device *pdev)
mvpp2_dbgfs_cleanup(priv);
flush_workqueue(priv->stats_queue);
destroy_workqueue(priv->stats_queue);
fwnode_for_each_available_child_node(fwnode, port_fwnode) {
if (priv->port_list[i]) {
mutex_destroy(&priv->port_list[i]->gather_stats_lock);
@ -5770,6 +5761,8 @@ static int mvpp2_remove(struct platform_device *pdev)
i++;
}
destroy_workqueue(priv->stats_queue);
for (i = 0; i < MVPP2_BM_POOLS_NUM; i++) {
struct mvpp2_bm_pool *bm_pool = &priv->bm_pools[i];


@ -4924,6 +4924,13 @@ static const struct dmi_system_id msi_blacklist[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "P5W DH Deluxe"),
},
},
{
.ident = "ASUS P6T",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
DMI_MATCH(DMI_BOARD_NAME, "P6T"),
},
},
{}
};


@ -9,7 +9,6 @@ if NET_VENDOR_MEDIATEK
config NET_MEDIATEK_SOC
tristate "MediaTek SoC Gigabit Ethernet support"
depends on NET_VENDOR_MEDIATEK
select PHYLIB
---help---
This driver supports the gigabit ethernet MACs in the


@ -213,7 +213,7 @@ void mlx5_unregister_device(struct mlx5_core_dev *dev)
struct mlx5_interface *intf;
mutex_lock(&mlx5_intf_mutex);
list_for_each_entry(intf, &intf_list, list)
list_for_each_entry_reverse(intf, &intf_list, list)
mlx5_remove_device(intf, priv);
list_del(&priv->dev_list);
mutex_unlock(&mlx5_intf_mutex);


@ -159,7 +159,7 @@ do { \
enum mlx5e_rq_group {
MLX5E_RQ_GROUP_REGULAR,
MLX5E_RQ_GROUP_XSK,
MLX5E_NUM_RQ_GROUPS /* Keep last. */
#define MLX5E_NUM_RQ_GROUPS(g) (1 + MLX5E_RQ_GROUP_##g)
};
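
The MLX5E_NUM_RQ_GROUPS(g) macro turns the enum into a count via token pasting: because the enumerators are 0-based and declared in order, "1 + MLX5E_RQ_GROUP_##g" is the number of groups up to and including g. The same trick in miniature:

#include <stdio.h>

enum rq_group {
	RQ_GROUP_REGULAR,
	RQ_GROUP_XSK,
};

/* Token-paste the suffix onto the enum prefix and add one. */
#define NUM_RQ_GROUPS(g) (1 + RQ_GROUP_##g)

int main(void)
{
	printf("%d\n", NUM_RQ_GROUPS(REGULAR));	/* 1 */
	printf("%d\n", NUM_RQ_GROUPS(XSK));	/* 2 */
	return 0;
}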
static inline u16 mlx5_min_rx_wqes(int wq_type, u32 wq_size)
@ -182,14 +182,6 @@ static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
min_t(int, mlx5_comp_vectors_count(mdev), MLX5E_MAX_NUM_CHANNELS);
}
/* Use this function to get max num channels after netdev was created */
static inline int mlx5e_get_netdev_max_channels(struct net_device *netdev)
{
return min_t(unsigned int,
netdev->num_rx_queues / MLX5E_NUM_RQ_GROUPS,
netdev->num_tx_queues);
}
struct mlx5e_tx_wqe {
struct mlx5_wqe_ctrl_seg ctrl;
struct mlx5_wqe_eth_seg eth;
@ -830,6 +822,7 @@ struct mlx5e_priv {
struct net_device *netdev;
struct mlx5e_stats stats;
struct mlx5e_channel_stats channel_stats[MLX5E_MAX_NUM_CHANNELS];
u16 max_nch;
u8 max_opened_tc;
struct hwtstamp_config tstamp;
u16 q_counter;
@ -871,6 +864,7 @@ struct mlx5e_profile {
mlx5e_fp_handle_rx_cqe handle_rx_cqe_mpwqe;
} rx_handlers;
int max_tc;
u8 rq_groups;
};
void mlx5e_build_ptys2ethtool_map(void);


@ -66,9 +66,10 @@ static inline void mlx5e_qid_get_ch_and_group(struct mlx5e_params *params,
*group = qid / nch;
}
static inline bool mlx5e_qid_validate(struct mlx5e_params *params, u64 qid)
static inline bool mlx5e_qid_validate(const struct mlx5e_profile *profile,
struct mlx5e_params *params, u64 qid)
{
return qid < params->num_channels * MLX5E_NUM_RQ_GROUPS;
return qid < params->num_channels * profile->rq_groups;
}
/* Parameter calculations */


@ -78,9 +78,10 @@ static const u32 mlx5e_ext_link_speed[MLX5E_EXT_LINK_MODES_NUMBER] = {
};
static void mlx5e_port_get_speed_arr(struct mlx5_core_dev *mdev,
const u32 **arr, u32 *size)
const u32 **arr, u32 *size,
bool force_legacy)
{
bool ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
bool ext = force_legacy ? false : MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
*size = ext ? ARRAY_SIZE(mlx5e_ext_link_speed) :
ARRAY_SIZE(mlx5e_link_speed);
@ -152,7 +153,8 @@ int mlx5_port_set_eth_ptys(struct mlx5_core_dev *dev, bool an_disable,
sizeof(out), MLX5_REG_PTYS, 0, 1);
}
u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper)
u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper,
bool force_legacy)
{
unsigned long temp = eth_proto_oper;
const u32 *table;
@ -160,7 +162,7 @@ u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper)
u32 max_size;
int i;
mlx5e_port_get_speed_arr(mdev, &table, &max_size);
mlx5e_port_get_speed_arr(mdev, &table, &max_size, force_legacy);
i = find_first_bit(&temp, max_size);
if (i < max_size)
speed = table[i];
@ -170,6 +172,7 @@ u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper)
int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
{
struct mlx5e_port_eth_proto eproto;
bool force_legacy = false;
bool ext;
int err;
@ -177,8 +180,13 @@ int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
if (err)
goto out;
*speed = mlx5e_port_ptys2speed(mdev, eproto.oper);
if (ext && !eproto.admin) {
force_legacy = true;
err = mlx5_port_query_eth_proto(mdev, 1, false, &eproto);
if (err)
goto out;
}
*speed = mlx5e_port_ptys2speed(mdev, eproto.oper, force_legacy);
if (!(*speed))
err = -EINVAL;
@ -201,7 +209,7 @@ int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
if (err)
return err;
mlx5e_port_get_speed_arr(mdev, &table, &max_size);
mlx5e_port_get_speed_arr(mdev, &table, &max_size, false);
for (i = 0; i < max_size; ++i)
if (eproto.cap & MLX5E_PROT_MASK(i))
max_speed = max(max_speed, table[i]);
@ -210,14 +218,15 @@ int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
return 0;
}
u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed)
u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
bool force_legacy)
{
u32 link_modes = 0;
const u32 *table;
u32 max_size;
int i;
mlx5e_port_get_speed_arr(mdev, &table, &max_size);
mlx5e_port_get_speed_arr(mdev, &table, &max_size, force_legacy);
for (i = 0; i < max_size; ++i) {
if (table[i] == speed)
link_modes |= MLX5E_PROT_MASK(i);


@ -48,10 +48,12 @@ void mlx5_port_query_eth_autoneg(struct mlx5_core_dev *dev, u8 *an_status,
u8 *an_disable_cap, u8 *an_disable_admin);
int mlx5_port_set_eth_ptys(struct mlx5_core_dev *dev, bool an_disable,
u32 proto_admin, bool ext);
u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper);
u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper,
bool force_legacy);
int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed);
u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
bool force_legacy);
int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out);
int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in);

View File

@ -412,7 +412,7 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
goto out;
tls_ctx = tls_get_ctx(skb->sk);
if (unlikely(tls_ctx->netdev != netdev))
if (unlikely(WARN_ON_ONCE(tls_ctx->netdev != netdev)))
goto err_out;
priv_tx = mlx5e_get_ktls_tx_priv_ctx(tls_ctx);

View File

@ -391,7 +391,7 @@ void mlx5e_ethtool_get_channels(struct mlx5e_priv *priv,
{
mutex_lock(&priv->state_lock);
ch->max_combined = mlx5e_get_netdev_max_channels(priv->netdev);
ch->max_combined = priv->max_nch;
ch->combined_count = priv->channels.params.num_channels;
if (priv->xsk.refcnt) {
/* The upper half are XSK queues. */
@ -785,7 +785,7 @@ static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings
}
static void get_speed_duplex(struct net_device *netdev,
u32 eth_proto_oper,
u32 eth_proto_oper, bool force_legacy,
struct ethtool_link_ksettings *link_ksettings)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
@ -795,7 +795,7 @@ static void get_speed_duplex(struct net_device *netdev,
if (!netif_carrier_ok(netdev))
goto out;
speed = mlx5e_port_ptys2speed(priv->mdev, eth_proto_oper);
speed = mlx5e_port_ptys2speed(priv->mdev, eth_proto_oper, force_legacy);
if (!speed) {
speed = SPEED_UNKNOWN;
goto out;
@ -914,8 +914,8 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
/* Fields: eth_proto_admin and ext_eth_proto_admin are
* mutually exclusive. Hence try reading legacy advertising
* when extended advertising is zero.
* admin_ext indicates how eth_proto_admin should be
* interpreted
* admin_ext indicates which proto_admin (ext vs. legacy)
* should be read and interpreted
*/
admin_ext = ext;
if (ext && !eth_proto_admin) {
@ -924,7 +924,7 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
admin_ext = false;
}
eth_proto_oper = MLX5_GET_ETH_PROTO(ptys_reg, out, ext,
eth_proto_oper = MLX5_GET_ETH_PROTO(ptys_reg, out, admin_ext,
eth_proto_oper);
eth_proto_lp = MLX5_GET(ptys_reg, out, eth_proto_lp_advertise);
an_disable_admin = MLX5_GET(ptys_reg, out, an_disable_admin);
@ -939,7 +939,8 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
get_supported(mdev, eth_proto_cap, link_ksettings);
get_advertising(eth_proto_admin, tx_pause, rx_pause, link_ksettings,
admin_ext);
get_speed_duplex(priv->netdev, eth_proto_oper, link_ksettings);
get_speed_duplex(priv->netdev, eth_proto_oper, !admin_ext,
link_ksettings);
eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
@ -1016,45 +1017,69 @@ static u32 mlx5e_ethtool2ptys_ext_adver_link(const unsigned long *link_modes)
return ptys_modes;
}
static bool ext_link_mode_requested(const unsigned long *adver)
{
#define MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT ETHTOOL_LINK_MODE_50000baseKR_Full_BIT
int size = __ETHTOOL_LINK_MODE_MASK_NBITS - MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT;
__ETHTOOL_DECLARE_LINK_MODE_MASK(modes);
bitmap_set(modes, MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT, size);
return bitmap_intersects(modes, adver, __ETHTOOL_LINK_MODE_MASK_NBITS);
}
static bool ext_speed_requested(u32 speed)
{
#define MLX5E_MAX_PTYS_LEGACY_SPEED 100000
return !!(speed > MLX5E_MAX_PTYS_LEGACY_SPEED);
}
static bool ext_requested(u8 autoneg, const unsigned long *adver, u32 speed)
{
bool ext_link_mode = ext_link_mode_requested(adver);
bool ext_speed = ext_speed_requested(speed);
return autoneg == AUTONEG_ENABLE ? ext_link_mode : ext_speed;
}
int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
const struct ethtool_link_ksettings *link_ksettings)
{
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5e_port_eth_proto eproto;
const unsigned long *adver;
bool an_changes = false;
u8 an_disable_admin;
bool ext_supported;
bool ext_requested;
u8 an_disable_cap;
bool an_disable;
u32 link_modes;
u8 an_status;
u8 autoneg;
u32 speed;
bool ext;
int err;
u32 (*ethtool2ptys_adver_func)(const unsigned long *adver);
#define MLX5E_PTYS_EXT ((1ULL << ETHTOOL_LINK_MODE_50000baseKR_Full_BIT) - 1)
ext_requested = !!(link_ksettings->link_modes.advertising[0] >
MLX5E_PTYS_EXT ||
link_ksettings->link_modes.advertising[1]);
ext_supported = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
ext_requested &= ext_supported;
adver = link_ksettings->link_modes.advertising;
autoneg = link_ksettings->base.autoneg;
speed = link_ksettings->base.speed;
ethtool2ptys_adver_func = ext_requested ?
mlx5e_ethtool2ptys_ext_adver_link :
ext = ext_requested(autoneg, adver, speed);
ext_supported = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
if (!ext_supported && ext)
return -EOPNOTSUPP;
ethtool2ptys_adver_func = ext ? mlx5e_ethtool2ptys_ext_adver_link :
mlx5e_ethtool2ptys_adver_link;
err = mlx5_port_query_eth_proto(mdev, 1, ext_requested, &eproto);
err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
if (err) {
netdev_err(priv->netdev, "%s: query port eth proto failed: %d\n",
__func__, err);
goto out;
}
link_modes = link_ksettings->base.autoneg == AUTONEG_ENABLE ?
ethtool2ptys_adver_func(link_ksettings->link_modes.advertising) :
mlx5e_port_speed2linkmodes(mdev, speed);
link_modes = autoneg == AUTONEG_ENABLE ? ethtool2ptys_adver_func(adver) :
mlx5e_port_speed2linkmodes(mdev, speed, !ext);
link_modes = link_modes & eproto.cap;
if (!link_modes) {
@ -1067,14 +1092,14 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
mlx5_port_query_eth_autoneg(mdev, &an_status, &an_disable_cap,
&an_disable_admin);
an_disable = link_ksettings->base.autoneg == AUTONEG_DISABLE;
an_disable = autoneg == AUTONEG_DISABLE;
an_changes = ((!an_disable && an_disable_admin) ||
(an_disable && !an_disable_admin));
if (!an_changes && link_modes == eproto.admin)
goto out;
mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext_requested);
mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext);
mlx5_toggle_port_link(mdev);
out:
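The ext_requested() helpers added above choose between the legacy and extended PTYS namespaces from what userspace asked for. A user-space approximation of that decision; the bit number and the 100G cutoff are stand-ins for the real ethtool constants:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MIN_EXT_LINK_MODE_BIT 34U     /* stand-in for ETHTOOL_LINK_MODE_50000baseKR_Full_BIT */
#define MAX_LEGACY_SPEED      100000U /* fastest speed expressible in legacy PTYS */

static bool ext_link_mode_requested(uint64_t adver)
{
        /* any advertised bit at or above the first extended mode */
        return (adver >> MIN_EXT_LINK_MODE_BIT) != 0;
}

static bool ext_requested(bool autoneg, uint64_t adver, uint32_t speed)
{
        return autoneg ? ext_link_mode_requested(adver)
                       : speed > MAX_LEGACY_SPEED;
}

int main(void)
{
        printf("%d\n", ext_requested(true, UINT64_C(1) << 40, 0)); /* 1: ext mode advertised */
        printf("%d\n", ext_requested(false, 0, 200000));           /* 1: 200G needs ext */
        printf("%d\n", ext_requested(false, 0, 100000));           /* 0: legacy suffices */
        return 0;
}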

View File

@ -611,7 +611,8 @@ static int validate_flow(struct mlx5e_priv *priv,
return -ENOSPC;
if (fs->ring_cookie != RX_CLS_FLOW_DISC)
if (!mlx5e_qid_validate(&priv->channels.params, fs->ring_cookie))
if (!mlx5e_qid_validate(priv->profile, &priv->channels.params,
fs->ring_cookie))
return -EINVAL;
switch (fs->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT)) {

View File

@ -331,12 +331,11 @@ static inline u64 mlx5e_get_mpwqe_offset(struct mlx5e_rq *rq, u16 wqe_ix)
static void mlx5e_init_frags_partition(struct mlx5e_rq *rq)
{
struct mlx5e_wqe_frag_info next_frag, *prev;
struct mlx5e_wqe_frag_info next_frag = {};
struct mlx5e_wqe_frag_info *prev = NULL;
int i;
next_frag.di = &rq->wqe.di[0];
next_frag.offset = 0;
prev = NULL;
for (i = 0; i < mlx5_wq_cyc_get_size(&rq->wqe.wq); i++) {
struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
@ -1677,10 +1676,10 @@ static int mlx5e_open_sqs(struct mlx5e_channel *c,
struct mlx5e_channel_param *cparam)
{
struct mlx5e_priv *priv = c->priv;
int err, tc, max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int err, tc;
for (tc = 0; tc < params->num_tc; tc++) {
int txq_ix = c->ix + tc * max_nch;
int txq_ix = c->ix + tc * priv->max_nch;
err = mlx5e_open_txqsq(c, c->priv->tisn[tc], txq_ix,
params, &cparam->sq, &c->sq[tc], tc);
@ -2438,11 +2437,10 @@ int mlx5e_create_indirect_rqt(struct mlx5e_priv *priv)
int mlx5e_create_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int err;
int ix;
for (ix = 0; ix < max_nch; ix++) {
for (ix = 0; ix < priv->max_nch; ix++) {
err = mlx5e_create_rqt(priv, 1 /*size */, &tirs[ix].rqt);
if (unlikely(err))
goto err_destroy_rqts;
@ -2460,10 +2458,9 @@ err_destroy_rqts:
void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int i;
for (i = 0; i < max_nch; i++)
for (i = 0; i < priv->max_nch; i++)
mlx5e_destroy_rqt(priv, &tirs[i].rqt);
}
@ -2557,7 +2554,7 @@ static void mlx5e_redirect_rqts(struct mlx5e_priv *priv,
mlx5e_redirect_rqt(priv, rqtn, MLX5E_INDIR_RQT_SIZE, rrp);
}
for (ix = 0; ix < mlx5e_get_netdev_max_channels(priv->netdev); ix++) {
for (ix = 0; ix < priv->max_nch; ix++) {
struct mlx5e_redirect_rqt_param direct_rrp = {
.is_rss = false,
{
@ -2758,7 +2755,7 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
goto free_in;
}
for (ix = 0; ix < mlx5e_get_netdev_max_channels(priv->netdev); ix++) {
for (ix = 0; ix < priv->max_nch; ix++) {
err = mlx5_core_modify_tir(mdev, priv->direct_tir[ix].tirn,
in, inlen);
if (err)
@ -2858,12 +2855,11 @@ static void mlx5e_netdev_set_tcs(struct net_device *netdev)
static void mlx5e_build_tc2txq_maps(struct mlx5e_priv *priv)
{
int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int i, tc;
for (i = 0; i < max_nch; i++)
for (i = 0; i < priv->max_nch; i++)
for (tc = 0; tc < priv->profile->max_tc; tc++)
priv->channel_tc2txq[i][tc] = i + tc * max_nch;
priv->channel_tc2txq[i][tc] = i + tc * priv->max_nch;
}
static void mlx5e_build_tx2sq_maps(struct mlx5e_priv *priv)
@ -2884,7 +2880,7 @@ static void mlx5e_build_tx2sq_maps(struct mlx5e_priv *priv)
void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
{
int num_txqs = priv->channels.num * priv->channels.params.num_tc;
int num_rxqs = priv->channels.num * MLX5E_NUM_RQ_GROUPS;
int num_rxqs = priv->channels.num * priv->profile->rq_groups;
struct net_device *netdev = priv->netdev;
mlx5e_netdev_set_tcs(netdev);
@ -3306,7 +3302,6 @@ err_destroy_inner_tirs:
int mlx5e_create_direct_tirs(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
struct mlx5e_tir *tir;
void *tirc;
int inlen;
@ -3319,7 +3314,7 @@ int mlx5e_create_direct_tirs(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
if (!in)
return -ENOMEM;
for (ix = 0; ix < max_nch; ix++) {
for (ix = 0; ix < priv->max_nch; ix++) {
memset(in, 0, inlen);
tir = &tirs[ix];
tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
@ -3358,10 +3353,9 @@ void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc)
void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int i;
for (i = 0; i < max_nch; i++)
for (i = 0; i < priv->max_nch; i++)
mlx5e_destroy_tir(priv->mdev, &tirs[i]);
}
@ -3487,7 +3481,7 @@ void mlx5e_fold_sw_stats64(struct mlx5e_priv *priv, struct rtnl_link_stats64 *s)
{
int i;
for (i = 0; i < mlx5e_get_netdev_max_channels(priv->netdev); i++) {
for (i = 0; i < priv->max_nch; i++) {
struct mlx5e_channel_stats *channel_stats = &priv->channel_stats[i];
struct mlx5e_rq_stats *xskrq_stats = &channel_stats->xskrq;
struct mlx5e_rq_stats *rq_stats = &channel_stats->rq;
@ -4960,8 +4954,7 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
return err;
mlx5e_build_nic_params(mdev, &priv->xsk, rss, &priv->channels.params,
mlx5e_get_netdev_max_channels(netdev),
netdev->mtu);
priv->max_nch, netdev->mtu);
mlx5e_timestamp_init(priv);
@ -5164,6 +5157,7 @@ static const struct mlx5e_profile mlx5e_nic_profile = {
.rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe,
.rx_handlers.handle_rx_cqe_mpwqe = mlx5e_handle_rx_cqe_mpwrq,
.max_tc = MLX5E_MAX_NUM_TC,
.rq_groups = MLX5E_NUM_RQ_GROUPS(XSK),
};
/* mlx5e generic netdev management API (move to en_common.c) */
@ -5181,6 +5175,7 @@ int mlx5e_netdev_init(struct net_device *netdev,
priv->profile = profile;
priv->ppriv = ppriv;
priv->msglevel = MLX5E_MSG_LEVEL;
priv->max_nch = netdev->num_rx_queues / max_t(u8, profile->rq_groups, 1);
priv->max_opened_tc = 1;
mutex_init(&priv->state_lock);
@ -5218,7 +5213,7 @@ struct net_device *mlx5e_create_netdev(struct mlx5_core_dev *mdev,
netdev = alloc_etherdev_mqs(sizeof(struct mlx5e_priv),
nch * profile->max_tc,
nch * MLX5E_NUM_RQ_GROUPS);
nch * profile->rq_groups);
if (!netdev) {
mlx5_core_err(mdev, "alloc_etherdev_mqs() failed\n");
return NULL;
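A short sketch of the queue accounting these hunks rely on: the netdev is allocated with nch * rq_groups RX queues, so max_nch can later be recovered by division, and TX queue indices are laid out as channel + tc * max_nch. All values below are hypothetical:

#include <stdio.h>

int main(void)
{
        unsigned int nch = 16, max_tc = 8, rq_groups = 2; /* hypothetical profile */
        unsigned int num_txqs = nch * max_tc;
        unsigned int num_rxqs = nch * rq_groups;
        unsigned int max_nch  = num_rxqs / (rq_groups ? rq_groups : 1);
        unsigned int txq_ix   = 3 + 1 * max_nch; /* channel 3, tc 1 */

        printf("txqs=%u rxqs=%u max_nch=%u txq_ix=%u\n",
               num_txqs, num_rxqs, max_nch, txq_ix); /* 128 32 16 19 */
        return 0;
}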

View File

@ -1701,6 +1701,7 @@ static const struct mlx5e_profile mlx5e_rep_profile = {
.rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe_rep,
.rx_handlers.handle_rx_cqe_mpwqe = mlx5e_handle_rx_cqe_mpwrq,
.max_tc = 1,
.rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR),
};
static const struct mlx5e_profile mlx5e_uplink_rep_profile = {
@ -1718,6 +1719,7 @@ static const struct mlx5e_profile mlx5e_uplink_rep_profile = {
.rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe_rep,
.rx_handlers.handle_rx_cqe_mpwqe = mlx5e_handle_rx_cqe_mpwrq,
.max_tc = MLX5E_MAX_NUM_TC,
.rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR),
};
static bool

View File

@ -172,7 +172,7 @@ static void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
memset(s, 0, sizeof(*s));
for (i = 0; i < mlx5e_get_netdev_max_channels(priv->netdev); i++) {
for (i = 0; i < priv->max_nch; i++) {
struct mlx5e_channel_stats *channel_stats =
&priv->channel_stats[i];
struct mlx5e_xdpsq_stats *xdpsq_red_stats = &channel_stats->xdpsq;
@ -1395,7 +1395,7 @@ static const struct counter_desc ch_stats_desc[] = {
static int mlx5e_grp_channels_get_num_stats(struct mlx5e_priv *priv)
{
int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int max_nch = priv->max_nch;
return (NUM_RQ_STATS * max_nch) +
(NUM_CH_STATS * max_nch) +
@ -1409,8 +1409,8 @@ static int mlx5e_grp_channels_get_num_stats(struct mlx5e_priv *priv)
static int mlx5e_grp_channels_fill_strings(struct mlx5e_priv *priv, u8 *data,
int idx)
{
int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
bool is_xsk = priv->xsk.ever_used;
int max_nch = priv->max_nch;
int i, j, tc;
for (i = 0; i < max_nch; i++)
@ -1452,8 +1452,8 @@ static int mlx5e_grp_channels_fill_strings(struct mlx5e_priv *priv, u8 *data,
static int mlx5e_grp_channels_fill_stats(struct mlx5e_priv *priv, u64 *data,
int idx)
{
int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
bool is_xsk = priv->xsk.ever_used;
int max_nch = priv->max_nch;
int i, j, tc;
for (i = 0; i < max_nch; i++)

View File

@ -1230,13 +1230,13 @@ static struct mlx5_fc *mlx5e_tc_get_counter(struct mlx5e_tc_flow *flow)
void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe)
{
struct mlx5e_neigh *m_neigh = &nhe->m_neigh;
u64 bytes, packets, lastuse = 0;
struct mlx5e_tc_flow *flow;
struct mlx5e_encap_entry *e;
struct mlx5_fc *counter;
struct neigh_table *tbl;
bool neigh_used = false;
struct neighbour *n;
u64 lastuse;
if (m_neigh->family == AF_INET)
tbl = &arp_tbl;
@ -1256,7 +1256,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe)
encaps[efi->index]);
if (flow->flags & MLX5E_TC_FLOW_OFFLOADED) {
counter = mlx5e_tc_get_counter(flow);
mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
lastuse = mlx5_fc_query_lastuse(counter);
if (time_after((unsigned long)lastuse, nhe->reported_lastuse)) {
neigh_used = true;
break;

View File

@ -49,7 +49,7 @@ static inline bool mlx5e_channel_no_affinity_change(struct mlx5e_channel *c)
static void mlx5e_handle_tx_dim(struct mlx5e_txqsq *sq)
{
struct mlx5e_sq_stats *stats = sq->stats;
struct dim_sample dim_sample;
struct dim_sample dim_sample = {};
if (unlikely(!test_bit(MLX5E_SQ_STATE_AM, &sq->state)))
return;
@ -61,7 +61,7 @@ static void mlx5e_handle_tx_dim(struct mlx5e_txqsq *sq)
static void mlx5e_handle_rx_dim(struct mlx5e_rq *rq)
{
struct mlx5e_rq_stats *stats = rq->stats;
struct dim_sample dim_sample;
struct dim_sample dim_sample = {};
if (unlikely(!test_bit(MLX5E_RQ_STATE_AM, &rq->state)))
return;

View File

@ -68,7 +68,7 @@ enum fs_flow_table_type {
FS_FT_SNIFFER_RX = 0X5,
FS_FT_SNIFFER_TX = 0X6,
FS_FT_RDMA_RX = 0X7,
FS_FT_MAX_TYPE = FS_FT_SNIFFER_TX,
FS_FT_MAX_TYPE = FS_FT_RDMA_RX,
};
enum fs_flow_table_op_mod {
@ -275,7 +275,8 @@ void mlx5_cleanup_fs(struct mlx5_core_dev *dev);
(type == FS_FT_FDB) ? MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, cap) : \
(type == FS_FT_SNIFFER_RX) ? MLX5_CAP_FLOWTABLE_SNIFFER_RX(mdev, cap) : \
(type == FS_FT_SNIFFER_TX) ? MLX5_CAP_FLOWTABLE_SNIFFER_TX(mdev, cap) : \
(BUILD_BUG_ON_ZERO(FS_FT_SNIFFER_TX != FS_FT_MAX_TYPE))\
(type == FS_FT_RDMA_RX) ? MLX5_CAP_FLOWTABLE_RDMA_RX(mdev, cap) : \
(BUILD_BUG_ON_ZERO(FS_FT_RDMA_RX != FS_FT_MAX_TYPE))\
)
#endif
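The BUILD_BUG_ON_ZERO() terminator in the capability chain above turns a stale FS_FT_MAX_TYPE into a build failure rather than a silent runtime gap. A self-contained illustration of the pattern (GCC extension, as in the kernel; the enum and values are made up):

#include <stdio.h>

/* Evaluates to 0, or breaks the build via a negative bitfield width. */
#define BUILD_BUG_ON_ZERO(e) ((int)sizeof(struct { int : (-!!(e)); }))

enum { FT_NIC_RX, FT_FDB, FT_RDMA_RX, FT_MAX_TYPE = FT_RDMA_RX };

/* If FT_MAX_TYPE moves past the last handled case, this stops compiling. */
#define FT_CAP(type) \
        ((type) == FT_NIC_RX  ? 1 : \
         (type) == FT_FDB     ? 2 : \
         (type) == FT_RDMA_RX ? 3 : \
         BUILD_BUG_ON_ZERO(FT_RDMA_RX != FT_MAX_TYPE))

int main(void)
{
        printf("%d\n", FT_CAP(FT_RDMA_RX)); /* 3 */
        return 0;
}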

View File

@ -369,6 +369,11 @@ int mlx5_fc_query(struct mlx5_core_dev *dev, struct mlx5_fc *counter,
}
EXPORT_SYMBOL(mlx5_fc_query);
u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter)
{
return counter->cache.lastuse;
}
void mlx5_fc_query_cached(struct mlx5_fc *counter,
u64 *bytes, u64 *packets, u64 *lastuse)
{

View File

@ -88,8 +88,7 @@ int mlx5i_init(struct mlx5_core_dev *mdev,
netdev->mtu = netdev->max_mtu;
mlx5e_build_nic_params(mdev, NULL, &priv->rss_params, &priv->channels.params,
mlx5e_get_netdev_max_channels(netdev),
netdev->mtu);
priv->max_nch, netdev->mtu);
mlx5i_build_nic_params(mdev, &priv->channels.params);
mlx5e_timestamp_init(priv);
@ -118,11 +117,10 @@ void mlx5i_cleanup(struct mlx5e_priv *priv)
static void mlx5i_grp_sw_update_stats(struct mlx5e_priv *priv)
{
int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
struct mlx5e_sw_stats s = { 0 };
int i, j;
for (i = 0; i < max_nch; i++) {
for (i = 0; i < priv->max_nch; i++) {
struct mlx5e_channel_stats *channel_stats;
struct mlx5e_rq_stats *rq_stats;
@ -436,6 +434,7 @@ static const struct mlx5e_profile mlx5i_nic_profile = {
.rx_handlers.handle_rx_cqe = mlx5i_handle_rx_cqe,
.rx_handlers.handle_rx_cqe_mpwqe = NULL, /* Not supported */
.max_tc = MLX5I_MAX_NUM_TC,
.rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR),
};
/* mlx5i netdev NDOs */

View File

@ -355,6 +355,7 @@ static const struct mlx5e_profile mlx5i_pkey_nic_profile = {
.rx_handlers.handle_rx_cqe = mlx5i_handle_rx_cqe,
.rx_handlers.handle_rx_cqe_mpwqe = NULL, /* Not supported */
.max_tc = MLX5I_MAX_NUM_TC,
.rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR),
};
const struct mlx5e_profile *mlx5i_pkey_get_profile(void)

View File

@ -6330,7 +6330,7 @@ static int __init mlxsw_sp_module_init(void)
return 0;
err_sp2_pci_driver_register:
mlxsw_pci_driver_unregister(&mlxsw_sp2_pci_driver);
mlxsw_pci_driver_unregister(&mlxsw_sp1_pci_driver);
err_sp1_pci_driver_register:
mlxsw_core_driver_unregister(&mlxsw_sp2_driver);
err_sp2_core_driver_register:

View File

@ -951,4 +951,8 @@ void mlxsw_sp_port_nve_fini(struct mlxsw_sp_port *mlxsw_sp_port);
int mlxsw_sp_nve_init(struct mlxsw_sp *mlxsw_sp);
void mlxsw_sp_nve_fini(struct mlxsw_sp *mlxsw_sp);
/* spectrum_nve_vxlan.c */
int mlxsw_sp_nve_inc_parsing_depth_get(struct mlxsw_sp *mlxsw_sp);
void mlxsw_sp_nve_inc_parsing_depth_put(struct mlxsw_sp *mlxsw_sp);
#endif

View File

@ -437,8 +437,8 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
MLXSW_SP1_SB_PR_CPU_SIZE, true, false),
};
#define MLXSW_SP2_SB_PR_INGRESS_SIZE 38128752
#define MLXSW_SP2_SB_PR_EGRESS_SIZE 38128752
#define MLXSW_SP2_SB_PR_INGRESS_SIZE 35297568
#define MLXSW_SP2_SB_PR_EGRESS_SIZE 35297568
#define MLXSW_SP2_SB_PR_CPU_SIZE (256 * 1000)
/* Order according to mlxsw_sp2_sb_pool_dess */

View File

@ -775,6 +775,7 @@ static void mlxsw_sp_nve_tunnel_fini(struct mlxsw_sp *mlxsw_sp)
ops->fini(nve);
mlxsw_sp_kvdl_free(mlxsw_sp, MLXSW_SP_KVDL_ENTRY_TYPE_ADJ, 1,
nve->tunnel_index);
memset(&nve->config, 0, sizeof(nve->config));
}
nve->num_nve_tunnels--;
}

View File

@ -29,6 +29,7 @@ struct mlxsw_sp_nve {
unsigned int num_max_mc_entries[MLXSW_SP_L3_PROTO_MAX];
u32 tunnel_index;
u16 ul_rif_index; /* Reserved for Spectrum */
unsigned int inc_parsing_depth_refs;
};
struct mlxsw_sp_nve_ops {

View File

@ -103,9 +103,9 @@ static void mlxsw_sp_nve_vxlan_config(const struct mlxsw_sp_nve *nve,
config->udp_dport = cfg->dst_port;
}
static int mlxsw_sp_nve_parsing_set(struct mlxsw_sp *mlxsw_sp,
unsigned int parsing_depth,
__be16 udp_dport)
static int __mlxsw_sp_nve_parsing_set(struct mlxsw_sp *mlxsw_sp,
unsigned int parsing_depth,
__be16 udp_dport)
{
char mprs_pl[MLXSW_REG_MPRS_LEN];
@ -113,6 +113,56 @@ static int mlxsw_sp_nve_parsing_set(struct mlxsw_sp *mlxsw_sp,
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mprs), mprs_pl);
}
static int mlxsw_sp_nve_parsing_set(struct mlxsw_sp *mlxsw_sp,
__be16 udp_dport)
{
int parsing_depth = mlxsw_sp->nve->inc_parsing_depth_refs ?
MLXSW_SP_NVE_VXLAN_PARSING_DEPTH :
MLXSW_SP_NVE_DEFAULT_PARSING_DEPTH;
return __mlxsw_sp_nve_parsing_set(mlxsw_sp, parsing_depth, udp_dport);
}
static int
__mlxsw_sp_nve_inc_parsing_depth_get(struct mlxsw_sp *mlxsw_sp,
__be16 udp_dport)
{
int err;
mlxsw_sp->nve->inc_parsing_depth_refs++;
err = mlxsw_sp_nve_parsing_set(mlxsw_sp, udp_dport);
if (err)
goto err_nve_parsing_set;
return 0;
err_nve_parsing_set:
mlxsw_sp->nve->inc_parsing_depth_refs--;
return err;
}
static void
__mlxsw_sp_nve_inc_parsing_depth_put(struct mlxsw_sp *mlxsw_sp,
__be16 udp_dport)
{
mlxsw_sp->nve->inc_parsing_depth_refs--;
mlxsw_sp_nve_parsing_set(mlxsw_sp, udp_dport);
}
int mlxsw_sp_nve_inc_parsing_depth_get(struct mlxsw_sp *mlxsw_sp)
{
__be16 udp_dport = mlxsw_sp->nve->config.udp_dport;
return __mlxsw_sp_nve_inc_parsing_depth_get(mlxsw_sp, udp_dport);
}
void mlxsw_sp_nve_inc_parsing_depth_put(struct mlxsw_sp *mlxsw_sp)
{
__be16 udp_dport = mlxsw_sp->nve->config.udp_dport;
__mlxsw_sp_nve_inc_parsing_depth_put(mlxsw_sp, udp_dport);
}
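The get/put pair above is a refcounted setting: the parsing depth is raised on the first user and restored on the last, so the VXLAN and PTP paths compose without stepping on each other. A user-space sketch of the shape, with the register write replaced by a variable:

#include <stdio.h>

#define DEFAULT_DEPTH 96   /* illustrative */
#define VXLAN_DEPTH   128  /* illustrative */

static unsigned int refs;
static unsigned int depth = DEFAULT_DEPTH;

static int parsing_set(void)
{
        depth = refs ? VXLAN_DEPTH : DEFAULT_DEPTH;
        return 0; /* a real register write could fail */
}

static int inc_parsing_depth_get(void)
{
        int err;

        refs++;
        err = parsing_set();
        if (err)
                refs--; /* roll back on failure */
        return err;
}

static void inc_parsing_depth_put(void)
{
        refs--;
        parsing_set();
}

int main(void)
{
        inc_parsing_depth_get(); /* VXLAN user */
        inc_parsing_depth_get(); /* PTP user */
        inc_parsing_depth_put(); /* PTP done; depth stays raised */
        printf("refs=%u depth=%u\n", refs, depth); /* refs=1 depth=128 */
        return 0;
}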
static void
mlxsw_sp_nve_vxlan_config_prepare(char *tngcr_pl,
const struct mlxsw_sp_nve_config *config)
@ -176,9 +226,7 @@ static int mlxsw_sp1_nve_vxlan_init(struct mlxsw_sp_nve *nve,
struct mlxsw_sp *mlxsw_sp = nve->mlxsw_sp;
int err;
err = mlxsw_sp_nve_parsing_set(mlxsw_sp,
MLXSW_SP_NVE_VXLAN_PARSING_DEPTH,
config->udp_dport);
err = __mlxsw_sp_nve_inc_parsing_depth_get(mlxsw_sp, config->udp_dport);
if (err)
return err;
@ -203,8 +251,7 @@ err_promote_decap:
err_rtdp_set:
mlxsw_sp1_nve_vxlan_config_clear(mlxsw_sp);
err_config_set:
mlxsw_sp_nve_parsing_set(mlxsw_sp, MLXSW_SP_NVE_DEFAULT_PARSING_DEPTH,
config->udp_dport);
__mlxsw_sp_nve_inc_parsing_depth_put(mlxsw_sp, 0);
return err;
}
@ -216,8 +263,7 @@ static void mlxsw_sp1_nve_vxlan_fini(struct mlxsw_sp_nve *nve)
mlxsw_sp_router_nve_demote_decap(mlxsw_sp, config->ul_tb_id,
config->ul_proto, &config->ul_sip);
mlxsw_sp1_nve_vxlan_config_clear(mlxsw_sp);
mlxsw_sp_nve_parsing_set(mlxsw_sp, MLXSW_SP_NVE_DEFAULT_PARSING_DEPTH,
config->udp_dport);
__mlxsw_sp_nve_inc_parsing_depth_put(mlxsw_sp, 0);
}
static int
@ -320,9 +366,7 @@ static int mlxsw_sp2_nve_vxlan_init(struct mlxsw_sp_nve *nve,
struct mlxsw_sp *mlxsw_sp = nve->mlxsw_sp;
int err;
err = mlxsw_sp_nve_parsing_set(mlxsw_sp,
MLXSW_SP_NVE_VXLAN_PARSING_DEPTH,
config->udp_dport);
err = __mlxsw_sp_nve_inc_parsing_depth_get(mlxsw_sp, config->udp_dport);
if (err)
return err;
@ -348,8 +392,7 @@ err_promote_decap:
err_rtdp_set:
mlxsw_sp2_nve_vxlan_config_clear(mlxsw_sp);
err_config_set:
mlxsw_sp_nve_parsing_set(mlxsw_sp, MLXSW_SP_NVE_DEFAULT_PARSING_DEPTH,
config->udp_dport);
__mlxsw_sp_nve_inc_parsing_depth_put(mlxsw_sp, 0);
return err;
}
@ -361,8 +404,7 @@ static void mlxsw_sp2_nve_vxlan_fini(struct mlxsw_sp_nve *nve)
mlxsw_sp_router_nve_demote_decap(mlxsw_sp, config->ul_tb_id,
config->ul_proto, &config->ul_sip);
mlxsw_sp2_nve_vxlan_config_clear(mlxsw_sp);
mlxsw_sp_nve_parsing_set(mlxsw_sp, MLXSW_SP_NVE_DEFAULT_PARSING_DEPTH,
config->udp_dport);
__mlxsw_sp_nve_inc_parsing_depth_put(mlxsw_sp, 0);
}
const struct mlxsw_sp_nve_ops mlxsw_sp2_nve_vxlan_ops = {

View File

@ -979,6 +979,9 @@ static int mlxsw_sp1_ptp_mtpppc_update(struct mlxsw_sp_port *mlxsw_sp_port,
{
struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
struct mlxsw_sp_port *tmp;
u16 orig_ing_types = 0;
u16 orig_egr_types = 0;
int err;
int i;
/* MTPPPC configures timestamping globally, not per port. Find the
@ -986,12 +989,26 @@ static int mlxsw_sp1_ptp_mtpppc_update(struct mlxsw_sp_port *mlxsw_sp_port,
*/
for (i = 1; i < mlxsw_core_max_ports(mlxsw_sp->core); i++) {
tmp = mlxsw_sp->ports[i];
if (tmp) {
orig_ing_types |= tmp->ptp.ing_types;
orig_egr_types |= tmp->ptp.egr_types;
}
if (tmp && tmp != mlxsw_sp_port) {
ing_types |= tmp->ptp.ing_types;
egr_types |= tmp->ptp.egr_types;
}
}
if ((ing_types || egr_types) && !(orig_ing_types || orig_egr_types)) {
err = mlxsw_sp_nve_inc_parsing_depth_get(mlxsw_sp);
if (err) {
netdev_err(mlxsw_sp_port->dev, "Failed to increase parsing depth\n");
return err;
}
}
if (!(ing_types || egr_types) && (orig_ing_types || orig_egr_types))
mlxsw_sp_nve_inc_parsing_depth_put(mlxsw_sp);
return mlxsw_sp1_ptp_mtpppc_set(mlxsw_sp_port->mlxsw_sp,
ing_types, egr_types);
}

View File

@ -1818,6 +1818,7 @@ EXPORT_SYMBOL(ocelot_init);
void ocelot_deinit(struct ocelot *ocelot)
{
cancel_delayed_work(&ocelot->stats_work);
destroy_workqueue(ocelot->stats_queue);
mutex_destroy(&ocelot->stats_lock);
ocelot_ace_deinit();

View File

@ -444,12 +444,12 @@ static u8 *nfp_vnic_get_sw_stats_strings(struct net_device *netdev, u8 *data)
data = nfp_pr_et(data, "hw_rx_csum_complete");
data = nfp_pr_et(data, "hw_rx_csum_err");
data = nfp_pr_et(data, "rx_replace_buf_alloc_fail");
data = nfp_pr_et(data, "rx_tls_decrypted");
data = nfp_pr_et(data, "rx_tls_decrypted_packets");
data = nfp_pr_et(data, "hw_tx_csum");
data = nfp_pr_et(data, "hw_tx_inner_csum");
data = nfp_pr_et(data, "tx_gather");
data = nfp_pr_et(data, "tx_lso");
data = nfp_pr_et(data, "tx_tls_encrypted");
data = nfp_pr_et(data, "tx_tls_encrypted_packets");
data = nfp_pr_et(data, "tx_tls_ooo");
data = nfp_pr_et(data, "tx_tls_drop_no_sync_data");

View File

@ -11,7 +11,7 @@ config NET_VENDOR_NI
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about National Instrument devices.
the questions about National Instruments devices.
If you say Y, you will be asked for your specific device in the
following questions.

View File

@ -1,10 +1,10 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Packet engine device configuration
# Packet Engines device configuration
#
config NET_VENDOR_PACKET_ENGINES
bool "Packet Engine devices"
bool "Packet Engines devices"
default y
depends on PCI
---help---
@ -12,7 +12,7 @@ config NET_VENDOR_PACKET_ENGINES
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about packet engine devices. If you say Y, you will
the questions about Packet Engines devices. If you say Y, you will
be asked for your specific card in the following questions.
if NET_VENDOR_PACKET_ENGINES

View File

@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for the Packet Engine network device drivers.
# Makefile for the Packet Engines network device drivers.
#
obj-$(CONFIG_HAMACHI) += hamachi.o

View File

@ -1093,7 +1093,7 @@ static int qed_int_deassertion(struct qed_hwfn *p_hwfn,
snprintf(bit_name, 30,
p_aeu->bit_name, num);
else
strncpy(bit_name,
strlcpy(bit_name,
p_aeu->bit_name, 30);
/* We now need to pass bitmask in its
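A quick demonstration of why the strncpy() to strlcpy() swap above matters: strncpy() leaves the destination unterminated when the source fills the buffer. strlcpy() is not in glibc, so a local equivalent is sketched here:

#include <stdio.h>
#include <string.h>

static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
        size_t len = strlen(src);

        if (size) {
                size_t n = len >= size ? size - 1 : len;

                memcpy(dst, src, n);
                dst[n] = '\0'; /* always terminated */
        }
        return len;
}

int main(void)
{
        char buf[8];

        strncpy(buf, "0123456789", sizeof(buf)); /* no NUL written */
        buf[sizeof(buf) - 1] = '\0';             /* caller must remember this */

        my_strlcpy(buf, "0123456789", sizeof(buf));
        printf("%s\n", buf); /* "0123456" */
        return 0;
}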

View File

@ -442,7 +442,7 @@ static void qed_rdma_init_devinfo(struct qed_hwfn *p_hwfn,
/* Vendor specific information */
dev->vendor_id = cdev->vendor_id;
dev->vendor_part_id = cdev->device_id;
dev->hw_ver = 0;
dev->hw_ver = cdev->chip_rev;
dev->fw_ver = (FW_MAJOR_VERSION << 24) | (FW_MINOR_VERSION << 16) |
(FW_REVISION_VERSION << 8) | (FW_ENGINEERING_VERSION);

View File

@ -206,9 +206,9 @@ rmnet_map_ipv4_ul_csum_header(void *iphdr,
ul_header->csum_insert_offset = skb->csum_offset;
ul_header->csum_enabled = 1;
if (ip4h->protocol == IPPROTO_UDP)
ul_header->udp_ip4_ind = 1;
ul_header->udp_ind = 1;
else
ul_header->udp_ip4_ind = 0;
ul_header->udp_ind = 0;
/* Changing remaining fields to network order */
hdr++;
@ -239,6 +239,7 @@ rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
struct rmnet_map_ul_csum_header *ul_header,
struct sk_buff *skb)
{
struct ipv6hdr *ip6h = (struct ipv6hdr *)ip6hdr;
__be16 *hdr = (__be16 *)ul_header, offset;
offset = htons((__force u16)(skb_transport_header(skb) -
@ -246,7 +247,11 @@ rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
ul_header->csum_start_offset = offset;
ul_header->csum_insert_offset = skb->csum_offset;
ul_header->csum_enabled = 1;
ul_header->udp_ip4_ind = 0;
if (ip6h->nexthdr == IPPROTO_UDP)
ul_header->udp_ind = 1;
else
ul_header->udp_ind = 0;
/* Changing remaining fields to network order */
hdr++;
@ -419,7 +424,7 @@ sw_csum:
ul_header->csum_start_offset = 0;
ul_header->csum_insert_offset = 0;
ul_header->csum_enabled = 0;
ul_header->udp_ip4_ind = 0;
ul_header->udp_ind = 0;
priv->stats.csum_sw++;
}

View File

@ -6136,10 +6136,7 @@ static int r8169_phy_connect(struct rtl8169_private *tp)
if (ret)
return ret;
if (tp->supports_gmii)
phy_remove_link_mode(phydev,
ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
else
if (!tp->supports_gmii)
phy_set_max_speed(phydev, SPEED_100);
phy_support_asym_pause(phydev);
@ -6589,13 +6586,18 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
{
unsigned int flags;
if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_06:
rtl_unlock_config_regs(tp);
RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
rtl_lock_config_regs(tp);
/* fall through */
case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_24:
flags = PCI_IRQ_LEGACY;
} else {
break;
default:
flags = PCI_IRQ_ALL_TYPES;
break;
}
return pci_alloc_irq_vectors(tp->pci_dev, 1, 1, flags);

View File

@ -2208,10 +2208,12 @@ static int rocker_router_fib_event(struct notifier_block *nb,
if (fen_info->fi->fib_nh_is_v6) {
NL_SET_ERR_MSG_MOD(info->extack, "IPv6 gateway with IPv4 route is not supported");
kfree(fib_work);
return notifier_from_errno(-EINVAL);
}
if (fen_info->fi->nh) {
NL_SET_ERR_MSG_MOD(info->extack, "IPv4 route with nexthop objects is not supported");
kfree(fib_work);
return notifier_from_errno(-EINVAL);
}
}

View File

@ -11,7 +11,7 @@ config NET_VENDOR_SAMSUNG
say Y.
Note that the answer to this question does not directly affect
the kernel: saying N will just case the configurator to skip all
the kernel: saying N will just cause the configurator to skip all
the questions about Samsung chipsets. If you say Y, you will be asked
for your specific chipset/driver in the following questions.

View File

@ -712,6 +712,7 @@ static void smc911x_phy_detect(struct net_device *dev)
/* Found an external PHY */
break;
}
/* Else, fall through */
default:
/* Internal media only */
SMC_GET_PHY_ID1(lp, 1, id1);

View File

@ -85,6 +85,8 @@ static void dwmac4_rx_queue_priority(struct mac_device_info *hw,
u32 value;
base_register = (queue < 4) ? GMAC_RXQ_CTRL2 : GMAC_RXQ_CTRL3;
if (queue >= 4)
queue -= 4;
value = readl(ioaddr + base_register);
@ -102,6 +104,8 @@ static void dwmac4_tx_queue_priority(struct mac_device_info *hw,
u32 value;
base_register = (queue < 4) ? GMAC_TXQ_PRTY_MAP0 : GMAC_TXQ_PRTY_MAP1;
if (queue >= 4)
queue -= 4;
value = readl(ioaddr + base_register);
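These two hunks fix the same indexing bug: the priority fields for queues 4-7 live in a second register, so the queue number must be rebased before computing the field position. In isolation (register offsets and 8-bit fields are illustrative):

#include <stdio.h>

int main(void)
{
        unsigned int queue = 5;
        unsigned int reg = (queue < 4) ? 0xa4 : 0xa8; /* illustrative offsets */

        if (queue >= 4)
                queue -= 4; /* field index within the second register */

        printf("reg=0x%x shift=%u\n", reg, queue * 8); /* reg=0xa8 shift=8 */
        return 0;
}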

View File

@ -44,11 +44,13 @@
#define XGMAC_CORE_INIT_RX 0
#define XGMAC_PACKET_FILTER 0x00000008
#define XGMAC_FILTER_RA BIT(31)
#define XGMAC_FILTER_HPF BIT(10)
#define XGMAC_FILTER_PCF BIT(7)
#define XGMAC_FILTER_PM BIT(4)
#define XGMAC_FILTER_HMC BIT(2)
#define XGMAC_FILTER_PR BIT(0)
#define XGMAC_HASH_TABLE(x) (0x00000010 + (x) * 4)
#define XGMAC_MAX_HASH_TABLE 8
#define XGMAC_RXQ_CTRL0 0x000000a0
#define XGMAC_RXQEN(x) GENMASK((x) * 2 + 1, (x) * 2)
#define XGMAC_RXQEN_SHIFT(x) ((x) * 2)
@ -99,11 +101,12 @@
#define XGMAC_MDIO_ADDR 0x00000200
#define XGMAC_MDIO_DATA 0x00000204
#define XGMAC_MDIO_C22P 0x00000220
#define XGMAC_ADDR0_HIGH 0x00000300
#define XGMAC_ADDRx_HIGH(x) (0x00000300 + (x) * 0x8)
#define XGMAC_ADDR_MAX 32
#define XGMAC_AE BIT(31)
#define XGMAC_DCS GENMASK(19, 16)
#define XGMAC_DCS_SHIFT 16
#define XGMAC_ADDR0_LOW 0x00000304
#define XGMAC_ADDRx_LOW(x) (0x00000304 + (x) * 0x8)
#define XGMAC_ARP_ADDR 0x00000c10
#define XGMAC_TIMESTAMP_STATUS 0x00000d20
#define XGMAC_TXTSC BIT(15)

View File

@ -4,6 +4,8 @@
* stmmac XGMAC support.
*/
#include <linux/bitrev.h>
#include <linux/crc32.h>
#include "stmmac.h"
#include "dwxgmac2.h"
@ -106,6 +108,8 @@ static void dwxgmac2_rx_queue_prio(struct mac_device_info *hw, u32 prio,
u32 value, reg;
reg = (queue < 4) ? XGMAC_RXQ_CTRL2 : XGMAC_RXQ_CTRL3;
if (queue >= 4)
queue -= 4;
value = readl(ioaddr + reg);
value &= ~XGMAC_PSRQ(queue);
@ -169,6 +173,8 @@ static void dwxgmac2_map_mtl_to_dma(struct mac_device_info *hw, u32 queue,
u32 value, reg;
reg = (queue < 4) ? XGMAC_MTL_RXQ_DMA_MAP0 : XGMAC_MTL_RXQ_DMA_MAP1;
if (queue >= 4)
queue -= 4;
value = readl(ioaddr + reg);
value &= ~XGMAC_QxMDMACH(queue);
@ -278,10 +284,10 @@ static void dwxgmac2_set_umac_addr(struct mac_device_info *hw,
u32 value;
value = (addr[5] << 8) | addr[4];
writel(value | XGMAC_AE, ioaddr + XGMAC_ADDR0_HIGH);
writel(value | XGMAC_AE, ioaddr + XGMAC_ADDRx_HIGH(reg_n));
value = (addr[3] << 24) | (addr[2] << 16) | (addr[1] << 8) | addr[0];
writel(value, ioaddr + XGMAC_ADDR0_LOW);
writel(value, ioaddr + XGMAC_ADDRx_LOW(reg_n));
}
static void dwxgmac2_get_umac_addr(struct mac_device_info *hw,
@ -291,8 +297,8 @@ static void dwxgmac2_get_umac_addr(struct mac_device_info *hw,
u32 hi_addr, lo_addr;
/* Read the MAC address from the hardware */
hi_addr = readl(ioaddr + XGMAC_ADDR0_HIGH);
lo_addr = readl(ioaddr + XGMAC_ADDR0_LOW);
hi_addr = readl(ioaddr + XGMAC_ADDRx_HIGH(reg_n));
lo_addr = readl(ioaddr + XGMAC_ADDRx_LOW(reg_n));
/* Extract the MAC address from the high and low words */
addr[0] = lo_addr & 0xff;
@ -303,19 +309,82 @@ static void dwxgmac2_get_umac_addr(struct mac_device_info *hw,
addr[5] = (hi_addr >> 8) & 0xff;
}
static void dwxgmac2_set_mchash(void __iomem *ioaddr, u32 *mcfilterbits,
int mcbitslog2)
{
int numhashregs, regs;
switch (mcbitslog2) {
case 6:
numhashregs = 2;
break;
case 7:
numhashregs = 4;
break;
case 8:
numhashregs = 8;
break;
default:
return;
}
for (regs = 0; regs < numhashregs; regs++)
writel(mcfilterbits[regs], ioaddr + XGMAC_HASH_TABLE(regs));
}
static void dwxgmac2_set_filter(struct mac_device_info *hw,
struct net_device *dev)
{
void __iomem *ioaddr = (void __iomem *)dev->base_addr;
u32 value = XGMAC_FILTER_RA;
u32 value = readl(ioaddr + XGMAC_PACKET_FILTER);
int mcbitslog2 = hw->mcast_bits_log2;
u32 mc_filter[8];
int i;
value &= ~(XGMAC_FILTER_PR | XGMAC_FILTER_HMC | XGMAC_FILTER_PM);
value |= XGMAC_FILTER_HPF;
memset(mc_filter, 0, sizeof(mc_filter));
if (dev->flags & IFF_PROMISC) {
value |= XGMAC_FILTER_PR | XGMAC_FILTER_PCF;
value |= XGMAC_FILTER_PR;
value |= XGMAC_FILTER_PCF;
} else if ((dev->flags & IFF_ALLMULTI) ||
(netdev_mc_count(dev) > HASH_TABLE_SIZE)) {
(netdev_mc_count(dev) > hw->multicast_filter_bins)) {
value |= XGMAC_FILTER_PM;
writel(~0x0, ioaddr + XGMAC_HASH_TABLE(0));
writel(~0x0, ioaddr + XGMAC_HASH_TABLE(1));
for (i = 0; i < XGMAC_MAX_HASH_TABLE; i++)
writel(~0x0, ioaddr + XGMAC_HASH_TABLE(i));
} else if (!netdev_mc_empty(dev)) {
struct netdev_hw_addr *ha;
value |= XGMAC_FILTER_HMC;
netdev_for_each_mc_addr(ha, dev) {
int nr = (bitrev32(~crc32_le(~0, ha->addr, 6)) >>
(32 - mcbitslog2));
mc_filter[nr >> 5] |= (1 << (nr & 0x1F));
}
}
dwxgmac2_set_mchash(ioaddr, mc_filter, mcbitslog2);
/* Handle multiple unicast addresses */
if (netdev_uc_count(dev) > XGMAC_ADDR_MAX) {
value |= XGMAC_FILTER_PR;
} else {
struct netdev_hw_addr *ha;
int reg = 1;
netdev_for_each_uc_addr(ha, dev) {
dwxgmac2_set_umac_addr(hw, ha->addr, reg);
reg++;
}
for ( ; reg < XGMAC_ADDR_MAX; reg++) {
writel(0, ioaddr + XGMAC_ADDRx_HIGH(reg));
writel(0, ioaddr + XGMAC_ADDRx_LOW(reg));
}
}
writel(value, ioaddr + XGMAC_PACKET_FILTER);
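The rewritten filter above hashes each multicast address into one bit of the hash-table registers. A standalone sketch of the bucket computation, with a minimal bitwise crc32_le standing in for the kernel helper:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32_le(uint32_t crc, const uint8_t *p, size_t len)
{
        while (len--) {
                crc ^= *p++;
                for (int i = 0; i < 8; i++)
                        crc = (crc >> 1) ^ (0xedb88320u & -(crc & 1));
        }
        return crc;
}

static uint32_t bitrev32(uint32_t x)
{
        uint32_t r = 0;

        for (int i = 0; i < 32; i++)
                r |= ((x >> i) & 1u) << (31 - i);
        return r;
}

int main(void)
{
        const uint8_t mac[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };
        int mcbitslog2 = 8; /* 256 buckets -> 8 registers of 32 bits */
        int nr = bitrev32(~crc32_le(~0u, mac, 6)) >> (32 - mcbitslog2);

        /* register index and bit within it, as in mc_filter[nr >> 5] */
        printf("bucket=%d reg=%d bit=%d\n", nr, nr >> 5, nr & 0x1f);
        return 0;
}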

View File

@ -814,20 +814,15 @@ static void stmmac_validate(struct phylink_config *config,
phylink_set(mac_supported, 10baseT_Full);
phylink_set(mac_supported, 100baseT_Half);
phylink_set(mac_supported, 100baseT_Full);
phylink_set(mac_supported, 1000baseT_Half);
phylink_set(mac_supported, 1000baseT_Full);
phylink_set(mac_supported, 1000baseKX_Full);
phylink_set(mac_supported, Autoneg);
phylink_set(mac_supported, Pause);
phylink_set(mac_supported, Asym_Pause);
phylink_set_port_modes(mac_supported);
if (priv->plat->has_gmac ||
priv->plat->has_gmac4 ||
priv->plat->has_xgmac) {
phylink_set(mac_supported, 1000baseT_Half);
phylink_set(mac_supported, 1000baseT_Full);
phylink_set(mac_supported, 1000baseKX_Full);
}
/* Cut down 1G if asked to */
if ((max_speed > 0) && (max_speed < 1000)) {
phylink_set(mask, 1000baseT_Full);
@ -1295,6 +1290,8 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
"(%s) dma_rx_phy=0x%08x\n", __func__,
(u32)rx_q->dma_rx_phy);
stmmac_clear_rx_descriptors(priv, queue);
for (i = 0; i < DMA_RX_SIZE; i++) {
struct dma_desc *p;
@ -1312,8 +1309,6 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
rx_q->cur_rx = 0;
rx_q->dirty_rx = (unsigned int)(i - DMA_RX_SIZE);
stmmac_clear_rx_descriptors(priv, queue);
/* Setup the chained descriptor addresses */
if (priv->mode == STMMAC_CHAIN_MODE) {
if (priv->extend_desc)
@ -1555,9 +1550,8 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
goto err_dma;
}
rx_q->buf_pool = kmalloc_array(DMA_RX_SIZE,
sizeof(*rx_q->buf_pool),
GFP_KERNEL);
rx_q->buf_pool = kcalloc(DMA_RX_SIZE, sizeof(*rx_q->buf_pool),
GFP_KERNEL);
if (!rx_q->buf_pool)
goto err_dma;
@ -1608,15 +1602,15 @@ static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv)
tx_q->queue_index = queue;
tx_q->priv_data = priv;
tx_q->tx_skbuff_dma = kmalloc_array(DMA_TX_SIZE,
sizeof(*tx_q->tx_skbuff_dma),
GFP_KERNEL);
tx_q->tx_skbuff_dma = kcalloc(DMA_TX_SIZE,
sizeof(*tx_q->tx_skbuff_dma),
GFP_KERNEL);
if (!tx_q->tx_skbuff_dma)
goto err_dma;
tx_q->tx_skbuff = kmalloc_array(DMA_TX_SIZE,
sizeof(struct sk_buff *),
GFP_KERNEL);
tx_q->tx_skbuff = kcalloc(DMA_TX_SIZE,
sizeof(struct sk_buff *),
GFP_KERNEL);
if (!tx_q->tx_skbuff)
goto err_dma;
@ -3277,9 +3271,11 @@ static inline int stmmac_rx_threshold_count(struct stmmac_rx_queue *rx_q)
static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
{
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
int dirty = stmmac_rx_dirty(priv, queue);
int len, dirty = stmmac_rx_dirty(priv, queue);
unsigned int entry = rx_q->dirty_rx;
len = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE) * PAGE_SIZE;
while (dirty-- > 0) {
struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
struct dma_desc *p;
@ -3297,6 +3293,13 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
}
buf->addr = page_pool_get_dma_addr(buf->page);
/* Sync whole allocation to device. This will invalidate old
* data.
*/
dma_sync_single_for_device(priv->device, buf->addr, len,
DMA_FROM_DEVICE);
stmmac_set_desc_addr(priv, p, buf->addr);
stmmac_refill_desc3(priv, rx_q, p);
@ -3431,8 +3434,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
skb_copy_to_linear_data(skb, page_address(buf->page),
frame_len);
skb_put(skb, frame_len);
dma_sync_single_for_device(priv->device, buf->addr,
frame_len, DMA_FROM_DEVICE);
if (netif_msg_pktdata(priv)) {
netdev_dbg(priv->dev, "frame received (%dbytes)",
@ -4319,8 +4320,9 @@ int stmmac_dvr_probe(struct device *device,
NAPI_POLL_WEIGHT);
}
if (queue < priv->plat->tx_queues_to_use) {
netif_napi_add(ndev, &ch->tx_napi, stmmac_napi_poll_tx,
NAPI_POLL_WEIGHT);
netif_tx_napi_add(ndev, &ch->tx_napi,
stmmac_napi_poll_tx,
NAPI_POLL_WEIGHT);
}
}
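The refill path above now syncs the whole page-pool allocation back to the device, with the length rounded up to whole pages. The rounding in isolation:

#include <stdio.h>

#define PAGE_SIZE 4096u
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned int dma_buf_sz = 1536; /* typical MTU-sized buffer */
        unsigned int len = DIV_ROUND_UP(dma_buf_sz, PAGE_SIZE) * PAGE_SIZE;

        printf("%u\n", len); /* 4096: the full page is made device-visible */
        return 0;
}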

View File

@ -370,6 +370,13 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
return ERR_PTR(-ENOMEM);
*mac = of_get_mac_address(np);
if (IS_ERR(*mac)) {
if (PTR_ERR(*mac) == -EPROBE_DEFER)
return ERR_CAST(*mac);
*mac = NULL;
}
plat->interface = of_get_phy_mode(np);
/* Some wrapper drivers still rely on phy_node. Let's save it while

View File

@ -37,7 +37,7 @@ static struct stmmac_tc_entry *tc_find_entry(struct stmmac_priv *priv,
entry = &priv->tc_entries[i];
if (!entry->in_use && !first && free)
first = entry;
if (entry->handle == loc && !free)
if ((entry->handle == loc) && !free && !entry->is_frag)
dup = entry;
}

View File

@ -788,6 +788,7 @@ spider_net_release_tx_chain(struct spider_net_card *card, int brutal)
/* fallthrough, if we release the descriptors
* brutally (then we don't care about
* SPIDER_NET_DESCR_CARDOWNED) */
/* Fall through */
case SPIDER_NET_DESCR_RESPONSE_ERROR:
case SPIDER_NET_DESCR_PROTECTION_ERROR:

View File

@ -13,7 +13,7 @@ config NET_VENDOR_XSCALE
Note that the answer to this question does not directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about XSacle IXP devices. If you say Y, you will be
the questions about XScale IXP devices. If you say Y, you will be
asked for your specific card in the following questions.
if NET_VENDOR_XSCALE

View File

@ -500,8 +500,9 @@ static int transmit(struct baycom_state *bc, int cnt, unsigned char stat)
}
break;
}
/* fall through */
default: /* fall through */
default:
if (bc->hdlctx.calibrate <= 0)
return 0;
i = min_t(int, cnt, bc->hdlctx.calibrate);

View File

@ -216,8 +216,10 @@ static struct gpio_desc *fixed_phy_get_gpiod(struct device_node *np)
if (IS_ERR(gpiod)) {
if (PTR_ERR(gpiod) == -EPROBE_DEFER)
return gpiod;
pr_err("error getting GPIO for fixed link %pOF, proceed without\n",
fixed_link_node);
if (PTR_ERR(gpiod) != -ENOENT)
pr_err("error getting GPIO for fixed link %pOF, proceed without\n",
fixed_link_node);
gpiod = NULL;
}

View File

@ -2226,8 +2226,8 @@ static int vsc8514_probe(struct phy_device *phydev)
vsc8531->supp_led_modes = VSC85XX_SUPP_LED_MODES;
vsc8531->hw_stats = vsc85xx_hw_stats;
vsc8531->nstats = ARRAY_SIZE(vsc85xx_hw_stats);
vsc8531->stats = devm_kmalloc_array(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
vsc8531->stats = devm_kcalloc(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
if (!vsc8531->stats)
return -ENOMEM;
@ -2251,8 +2251,8 @@ static int vsc8574_probe(struct phy_device *phydev)
vsc8531->supp_led_modes = VSC8584_SUPP_LED_MODES;
vsc8531->hw_stats = vsc8584_hw_stats;
vsc8531->nstats = ARRAY_SIZE(vsc8584_hw_stats);
vsc8531->stats = devm_kmalloc_array(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
vsc8531->stats = devm_kcalloc(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
if (!vsc8531->stats)
return -ENOMEM;
@ -2281,8 +2281,8 @@ static int vsc8584_probe(struct phy_device *phydev)
vsc8531->supp_led_modes = VSC8584_SUPP_LED_MODES;
vsc8531->hw_stats = vsc8584_hw_stats;
vsc8531->nstats = ARRAY_SIZE(vsc8584_hw_stats);
vsc8531->stats = devm_kmalloc_array(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
vsc8531->stats = devm_kcalloc(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
if (!vsc8531->stats)
return -ENOMEM;
@ -2311,8 +2311,8 @@ static int vsc85xx_probe(struct phy_device *phydev)
vsc8531->supp_led_modes = VSC85XX_SUPP_LED_MODES;
vsc8531->hw_stats = vsc85xx_hw_stats;
vsc8531->nstats = ARRAY_SIZE(vsc85xx_hw_stats);
vsc8531->stats = devm_kmalloc_array(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
vsc8531->stats = devm_kcalloc(&phydev->mdio.dev, vsc8531->nstats,
sizeof(u64), GFP_KERNEL);
if (!vsc8531->stats)
return -ENOMEM;

View File

@ -1774,6 +1774,12 @@ done:
phydev->link = status & BMSR_LSTATUS ? 1 : 0;
phydev->autoneg_complete = status & BMSR_ANEGCOMPLETE ? 1 : 0;
/* Consider the case that autoneg was started and "aneg complete"
* bit has been reset, but "link up" bit not yet.
*/
if (phydev->autoneg == AUTONEG_ENABLE && !phydev->autoneg_complete)
phydev->link = 0;
return 0;
}
EXPORT_SYMBOL(genphy_update_link);
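The race fix above, in condensed form: while a restarted autonegotiation is still incomplete, a stale "link up" status bit must not be believed. A sketch:

#include <stdbool.h>
#include <stdio.h>

static bool report_link(bool autoneg_on, bool aneg_complete, bool link_bit)
{
        bool link = link_bit;

        if (autoneg_on && !aneg_complete)
                link = false; /* aneg restarted; BMSR link bit is stale */
        return link;
}

int main(void)
{
        printf("%d\n", report_link(true, false, true)); /* 0 */
        return 0;
}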

View File

@ -48,8 +48,9 @@ void phy_led_trigger_change_speed(struct phy_device *phy)
if (!phy->last_triggered)
led_trigger_event(&phy->led_link_trigger->trigger,
LED_FULL);
else
led_trigger_event(&phy->last_triggered->trigger, LED_OFF);
led_trigger_event(&phy->last_triggered->trigger, LED_OFF);
led_trigger_event(&plt->trigger, LED_FULL);
phy->last_triggered = plt;
}

View File

@ -216,6 +216,8 @@ static int phylink_parse_fixedlink(struct phylink *pl,
pl->supported, true);
linkmode_zero(pl->supported);
phylink_set(pl->supported, MII);
phylink_set(pl->supported, Pause);
phylink_set(pl->supported, Asym_Pause);
if (s) {
__set_bit(s->bit, pl->supported);
} else {
@ -990,10 +992,10 @@ void phylink_start(struct phylink *pl)
}
if (pl->link_an_mode == MLO_AN_FIXED && pl->get_fixed_state)
mod_timer(&pl->link_poll, jiffies + HZ);
if (pl->sfp_bus)
sfp_upstream_start(pl->sfp_bus);
if (pl->phydev)
phy_start(pl->phydev);
if (pl->sfp_bus)
sfp_upstream_start(pl->sfp_bus);
}
EXPORT_SYMBOL_GPL(phylink_start);
@ -1010,10 +1012,10 @@ void phylink_stop(struct phylink *pl)
{
ASSERT_RTNL();
if (pl->phydev)
phy_stop(pl->phydev);
if (pl->sfp_bus)
sfp_upstream_stop(pl->sfp_bus);
if (pl->phydev)
phy_stop(pl->phydev);
del_timer_sync(&pl->link_poll);
if (pl->link_irq) {
free_irq(pl->link_irq, pl);

View File

@ -1115,6 +1115,9 @@ static const struct proto_ops pppoe_ops = {
.recvmsg = pppoe_recvmsg,
.mmap = sock_no_mmap,
.ioctl = pppox_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = pppox_compat_ioctl,
#endif
};
static const struct pppox_proto pppoe_proto = {

View File

@ -17,6 +17,7 @@
#include <linux/string.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/compat.h>
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/net.h>
@ -98,6 +99,18 @@ int pppox_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
EXPORT_SYMBOL(pppox_ioctl);
#ifdef CONFIG_COMPAT
int pppox_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
{
if (cmd == PPPOEIOCSFWD32)
cmd = PPPOEIOCSFWD;
return pppox_ioctl(sock, cmd, (unsigned long)compat_ptr(arg));
}
EXPORT_SYMBOL(pppox_compat_ioctl);
#endif
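The compat handler above follows the usual 32-bit-ioctl pattern: remap the 32-bit command to its native value and widen the user pointer before reusing the native handler. A user-space sketch with made-up command values:

#include <stdint.h>
#include <stdio.h>

#define NATIVE_IOCSFWD 0x800 /* illustrative native cmd */
#define IOCSFWD32      0x400 /* illustrative 32-bit cmd */

static int native_ioctl(unsigned int cmd, void *arg)
{
        printf("cmd=0x%x arg=%p\n", cmd, arg);
        return 0;
}

static int compat_ioctl(unsigned int cmd, uint32_t arg32)
{
        if (cmd == IOCSFWD32)
                cmd = NATIVE_IOCSFWD;
        /* stand-in for compat_ptr(): widen the 32-bit user pointer */
        return native_ioctl(cmd, (void *)(uintptr_t)arg32);
}

int main(void)
{
        return compat_ioctl(IOCSFWD32, 0x1000);
}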
static int pppox_create(struct net *net, struct socket *sock, int protocol,
int kern)
{

Some files were not shown because too many files have changed in this diff.