
Merge git://git.kernel.org:/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

 1) Fix transmissions in dynamic SMPS mode in ath9k, from Felix Fietkau.

 2) TX skb error handling fix in mt76 driver, also from Felix.

 3) Fix BPF_FETCH atomic in x86 JIT, from Brendan Jackman.

 4) Avoid double free of percpu pointers when freeing a cloned bpf prog.
    From Cong Wang.

 5) Use correct printf format for dma_addr_t in ath11k, from Geert
    Uytterhoeven.

 6) Fix resolve_btfids build with older toolchains, from Kun-Chuan
    Hsieh.

 7) Don't report truncated frames to mac80211 in mt76 driver, from
    Lorenzo Bianconi.

 8) Fix watchdog timeout on suspend/resume of stmmac, from Joakim Zhang.

 9) mscc ocelot needs NET_DEVLINK select in Kconfig, from Arnd Bergmann.

10) Fix sign comparison bug in TCP_ZEROCOPY_RECEIVE getsockopt(), from
    Arjun Roy.

11) Ignore routes with deleted nexthop object in mlxsw, from Ido
    Schimmel.

12) Need to undo tcp early demux lookup sometimes in nf_nat, from
    Florian Westphal.

13) Fix gro aggregation for udp encaps with zero csum, from Daniel
    Borkmann.

14) Make sure to always use icmp*_ndo_send when necessary, from Jason A.
    Donenfeld.

15) Fix TRSCER masks in sh_eth driver, from Sergey Shtylyov.

16) Prevent overly huge skb allocations in qrtr, from Pavel Skripkin.

17) Prevent rx ring consumer index loss of sync in enetc, from Vladimir
    Oltean.

18) Make sure textsearch control block is large enough, from Willem de
    Bruijn.

19) Revert MAC changes to r8152 leading to instability, from Hates Wang.

20) Advance iov in 9p even for empty reads, from Jisheng Zhang.

21) Double hook unregister in nftables, from Pablo Neira Ayuso.

22) Fix memleak in ixgbe, from Dinghao Liu.

23) Avoid dups in pkt scheduler class dumps, from Maximilian Heyne.

24) Various mptcp fixes from Florian Westphal, Paolo Abeni, and Geliang
    Tang.

25) Fix DOI refcount bugs in cipso, from Paul Moore.

26) One too many irqsave in ibmvnic, from Junlin Yang.

27) Fix infinite loop with MPLS gso segmenting via virtio_net, from
    Balazs Nemeth.

* git://git.kernel.org:/pub/scm/linux/kernel/git/netdev/net: (164 commits)
  s390/qeth: fix notification for pending buffers during teardown
  s390/qeth: schedule TX NAPI on QAOB completion
  s390/qeth: improve completion of pending TX buffers
  s390/qeth: fix memory leak after failed TX Buffer allocation
  net: avoid infinite loop in mpls_gso_segment when mpls_hlen == 0
  net: check if protocol extracted by virtio_net_hdr_set_proto is correct
  net: dsa: xrs700x: check if partner is same as port in hsr join
  net: lapbether: Remove netif_start_queue / netif_stop_queue
  atm: idt77252: fix null-ptr-dereference
  atm: uPD98402: fix incorrect allocation
  atm: fix a typo in the struct description
  net: qrtr: fix error return code of qrtr_sendmsg()
  mptcp: fix length of ADD_ADDR with port sub-option
  net: bonding: fix error return code of bond_neigh_init()
  net: enetc: allow hardware timestamping on TX queues with tc-etf enabled
  net: enetc: set MAC RX FIFO to recommended value
  net: davicom: Use platform_get_irq_optional()
  net: davicom: Fix regulator not turned off on driver removal
  net: davicom: Fix regulator not turned off on failed probe
  net: dsa: fix switchdev objects on bridge master mistakenly being applied on ports
  ...
Linus Torvalds 2021-03-09 17:15:56 -08:00
commit 05a59d7979
159 changed files with 1457 additions and 787 deletions

View File

@@ -1988,7 +1988,7 @@ netif_carrier.
 If use_carrier is 0, then the MII monitor will first query the
 device's (via ioctl) MII registers and check the link state.  If that
 request fails (not just that it returns carrier down), then the MII
-monitor will make an ethtool ETHOOL_GLINK request to attempt to obtain
+monitor will make an ethtool ETHTOOL_GLINK request to attempt to obtain
 the same information.  If both methods fail (i.e., the driver either
 does not support or had some error in processing both the MII register
 and ethtool requests), then the MII monitor will assume the link is

View File

@@ -142,73 +142,13 @@ Please send incremental versions on top of what has been merged in order to fix
 the patches the way they would look like if your latest patch series was to be
 merged.
 
-How can I tell what patches are queued up for backporting to the various stable releases?
------------------------------------------------------------------------------------------
-Normally Greg Kroah-Hartman collects stable commits himself, but for
-networking, Dave collects up patches he deems critical for the
-networking subsystem, and then hands them off to Greg.
-
-There is a patchworks queue that you can see here:
-
-  https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-
-It contains the patches which Dave has selected, but not yet handed off
-to Greg.  If Greg already has the patch, then it will be here:
-
-  https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-
-A quick way to find whether the patch is in this stable-queue is to
-simply clone the repo, and then git grep the mainline commit ID, e.g.
-::
-
-  stable-queue$ git grep -l 284041ef21fdf2e
-  releases/3.0.84/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.4.51/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  stable/stable-queue$
-
-I see a network patch and I think it should be backported to stable. Should I request it via stable@vger.kernel.org like the references in the kernel's Documentation/process/stable-kernel-rules.rst file say?
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No, not for networking.  Check the stable queues as per above first
-to see if it is already queued.  If not, then send a mail to netdev,
-listing the upstream commit ID and why you think it should be a stable
-candidate.
-
-Before you jump to go do the above, do note that the normal stable rules
-in :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
-still apply.  So you need to explicitly indicate why it is a critical
-fix and exactly what users are impacted.  In addition, you need to
-convince yourself that you *really* think it has been overlooked,
-vs. having been considered and rejected.
-
-Generally speaking, the longer it has had a chance to "soak" in
-mainline, the better the odds that it is an OK candidate for stable.  So
-scrambling to request a commit be added the day after it appears should
-be avoided.
-
-I have created a network patch and I think it should be backported to stable. Should I add a Cc: stable@vger.kernel.org like the references in the kernel's Documentation/ directory say?
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No.  See above answer.  In short, if you think it really belongs in
-stable, then ensure you write a decent commit log that describes who
-gets impacted by the bug fix and how it manifests itself, and when the
-bug was introduced.  If you do that properly, then the commit will get
-handled appropriately and most likely get put in the patchworks stable
-queue if it really warrants it.
-
-If you think there is some valid information relating to it being in
-stable that does *not* belong in the commit log, then use the three dash
-marker line as described in
-:ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>`
-to temporarily embed that information into the patch that you send.
-
-Are all networking bug fixes backported to all stable releases?
+Are there special rules regarding stable submissions on netdev?
 ---------------------------------------------------------------
-Due to capacity, Dave could only take care of the backports for the
-last two stable releases. For earlier stable releases, each stable
-branch maintainer is supposed to take care of them. If you find any
-patch is missing from an earlier stable branch, please notify
-stable@vger.kernel.org with either a commit ID or a formal patch
-backported, and CC Dave and other relevant networking developers.
+While it used to be the case that netdev submissions were not supposed
+to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
+the case today. Please follow the standard stable rules in
+:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
+and make sure you include appropriate Fixes tags!
 
 Is the comment style convention different for the networking content?
 ---------------------------------------------------------------------

View File

@@ -35,12 +35,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the
 Procedure for submitting patches to the -stable tree
 ----------------------------------------------------
 
- - If the patch covers files in net/ or drivers/net please follow netdev stable
-   submission guidelines as described in
-   :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
-   after first checking the stable networking queue at
-   https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-   to ensure the requested patch is not already queued up.
 - Security patches should not be handled (solely) by the -stable review
   process but should follow the procedures in
   :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.

View File

@@ -250,11 +250,6 @@ should also read
 :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
 in addition to this file.
 
-Note, however, that some subsystem maintainers want to come to their own
-conclusions on which patches should go to the stable trees.  The networking
-maintainer, in particular, would rather not see individual developers
-adding lines like the above to their patches.
-
 If changes affect userland-kernel interfaces, please send the MAN-PAGES
 maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
 least a notification of the change, so that some information makes its way

View File

@@ -10716,7 +10716,8 @@ F:	drivers/net/ethernet/marvell/mvpp2/
 
 MARVELL MWIFIEX WIRELESS DRIVER
 M:	Amitkumar Karwar <amitkarwar@gmail.com>
-M:	Ganapathi Bhat <ganapathi.bhat@nxp.com>
+M:	Ganapathi Bhat <ganapathi017@gmail.com>
+M:	Sharvari Harisangam <sharvari.harisangam@nxp.com>
 M:	Xinming Hu <huxinming820@gmail.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained

View File

@@ -1349,6 +1349,7 @@ st:			if (is_imm8(insn->off))
 			    insn->imm == (BPF_XOR | BPF_FETCH)) {
 				u8 *branch_target;
 				bool is64 = BPF_SIZE(insn->code) == BPF_DW;
+				u32 real_src_reg = src_reg;
 
 				/*
 				 * Can't be implemented with a single x86 insn.
@@ -1357,6 +1358,9 @@ st:			if (is_imm8(insn->off))
 
 				/* Will need RAX as a CMPXCHG operand so save R0 */
 				emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0);
+				if (src_reg == BPF_REG_0)
+					real_src_reg = BPF_REG_AX;
+
 				branch_target = prog;
 				/* Load old value */
 				emit_ldx(&prog, BPF_SIZE(insn->code),
@@ -1366,9 +1370,9 @@ st:			if (is_imm8(insn->off))
 				 * put the result in the AUX_REG.
 				 */
 				emit_mov_reg(&prog, is64, AUX_REG, BPF_REG_0);
-				maybe_emit_mod(&prog, AUX_REG, src_reg, is64);
+				maybe_emit_mod(&prog, AUX_REG, real_src_reg, is64);
 				EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)],
-				      add_2reg(0xC0, AUX_REG, src_reg));
+				      add_2reg(0xC0, AUX_REG, real_src_reg));
 				/* Attempt to swap in new value */
 				err = emit_atomic(&prog, BPF_CMPXCHG,
 						  dst_reg, AUX_REG, insn->off,
@@ -1381,7 +1385,7 @@ st:			if (is_imm8(insn->off))
 				 */
 				EMIT2(X86_JNE, -(prog - branch_target) - 2);
 				/* Return the pre-modification value */
-				emit_mov_reg(&prog, is64, src_reg, BPF_REG_0);
+				emit_mov_reg(&prog, is64, real_src_reg, BPF_REG_0);
 				/* Restore R0 after clobbering RAX */
 				emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
 				break;

View File

@@ -2260,7 +2260,8 @@ out:
 	return rc;
 
 err_eni_release:
-	eni_do_release(dev);
+	dev->phy = NULL;
+	iounmap(ENI_DEV(dev)->ioaddr);
 err_unregister:
 	atm_dev_deregister(dev);
 err_free_consistent:

View File

@@ -262,7 +262,7 @@ static int idt77105_start(struct atm_dev *dev)
 {
 	unsigned long flags;
 
-	if (!(dev->dev_data = kmalloc(sizeof(struct idt77105_priv),GFP_KERNEL)))
+	if (!(dev->phy_data = kmalloc(sizeof(struct idt77105_priv),GFP_KERNEL)))
 		return -ENOMEM;
 	PRIV(dev)->dev = dev;
 	spin_lock_irqsave(&idt77105_priv_lock, flags);
@@ -337,7 +337,7 @@ static int idt77105_stop(struct atm_dev *dev)
 			else
 				idt77105_all = walk->next;
 			dev->phy = NULL;
-			dev->dev_data = NULL;
+			dev->phy_data = NULL;
 			kfree(walk);
 			break;
 		}

View File

@@ -2233,6 +2233,7 @@ static int lanai_dev_open(struct atm_dev *atmdev)
 	conf1_write(lanai);
 #endif
 	iounmap(lanai->base);
+	lanai->base = NULL;
 error_pci:
 	pci_disable_device(lanai->pci);
 error:
@@ -2245,6 +2246,8 @@ static int lanai_dev_open(struct atm_dev *atmdev)
 static void lanai_dev_close(struct atm_dev *atmdev)
 {
 	struct lanai_dev *lanai = (struct lanai_dev *) atmdev->dev_data;
+	if (lanai->base==NULL)
+		return;
 	printk(KERN_INFO DEV_LABEL "(itf %d): shutting down interface\n",
 	    lanai->number);
 	lanai_timed_poll_stop(lanai);
@@ -2552,7 +2555,7 @@ static int lanai_init_one(struct pci_dev *pci,
 	struct atm_dev *atmdev;
 	int result;
 
-	lanai = kmalloc(sizeof(*lanai), GFP_KERNEL);
+	lanai = kzalloc(sizeof(*lanai), GFP_KERNEL);
 	if (lanai == NULL) {
 		printk(KERN_ERR DEV_LABEL
 		       ": couldn't allocate dev_data structure!\n");

View File

@@ -211,7 +211,7 @@ static void uPD98402_int(struct atm_dev *dev)
 static int uPD98402_start(struct atm_dev *dev)
 {
 	DPRINTK("phy_start\n");
-	if (!(dev->dev_data = kmalloc(sizeof(struct uPD98402_priv),GFP_KERNEL)))
+	if (!(dev->phy_data = kmalloc(sizeof(struct uPD98402_priv),GFP_KERNEL)))
 		return -ENOMEM;
 	spin_lock_init(&PRIV(dev)->lock);
 	memset(&PRIV(dev)->sonet_stats,0,sizeof(struct k_sonet_stats));

View File

@@ -3978,11 +3978,15 @@ static int bond_neigh_init(struct neighbour *n)
 
 	rcu_read_lock();
 	slave = bond_first_slave_rcu(bond);
-	if (!slave)
+	if (!slave) {
+		ret = -EINVAL;
 		goto out;
+	}
 	slave_ops = slave->dev->netdev_ops;
-	if (!slave_ops->ndo_neigh_setup)
+	if (!slave_ops->ndo_neigh_setup) {
+		ret = -EINVAL;
 		goto out;
+	}
 
 	/* TODO: find another way [1] to implement this.
 	 * Passing a zeroed structure is fragile,

View File

@@ -701,7 +701,7 @@ static int flexcan_chip_freeze(struct flexcan_priv *priv)
 	u32 reg;
 
 	reg = priv->read(&regs->mcr);
-	reg |= FLEXCAN_MCR_HALT;
+	reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
 	priv->write(reg, &regs->mcr);
 
 	while (timeout-- && !(priv->read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
@@ -1480,10 +1480,13 @@ static int flexcan_chip_start(struct net_device *dev)
 
 	flexcan_set_bittiming(dev);
 
+	/* set freeze, halt */
+	err = flexcan_chip_freeze(priv);
+	if (err)
+		goto out_chip_disable;
+
 	/* MCR
 	 *
-	 * enable freeze
-	 * halt now
 	 * only supervisor access
 	 * enable warning int
 	 * enable individual RX masking
@@ -1492,9 +1495,8 @@ static int flexcan_chip_start(struct net_device *dev)
 	 */
 	reg_mcr = priv->read(&regs->mcr);
 	reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
-	reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
-		FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ | FLEXCAN_MCR_IDAM_C |
-		FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
+	reg_mcr |= FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ |
+		FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
 
 	/* MCR
 	 *
@@ -1865,10 +1867,14 @@ static int register_flexcandev(struct net_device *dev)
 	if (err)
 		goto out_chip_disable;
 
-	/* set freeze, halt and activate FIFO, restrict register access */
+	/* set freeze, halt */
+	err = flexcan_chip_freeze(priv);
+	if (err)
+		goto out_chip_disable;
+
+	/* activate FIFO, restrict register access */
 	reg = priv->read(&regs->mcr);
-	reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT |
-	       FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
+	reg |= FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
 	priv->write(reg, &regs->mcr);
 
 	/* Currently we only support newer versions of this core

View File

@@ -237,14 +237,14 @@ static int tcan4x5x_init(struct m_can_classdev *cdev)
 	if (ret)
 		return ret;
 
+	/* Zero out the MCAN buffers */
+	m_can_init_ram(cdev);
+
 	ret = regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
 				 TCAN4X5X_MODE_SEL_MASK, TCAN4X5X_MODE_NORMAL);
 	if (ret)
 		return ret;
 
-	/* Zero out the MCAN buffers */
-	m_can_init_ram(cdev);
-
 	return ret;
 }

View File

@@ -335,8 +335,6 @@ static void mcp251xfd_ring_init(struct mcp251xfd_priv *priv)
 	u8 len;
 	int i, j;
 
-	netdev_reset_queue(priv->ndev);
-
 	/* TEF */
 	tef_ring = priv->tef;
 	tef_ring->head = 0;
@@ -1249,8 +1247,7 @@ mcp251xfd_handle_tefif_recover(const struct mcp251xfd_priv *priv, const u32 seq)
 
 static int
 mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv,
-			   const struct mcp251xfd_hw_tef_obj *hw_tef_obj,
-			   unsigned int *frame_len_ptr)
+			   const struct mcp251xfd_hw_tef_obj *hw_tef_obj)
 {
 	struct net_device_stats *stats = &priv->ndev->stats;
 	u32 seq, seq_masked, tef_tail_masked;
@@ -1272,8 +1269,7 @@ mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv,
 	stats->tx_bytes +=
 		can_rx_offload_get_echo_skb(&priv->offload,
 					    mcp251xfd_get_tef_tail(priv),
-					    hw_tef_obj->ts,
-					    frame_len_ptr);
+					    hw_tef_obj->ts, NULL);
 	stats->tx_packets++;
 	priv->tef->tail++;
 
@@ -1331,7 +1327,6 @@ mcp251xfd_tef_obj_read(const struct mcp251xfd_priv *priv,
 static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
 {
 	struct mcp251xfd_hw_tef_obj hw_tef_obj[MCP251XFD_TX_OBJ_NUM_MAX];
-	unsigned int total_frame_len = 0;
 	u8 tef_tail, len, l;
 	int err, i;
 
@@ -1353,9 +1348,7 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
 	}
 
 	for (i = 0; i < len; i++) {
-		unsigned int frame_len;
-
-		err = mcp251xfd_handle_tefif_one(priv, &hw_tef_obj[i], &frame_len);
+		err = mcp251xfd_handle_tefif_one(priv, &hw_tef_obj[i]);
 		/* -EAGAIN means the Sequence Number in the TEF
 		 * doesn't match our tef_tail. This can happen if we
 		 * read the TEF objects too early. Leave loop let the
@@ -1365,8 +1358,6 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
 			goto out_netif_wake_queue;
 		if (err)
 			return err;
-
-		total_frame_len += frame_len;
 	}
 
  out_netif_wake_queue:
@@ -1397,7 +1388,6 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv)
 			return err;
 
 		tx_ring->tail += len;
-		netdev_completed_queue(priv->ndev, len, total_frame_len);
 
 		err = mcp251xfd_check_tef_tail(priv);
 		if (err)
@@ -2443,7 +2433,6 @@ static netdev_tx_t mcp251xfd_start_xmit(struct sk_buff *skb,
 	struct mcp251xfd_priv *priv = netdev_priv(ndev);
 	struct mcp251xfd_tx_ring *tx_ring = priv->tx;
 	struct mcp251xfd_tx_obj *tx_obj;
-	unsigned int frame_len;
 	u8 tx_head;
 	int err;
 
@@ -2462,9 +2451,7 @@ static netdev_tx_t mcp251xfd_start_xmit(struct sk_buff *skb,
 	if (mcp251xfd_get_tx_free(tx_ring) == 0)
 		netif_stop_queue(ndev);
 
-	frame_len = can_skb_get_frame_len(skb);
-	can_put_echo_skb(skb, ndev, tx_head, frame_len);
-	netdev_sent_queue(priv->ndev, frame_len);
+	can_put_echo_skb(skb, ndev, tx_head, 0);
 
 	err = mcp251xfd_tx_obj_write(priv, tx_obj);
 	if (err)

View File

@@ -406,7 +406,7 @@ static int bcm_sf2_sw_rst(struct bcm_sf2_priv *priv)
 	/* The watchdog reset does not work on 7278, we need to hit the
 	 * "external" reset line through the reset controller.
 	 */
-	if (priv->type == BCM7278_DEVICE_ID && !IS_ERR(priv->rcdev)) {
+	if (priv->type == BCM7278_DEVICE_ID) {
 		ret = reset_control_assert(priv->rcdev);
 		if (ret)
 			return ret;
@@ -1265,7 +1265,7 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev)
 
 	priv->rcdev = devm_reset_control_get_optional_exclusive(&pdev->dev,
 								"switch");
-	if (PTR_ERR(priv->rcdev) == -EPROBE_DEFER)
+	if (IS_ERR(priv->rcdev))
 		return PTR_ERR(priv->rcdev);
 
 	/* Auto-detection using standard registers will not work, so
@@ -1426,7 +1426,7 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev)
 	bcm_sf2_mdio_unregister(priv);
 	clk_disable_unprepare(priv->clk_mdiv);
 	clk_disable_unprepare(priv->clk);
-	if (priv->type == BCM7278_DEVICE_ID && !IS_ERR(priv->rcdev))
+	if (priv->type == BCM7278_DEVICE_ID)
 		reset_control_assert(priv->rcdev);
 
 	return 0;

View File

@@ -1624,6 +1624,7 @@ mtk_get_tag_protocol(struct dsa_switch *ds, int port,
 	}
 }
 
+#ifdef CONFIG_GPIOLIB
 static inline u32
 mt7530_gpio_to_bit(unsigned int offset)
 {
@@ -1726,6 +1727,7 @@ mt7530_setup_gpio(struct mt7530_priv *priv)
 
 	return devm_gpiochip_add_data(dev, gc, priv);
 }
+#endif /* CONFIG_GPIOLIB */
 
 static int
 mt7530_setup(struct dsa_switch *ds)
@@ -1868,11 +1870,13 @@ mt7530_setup(struct dsa_switch *ds)
 		}
 	}
 
+#ifdef CONFIG_GPIOLIB
 	if (of_property_read_bool(priv->dev->of_node, "gpio-controller")) {
 		ret = mt7530_setup_gpio(priv);
 		if (ret)
 			return ret;
 	}
+#endif /* CONFIG_GPIOLIB */
 
 	mt7530_setup_port5(ds, interface);

View File

@@ -1922,7 +1922,7 @@ out_unlock_ptp:
 			speed = SPEED_1000;
 		else if (bmcr & BMCR_SPEED100)
 			speed = SPEED_100;
-		else if (bmcr & BMCR_SPEED10)
+		else
 			speed = SPEED_10;
 
 		sja1105_sgmii_pcs_force_speed(priv, speed);
@@ -3369,14 +3369,14 @@ static int sja1105_port_ucast_bcast_flood(struct sja1105_private *priv, int to,
 		if (flags.val & BR_FLOOD)
 			priv->ucast_egress_floods |= BIT(to);
 		else
-			priv->ucast_egress_floods |= BIT(to);
+			priv->ucast_egress_floods &= ~BIT(to);
 	}
 	if (flags.mask & BR_BCAST_FLOOD) {
 		if (flags.val & BR_BCAST_FLOOD)
 			priv->bcast_egress_floods |= BIT(to);
 		else
-			priv->bcast_egress_floods |= BIT(to);
+			priv->bcast_egress_floods &= ~BIT(to);
 	}
 
 	return sja1105_manage_flood_domains(priv);

View File

@@ -528,7 +528,10 @@ static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
 		return -EOPNOTSUPP;
 
 	dsa_hsr_foreach_port(dp, ds, hsr) {
-		partner = dp;
+		if (dp->index != port) {
+			partner = dp;
+			break;
+		}
 	}
 
 	/* We can't enable redundancy on the switch until both
@@ -582,7 +585,10 @@ static int xrs700x_hsr_leave(struct dsa_switch *ds, int port,
 	unsigned int val;
 
 	dsa_hsr_foreach_port(dp, ds, hsr) {
-		partner = dp;
+		if (dp->index != port) {
+			partner = dp;
+			break;
+		}
 	}
 
 	if (!partner)

View File

@@ -1894,13 +1894,16 @@ static int alx_resume(struct device *dev)
 
 	if (!netif_running(alx->dev))
 		return 0;
-	netif_device_attach(alx->dev);
 
 	rtnl_lock();
 	err = __alx_open(alx, true);
 	rtnl_unlock();
+	if (err)
+		return err;
 
-	return err;
+	netif_device_attach(alx->dev);
+
+	return 0;
 }
 
 static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);

View File

@@ -592,6 +592,9 @@ static int bcm4908_enet_poll(struct napi_struct *napi, int weight)
 		bcm4908_enet_intrs_on(enet);
 	}
 
+	/* Hardware could disable ring if it run out of descriptors */
+	bcm4908_enet_dma_rx_ring_enable(enet, &enet->rx_ring);
+
 	return handled;
 }

View File

@@ -8556,10 +8556,18 @@ static void bnxt_setup_inta(struct bnxt *bp)
 	bp->irq_tbl[0].handler = bnxt_inta;
 }
 
+static int bnxt_init_int_mode(struct bnxt *bp);
+
 static int bnxt_setup_int_mode(struct bnxt *bp)
 {
 	int rc;
 
+	if (!bp->irq_tbl) {
+		rc = bnxt_init_int_mode(bp);
+		if (rc || !bp->irq_tbl)
+			return rc ?: -ENODEV;
+	}
+
 	if (bp->flags & BNXT_FLAG_USING_MSIX)
 		bnxt_setup_msix(bp);
 	else
@@ -8744,7 +8752,7 @@ static int bnxt_init_inta(struct bnxt *bp)
 
 static int bnxt_init_int_mode(struct bnxt *bp)
 {
-	int rc = 0;
+	int rc = -ENODEV;
 
 	if (bp->flags & BNXT_FLAG_MSIX_CAP)
 		rc = bnxt_init_msix(bp);
@@ -9514,7 +9522,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 {
 	struct hwrm_func_drv_if_change_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_drv_if_change_input req = {0};
-	bool resc_reinit = false, fw_reset = false;
+	bool fw_reset = !bp->irq_tbl;
+	bool resc_reinit = false;
 	int rc, retry = 0;
 	u32 flags = 0;
 
@@ -9557,6 +9566,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 		if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
 			netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
+			set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
 			return -ENODEV;
 		}
 		if (resc_reinit || fw_reset) {
@@ -9890,6 +9900,9 @@ static int bnxt_reinit_after_abort(struct bnxt *bp)
 	if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
 		return -EBUSY;
 
+	if (bp->dev->reg_state == NETREG_UNREGISTERED)
+		return -ENODEV;
+
 	rc = bnxt_fw_init_one(bp);
 	if (!rc) {
 		bnxt_clear_int_mode(bp);

View File

@@ -3954,6 +3954,13 @@ static int macb_init(struct platform_device *pdev)
 	return 0;
 }
 
+static const struct macb_usrio_config macb_default_usrio = {
+	.mii = MACB_BIT(MII),
+	.rmii = MACB_BIT(RMII),
+	.rgmii = GEM_BIT(RGMII),
+	.refclk = MACB_BIT(CLKEN),
+};
+
 #if defined(CONFIG_OF)
 /* 1518 rounded up */
 #define AT91ETHER_MAX_RBUFF_SZ	0x600
@@ -4439,13 +4446,6 @@ static int fu540_c000_init(struct platform_device *pdev)
 	return macb_init(pdev);
 }
 
-static const struct macb_usrio_config macb_default_usrio = {
-	.mii = MACB_BIT(MII),
-	.rmii = MACB_BIT(RMII),
-	.rgmii = GEM_BIT(RGMII),
-	.refclk = MACB_BIT(CLKEN),
-};
-
 static const struct macb_usrio_config sama7g5_usrio = {
 	.mii = 0,
 	.rmii = 1,
@@ -4594,6 +4594,7 @@ static const struct macb_config default_gem_config = {
 	.dma_burst_length = 16,
 	.clk_init = macb_clk_init,
 	.init = macb_init,
+	.usrio = &macb_default_usrio,
 	.jumbo_max_len = 10240,
 };


@@ -672,7 +672,7 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
 	if (tx_info->pending_close) {
 		spin_unlock(&tx_info->lock);
 		if (!status) {
-			/* it's a late success, tcb status is establised,
+			/* it's a late success, tcb status is established,
 			 * mark it close.
 			 */
 			chcr_ktls_mark_tcb_close(tx_info);
@@ -930,7 +930,7 @@ chcr_ktls_get_tx_flits(u32 nr_frags, unsigned int key_ctx_len)
 }
 
 /*
- * chcr_ktls_check_tcp_options: To check if there is any TCP option availbale
+ * chcr_ktls_check_tcp_options: To check if there is any TCP option available
  * other than timestamp.
  * @skb - skb contains partial record..
  * return: 1 / 0
@@ -1115,7 +1115,7 @@ static int chcr_ktls_xmit_wr_complete(struct sk_buff *skb,
 	}
 
 	if (unlikely(credits < ETHTXQ_STOP_THRES)) {
-		/* Credits are below the threshold vaues, stop the queue after
+		/* Credits are below the threshold values, stop the queue after
 		 * injecting the Work Request for this packet.
 		 */
 		chcr_eth_txq_stop(q);
@@ -2006,7 +2006,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* TCP segments can be in received either complete or partial.
 	 * chcr_end_part_handler will handle cases if complete record or end
-	 * part of the record is received. Incase of partial end part of record,
+	 * part of the record is received. In case of partial end part of record,
 	 * we will send the complete record again.
 	 */


@@ -133,6 +133,8 @@ struct board_info {
 	u32	wake_state;
 	int	ip_summed;
+
+	struct regulator *power_supply;
 };
 
 /* debug code */
@@ -1449,7 +1451,7 @@ dm9000_probe(struct platform_device *pdev)
 		if (ret) {
 			dev_err(dev, "failed to request reset gpio %d: %d\n",
 				reset_gpios, ret);
-			return -ENODEV;
+			goto out_regulator_disable;
 		}
 
 		/* According to manual PWRST# Low Period Min 1ms */
@@ -1461,8 +1463,10 @@ dm9000_probe(struct platform_device *pdev)
 
 	if (!pdata) {
 		pdata = dm9000_parse_dt(&pdev->dev);
-		if (IS_ERR(pdata))
-			return PTR_ERR(pdata);
+		if (IS_ERR(pdata)) {
+			ret = PTR_ERR(pdata);
+			goto out_regulator_disable;
+		}
 	}
 
 	/* Init network device */
@@ -1479,6 +1483,8 @@ dm9000_probe(struct platform_device *pdev)
 	db->dev = &pdev->dev;
 	db->ndev = ndev;
+	if (!IS_ERR(power))
+		db->power_supply = power;
 
 	spin_lock_init(&db->lock);
 	mutex_init(&db->addr_lock);
@@ -1501,7 +1507,7 @@ dm9000_probe(struct platform_device *pdev)
 		goto out;
 	}
 
-	db->irq_wake = platform_get_irq(pdev, 1);
+	db->irq_wake = platform_get_irq_optional(pdev, 1);
 	if (db->irq_wake >= 0) {
 		dev_dbg(db->dev, "wakeup irq %d\n", db->irq_wake);
@@ -1703,6 +1709,10 @@ out:
 	dm9000_release_board(pdev, db);
 	free_netdev(ndev);
 
+out_regulator_disable:
+	if (!IS_ERR(power))
+		regulator_disable(power);
+
 	return ret;
 }
 
@@ -1760,10 +1770,13 @@ static int
 dm9000_drv_remove(struct platform_device *pdev)
 {
 	struct net_device *ndev = platform_get_drvdata(pdev);
+	struct board_info *dm = to_dm9000_board(ndev);
 
 	unregister_netdev(ndev);
-	dm9000_release_board(pdev, netdev_priv(ndev));
+	dm9000_release_board(pdev, dm);
 	free_netdev(ndev);		/* free device structure */
+	if (dm->power_supply)
+		regulator_disable(dm->power_supply);
 
 	dev_dbg(&pdev->dev, "released and freed device\n");
 	return 0;


@@ -281,6 +281,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
 	int work_done;
 	int i;
 
+	enetc_lock_mdio();
+
 	for (i = 0; i < v->count_tx_rings; i++)
 		if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
 			complete = false;
@@ -291,8 +293,10 @@ static int enetc_poll(struct napi_struct *napi, int budget)
 	if (work_done)
 		v->rx_napi_work = true;
 
-	if (!complete)
+	if (!complete) {
+		enetc_unlock_mdio();
 		return budget;
+	}
 
 	napi_complete_done(napi, work_done);
@@ -301,8 +305,6 @@ static int enetc_poll(struct napi_struct *napi, int budget)
 	v->rx_napi_work = false;
 
-	enetc_lock_mdio();
-
 	/* enable interrupts */
 	enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);
@@ -327,8 +329,8 @@ static void enetc_get_tx_tstamp(struct enetc_hw *hw, union enetc_tx_bd *txbd,
 {
 	u32 lo, hi, tstamp_lo;
 
-	lo = enetc_rd(hw, ENETC_SICTR0);
-	hi = enetc_rd(hw, ENETC_SICTR1);
+	lo = enetc_rd_hot(hw, ENETC_SICTR0);
+	hi = enetc_rd_hot(hw, ENETC_SICTR1);
 	tstamp_lo = le32_to_cpu(txbd->wb.tstamp);
 	if (lo <= tstamp_lo)
 		hi -= 1;
@@ -342,6 +344,12 @@ static void enetc_tstamp_tx(struct sk_buff *skb, u64 tstamp)
 	if (skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) {
 		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
 		shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
+		/* Ensure skb_mstamp_ns, which might have been populated with
+		 * the txtime, is not mistaken for a software timestamp,
+		 * because this will prevent the dispatch of our hardware
+		 * timestamp to the socket.
+		 */
+		skb->tstamp = ktime_set(0, 0);
 		skb_tstamp_tx(skb, &shhwtstamps);
 	}
 }
@@ -358,9 +366,7 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
 	i = tx_ring->next_to_clean;
 	tx_swbd = &tx_ring->tx_swbd[i];
 
-	enetc_lock_mdio();
 	bds_to_clean = enetc_bd_ready_count(tx_ring, i);
-	enetc_unlock_mdio();
 
 	do_tstamp = false;
@@ -403,8 +409,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
 			tx_swbd = tx_ring->tx_swbd;
 		}
 
-		enetc_lock_mdio();
-
 		/* BD iteration loop end */
 		if (is_eof) {
 			tx_frm_cnt++;
@@ -415,8 +419,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
 
 		if (unlikely(!bds_to_clean))
 			bds_to_clean = enetc_bd_ready_count(tx_ring, i);
-
-		enetc_unlock_mdio();
 	}
 
 	tx_ring->next_to_clean = i;
@@ -527,9 +529,8 @@ static void enetc_get_rx_tstamp(struct net_device *ndev,
 static void enetc_get_offloads(struct enetc_bdr *rx_ring,
 			       union enetc_rx_bd *rxbd, struct sk_buff *skb)
 {
-#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
 	struct enetc_ndev_priv *priv = netdev_priv(rx_ring->ndev);
-#endif
+
 	/* TODO: hashing */
 	if (rx_ring->ndev->features & NETIF_F_RXCSUM) {
 		u16 inet_csum = le16_to_cpu(rxbd->r.inet_csum);
@@ -538,12 +539,31 @@ static void enetc_get_offloads(struct enetc_bdr *rx_ring,
 		skb->ip_summed = CHECKSUM_COMPLETE;
 	}
 
-	/* copy VLAN to skb, if one is extracted, for now we assume it's a
-	 * standard TPID, but HW also supports custom values
-	 */
-	if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN)
-		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
-				       le16_to_cpu(rxbd->r.vlan_opt));
+	if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN) {
+		__be16 tpid = 0;
+
+		switch (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_TPID) {
+		case 0:
+			tpid = htons(ETH_P_8021Q);
+			break;
+		case 1:
+			tpid = htons(ETH_P_8021AD);
+			break;
+		case 2:
+			tpid = htons(enetc_port_rd(&priv->si->hw,
+						   ENETC_PCVLANR1));
+			break;
+		case 3:
+			tpid = htons(enetc_port_rd(&priv->si->hw,
+						   ENETC_PCVLANR2));
+			break;
+		default:
+			break;
+		}
+
+		__vlan_hwaccel_put_tag(skb, tpid, le16_to_cpu(rxbd->r.vlan_opt));
+	}
+
 #ifdef CONFIG_FSL_ENETC_PTP_CLOCK
 	if (priv->active_offloads & ENETC_F_RX_TSTAMP)
 		enetc_get_rx_tstamp(rx_ring->ndev, rxbd, skb);
@@ -660,8 +680,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
 		u32 bd_status;
 		u16 size;
 
-		enetc_lock_mdio();
-
 		if (cleaned_cnt >= ENETC_RXBD_BUNDLE) {
 			int count = enetc_refill_rx_ring(rx_ring, cleaned_cnt);
@@ -672,19 +690,15 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
 		rxbd = enetc_rxbd(rx_ring, i);
 		bd_status = le32_to_cpu(rxbd->r.lstatus);
-		if (!bd_status) {
-			enetc_unlock_mdio();
+		if (!bd_status)
 			break;
-		}
 
 		enetc_wr_reg_hot(rx_ring->idr, BIT(rx_ring->index));
 		dma_rmb(); /* for reading other rxbd fields */
 		size = le16_to_cpu(rxbd->r.buf_len);
 		skb = enetc_map_rx_buff_to_skb(rx_ring, i, size);
-		if (!skb) {
-			enetc_unlock_mdio();
+		if (!skb)
 			break;
-		}
 
 		enetc_get_offloads(rx_ring, rxbd, skb);
@@ -696,7 +710,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
 
 		if (unlikely(bd_status &
 			     ENETC_RXBD_LSTATUS(ENETC_RXBD_ERR_MASK))) {
-			enetc_unlock_mdio();
 			dev_kfree_skb(skb);
 			while (!(bd_status & ENETC_RXBD_LSTATUS_F)) {
 				dma_rmb();
@@ -736,8 +749,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
 
 		enetc_process_skb(rx_ring, skb);
 
-		enetc_unlock_mdio();
-
 		napi_gro_receive(napi, skb);
 
 		rx_frm_cnt++;
@@ -984,7 +995,7 @@ static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)
 		enetc_free_tx_ring(priv->tx_ring[i]);
 }
 
-static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
+int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
 {
 	int size = cbdr->bd_count * sizeof(struct enetc_cbd);
@@ -1005,7 +1016,7 @@ static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
 	return 0;
 }
 
-static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
+void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
 {
 	int size = cbdr->bd_count * sizeof(struct enetc_cbd);
@@ -1013,7 +1024,7 @@ static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
 	cbdr->bd_base = NULL;
 }
 
-static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
+void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
 {
 	/* set CBDR cache attributes */
 	enetc_wr(hw, ENETC_SICAR2,
@@ -1033,7 +1044,7 @@ static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
 	cbdr->cir = hw->reg + ENETC_SICBDRCIR;
 }
 
-static void enetc_clear_cbdr(struct enetc_hw *hw)
+void enetc_clear_cbdr(struct enetc_hw *hw)
 {
 	enetc_wr(hw, ENETC_SICBDRMR, 0);
 }
@@ -1058,13 +1069,12 @@ static int enetc_setup_default_rss_table(struct enetc_si *si, int num_groups)
 	return 0;
 }
 
-static int enetc_configure_si(struct enetc_ndev_priv *priv)
+int enetc_configure_si(struct enetc_ndev_priv *priv)
 {
 	struct enetc_si *si = priv->si;
 	struct enetc_hw *hw = &si->hw;
 	int err;
 
-	enetc_setup_cbdr(hw, &si->cbd_ring);
 	/* set SI cache attributes */
 	enetc_wr(hw, ENETC_SICAR0,
 		 ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
@@ -1112,6 +1122,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
 	if (err)
 		return err;
 
+	enetc_setup_cbdr(&si->hw, &si->cbd_ring);
+
 	priv->cls_rules = kcalloc(si->num_fs_entries, sizeof(*priv->cls_rules),
 				  GFP_KERNEL);
 	if (!priv->cls_rules) {
@@ -1119,14 +1131,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
 		goto err_alloc_cls;
 	}
 
-	err = enetc_configure_si(priv);
-	if (err)
-		goto err_config_si;
-
 	return 0;
 
-err_config_si:
-	kfree(priv->cls_rules);
err_alloc_cls:
 	enetc_clear_cbdr(&si->hw);
 	enetc_free_cbdr(priv->dev, &si->cbd_ring);
@@ -1212,7 +1218,8 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
 	rx_ring->idr = hw->reg + ENETC_SIRXIDR;
 
 	enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring));
-	enetc_wr(hw, ENETC_SIRXIDR, rx_ring->next_to_use);
+	/* update ENETC's consumer index */
+	enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, rx_ring->next_to_use);
 
 	/* enable ring */
 	enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);


@@ -292,6 +292,7 @@ void enetc_get_si_caps(struct enetc_si *si);
 void enetc_init_si_rings_params(struct enetc_ndev_priv *priv);
 int enetc_alloc_si_resources(struct enetc_ndev_priv *priv);
 void enetc_free_si_resources(struct enetc_ndev_priv *priv);
+int enetc_configure_si(struct enetc_ndev_priv *priv);
 
 int enetc_open(struct net_device *ndev);
 int enetc_close(struct net_device *ndev);
@@ -309,6 +310,10 @@ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
 void enetc_set_ethtool_ops(struct net_device *ndev);
 
 /* control buffer descriptor ring (CBDR) */
+int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
+void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
+void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr);
+void enetc_clear_cbdr(struct enetc_hw *hw);
 int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
 			    char *mac_addr, int si_map);
 int enetc_clear_mac_flt_entry(struct enetc_si *si, int index);


@@ -172,6 +172,8 @@ enum enetc_bdr_type {TX, RX};
 #define ENETC_PSIPMAR0(n)	(0x0100 + (n) * 0x8) /* n = SI index */
 #define ENETC_PSIPMAR1(n)	(0x0104 + (n) * 0x8)
 #define ENETC_PVCLCTR		0x0208
+#define ENETC_PCVLANR1		0x0210
+#define ENETC_PCVLANR2		0x0214
 #define ENETC_VLAN_TYPE_C	BIT(0)
 #define ENETC_VLAN_TYPE_S	BIT(1)
 #define ENETC_PVCLCTR_OVTPIDL(bmp)	((bmp) & 0xff) /* VLAN_TYPE */
@@ -232,14 +234,23 @@ enum enetc_bdr_type {TX, RX};
 #define ENETC_PM0_MAXFRM	0x8014
 #define ENETC_SET_TX_MTU(val)	((val) << 16)
 #define ENETC_SET_MAXFRM(val)	((val) & 0xffff)
+#define ENETC_PM0_RX_FIFO	0x801c
+#define ENETC_PM0_RX_FIFO_VAL	1
 
 #define ENETC_PM_IMDIO_BASE	0x8030
 
 #define ENETC_PM0_IF_MODE	0x8300
-#define ENETC_PMO_IFM_RG	BIT(2)
+#define ENETC_PM0_IFM_RG	BIT(2)
 #define ENETC_PM0_IFM_RLP	(BIT(5) | BIT(11))
-#define ENETC_PM0_IFM_RGAUTO	(BIT(15) | ENETC_PMO_IFM_RG | BIT(1))
-#define ENETC_PM0_IFM_XGMII	BIT(12)
+#define ENETC_PM0_IFM_EN_AUTO	BIT(15)
+#define ENETC_PM0_IFM_SSP_MASK	GENMASK(14, 13)
+#define ENETC_PM0_IFM_SSP_1000	(2 << 13)
+#define ENETC_PM0_IFM_SSP_100	(0 << 13)
+#define ENETC_PM0_IFM_SSP_10	(1 << 13)
+#define ENETC_PM0_IFM_FULL_DPX	BIT(12)
+#define ENETC_PM0_IFM_IFMODE_MASK GENMASK(1, 0)
+#define ENETC_PM0_IFM_IFMODE_XGMII 0
+#define ENETC_PM0_IFM_IFMODE_GMII 2
 #define ENETC_PSIDCAPR		0x1b08
 #define ENETC_PSIDCAPR_MSK	GENMASK(15, 0)
 #define ENETC_PSFCAPR		0x1b18
@@ -453,6 +464,8 @@ static inline u64 _enetc_rd_reg64_wa(void __iomem *reg)
 #define enetc_wr_reg(reg, val)	_enetc_wr_reg_wa((reg), (val))
 #define enetc_rd(hw, off)	enetc_rd_reg((hw)->reg + (off))
 #define enetc_wr(hw, off, val)	enetc_wr_reg((hw)->reg + (off), val)
+#define enetc_rd_hot(hw, off)	enetc_rd_reg_hot((hw)->reg + (off))
+#define enetc_wr_hot(hw, off, val) enetc_wr_reg_hot((hw)->reg + (off), val)
 #define enetc_rd64(hw, off)	_enetc_rd_reg64_wa((hw)->reg + (off))
 /* port register accessors - PF only */
 #define enetc_port_rd(hw, off)	enetc_rd_reg((hw)->port + (off))
@@ -568,6 +581,7 @@ union enetc_rx_bd {
 #define ENETC_RXBD_LSTATUS(flags)	((flags) << 16)
 #define ENETC_RXBD_FLAG_VLAN	BIT(9)
 #define ENETC_RXBD_FLAG_TSTMP	BIT(10)
+#define ENETC_RXBD_FLAG_TPID	GENMASK(1, 0)
 
 #define ENETC_MAC_ADDR_FILT_CNT	8 /* # of supported entries per port */
 #define EMETC_MAC_ADDR_FILT_RES	3 /* # of reserved entries at the beginning */


@@ -190,7 +190,6 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
 {
 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
 	struct enetc_pf *pf = enetc_si_priv(priv->si);
-	char vlan_promisc_simap = pf->vlan_promisc_simap;
 	struct enetc_hw *hw = &priv->si->hw;
 	bool uprom = false, mprom = false;
 	struct enetc_mac_filter *filter;
@@ -203,16 +202,12 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
 		psipmr = ENETC_PSIPMR_SET_UP(0) | ENETC_PSIPMR_SET_MP(0);
 		uprom = true;
 		mprom = true;
-		/* Enable VLAN promiscuous mode for SI0 (PF) */
-		vlan_promisc_simap |= BIT(0);
 	} else if (ndev->flags & IFF_ALLMULTI) {
 		/* enable multi cast promisc mode for SI0 (PF) */
 		psipmr = ENETC_PSIPMR_SET_MP(0);
 		mprom = true;
 	}
 
-	enetc_set_vlan_promisc(&pf->si->hw, vlan_promisc_simap);
-
 	/* first 2 filter entries belong to PF */
 	if (!uprom) {
 		/* Update unicast filters */
@@ -320,7 +315,7 @@ static void enetc_set_loopback(struct net_device *ndev, bool en)
 	u32 reg;
 
 	reg = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
-	if (reg & ENETC_PMO_IFM_RG) {
+	if (reg & ENETC_PM0_IFM_RG) {
 		/* RGMII mode */
 		reg = (reg & ~ENETC_PM0_IFM_RLP) |
 		      (en ? ENETC_PM0_IFM_RLP : 0);
@@ -495,17 +490,30 @@ static void enetc_configure_port_mac(struct enetc_hw *hw)
 	enetc_port_wr(hw, ENETC_PM1_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
 		      ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC);
+
+	/* On LS1028A, the MAC RX FIFO defaults to 2, which is too high
+	 * and may lead to RX lock-up under traffic. Set it to 1 instead,
+	 * as recommended by the hardware team.
+	 */
+	enetc_port_wr(hw, ENETC_PM0_RX_FIFO, ENETC_PM0_RX_FIFO_VAL);
 }
 
 static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
 {
-	/* set auto-speed for RGMII */
-	if (enetc_port_rd(hw, ENETC_PM0_IF_MODE) & ENETC_PMO_IFM_RG ||
-	    phy_interface_mode_is_rgmii(phy_mode))
-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_RGAUTO);
+	u32 val;
 
-	if (phy_mode == PHY_INTERFACE_MODE_USXGMII)
-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_XGMII);
+	if (phy_interface_mode_is_rgmii(phy_mode)) {
+		val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+		val &= ~ENETC_PM0_IFM_EN_AUTO;
+		val &= ~ENETC_PM0_IFM_IFMODE_MASK;
+		val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
+		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+	}
+
+	if (phy_mode == PHY_INTERFACE_MODE_USXGMII) {
+		val = ENETC_PM0_IFM_FULL_DPX | ENETC_PM0_IFM_IFMODE_XGMII;
+		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+	}
 }
 
 static void enetc_mac_enable(struct enetc_hw *hw, bool en)
@@ -937,6 +945,34 @@ static void enetc_pl_mac_config(struct phylink_config *config,
 		phylink_set_pcs(priv->phylink, &pf->pcs->pcs);
 }
 
+static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex)
+{
+	u32 old_val, val;
+
+	old_val = val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+
+	if (speed == SPEED_1000) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_1000;
+	} else if (speed == SPEED_100) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_100;
+	} else if (speed == SPEED_10) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_10;
+	}
+
+	if (duplex == DUPLEX_FULL)
+		val |= ENETC_PM0_IFM_FULL_DPX;
+	else
+		val &= ~ENETC_PM0_IFM_FULL_DPX;
+
+	if (val == old_val)
+		return;
+
+	enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+}
+
 static void enetc_pl_mac_link_up(struct phylink_config *config,
 				 struct phy_device *phy, unsigned int mode,
 				 phy_interface_t interface, int speed,
@@ -949,6 +985,10 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
 	if (priv->active_offloads & ENETC_F_QBV)
 		enetc_sched_speed_set(priv, speed);
 
+	if (!phylink_autoneg_inband(mode) &&
+	    phy_interface_mode_is_rgmii(interface))
+		enetc_force_rgmii_mac(&pf->si->hw, speed, duplex);
+
 	enetc_mac_enable(&pf->si->hw, true);
 }
 
@@ -1041,6 +1081,26 @@ static int enetc_init_port_rss_memory(struct enetc_si *si)
 	return err;
 }
 
+static void enetc_init_unused_port(struct enetc_si *si)
+{
+	struct device *dev = &si->pdev->dev;
+	struct enetc_hw *hw = &si->hw;
+	int err;
+
+	si->cbd_ring.bd_count = ENETC_CBDR_DEFAULT_SIZE;
+	err = enetc_alloc_cbdr(dev, &si->cbd_ring);
+	if (err)
+		return;
+
+	enetc_setup_cbdr(hw, &si->cbd_ring);
+
+	enetc_init_port_rfs_memory(si);
+	enetc_init_port_rss_memory(si);
+
+	enetc_clear_cbdr(hw);
+	enetc_free_cbdr(dev, &si->cbd_ring);
+}
+
 static int enetc_pf_probe(struct pci_dev *pdev,
 			  const struct pci_device_id *ent)
 {
@@ -1051,11 +1111,6 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 	struct enetc_pf *pf;
 	int err;
 
-	if (node && !of_device_is_available(node)) {
-		dev_info(&pdev->dev, "device is disabled, skipping\n");
-		return -ENODEV;
-	}
-
 	err = enetc_pci_probe(pdev, KBUILD_MODNAME, sizeof(*pf));
 	if (err) {
 		dev_err(&pdev->dev, "PCI probing failed\n");
@@ -1069,6 +1124,13 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 		goto err_map_pf_space;
 	}
 
+	if (node && !of_device_is_available(node)) {
+		enetc_init_unused_port(si);
+		dev_info(&pdev->dev, "device is disabled, skipping\n");
+		err = -ENODEV;
+		goto err_device_disabled;
+	}
+
 	pf = enetc_si_priv(si);
 	pf->si = si;
 	pf->total_vfs = pci_sriov_get_totalvfs(pdev);
@@ -1108,6 +1170,12 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 		goto err_init_port_rss;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
@@ -1136,6 +1204,7 @@ err_phylink_create:
 	enetc_mdiobus_destroy(pf);
err_mdiobus_create:
 	enetc_free_msix(priv);
+err_config_si:
err_init_port_rss:
err_init_port_rfs:
err_alloc_msix:
@@ -1144,6 +1213,7 @@ err_alloc_si_res:
 	si->ndev = NULL;
 	free_netdev(ndev);
err_alloc_netdev:
+err_device_disabled:
err_map_pf_space:
 	enetc_pci_remove(pdev);


@@ -171,6 +171,12 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 		goto err_alloc_si_res;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
@@ -187,6 +193,7 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 
err_reg_netdev:
 	enetc_free_msix(priv);
+err_config_si:
err_alloc_msix:
 	enetc_free_si_resources(priv);
err_alloc_si_res:


@@ -377,9 +377,16 @@ static int fec_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
 	u64 ns;
 	unsigned long flags;
 
+	mutex_lock(&adapter->ptp_clk_mutex);
+	/* Check the ptp clock */
+	if (!adapter->ptp_clk_on) {
+		mutex_unlock(&adapter->ptp_clk_mutex);
+		return -EINVAL;
+	}
+
 	spin_lock_irqsave(&adapter->tmreg_lock, flags);
 	ns = timecounter_read(&adapter->tc);
 	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+	mutex_unlock(&adapter->ptp_clk_mutex);
 
 	*ts = ns_to_timespec64(ns);


@@ -2390,6 +2390,10 @@ static bool gfar_add_rx_frag(struct gfar_rx_buff *rxb, u32 lstatus,
 		if (lstatus & BD_LFLAG(RXBD_LAST))
 			size -= skb->len;
 
+		WARN(size < 0, "gianfar: rx fragment size underflow");
+		if (size < 0)
+			return false;
+
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
 				rxb->page_offset + RXBUF_ALIGNMENT,
 				size, GFAR_RXB_TRUESIZE);
@@ -2552,6 +2556,17 @@ static int gfar_clean_rx_ring(struct gfar_priv_rx_q *rx_queue,
 		if (lstatus & BD_LFLAG(RXBD_EMPTY))
 			break;
 
+		/* lost RXBD_LAST descriptor due to overrun */
+		if (skb &&
+		    (lstatus & BD_LFLAG(RXBD_FIRST))) {
+			/* discard faulty buffer */
+			dev_kfree_skb(skb);
+			skb = NULL;
+			rx_queue->stats.rx_dropped++;
+
+			/* can continue normally */
+		}
+
 		/* order rx buffer descriptor reads */
 		rmb();


@@ -1663,8 +1663,10 @@ static int hns_nic_clear_all_rx_fetch(struct net_device *ndev)
 		for (j = 0; j < fetch_num; j++) {
 			/* alloc one skb and init */
 			skb = hns_assemble_skb(ndev);
-			if (!skb)
+			if (!skb) {
+				ret = -ENOMEM;
 				goto out;
+			}
 			rd = &tx_ring_data(priv, skb->queue_mapping);
 			hns_nic_net_xmit_hw(ndev, skb, rd);


@@ -1053,16 +1053,16 @@ struct hclge_fd_tcam_config_3_cmd {
 #define HCLGE_FD_AD_DROP_B		0
 #define HCLGE_FD_AD_DIRECT_QID_B	1
 #define HCLGE_FD_AD_QID_S		2
-#define HCLGE_FD_AD_QID_M		GENMASK(12, 2)
+#define HCLGE_FD_AD_QID_M		GENMASK(11, 2)
 #define HCLGE_FD_AD_USE_COUNTER_B	12
 #define HCLGE_FD_AD_COUNTER_NUM_S	13
 #define HCLGE_FD_AD_COUNTER_NUM_M	GENMASK(20, 13)
 #define HCLGE_FD_AD_NXT_STEP_B		20
 #define HCLGE_FD_AD_NXT_KEY_S		21
-#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(26, 21)
+#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(25, 21)
 #define HCLGE_FD_AD_WR_RULE_ID_B	0
 #define HCLGE_FD_AD_RULE_ID_S		1
-#define HCLGE_FD_AD_RULE_ID_M		GENMASK(13, 1)
+#define HCLGE_FD_AD_RULE_ID_M		GENMASK(12, 1)
 #define HCLGE_FD_AD_TC_OVRD_B		16
 #define HCLGE_FD_AD_TC_SIZE_S		17
 #define HCLGE_FD_AD_TC_SIZE_M		GENMASK(20, 17)


@@ -5245,9 +5245,9 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
 	case BIT(INNER_SRC_MAC):
 		for (i = 0; i < ETH_ALEN; i++) {
 			calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 			calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 		}
 
 		return true;
@@ -6330,8 +6330,7 @@ static void hclge_fd_get_ext_info(struct ethtool_rx_flow_spec *fs,
 		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
 		fs->m_ext.vlan_tci =
 			rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
-			cpu_to_be16(VLAN_VID_MASK) :
-			cpu_to_be16(rule->tuples_mask.vlan_tag1);
+			0 : cpu_to_be16(rule->tuples_mask.vlan_tag1);
 	}
 
 	if (fs->flow_type & FLOW_MAC_EXT) {


@@ -1905,10 +1905,9 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
 	if (!is_valid_ether_addr(addr->sa_data))
 		return -EADDRNOTAVAIL;
-	if (adapter->state != VNIC_PROBED) {
-		ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	if (adapter->state != VNIC_PROBED)
 		rc = __ibmvnic_set_mac(netdev, addr->sa_data);
-	}
 	return rc;
 }
@@ -5218,16 +5217,14 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
 {
 	struct device *dev = &adapter->vdev->dev;
 	unsigned long timeout = msecs_to_jiffies(20000);
-	u64 old_num_rx_queues, old_num_tx_queues;
+	u64 old_num_rx_queues = adapter->req_rx_queues;
+	u64 old_num_tx_queues = adapter->req_tx_queues;
 	int rc;
 	adapter->from_passive_init = false;
-	if (reset) {
-		old_num_rx_queues = adapter->req_rx_queues;
-		old_num_tx_queues = adapter->req_tx_queues;
+	if (reset)
 		reinit_completion(&adapter->init_done);
-	}
 	adapter->init_done_rc = 0;
 	rc = ibmvnic_send_crq_init(adapter);
@@ -5410,9 +5407,9 @@ static void ibmvnic_remove(struct vio_dev *dev)
 	 * after setting state, so __ibmvnic_reset() which is called
 	 * from the flush_work() below, can make progress.
 	 */
-	spin_lock_irqsave(&adapter->rwi_lock, flags);
+	spin_lock(&adapter->rwi_lock);
 	adapter->state = VNIC_REMOVING;
-	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
+	spin_unlock(&adapter->rwi_lock);
 	spin_unlock_irqrestore(&adapter->state_lock, flags);


@@ -1776,7 +1776,8 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
 		goto err_alloc;
 	}
-	if (iavf_process_config(adapter))
+	err = iavf_process_config(adapter);
+	if (err)
 		goto err_alloc;
 	adapter->current_op = VIRTCHNL_OP_UNKNOWN;


@@ -575,6 +575,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
 	if (ixgbe_ipsec_check_mgmt_ip(xs)) {
 		netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
 		return -EINVAL;


@@ -9565,7 +9565,9 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
 	ixgbe_atr_compute_perfect_hash_82599(&input->filter, mask);
 	err = ixgbe_fdir_write_perfect_filter_82599(hw, &input->filter,
 						    input->sw_idx, queue);
-	if (!err)
-		ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
+	if (err)
+		goto err_out_w_lock;
+	ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
 	spin_unlock(&adapter->fdir_perfect_lock);


@@ -272,6 +272,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
 		struct rx_sa rsa;


@@ -56,7 +56,9 @@ static bool is_dev_rpm(void *cgxd)
 bool is_lmac_valid(struct cgx *cgx, int lmac_id)
 {
-	return cgx && test_bit(lmac_id, &cgx->lmac_bmap);
+	if (!cgx || lmac_id < 0 || lmac_id >= MAX_LMAC_PER_CGX)
+		return false;
+	return test_bit(lmac_id, &cgx->lmac_bmap);
 }
 struct mac_ops *get_mac_ops(void *cgxd)


@@ -1225,8 +1225,6 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
 		goto push_new_skb;
 	}
-	desc_data.dma_addr = new_dma_addr;
 	/* We can't fail anymore at this point: it's safe to unmap the skb. */
 	mtk_star_dma_unmap_rx(priv, &desc_data);
@@ -1236,6 +1234,9 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
 	desc_data.skb->dev = ndev;
 	netif_receive_skb(desc_data.skb);
+	/* update dma_addr for new skb */
+	desc_data.dma_addr = new_dma_addr;
 push_new_skb:
 	desc_data.len = skb_tailroom(new_skb);
 	desc_data.skb = new_skb;


@@ -47,7 +47,7 @@
 #define EN_ETHTOOL_SHORT_MASK cpu_to_be16(0xffff)
 #define EN_ETHTOOL_WORD_MASK cpu_to_be32(0xffffffff)
-static int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
 {
 	int i, t;
 	int err = 0;


@@ -3554,6 +3554,8 @@ int mlx4_en_reset_config(struct net_device *dev,
 			en_err(priv, "Failed starting port\n");
 	}
+	if (!err)
+		err = mlx4_en_moderation_update(priv);
 out:
 	mutex_unlock(&mdev->state_lock);
 	kfree(tmp);


@@ -775,6 +775,7 @@ void mlx4_en_ptp_overflow_check(struct mlx4_en_dev *mdev);
 #define DEV_FEATURE_CHANGED(dev, new_features, feature) \
 	((dev->features & feature) ^ (new_features & feature))
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv);
 int mlx4_en_reset_config(struct net_device *dev,
 			 struct hwtstamp_config ts_config,
 			 netdev_features_t new_features);


@@ -4430,6 +4430,7 @@ MLXSW_ITEM32(reg, ptys, ext_eth_proto_cap, 0x08, 0, 32);
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 BIT(20)
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 BIT(21)
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 BIT(22)
+#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4 BIT(23)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_CR BIT(27)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_KR BIT(28)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_SR BIT(29)


@@ -1169,6 +1169,11 @@ static const struct mlxsw_sp1_port_link_mode mlxsw_sp1_port_link_mode[] = {
 		.mask_ethtool = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
 		.speed = SPEED_100000,
 	},
+	{
+		.mask = MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
+		.mask_ethtool = ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+		.speed = SPEED_100000,
+	},
 };
 #define MLXSW_SP1_PORT_LINK_MODE_LEN ARRAY_SIZE(mlxsw_sp1_port_link_mode)


@@ -5951,6 +5951,10 @@ mlxsw_sp_router_fib4_replace(struct mlxsw_sp *mlxsw_sp,
 	if (mlxsw_sp->router->aborted)
 		return 0;
+	if (fen_info->fi->nh &&
+	    !mlxsw_sp_nexthop_obj_group_lookup(mlxsw_sp, fen_info->fi->nh->id))
+		return 0;
 	fib_node = mlxsw_sp_fib_node_get(mlxsw_sp, fen_info->tb_id,
 					 &fen_info->dst, sizeof(fen_info->dst),
 					 fen_info->dst_len,
@@ -6601,6 +6605,9 @@ static int mlxsw_sp_router_fib6_replace(struct mlxsw_sp *mlxsw_sp,
 	if (mlxsw_sp_fib6_rt_should_ignore(rt))
 		return 0;
+	if (rt->nh && !mlxsw_sp_nexthop_obj_group_lookup(mlxsw_sp, rt->nh->id))
+		return 0;
 	fib_node = mlxsw_sp_fib_node_get(mlxsw_sp, rt->fib6_table->tb6_id,
 					 &rt->fib6_dst.addr,
 					 sizeof(rt->fib6_dst.addr),


@@ -613,7 +613,8 @@ static const struct mlxsw_sx_port_link_mode mlxsw_sx_port_link_mode[] = {
 	{
 		.mask = MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 |
 			MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 |
-			MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4,
+			MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 |
+			MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
 		.speed = 100000,
 	},
 };


@@ -2040,7 +2040,7 @@ lan743x_rx_trim_skb(struct sk_buff *skb, int frame_length)
 		dev_kfree_skb_irq(skb);
 		return NULL;
 	}
-	frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 2);
+	frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 4);
 	if (skb->len > frame_length) {
 		skb->tail -= skb->len - frame_length;
 		skb->len = frame_length;


@@ -13,6 +13,7 @@ if NET_VENDOR_MICROSEMI
 # Users should depend on NET_SWITCHDEV, HAS_IOMEM
 config MSCC_OCELOT_SWITCH_LIB
+	select NET_DEVLINK
 	select REGMAP_MMIO
 	select PACKING
 	select PHYLIB


@@ -540,13 +540,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
 			return -EOPNOTSUPP;
 		}
+		flow_rule_match_ipv4_addrs(rule, &match);
 		if (filter->block_id == VCAP_IS1 && *(u32 *)&match.mask->dst) {
 			NL_SET_ERR_MSG_MOD(extack,
 					   "Key type S1_NORMAL cannot match on destination IP");
 			return -EOPNOTSUPP;
 		}
-		flow_rule_match_ipv4_addrs(rule, &match);
 		tmp = &filter->key.ipv4.sip.value.addr[0];
 		memcpy(tmp, &match.key->src, 4);


@@ -767,7 +767,7 @@ static void r8168fp_adjust_ocp_cmd(struct rtl8169_private *tp, u32 *cmd, int typ
 	if (type == ERIAR_OOB &&
 	    (tp->mac_version == RTL_GIGA_MAC_VER_52 ||
 	     tp->mac_version == RTL_GIGA_MAC_VER_53))
-		*cmd |= 0x7f0 << 18;
+		*cmd |= 0xf70 << 18;
 }
 DECLARE_RTL_COND(rtl_eriar_cond)


@@ -560,6 +560,8 @@ static struct sh_eth_cpu_data r7s72100_data = {
 			  EESR_TDE,
 	.fdr_value = 0x0000070f,
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
 	.no_psr = 1,
 	.apr = 1,
 	.mpr = 1,
@@ -780,6 +782,8 @@ static struct sh_eth_cpu_data r7s9210_data = {
 	.fdr_value = 0x0000070f,
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
 	.apr = 1,
 	.mpr = 1,
 	.tpauser = 1,
@@ -1089,6 +1093,9 @@ static struct sh_eth_cpu_data sh771x_data = {
 			  EESIPR_CEEFIP | EESIPR_CELFIP |
 			  EESIPR_RRFIP | EESIPR_RTLFIP | EESIPR_RTSFIP |
 			  EESIPR_PREIP | EESIPR_CERFIP,
+	.trscer_err_mask = DESC_I_RINT8,
 	.tsu = 1,
 	.dual_port = 1,
 };


@@ -233,6 +233,7 @@ static void common_default_data(struct plat_stmmacenet_data *plat)
 static int intel_mgbe_common_data(struct pci_dev *pdev,
 				  struct plat_stmmacenet_data *plat)
 {
+	char clk_name[20];
 	int ret;
 	int i;
@@ -301,8 +302,10 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
 	plat->eee_usecs_rate = plat->clk_ptp_rate;
 	/* Set system clock */
+	sprintf(clk_name, "%s-%s", "stmmac", pci_name(pdev));
 	plat->stmmac_clk = clk_register_fixed_rate(&pdev->dev,
-						   "stmmac-clk", NULL, 0,
+						   clk_name, NULL, 0,
 						   plat->clk_ptp_rate);
 	if (IS_ERR(plat->stmmac_clk)) {
@@ -446,7 +449,7 @@ static int tgl_common_data(struct pci_dev *pdev,
 	return intel_mgbe_common_data(pdev, plat);
 }
-static int tgl_sgmii_data(struct pci_dev *pdev,
+static int tgl_sgmii_phy0_data(struct pci_dev *pdev,
 			  struct plat_stmmacenet_data *plat)
 {
 	plat->bus_id = 1;
@@ -456,11 +459,25 @@ static int tgl_sgmii_data(struct pci_dev *pdev,
 	return tgl_common_data(pdev, plat);
 }
-static struct stmmac_pci_info tgl_sgmii1g_info = {
-	.setup = tgl_sgmii_data,
+static struct stmmac_pci_info tgl_sgmii1g_phy0_info = {
+	.setup = tgl_sgmii_phy0_data,
 };
-static int adls_sgmii_data(struct pci_dev *pdev,
+static int tgl_sgmii_phy1_data(struct pci_dev *pdev,
+			       struct plat_stmmacenet_data *plat)
+{
+	plat->bus_id = 2;
+	plat->phy_interface = PHY_INTERFACE_MODE_SGMII;
+	plat->serdes_powerup = intel_serdes_powerup;
+	plat->serdes_powerdown = intel_serdes_powerdown;
+	return tgl_common_data(pdev, plat);
+}
+static struct stmmac_pci_info tgl_sgmii1g_phy1_info = {
+	.setup = tgl_sgmii_phy1_data,
+};
+static int adls_sgmii_phy0_data(struct pci_dev *pdev,
 			  struct plat_stmmacenet_data *plat)
 {
 	plat->bus_id = 1;
@@ -471,10 +488,24 @@ static int adls_sgmii_data(struct pci_dev *pdev,
 	return tgl_common_data(pdev, plat);
 }
-static struct stmmac_pci_info adls_sgmii1g_info = {
-	.setup = adls_sgmii_data,
+static struct stmmac_pci_info adls_sgmii1g_phy0_info = {
+	.setup = adls_sgmii_phy0_data,
 };
+static int adls_sgmii_phy1_data(struct pci_dev *pdev,
+				struct plat_stmmacenet_data *plat)
+{
+	plat->bus_id = 2;
+	plat->phy_interface = PHY_INTERFACE_MODE_SGMII;
+	/* SerDes power up and power down are done in BIOS for ADL */
+	return tgl_common_data(pdev, plat);
+}
+static struct stmmac_pci_info adls_sgmii1g_phy1_info = {
+	.setup = adls_sgmii_phy1_data,
+};
 static const struct stmmac_pci_func_data galileo_stmmac_func_data[] = {
 	{
 		.func = 6,
@@ -756,11 +787,11 @@ static const struct pci_device_id intel_eth_pci_id_table[] = {
 	{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_RGMII1G_ID, &ehl_pse1_rgmii1g_info) },
 	{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII1G_ID, &ehl_pse1_sgmii1g_info) },
 	{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII2G5_ID, &ehl_pse1_sgmii1g_info) },
-	{ PCI_DEVICE_DATA(INTEL, TGL_SGMII1G_ID, &tgl_sgmii1g_info) },
-	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_0_ID, &tgl_sgmii1g_info) },
-	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_1_ID, &tgl_sgmii1g_info) },
-	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_0_ID, &adls_sgmii1g_info) },
-	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_1_ID, &adls_sgmii1g_info) },
+	{ PCI_DEVICE_DATA(INTEL, TGL_SGMII1G_ID, &tgl_sgmii1g_phy0_info) },
+	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_0_ID, &tgl_sgmii1g_phy0_info) },
+	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_1_ID, &tgl_sgmii1g_phy1_info) },
+	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_0_ID, &adls_sgmii1g_phy0_info) },
+	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_1_ID, &adls_sgmii1g_phy1_info) },
 	{}
 };
 MODULE_DEVICE_TABLE(pci, intel_eth_pci_id_table);


@@ -402,20 +402,54 @@ static void dwmac4_rd_set_tx_ic(struct dma_desc *p)
 	p->des2 |= cpu_to_le32(TDES2_INTERRUPT_ON_COMPLETION);
 }
-static void dwmac4_display_ring(void *head, unsigned int size, bool rx)
+static void dwmac4_display_ring(void *head, unsigned int size, bool rx,
+				dma_addr_t dma_rx_phy, unsigned int desc_size)
 {
-	struct dma_desc *p = (struct dma_desc *)head;
+	dma_addr_t dma_addr;
 	int i;
 	pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
+	if (desc_size == sizeof(struct dma_desc)) {
+		struct dma_desc *p = (struct dma_desc *)head;
 		for (i = 0; i < size; i++) {
-		pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
-			i, (unsigned int)virt_to_phys(p),
+			dma_addr = dma_rx_phy + i * sizeof(*p);
+			pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
+				i, &dma_addr,
 				le32_to_cpu(p->des0), le32_to_cpu(p->des1),
 				le32_to_cpu(p->des2), le32_to_cpu(p->des3));
 			p++;
 		}
+	} else if (desc_size == sizeof(struct dma_extended_desc)) {
+		struct dma_extended_desc *extp = (struct dma_extended_desc *)head;
+		for (i = 0; i < size; i++) {
+			dma_addr = dma_rx_phy + i * sizeof(*extp);
+			pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
+				i, &dma_addr,
+				le32_to_cpu(extp->basic.des0), le32_to_cpu(extp->basic.des1),
+				le32_to_cpu(extp->basic.des2), le32_to_cpu(extp->basic.des3),
+				le32_to_cpu(extp->des4), le32_to_cpu(extp->des5),
+				le32_to_cpu(extp->des6), le32_to_cpu(extp->des7));
+			extp++;
+		}
+	} else if (desc_size == sizeof(struct dma_edesc)) {
+		struct dma_edesc *ep = (struct dma_edesc *)head;
+		for (i = 0; i < size; i++) {
+			dma_addr = dma_rx_phy + i * sizeof(*ep);
+			pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
+				i, &dma_addr,
+				le32_to_cpu(ep->des4), le32_to_cpu(ep->des5),
+				le32_to_cpu(ep->des6), le32_to_cpu(ep->des7),
+				le32_to_cpu(ep->basic.des0), le32_to_cpu(ep->basic.des1),
+				le32_to_cpu(ep->basic.des2), le32_to_cpu(ep->basic.des3));
+			ep++;
+		}
+	} else {
+		pr_err("unsupported descriptor!");
+	}
 }
 static void dwmac4_set_mss_ctxt(struct dma_desc *p, unsigned int mss)
@@ -499,10 +533,15 @@ static void dwmac4_get_rx_header_len(struct dma_desc *p, unsigned int *len)
 	*len = le32_to_cpu(p->des2) & RDES2_HL;
 }
-static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
+static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool buf2_valid)
 {
 	p->des2 = cpu_to_le32(lower_32_bits(addr));
-	p->des3 = cpu_to_le32(upper_32_bits(addr) | RDES3_BUFFER2_VALID_ADDR);
+	p->des3 = cpu_to_le32(upper_32_bits(addr));
+	if (buf2_valid)
+		p->des3 |= cpu_to_le32(RDES3_BUFFER2_VALID_ADDR);
+	else
+		p->des3 &= cpu_to_le32(~RDES3_BUFFER2_VALID_ADDR);
 }
 static void dwmac4_set_tbs(struct dma_edesc *p, u32 sec, u32 nsec)


@@ -124,6 +124,23 @@ static void dwmac4_dma_init_channel(void __iomem *ioaddr,
 	       ioaddr + DMA_CHAN_INTR_ENA(chan));
 }
+static void dwmac410_dma_init_channel(void __iomem *ioaddr,
+				      struct stmmac_dma_cfg *dma_cfg, u32 chan)
+{
+	u32 value;
+	/* common channel control register config */
+	value = readl(ioaddr + DMA_CHAN_CONTROL(chan));
+	if (dma_cfg->pblx8)
+		value = value | DMA_BUS_MODE_PBL;
+	writel(value, ioaddr + DMA_CHAN_CONTROL(chan));
+	/* Mask interrupts by writing to CSR7 */
+	writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
+	       ioaddr + DMA_CHAN_INTR_ENA(chan));
+}
 static void dwmac4_dma_init(void __iomem *ioaddr,
 			    struct stmmac_dma_cfg *dma_cfg, int atds)
 {
@@ -523,7 +540,7 @@ const struct stmmac_dma_ops dwmac4_dma_ops = {
 const struct stmmac_dma_ops dwmac410_dma_ops = {
 	.reset = dwmac4_dma_reset,
 	.init = dwmac4_dma_init,
-	.init_chan = dwmac4_dma_init_channel,
+	.init_chan = dwmac410_dma_init_channel,
 	.init_rx_chan = dwmac4_dma_init_rx_chan,
 	.init_tx_chan = dwmac4_dma_init_tx_chan,
 	.axi = dwmac4_dma_axi,


@@ -53,10 +53,6 @@ void dwmac4_dma_stop_tx(void __iomem *ioaddr, u32 chan)
 	value &= ~DMA_CONTROL_ST;
 	writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));
-	value = readl(ioaddr + GMAC_CONFIG);
-	value &= ~GMAC_CONFIG_TE;
-	writel(value, ioaddr + GMAC_CONFIG);
 }
 void dwmac4_dma_start_rx(void __iomem *ioaddr, u32 chan)


@@ -292,7 +292,7 @@ static void dwxgmac2_get_rx_header_len(struct dma_desc *p, unsigned int *len)
 	*len = le32_to_cpu(p->des2) & XGMAC_RDES2_HL;
 }
-static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
+static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool is_valid)
 {
 	p->des2 = cpu_to_le32(lower_32_bits(addr));
 	p->des3 = cpu_to_le32(upper_32_bits(addr));


@@ -417,19 +417,22 @@ static int enh_desc_get_rx_timestamp_status(void *desc, void *next_desc,
 		}
 	}
-static void enh_desc_display_ring(void *head, unsigned int size, bool rx)
+static void enh_desc_display_ring(void *head, unsigned int size, bool rx,
+				  dma_addr_t dma_rx_phy, unsigned int desc_size)
 {
 	struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
+	dma_addr_t dma_addr;
 	int i;
 	pr_info("Extended %s descriptor ring:\n", rx ? "RX" : "TX");
 	for (i = 0; i < size; i++) {
 		u64 x;
+		dma_addr = dma_rx_phy + i * sizeof(*ep);
 		x = *(u64 *)ep;
-		pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
-			i, (unsigned int)virt_to_phys(ep),
+		pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
+			i, &dma_addr,
 			(unsigned int)x, (unsigned int)(x >> 32),
 			ep->basic.des2, ep->basic.des3);
 		ep++;


@@ -78,7 +78,8 @@ struct stmmac_desc_ops {
 	/* get rx timestamp status */
 	int (*get_rx_timestamp_status)(void *desc, void *next_desc, u32 ats);
 	/* Display ring */
-	void (*display_ring)(void *head, unsigned int size, bool rx);
+	void (*display_ring)(void *head, unsigned int size, bool rx,
+			     dma_addr_t dma_rx_phy, unsigned int desc_size);
 	/* set MSS via context descriptor */
 	void (*set_mss)(struct dma_desc *p, unsigned int mss);
 	/* get descriptor skbuff address */
@@ -91,7 +92,7 @@
 	int (*get_rx_hash)(struct dma_desc *p, u32 *hash,
 			   enum pkt_hash_types *type);
 	void (*get_rx_header_len)(struct dma_desc *p, unsigned int *len);
-	void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr);
+	void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr, bool buf2_valid);
 	void (*set_sarc)(struct dma_desc *p, u32 sarc_type);
 	void (*set_vlan_tag)(struct dma_desc *p, u16 tag, u16 inner_tag,
 			     u32 inner_type);


@@ -269,19 +269,22 @@ static int ndesc_get_rx_timestamp_status(void *desc, void *next_desc, u32 ats)
 		return 1;
 }
-static void ndesc_display_ring(void *head, unsigned int size, bool rx)
+static void ndesc_display_ring(void *head, unsigned int size, bool rx,
+			       dma_addr_t dma_rx_phy, unsigned int desc_size)
 {
 	struct dma_desc *p = (struct dma_desc *)head;
+	dma_addr_t dma_addr;
 	int i;
 	pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
 	for (i = 0; i < size; i++) {
 		u64 x;
+		dma_addr = dma_rx_phy + i * sizeof(*p);
 		x = *(u64 *)p;
-		pr_info("%03d [0x%x]: 0x%x 0x%x 0x%x 0x%x",
-			i, (unsigned int)virt_to_phys(p),
+		pr_info("%03d [%pad]: 0x%x 0x%x 0x%x 0x%x",
+			i, &dma_addr,
 			(unsigned int)x, (unsigned int)(x >> 32),
 			p->des2, p->des3);
 		p++;


@@ -1133,6 +1133,7 @@ static int stmmac_phy_setup(struct stmmac_priv *priv)
 static void stmmac_display_rx_rings(struct stmmac_priv *priv)
 {
 	u32 rx_cnt = priv->plat->rx_queues_to_use;
+	unsigned int desc_size;
 	void *head_rx;
 	u32 queue;
@@ -1142,19 +1143,24 @@ static void stmmac_display_rx_rings(struct stmmac_priv *priv)
 		pr_info("\tRX Queue %u rings\n", queue);
-		if (priv->extend_desc)
+		if (priv->extend_desc) {
 			head_rx = (void *)rx_q->dma_erx;
-		else
+			desc_size = sizeof(struct dma_extended_desc);
+		} else {
 			head_rx = (void *)rx_q->dma_rx;
+			desc_size = sizeof(struct dma_desc);
+		}
 		/* Display RX ring */
-		stmmac_display_ring(priv, head_rx, priv->dma_rx_size, true);
+		stmmac_display_ring(priv, head_rx, priv->dma_rx_size, true,
+				    rx_q->dma_rx_phy, desc_size);
 	}
 }
 static void stmmac_display_tx_rings(struct stmmac_priv *priv)
 {
 	u32 tx_cnt = priv->plat->tx_queues_to_use;
+	unsigned int desc_size;
 	void *head_tx;
 	u32 queue;
@@ -1164,14 +1170,19 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv)
 		pr_info("\tTX Queue %d rings\n", queue);
-		if (priv->extend_desc)
+		if (priv->extend_desc) {
 			head_tx = (void *)tx_q->dma_etx;
-		else if (tx_q->tbs & STMMAC_TBS_AVAIL)
+			desc_size = sizeof(struct dma_extended_desc);
+		} else if (tx_q->tbs & STMMAC_TBS_AVAIL) {
 			head_tx = (void *)tx_q->dma_entx;
-		else
+			desc_size = sizeof(struct dma_edesc);
+		} else {
 			head_tx = (void *)tx_q->dma_tx;
+			desc_size = sizeof(struct dma_desc);
+		}
-		stmmac_display_ring(priv, head_tx, priv->dma_tx_size, false);
+		stmmac_display_ring(priv, head_tx, priv->dma_tx_size, false,
+				    tx_q->dma_tx_phy, desc_size);
 	}
 }
@@ -1303,9 +1314,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
 			return -ENOMEM;
 		buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
-		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
+		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
 	} else {
 		buf->sec_page = NULL;
+		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
 	}
 	buf->addr = page_pool_get_dma_addr(buf->page);
@@ -1367,6 +1379,88 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i)
 	}
 }
+/**
+ * stmmac_reinit_rx_buffers - reinit the RX descriptor buffer.
+ * @priv: driver private structure
+ * Description: this function is called to re-allocate a receive buffer, perform
+ * the DMA mapping and init the descriptor.
+ */
+static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv)
+{
+	u32 rx_count = priv->plat->rx_queues_to_use;
+	u32 queue;
+	int i;
+	for (queue = 0; queue < rx_count; queue++) {
+		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+		for (i = 0; i < priv->dma_rx_size; i++) {
+			struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
+			if (buf->page) {
+				page_pool_recycle_direct(rx_q->page_pool, buf->page);
+				buf->page = NULL;
+			}
+			if (priv->sph && buf->sec_page) {
+				page_pool_recycle_direct(rx_q->page_pool, buf->sec_page);
+				buf->sec_page = NULL;
+			}
+		}
+	}
+	for (queue = 0; queue < rx_count; queue++) {
+		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+		for (i = 0; i < priv->dma_rx_size; i++) {
+			struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
+			struct dma_desc *p;
+			if (priv->extend_desc)
+				p = &((rx_q->dma_erx + i)->basic);
+			else
+				p = rx_q->dma_rx + i;
+			if (!buf->page) {
+				buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+				if (!buf->page)
+					goto err_reinit_rx_buffers;
+				buf->addr = page_pool_get_dma_addr(buf->page);
+			}
+			if (priv->sph && !buf->sec_page) {
+				buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
+				if (!buf->sec_page)
+					goto err_reinit_rx_buffers;
+				buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
+			}
+			stmmac_set_desc_addr(priv, p, buf->addr);
+			if (priv->sph)
+				stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
+			else
+				stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
+			if (priv->dma_buf_sz == BUF_SIZE_16KiB)
+				stmmac_init_desc3(priv, p);
+		}
+	}
+	return;
+err_reinit_rx_buffers:
+	do {
+		while (--i >= 0)
+			stmmac_free_rx_buffer(priv, queue, i);
+		if (queue == 0)
+			break;
+		i = priv->dma_rx_size;
+	} while (queue-- > 0);
+}
 /**
  * init_dma_rx_desc_rings - init the RX descriptor rings
  * @dev: net device structure
@@ -3648,7 +3742,10 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 					   DMA_FROM_DEVICE);
 		stmmac_set_desc_addr(priv, p, buf->addr);
-		stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
+		if (priv->sph)
+			stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
+		else
+			stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
 		stmmac_refill_desc3(priv, rx_q, p);
 		rx_q->rx_count_frames++;
@@ -3736,18 +3833,23 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 	unsigned int count = 0, error = 0, len = 0;
 	int status = 0, coe = priv->hw->rx_csum;
 	unsigned int next_entry = rx_q->cur_rx;
+	unsigned int desc_size;
 	struct sk_buff *skb = NULL;
 	if (netif_msg_rx_status(priv)) {
 		void *rx_head;
 		netdev_dbg(priv->dev, "%s: descriptor ring:\n", __func__);
-		if (priv->extend_desc)
+		if (priv->extend_desc) {
 			rx_head = (void *)rx_q->dma_erx;
-		else
+			desc_size = sizeof(struct dma_extended_desc);
+		} else {
 			rx_head = (void *)rx_q->dma_rx;
+			desc_size = sizeof(struct dma_desc);
+		}
-		stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true);
+		stmmac_display_ring(priv, rx_head, priv->dma_rx_size, true,
+				    rx_q->dma_rx_phy, desc_size);
 	}
 	while (count < limit) {
 		unsigned int buf1_len = 0, buf2_len = 0;
@@ -4315,24 +4417,27 @@ static int stmmac_set_mac_address(struct net_device *ndev, void *addr)
 static struct dentry *stmmac_fs_dir;

 static void sysfs_display_ring(void *head, int size, int extend_desc,
-			       struct seq_file *seq)
+			       struct seq_file *seq, dma_addr_t dma_phy_addr)
 {
 	int i;
 	struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
 	struct dma_desc *p = (struct dma_desc *)head;
+	dma_addr_t dma_addr;

 	for (i = 0; i < size; i++) {
 		if (extend_desc) {
-			seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
-				   i, (unsigned int)virt_to_phys(ep),
+			dma_addr = dma_phy_addr + i * sizeof(*ep);
+			seq_printf(seq, "%d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
+				   i, &dma_addr,
 				   le32_to_cpu(ep->basic.des0),
 				   le32_to_cpu(ep->basic.des1),
 				   le32_to_cpu(ep->basic.des2),
 				   le32_to_cpu(ep->basic.des3));
 			ep++;
 		} else {
-			seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
-				   i, (unsigned int)virt_to_phys(p),
+			dma_addr = dma_phy_addr + i * sizeof(*p);
+			seq_printf(seq, "%d [%pad]: 0x%x 0x%x 0x%x 0x%x\n",
+				   i, &dma_addr,
 				   le32_to_cpu(p->des0), le32_to_cpu(p->des1),
 				   le32_to_cpu(p->des2), le32_to_cpu(p->des3));
 			p++;
@@ -4360,11 +4465,11 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v)
 		if (priv->extend_desc) {
 			seq_printf(seq, "Extended descriptor ring:\n");
 			sysfs_display_ring((void *)rx_q->dma_erx,
-					   priv->dma_rx_size, 1, seq);
+					   priv->dma_rx_size, 1, seq, rx_q->dma_rx_phy);
 		} else {
 			seq_printf(seq, "Descriptor ring:\n");
 			sysfs_display_ring((void *)rx_q->dma_rx,
-					   priv->dma_rx_size, 0, seq);
+					   priv->dma_rx_size, 0, seq, rx_q->dma_rx_phy);
 		}
 	}
@@ -4376,11 +4481,11 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v)
 		if (priv->extend_desc) {
 			seq_printf(seq, "Extended descriptor ring:\n");
 			sysfs_display_ring((void *)tx_q->dma_etx,
-					   priv->dma_tx_size, 1, seq);
+					   priv->dma_tx_size, 1, seq, tx_q->dma_tx_phy);
 		} else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) {
 			seq_printf(seq, "Descriptor ring:\n");
 			sysfs_display_ring((void *)tx_q->dma_tx,
-					   priv->dma_tx_size, 0, seq);
+					   priv->dma_tx_size, 0, seq, tx_q->dma_tx_phy);
 		}
 	}
@@ -5144,13 +5249,16 @@ int stmmac_dvr_remove(struct device *dev)
 	netdev_info(priv->dev, "%s: removing driver", __func__);

 	stmmac_stop_all_dma(priv);
-
-	if (priv->plat->serdes_powerdown)
-		priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
-
 	stmmac_mac_set(priv, priv->ioaddr, false);
 	netif_carrier_off(ndev);
 	unregister_netdev(ndev);
+
+	/* Serdes power down needs to happen after VLAN filter
+	 * is deleted that is triggered by unregister_netdev().
+	 */
+	if (priv->plat->serdes_powerdown)
+		priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
+
 #ifdef CONFIG_DEBUG_FS
 	stmmac_exit_fs(ndev);
 #endif
@@ -5257,6 +5365,8 @@ static void stmmac_reset_queues_param(struct stmmac_priv *priv)
 		tx_q->cur_tx = 0;
 		tx_q->dirty_tx = 0;
 		tx_q->mss = 0;
+
+		netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
 	}
 }
@@ -5318,7 +5428,7 @@ int stmmac_resume(struct device *dev)
 	mutex_lock(&priv->lock);

 	stmmac_reset_queues_param(priv);
+	stmmac_reinit_rx_buffers(priv);
 	stmmac_free_tx_skbufs(priv);
 	stmmac_clear_descriptors(priv);


@@ -3931,8 +3931,6 @@ static void niu_xmac_interrupt(struct niu *np)
 		mp->rx_mcasts += RXMAC_MC_FRM_CNT_COUNT;
 	if (val & XRXMAC_STATUS_RXBCAST_CNT_EXP)
 		mp->rx_bcasts += RXMAC_BC_FRM_CNT_COUNT;
-	if (val & XRXMAC_STATUS_RXBCAST_CNT_EXP)
-		mp->rx_bcasts += RXMAC_BC_FRM_CNT_COUNT;
 	if (val & XRXMAC_STATUS_RXHIST1_CNT_EXP)
 		mp->rx_hist_cnt1 += RXMAC_HIST_CNT1_COUNT;
 	if (val & XRXMAC_STATUS_RXHIST2_CNT_EXP)


@@ -2044,6 +2044,7 @@ bdx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/*bdx_hw_reset(priv); */
 	if (bdx_read_mac(priv)) {
 		pr_err("load MAC address failed\n");
+		err = -EFAULT;
 		goto err_out_iomap;
 	}
 	SET_NETDEV_DEV(ndev, &pdev->dev);


@@ -171,11 +171,6 @@ static void sp_encaps(struct sixpack *sp, unsigned char *icp, int len)
 		goto out_drop;
 	}

-	if (len > sp->mtu) {	/* sp->mtu = AX25_MTU = max. PACLEN = 256 */
-		msg = "oversized transmit packet!";
-		goto out_drop;
-	}
-
 	if (p[0] > 5) {
 		msg = "invalid KISS command";
 		goto out_drop;


@@ -229,7 +229,7 @@ int netvsc_send(struct net_device *net,
 		bool xdp_tx);
 void netvsc_linkstatus_callback(struct net_device *net,
 				struct rndis_message *resp,
-				void *data);
+				void *data, u32 data_buflen);
 int netvsc_recv_callback(struct net_device *net,
 			 struct netvsc_device *nvdev,
 			 struct netvsc_channel *nvchan);


@@ -744,7 +744,7 @@ static netdev_tx_t netvsc_start_xmit(struct sk_buff *skb,
  */
 void netvsc_linkstatus_callback(struct net_device *net,
 				struct rndis_message *resp,
-				void *data)
+				void *data, u32 data_buflen)
 {
 	struct rndis_indicate_status *indicate = &resp->msg.indicate_status;
 	struct net_device_context *ndev_ctx = netdev_priv(net);
@@ -765,11 +765,16 @@ void netvsc_linkstatus_callback(struct net_device *net,
 	if (indicate->status == RNDIS_STATUS_LINK_SPEED_CHANGE) {
 		u32 speed;

-		/* Validate status_buf_offset */
+		/* Validate status_buf_offset and status_buflen.
+		 *
+		 * Certain (pre-Fe) implementations of Hyper-V's vSwitch didn't account
+		 * for the status buffer field in resp->msg_len; perform the validation
+		 * using data_buflen (>= resp->msg_len).
+		 */
 		if (indicate->status_buflen < sizeof(speed) ||
 		    indicate->status_buf_offset < sizeof(*indicate) ||
-		    resp->msg_len - RNDIS_HEADER_SIZE < indicate->status_buf_offset ||
-		    resp->msg_len - RNDIS_HEADER_SIZE - indicate->status_buf_offset
+		    data_buflen - RNDIS_HEADER_SIZE < indicate->status_buf_offset ||
+		    data_buflen - RNDIS_HEADER_SIZE - indicate->status_buf_offset
 		    < indicate->status_buflen) {
 			netdev_err(net, "invalid rndis_indicate_status packet\n");
 			return;


@@ -620,7 +620,7 @@ int rndis_filter_receive(struct net_device *ndev,
 	case RNDIS_MSG_INDICATE:
 		/* notification msgs */
-		netvsc_linkstatus_callback(ndev, rndis_msg, data);
+		netvsc_linkstatus_callback(ndev, rndis_msg, data, buflen);
 		break;
 	default:
 		netdev_err(ndev,


@@ -294,6 +294,7 @@ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
 	dev_net_set(dev, nsim_dev_net(nsim_dev));
 	ns = netdev_priv(dev);
 	ns->netdev = dev;
+	u64_stats_init(&ns->syncp);
 	ns->nsim_dev = nsim_dev;
 	ns->nsim_dev_port = nsim_dev_port;
 	ns->nsim_bus_dev = nsim_dev->nsim_bus_dev;


@@ -290,6 +290,7 @@ static int dp83822_config_intr(struct phy_device *phydev)

 static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
 {
+	bool trigger_machine = false;
 	int irq_status;

 	/* The MISR1 and MISR2 registers are holding the interrupt status in
@@ -305,7 +306,7 @@ static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;

 	irq_status = phy_read(phydev, MII_DP83822_MISR2);
 	if (irq_status < 0) {
@@ -313,11 +314,11 @@ static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;

-	return IRQ_NONE;
-
-trigger_machine:
+	if (!trigger_machine)
+		return IRQ_NONE;
+
 	phy_trigger_machine(phydev);

 	return IRQ_HANDLED;


@@ -264,6 +264,7 @@ static int dp83811_config_intr(struct phy_device *phydev)

 static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 {
+	bool trigger_machine = false;
 	int irq_status;

 	/* The INT_STAT registers 1, 2 and 3 are holding the interrupt status
@@ -279,7 +280,7 @@ static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;

 	irq_status = phy_read(phydev, MII_DP83811_INT_STAT2);
 	if (irq_status < 0) {
@@ -287,7 +288,7 @@ static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;

 	irq_status = phy_read(phydev, MII_DP83811_INT_STAT3);
 	if (irq_status < 0) {
@@ -295,11 +296,11 @@ static irqreturn_t dp83811_handle_interrupt(struct phy_device *phydev)
 		return IRQ_NONE;
 	}
 	if (irq_status & ((irq_status & GENMASK(7, 0)) << 8))
-		goto trigger_machine;
+		trigger_machine = true;

-	return IRQ_NONE;
-
-trigger_machine:
+	if (!trigger_machine)
+		return IRQ_NONE;
+
 	phy_trigger_machine(phydev);

 	return IRQ_HANDLED;


@@ -276,14 +276,16 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,

 	phydev->autoneg = autoneg;

-	phydev->speed = speed;
+	if (autoneg == AUTONEG_DISABLE) {
+		phydev->speed = speed;
+		phydev->duplex = duplex;
+	}

 	linkmode_copy(phydev->advertising, advertising);

 	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
 			 phydev->advertising, autoneg == AUTONEG_ENABLE);

-	phydev->duplex = duplex;
-
 	phydev->master_slave_set = cmd->base.master_slave_cfg;
 	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;


@@ -230,7 +230,6 @@ static struct phy_driver genphy_driver;
 static LIST_HEAD(phy_fixup_list);
 static DEFINE_MUTEX(phy_fixup_lock);

-#ifdef CONFIG_PM
 static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
 {
 	struct device_driver *drv = phydev->mdio.dev.driver;
@@ -270,7 +269,7 @@ out:
 	return !phydev->suspended;
 }

-static int mdio_bus_phy_suspend(struct device *dev)
+static __maybe_unused int mdio_bus_phy_suspend(struct device *dev)
 {
 	struct phy_device *phydev = to_phy_device(dev);
@@ -290,7 +289,7 @@ static int mdio_bus_phy_suspend(struct device *dev)
 	return phy_suspend(phydev);
 }

-static int mdio_bus_phy_resume(struct device *dev)
+static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
 {
 	struct phy_device *phydev = to_phy_device(dev);
 	int ret;
@@ -316,7 +315,6 @@ no_resume:
 static SIMPLE_DEV_PM_OPS(mdio_bus_phy_pm_ops, mdio_bus_phy_suspend,
 			 mdio_bus_phy_resume);
-#endif /* CONFIG_PM */

 /**
  * phy_register_fixup - creates a new phy_fixup and adds it to the list


@@ -851,17 +851,17 @@ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_
 	/* check if we got everything */
 	if (!ctx->data) {
-		dev_dbg(&intf->dev, "CDC Union missing and no IAD found\n");
+		dev_err(&intf->dev, "CDC Union missing and no IAD found\n");
 		goto error;
 	}

 	if (cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting)) {
 		if (!ctx->mbim_desc) {
-			dev_dbg(&intf->dev, "MBIM functional descriptor missing\n");
+			dev_err(&intf->dev, "MBIM functional descriptor missing\n");
 			goto error;
 		}
 	} else {
 		if (!ctx->ether_desc || !ctx->func_desc) {
-			dev_dbg(&intf->dev, "NCM or ECM functional descriptors missing\n");
+			dev_err(&intf->dev, "NCM or ECM functional descriptors missing\n");
 			goto error;
 		}
 	}
@@ -870,7 +870,7 @@ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_
 	if (ctx->data != ctx->control) {
 		temp = usb_driver_claim_interface(driver, ctx->data, dev);
 		if (temp) {
-			dev_dbg(&intf->dev, "failed to claim data intf\n");
+			dev_err(&intf->dev, "failed to claim data intf\n");
 			goto error;
 		}
 	}
@@ -926,7 +926,7 @@ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_
 	if (ctx->ether_desc) {
 		temp = usbnet_get_ethernet_addr(dev, ctx->ether_desc->iMACAddress);
 		if (temp) {
-			dev_dbg(&intf->dev, "failed to get mac address\n");
+			dev_err(&intf->dev, "failed to get mac address\n");
 			goto error2;
 		}
 		dev_info(&intf->dev, "MAC-Address: %pM\n", dev->net->dev_addr);


@@ -429,13 +429,6 @@ static ssize_t add_mux_store(struct device *d, struct device_attribute *attr, c
 		goto err;
 	}

-	/* we don't want to modify a running netdev */
-	if (netif_running(dev->net)) {
-		netdev_err(dev->net, "Cannot change a running device\n");
-		ret = -EBUSY;
-		goto err;
-	}
-
 	ret = qmimux_register_device(dev->net, mux_id);
 	if (!ret) {
 		info->flags |= QMI_WWAN_FLAG_MUX;
@@ -465,13 +458,6 @@ static ssize_t del_mux_store(struct device *d, struct device_attribute *attr, c
 	if (!rtnl_trylock())
 		return restart_syscall();

-	/* we don't want to modify a running netdev */
-	if (netif_running(dev->net)) {
-		netdev_err(dev->net, "Cannot change a running device\n");
-		ret = -EBUSY;
-		goto err;
-	}
-
 	del_dev = qmimux_find_dev(dev, mux_id);
 	if (!del_dev) {
 		netdev_err(dev->net, "mux_id not present\n");


@@ -3021,29 +3021,6 @@ static void __rtl_set_wol(struct r8152 *tp, u32 wolopts)
 		device_set_wakeup_enable(&tp->udev->dev, false);
 }

-static void r8153_mac_clk_spd(struct r8152 *tp, bool enable)
-{
-	/* MAC clock speed down */
-	if (enable) {
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL,
-			       ALDPS_SPDWN_RATIO);
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2,
-			       EEE_SPDWN_RATIO);
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3,
-			       PKT_AVAIL_SPDWN_EN | SUSPEND_SPDWN_EN |
-			       U1U2_SPDWN_EN | L1_SPDWN_EN);
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL4,
-			       PWRSAVE_SPDWN_EN | RXDV_SPDWN_EN | TX10MIDLE_EN |
-			       TP100_SPDWN_EN | TP500_SPDWN_EN | EEE_SPDWN_EN |
-			       TP1000_SPDWN_EN);
-	} else {
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL, 0);
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, 0);
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, 0);
-		ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL4, 0);
-	}
-}
-
 static void r8153_u1u2en(struct r8152 *tp, bool enable)
 {
 	u8 u1u2[8];
@@ -3338,11 +3315,9 @@ static void rtl8153_runtime_enable(struct r8152 *tp, bool enable)
 	if (enable) {
 		r8153_u1u2en(tp, false);
 		r8153_u2p3en(tp, false);
-		r8153_mac_clk_spd(tp, true);
 		rtl_runtime_suspend_enable(tp, true);
 	} else {
 		rtl_runtime_suspend_enable(tp, false);
-		r8153_mac_clk_spd(tp, false);

 		switch (tp->version) {
 		case RTL_VER_03:
@@ -4718,7 +4693,6 @@ static void r8153_first_init(struct r8152 *tp)
 {
 	u32 ocp_data;

-	r8153_mac_clk_spd(tp, false);
 	rxdy_gated_en(tp, true);
 	r8153_teredo_off(tp);
@@ -4769,8 +4743,6 @@ static void r8153_enter_oob(struct r8152 *tp)
 {
 	u32 ocp_data;

-	r8153_mac_clk_spd(tp, true);
-
 	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
 	ocp_data &= ~NOW_IS_OOB;
 	ocp_write_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL, ocp_data);
@@ -5496,10 +5468,15 @@ static void r8153_init(struct r8152 *tp)
 		ocp_write_word(tp, MCU_TYPE_USB, USB_CONNECT_TIMER, 0x0001);

+	/* MAC clock speed down */
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL, 0);
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, 0);
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, 0);
+	ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL4, 0);
+
 	r8153_power_cut_en(tp, false);
 	rtl_runtime_suspend_enable(tp, false);
 	r8153_u1u2en(tp, true);
-	r8153_mac_clk_spd(tp, false);
 	usb_enable_lpm(tp->udev);

 	ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6);


@@ -887,7 +887,7 @@ int usbnet_open (struct net_device *net)

 	// insist peer be connected
 	if (info->check_connect && (retval = info->check_connect (dev)) < 0) {
-		netif_dbg(dev, ifup, dev->net, "can't open; %d\n", retval);
+		netif_err(dev, ifup, dev->net, "can't open; %d\n", retval);
 		goto done;
 	}


@@ -204,14 +204,18 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
 	priv->rx_skbuff = kcalloc(priv->rx_ring_size,
 				  sizeof(*priv->rx_skbuff),
 				  GFP_KERNEL);
-	if (!priv->rx_skbuff)
+	if (!priv->rx_skbuff) {
+		ret = -ENOMEM;
 		goto free_ucc_pram;
+	}

 	priv->tx_skbuff = kcalloc(priv->tx_ring_size,
 				  sizeof(*priv->tx_skbuff),
 				  GFP_KERNEL);
-	if (!priv->tx_skbuff)
+	if (!priv->tx_skbuff) {
+		ret = -ENOMEM;
 		goto free_rx_skbuff;
+	}

 	priv->skb_curtx = 0;
 	priv->skb_dirtytx = 0;


@@ -292,7 +292,6 @@ static int lapbeth_open(struct net_device *dev)
 		return -ENODEV;
 	}

-	netif_start_queue(dev);
 	return 0;
 }

@@ -300,8 +299,6 @@ static int lapbeth_close(struct net_device *dev)
 {
 	int err;

-	netif_stop_queue(dev);
-
 	if ((err = lapb_unregister(dev)) != LAPB_OK)
 		pr_err("lapb_unregister error: %d\n", err);


@@ -5450,8 +5450,8 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
 	}

 	if (ab->hw_params.vdev_start_delay &&
-	    (arvif->vdev_type == WMI_VDEV_TYPE_AP ||
-	     arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)) {
+	    arvif->vdev_type != WMI_VDEV_TYPE_AP &&
+	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
 		param.vdev_id = arvif->vdev_id;
 		param.peer_type = WMI_PEER_TYPE_DEFAULT;
 		param.peer_addr = ar->mac_addr;


@@ -1687,8 +1687,8 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab)
 		req->mem_seg[i].size = ab->qmi.target_mem[i].size;
 		req->mem_seg[i].type = ab->qmi.target_mem[i].type;
 		ath11k_dbg(ab, ATH11K_DBG_QMI,
-			   "qmi req mem_seg[%d] 0x%llx %u %u\n", i,
-			   ab->qmi.target_mem[i].paddr,
+			   "qmi req mem_seg[%d] %pad %u %u\n", i,
+			   &ab->qmi.target_mem[i].paddr,
 			   ab->qmi.target_mem[i].size,
 			   ab->qmi.target_mem[i].type);
 	}


@@ -177,7 +177,8 @@ struct ath_frame_info {
 	s8 txq;
 	u8 keyix;
 	u8 rtscts_rate;
-	u8 retries : 7;
+	u8 retries : 6;
+	u8 dyn_smps : 1;
 	u8 baw_tracked : 1;
 	u8 tx_power;
 	enum ath9k_key_type keytype:2;


@@ -1271,6 +1271,11 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
 				 is_40, is_sgi, is_sp);
 		if (rix < 8 && (tx_info->flags & IEEE80211_TX_CTL_STBC))
 			info->rates[i].RateFlags |= ATH9K_RATESERIES_STBC;
+		if (rix >= 8 && fi->dyn_smps) {
+			info->rates[i].RateFlags |=
+				ATH9K_RATESERIES_RTS_CTS;
+			info->flags |= ATH9K_TXDESC_CTSENA;
+		}

 		info->txpower[i] = ath_get_rate_txpower(sc, bf, rix,
 							is_40, false);
@@ -2114,6 +2119,7 @@ static void setup_frame_info(struct ieee80211_hw *hw,
 		fi->keyix = an->ps_key;
 	else
 		fi->keyix = ATH9K_TXKEYIX_INVALID;
+	fi->dyn_smps = sta && sta->smps_mode == IEEE80211_SMPS_DYNAMIC;
 	fi->keytype = keytype;
 	fi->framelen = framelen;
 	fi->tx_power = txpower;


@@ -271,12 +271,12 @@ static int iwl_pnvm_get_from_efi(struct iwl_trans *trans,
 	err = efivar_entry_get(pnvm_efivar, NULL, &package_size, package);
 	if (err) {
 		IWL_DEBUG_FW(trans,
-			     "PNVM UEFI variable not found %d (len %zd)\n",
+			     "PNVM UEFI variable not found %d (len %lu)\n",
 			     err, package_size);
 		goto out;
 	}

-	IWL_DEBUG_FW(trans, "Read PNVM fro UEFI with size %zd\n", package_size);
+	IWL_DEBUG_FW(trans, "Read PNVM fro UEFI with size %lu\n", package_size);

 	*data = kmemdup(package->data, *len, GFP_KERNEL);
 	if (!*data)


@@ -205,6 +205,8 @@ static inline void iwl_op_mode_time_point(struct iwl_op_mode *op_mode,
 					  enum iwl_fw_ini_time_point tp_id,
 					  union iwl_dbg_tlv_tp_data *tp_data)
 {
+	if (!op_mode || !op_mode->ops || !op_mode->ops->time_point)
+		return;
 	op_mode->ops->time_point(op_mode, tp_id, tp_data);
 }


@@ -1083,6 +1083,7 @@ static const struct dmi_system_id dmi_ppag_approved_list[] = {
 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTek COMPUTER INC."),
 		},
 	},
+	{}
 };

 static int iwl_mvm_ppag_init(struct iwl_mvm *mvm)


@@ -1106,6 +1106,8 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		}
 	}

+#if IS_ENABLED(CONFIG_IWLMVM)
+
 	/*
 	 * Workaround for problematic SnJ device: sometimes when
 	 * certain RF modules are connected to SnJ, the device ID
@@ -1116,7 +1118,6 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (CSR_HW_REV_TYPE(iwl_trans->hw_rev) == IWL_CFG_MAC_TYPE_SNJ)
 		iwl_trans->trans_cfg = &iwl_so_trans_cfg;

-#if IS_ENABLED(CONFIG_IWLMVM)
 	/*
 	 * special-case 7265D, it has the same PCI IDs.
 	 *


@@ -1129,6 +1129,8 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)

 	iwl_pcie_rx_init_rxb_lists(rxq);

+	spin_unlock_bh(&rxq->lock);
+
 	if (!rxq->napi.poll) {
 		int (*poll)(struct napi_struct *, int) = iwl_pcie_napi_poll;
@@ -1149,7 +1151,6 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
 		napi_enable(&rxq->napi);
 	}

-	spin_unlock_bh(&rxq->lock);
 }

 /* move the pool to the default queue and allocator ownerships */


@@ -345,7 +345,6 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
 	};
 	struct ieee80211_hw *hw;
 	int len, n = 0, ret = -ENOMEM;
-	struct mt76_queue_entry e;
 	struct mt76_txwi_cache *t;
 	struct sk_buff *iter;
 	dma_addr_t addr;
@@ -387,6 +386,11 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
 	}
 	tx_info.nbuf = n;

+	if (q->queued + (tx_info.nbuf + 1) / 2 >= q->ndesc - 1) {
+		ret = -ENOMEM;
+		goto unmap;
+	}
+
 	dma_sync_single_for_cpu(dev->dev, t->dma_addr, dev->drv->txwi_size,
 				DMA_TO_DEVICE);
 	ret = dev->drv->tx_prepare_skb(dev, txwi, q->qid, wcid, sta, &tx_info);
@@ -395,11 +399,6 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
 	if (ret < 0)
 		goto unmap;

-	if (q->queued + (tx_info.nbuf + 1) / 2 >= q->ndesc - 1) {
-		ret = -ENOMEM;
-		goto unmap;
-	}
-
 	return mt76_dma_add_buf(dev, q, tx_info.buf, tx_info.nbuf,
 				tx_info.info, tx_info.skb, t);
@@ -419,9 +418,7 @@ free:
 	}
 #endif

-	e.skb = tx_info.skb;
-	e.txwi = t;
-	dev->drv->tx_complete_skb(dev, &e);
+	dev_kfree_skb(tx_info.skb);
 	mt76_put_txwi(dev, t);

 	return ret;
 }
@@ -515,13 +512,13 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
 {
 	struct sk_buff *skb = q->rx_head;
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
+	int nr_frags = shinfo->nr_frags;

-	if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) {
+	if (nr_frags < ARRAY_SIZE(shinfo->frags)) {
 		struct page *page = virt_to_head_page(data);
 		int offset = data - page_address(page) + q->buf_offset;

-		skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len,
-				q->buf_size);
+		skb_add_rx_frag(skb, nr_frags, page, offset, len, q->buf_size);
 	} else {
 		skb_free_frag(data);
 	}
@@ -530,7 +527,10 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
 		return;

 	q->rx_head = NULL;
-	dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+	if (nr_frags < ARRAY_SIZE(shinfo->frags))
+		dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+	else
+		dev_kfree_skb(skb);
 }

 static int


@@ -967,11 +967,6 @@ int mt7915_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 	}
 	txp->nbuf = nbuf;

-	/* pass partial skb header to fw */
-	tx_info->buf[1].len = MT_CT_PARSE_LEN;
-	tx_info->buf[1].skip_unmap = true;
-	tx_info->nbuf = MT_CT_DMA_BUF_NUM;
-
 	txp->flags = cpu_to_le16(MT_CT_INFO_APPLY_TXD | MT_CT_INFO_FROM_HOST);

 	if (!key)
@@ -1009,6 +1004,11 @@ int mt7915_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 		txp->rept_wds_wcid = cpu_to_le16(0x3ff);
 	tx_info->skb = DMA_DUMMY_DATA;

+	/* pass partial skb header to fw */
+	tx_info->buf[1].len = MT_CT_PARSE_LEN;
+	tx_info->buf[1].skip_unmap = true;
+	tx_info->nbuf = MT_CT_DMA_BUF_NUM;
+
 	return 0;
 }


@@ -543,7 +543,7 @@ mt7915_tm_set_tx_cont(struct mt7915_phy *phy, bool en)
 		tx_cont->bw = CMD_CBW_20MHZ;
 		break;
 	default:
-		break;
+		return -EINVAL;
 	}

 	if (!en) {
@@ -591,7 +591,7 @@ mt7915_tm_set_tx_cont(struct mt7915_phy *phy, bool en)
 		mode = MT_PHY_TYPE_HE_MU;
 		break;
 	default:
-		break;
+		return -EINVAL;
 	}

 	rateval = mode << 6 | rate_idx;


@@ -405,10 +405,8 @@ mt7921_mcu_tx_rate_report(struct mt7921_dev *dev, struct sk_buff *skb,
 	if (wlan_idx >= MT76_N_WCIDS)
 		return;
 	wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]);
-	if (!wcid) {
-		stats->tx_rate = rate;
+	if (!wcid)
 		return;
-	}

 	msta = container_of(wcid, struct mt7921_sta, wcid);
 	stats = &msta->stats;


@@ -557,8 +557,8 @@ check_frags:
 	}

 	if (skb_has_frag_list(skb) && !first_shinfo) {
-		first_shinfo = skb_shinfo(skb);
-		shinfo = skb_shinfo(skb_shinfo(skb)->frag_list);
+		first_shinfo = shinfo;
+		shinfo = skb_shinfo(shinfo->frag_list);
 		nr_frags = shinfo->nr_frags;

 		goto check_frags;


@@ -436,7 +436,7 @@ struct qeth_qdio_out_buffer {
 	int is_header[QDIO_MAX_ELEMENTS_PER_BUFFER];

 	struct qeth_qdio_out_q *q;
-	struct qeth_qdio_out_buffer *next_pending;
+	struct list_head list_entry;
 };

 struct qeth_card;

@@ -500,6 +500,7 @@ struct qeth_qdio_out_q {
 	struct qdio_buffer *qdio_bufs[QDIO_MAX_BUFFERS_PER_Q];
 	struct qeth_qdio_out_buffer *bufs[QDIO_MAX_BUFFERS_PER_Q];
 	struct qdio_outbuf_state *bufstates; /* convenience pointer */
+	struct list_head pending_bufs;
 	struct qeth_out_q_stats stats;
 	spinlock_t lock;
 	unsigned int priority;


@@ -73,8 +73,6 @@ static void qeth_free_qdio_queues(struct qeth_card *card);
 static void qeth_notify_skbs(struct qeth_qdio_out_q *queue,
			     struct qeth_qdio_out_buffer *buf,
			     enum iucv_tx_notify notification);
-static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
-				 int budget);

 static void qeth_close_dev_handler(struct work_struct *work)
 {
@@ -465,41 +463,6 @@ static enum iucv_tx_notify qeth_compute_cq_notification(int sbalf15,
 	return n;
 }

-static void qeth_cleanup_handled_pending(struct qeth_qdio_out_q *q, int bidx,
-					 int forced_cleanup)
-{
-	if (q->card->options.cq != QETH_CQ_ENABLED)
-		return;
-
-	if (q->bufs[bidx]->next_pending != NULL) {
-		struct qeth_qdio_out_buffer *head = q->bufs[bidx];
-		struct qeth_qdio_out_buffer *c = q->bufs[bidx]->next_pending;
-
-		while (c) {
-			if (forced_cleanup ||
-			    atomic_read(&c->state) == QETH_QDIO_BUF_EMPTY) {
-				struct qeth_qdio_out_buffer *f = c;
-
-				QETH_CARD_TEXT(f->q->card, 5, "fp");
-				QETH_CARD_TEXT_(f->q->card, 5, "%lx", (long) f);
-
-				/* release here to avoid interleaving between
-				   outbound tasklet and inbound tasklet
-				   regarding notifications and lifecycle */
-				qeth_tx_complete_buf(c, forced_cleanup, 0);
-
-				c = f->next_pending;
-				WARN_ON_ONCE(head->next_pending != f);
-				head->next_pending = c;
-				kmem_cache_free(qeth_qdio_outbuf_cache, f);
-			} else {
-				head = c;
-				c = c->next_pending;
-			}
-		}
-	}
-}
-
 static void qeth_qdio_handle_aob(struct qeth_card *card,
				 unsigned long phys_aob_addr)
 {
@@ -507,6 +470,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 	struct qaob *aob;
 	struct qeth_qdio_out_buffer *buffer;
 	enum iucv_tx_notify notification;
+	struct qeth_qdio_out_q *queue;
 	unsigned int i;

 	aob = (struct qaob *) phys_to_virt(phys_aob_addr);
@@ -537,7 +501,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 		qeth_notify_skbs(buffer->q, buffer, notification);

 		/* Free dangling allocations. The attached skbs are handled by
-		 * qeth_cleanup_handled_pending().
+		 * qeth_tx_complete_pending_bufs().
 		 */
 		for (i = 0;
 		     i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
@@ -549,7 +513,9 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 			buffer->is_header[i] = 0;
 		}

+		queue = buffer->q;
 		atomic_set(&buffer->state, QETH_QDIO_BUF_EMPTY);
+		napi_schedule(&queue->napi);
 		break;
 	default:
 		WARN_ON_ONCE(1);
@@ -1424,9 +1390,6 @@ static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
 	struct qeth_qdio_out_q *queue = buf->q;
 	struct sk_buff *skb;

-	if (atomic_read(&buf->state) == QETH_QDIO_BUF_PENDING)
-		qeth_notify_skbs(queue, buf, TX_NOTIFY_GENERALERROR);
-
 	/* Empty buffer? */
 	if (buf->next_element_to_fill == 0)
 		return;
@@ -1488,14 +1451,38 @@ static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
 	atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY);
 }

+static void qeth_tx_complete_pending_bufs(struct qeth_card *card,
+					  struct qeth_qdio_out_q *queue,
+					  bool drain)
+{
+	struct qeth_qdio_out_buffer *buf, *tmp;
+
+	list_for_each_entry_safe(buf, tmp, &queue->pending_bufs, list_entry) {
+		if (drain || atomic_read(&buf->state) == QETH_QDIO_BUF_EMPTY) {
+			QETH_CARD_TEXT(card, 5, "fp");
+			QETH_CARD_TEXT_(card, 5, "%lx", (long) buf);
+
+			if (drain)
+				qeth_notify_skbs(queue, buf,
+						 TX_NOTIFY_GENERALERROR);
+			qeth_tx_complete_buf(buf, drain, 0);
+
+			list_del(&buf->list_entry);
+			kmem_cache_free(qeth_qdio_outbuf_cache, buf);
+		}
+	}
+}
+
 static void qeth_drain_output_queue(struct qeth_qdio_out_q *q, bool free)
 {
 	int j;

+	qeth_tx_complete_pending_bufs(q->card, q, true);
+
 	for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
 		if (!q->bufs[j])
 			continue;

-		qeth_cleanup_handled_pending(q, j, 1);
 		qeth_clear_output_buffer(q, q->bufs[j], true, 0);
 		if (free) {
 			kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[j]);
@@ -2615,7 +2602,6 @@ static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *q, int bidx)
 	skb_queue_head_init(&newbuf->skb_list);
 	lockdep_set_class(&newbuf->skb_list.lock, &qdio_out_skb_queue_key);
 	newbuf->q = q;
-	newbuf->next_pending = q->bufs[bidx];
 	atomic_set(&newbuf->state, QETH_QDIO_BUF_EMPTY);
 	q->bufs[bidx] = newbuf;
 	return 0;
@@ -2634,15 +2620,28 @@ static void qeth_free_output_queue(struct qeth_qdio_out_q *q)
 static struct qeth_qdio_out_q *qeth_alloc_output_queue(void)
 {
 	struct qeth_qdio_out_q *q = kzalloc(sizeof(*q), GFP_KERNEL);
+	unsigned int i;

 	if (!q)
 		return NULL;

-	if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q)) {
-		kfree(q);
-		return NULL;
-	}
+	if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q))
+		goto err_qdio_bufs;
+
+	for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; i++) {
+		if (qeth_init_qdio_out_buf(q, i))
+			goto err_out_bufs;
+	}
+
 	return q;
+
+err_out_bufs:
+	while (i > 0)
+		kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[--i]);
+	qdio_free_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
+err_qdio_bufs:
+	kfree(q);
+	return NULL;
 }

 static void qeth_tx_completion_timer(struct timer_list *timer)
@@ -2655,7 +2654,7 @@ static void qeth_tx_completion_timer(struct timer_list *timer)
 static int qeth_alloc_qdio_queues(struct qeth_card *card)
 {
-	int i, j;
+	unsigned int i;

 	QETH_CARD_TEXT(card, 2, "allcqdbf");
@@ -2684,18 +2683,12 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
 		card->qdio.out_qs[i] = queue;
 		queue->card = card;
 		queue->queue_no = i;
+		INIT_LIST_HEAD(&queue->pending_bufs);
 		spin_lock_init(&queue->lock);
 		timer_setup(&queue->timer, qeth_tx_completion_timer, 0);
 		queue->coalesce_usecs = QETH_TX_COALESCE_USECS;
 		queue->max_coalesced_frames = QETH_TX_MAX_COALESCED_FRAMES;
 		queue->priority = QETH_QIB_PQUE_PRIO_DEFAULT;
-
-		/* give outbound qeth_qdio_buffers their qdio_buffers */
-		for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
-			WARN_ON(queue->bufs[j]);
-			if (qeth_init_qdio_out_buf(queue, j))
-				goto out_freeoutqbufs;
-		}
 	}

 	/* completion */
@@ -2704,13 +2697,6 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)

 	return 0;

-out_freeoutqbufs:
-	while (j > 0) {
-		--j;
-		kmem_cache_free(qeth_qdio_outbuf_cache,
-				card->qdio.out_qs[i]->bufs[j]);
-		card->qdio.out_qs[i]->bufs[j] = NULL;
-	}
 out_freeoutq:
 	while (i > 0) {
 		qeth_free_output_queue(card->qdio.out_qs[--i]);
@@ -6107,6 +6093,8 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
				qeth_schedule_recovery(card);
			}

+			list_add(&buffer->list_entry,
+				 &queue->pending_bufs);
			/* Skip clearing the buffer: */
			return;
		case QETH_QDIO_BUF_QAOB_OK:
@@ -6162,6 +6150,8 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)
 		unsigned int bytes = 0;
 		int completed;

+		qeth_tx_complete_pending_bufs(card, queue, false);
+
 		if (qeth_out_queue_is_empty(queue)) {
 			napi_complete(napi);
 			return 0;
@@ -6194,7 +6184,6 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)

 			qeth_handle_send_error(card, buffer, error);
 			qeth_iqd_tx_complete(queue, bidx, error, budget);
-			qeth_cleanup_handled_pending(queue, bidx, false);
 		}

 		netdev_tx_completed_queue(txq, packets, bytes);
@@ -7249,9 +7238,7 @@ int qeth_open(struct net_device *dev)
 	card->data.state = CH_STATE_UP;
 	netif_tx_start_all_queues(dev);

-	napi_enable(&card->napi);
 	local_bh_disable();
-	napi_schedule(&card->napi);

 	if (IS_IQD(card)) {
 		struct qeth_qdio_out_q *queue;
 		unsigned int i;
@@ -7263,8 +7250,12 @@ int qeth_open(struct net_device *dev)
 			napi_schedule(&queue->napi);
 		}
 	}
+
+	napi_enable(&card->napi);
+	napi_schedule(&card->napi);
+
 	/* kick-start the NAPI softirq: */
 	local_bh_enable();
 	return 0;
 }
 EXPORT_SYMBOL_GPL(qeth_open);
@@ -7274,6 +7265,11 @@ int qeth_stop(struct net_device *dev)
 	struct qeth_card *card = dev->ml_priv;

 	QETH_CARD_TEXT(card, 4, "qethstop");
+
+	napi_disable(&card->napi);
+	cancel_delayed_work_sync(&card->buffer_reclaim_work);
+	qdio_stop_irq(CARD_DDEV(card));
+
 	if (IS_IQD(card)) {
 		struct qeth_qdio_out_q *queue;
 		unsigned int i;
@@ -7294,10 +7290,6 @@ int qeth_stop(struct net_device *dev)
 		netif_tx_disable(dev);
 	}

-	napi_disable(&card->napi);
-	cancel_delayed_work_sync(&card->buffer_reclaim_work);
-	qdio_stop_irq(CARD_DDEV(card));
-
 	return 0;
 }
 EXPORT_SYMBOL_GPL(qeth_stop);


@@ -151,7 +151,7 @@ struct atm_dev {
 	const char	*type;		/* device type name */
 	int		number;		/* device index */
 	void		*dev_data;	/* per-device data */
-	void		*phy_data;	/* private PHY date */
+	void		*phy_data;	/* private PHY data */
 	unsigned long	flags;		/* device flags (ATM_DF_*) */
 	struct list_head local;		/* local ATM addresses */
 	struct list_head lecs;		/* LECS ATM addresses learned via ILMI */


@@ -65,8 +65,12 @@ static inline void can_skb_reserve(struct sk_buff *skb)
 static inline void can_skb_set_owner(struct sk_buff *skb, struct sock *sk)
 {
-	if (sk) {
-		sock_hold(sk);
+	/* If the socket has already been closed by user space, the
+	 * refcount may already be 0 (and the socket will be freed
+	 * after the last TX skb has been freed). So only increase
+	 * socket refcount if the refcount is > 0.
+	 */
+	if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
 		skb->destructor = sock_efree;
 		skb->sk = sk;
 	}


@@ -3959,8 +3959,6 @@ int dev_change_xdp_fd(struct net_device *dev, struct netlink_ext_ack *extack,
 int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
 u32 dev_xdp_prog_id(struct net_device *dev, enum bpf_xdp_mode mode);

-int xdp_umem_query(struct net_device *dev, u16 queue_id);
-
 int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
 int dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
 int dev_forward_skb_nomtu(struct net_device *dev, struct sk_buff *skb);


@@ -23,7 +23,7 @@ struct ts_config;
 struct ts_state
 {
 	unsigned int		offset;
-	char			cb[40];
+	char			cb[48];
 };

 /**


@@ -79,8 +79,13 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 	if (gso_type && skb->network_header) {
 		struct flow_keys_basic keys;

-		if (!skb->protocol)
+		if (!skb->protocol) {
+			__be16 protocol = dev_parse_header_protocol(skb);
+
 			virtio_net_hdr_set_proto(skb, hdr);
+			if (protocol && protocol != skb->protocol)
+				return -EINVAL;
+		}
 retry:
 		if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
 						      NULL, 0, 0, 0,

Some files were not shown because too many files have changed in this diff.