Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

 1) Restore previous behavior of CAP_SYS_ADMIN wrt loading networking
    BPF programs, from Maciej Żenczykowski.

 2) Fix dropped broadcasts in mac80211 code, from Seevalamuthu
    Mariappan.

 3) Slay memory leak in nl80211 bss color attribute parsing code, from
    Luca Coelho.

 4) Get route from skb properly in ip_route_use_hint(), from Miaohe Lin.

 5) Don't allow anything other than ARPHRD_ETHER in llc code, from Eric
    Dumazet.

 6) xsk code dips too deeply into DMA mapping implementation internals.
    Add dma_need_sync and use it. From Christoph Hellwig.

 7) Enforce power-of-2 for BPF ringbuf sizes. From Andrii Nakryiko.

 8) Check for disallowed attributes when loading flow dissector BPF
    programs. From Lorenz Bauer.

 9) Correct packet injection to L3 tunnel devices via AF_PACKET, from
    Jason A. Donenfeld.

10) Don't advertise checksum offload on ipa devices that don't support
    it. From Alex Elder.

11) Resolve several issues in TCP MD5 signature support. Missing memory
    barriers, bogus options emitted when using syncookies, and failure
    to allow md5 key changes in established states. All from Eric
    Dumazet.

12) Fix interface leak in hsr code, from Taehee Yoo.

13) VF reset fixes in hns3 driver, from Huazhong Tan.

14) Make loopback work again with ipv6 anycast, from David Ahern.

15) Fix TX starvation under high load in fec driver, from Tobias
    Waldekranz.

16) MLD2 payload lengths not checked properly in bridge multicast code,
    from Linus Lüssing.

17) Packet scheduler code that wants to find the inner protocol
    currently only works for one level of VLAN encapsulation. Allow
    Q-in-Q situations to work properly here, from Toke
    Høiland-Jørgensen.

18) Fix route leak in l2tp, from Xin Long.

19) Resolve conflict between the sk->sk_user_data usage of bpf reuseport
    support and various protocols. From Martin KaFai Lau.

20) Fix socket cgroup v2 reference counting in some situations, from
    Cong Wang.

21) Cure memory leak in mlx5 connection tracking offload support, from
    Eli Britstein.
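
Regarding item 7 above: a minimal BPF-side sketch of a ring buffer map
definition that satisfies the enforced constraint (editor's illustration, not
part of this pull; the map name "events" and the 256 KiB size are arbitrary):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            /* size must be a power of two and a multiple of the page size */
            __uint(max_entries, 256 * 1024);
    } events SEC(".maps");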

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (146 commits)
  mlxsw: pci: Fix use-after-free in case of failed devlink reload
  mlxsw: spectrum_router: Remove inappropriate usage of WARN_ON()
  net: macb: fix call to pm_runtime in the suspend/resume functions
  net: macb: fix macb_suspend() by removing call to netif_carrier_off()
  net: macb: fix macb_get/set_wol() when moving to phylink
  net: macb: mark device wake capable when "magic-packet" property present
  net: macb: fix wakeup test in runtime suspend/resume routines
  bnxt_en: fix NULL dereference in case SR-IOV configuration fails
  libbpf: Fix libbpf hashmap on (I)LP32 architectures
  net/mlx5e: CT: Fix memory leak in cleanup
  net/mlx5e: Fix port buffers cell size value
  net/mlx5e: Fix 50G per lane indication
  net/mlx5e: Fix CPU mapping after function reload to avoid aRFS RX crash
  net/mlx5e: Fix VXLAN configuration restore after function reload
  net/mlx5e: Fix usage of rcu-protected pointer
  net/mxl5e: Verify that rpriv is not NULL
  net/mlx5: E-Switch, Fix vlan or qos setting in legacy mode
  net/mlx5: Fix eeprom support for SFP module
  cgroup: Fix sock_cgroup_data on big-endian.
  selftests: bpf: Fix detach from sockmap tests
  ...
Linus Torvalds 2020-07-10 18:16:22 -07:00
commit 5a764898af
197 changed files with 1645 additions and 933 deletions


@ -204,6 +204,14 @@ Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.
::
bool
dma_need_sync(struct device *dev, dma_addr_t dma_addr);
Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership. Returns %false if those calls can be skipped.
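
A minimal usage sketch (editor's illustration, not part of this patch): a
driver can cache the result once after mapping and skip the per-buffer syncs
when they are not needed. The names dev, dma_handle and len are placeholders::

    bool need_sync = dma_need_sync(dev, dma_handle);

    if (need_sync)
            dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);
    /* CPU reads the data the device just wrote */
    if (need_sync)
            dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);
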
::
unsigned long


@ -434,7 +434,7 @@ can set up your network then:
ifconfig arc0 insight
route add insight arc0
route add freedom arc0 /* I would use the subnet here (like I said
to to in "single protocol" above),
to in "single protocol" above),
but the rest of the subnet
unfortunately lies across the PPP
link on freedom, which confuses


@ -6,7 +6,7 @@ AX.25
To use the amateur radio protocols within Linux you will need to get a
suitable copy of the AX.25 Utilities. More detailed information about
AX.25, NET/ROM and ROSE, associated programs and and utilities can be
AX.25, NET/ROM and ROSE, associated programs and utilities can be
found on http://www.linux-ax25.org.
There is an active mailing list for discussing Linux amateur radio matters


@ -144,7 +144,7 @@ UCAN_COMMAND_SET_BITTIMING
*Host2Dev; mandatory*
Setup bittiming by sending the the structure
Setup bittiming by sending the structure
``ucan_ctl_payload_t.cmd_set_bittiming`` (see ``struct bittiming`` for
details)
@ -232,7 +232,7 @@ UCAN_IN_TX_COMPLETE
zero
The CAN device has sent a message to the CAN bus. It answers with a
list of of tuples <echo-ids, flags>.
list of tuples <echo-ids, flags>.
The echo-id identifies the frame from (echos the id from a previous
UCAN_OUT_TX message). The flag indicates the result of the


@ -95,7 +95,7 @@ Ethernet switch.
Networking stack hooks
----------------------
When a master netdev is used with DSA, a small hook is placed in in the
When a master netdev is used with DSA, a small hook is placed in the
networking stack is in order to have the DSA subsystem process the Ethernet
switch specific tagging protocol. DSA accomplishes this by registering a
specific (and fake) Ethernet type (later becoming ``skb->protocol``) with the


@ -741,7 +741,7 @@ tcp_fastopen - INTEGER
Default: 0x1
Note that that additional client or server features are only
Note that additional client or server features are only
effective if the basic support (0x1 and 0x2) are enabled respectively.
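
For context, an editor's sketch (not part of this patch) of how a server
application opts in once the 0x2 bit is set; listen_fd and the queue length of
16 are placeholder values::

    #include <sys/socket.h>
    #include <netinet/tcp.h>

    int qlen = 16;  /* max pending TFO requests awaiting the handshake */
    setsockopt(listen_fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
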
tcp_fastopen_blackhole_timeout_sec - INTEGER


@ -114,7 +114,7 @@ drop_entry - INTEGER
modes (when there is no enough available memory, the strategy
is enabled and the variable is automatically set to 2,
otherwise the strategy is disabled and the variable is set to
1), and 3 means that that the strategy is always enabled.
1), and 3 means that the strategy is always enabled.
drop_packet - INTEGER
- 0 - disabled (default)


@ -186,7 +186,7 @@ About the AF_RXRPC driver:
time [tunable] after the last connection using it discarded, in case a new
connection is made that could use it.
(#) A client-side connection is only shared between calls if they have have
(#) A client-side connection is only shared between calls if they have
the same key struct describing their security (and assuming the calls
would otherwise share the connection). Non-secured calls would also be
able to share connections with each other.


@ -2929,6 +2929,7 @@ F: include/uapi/linux/atm*
ATMEL MACB ETHERNET DRIVER
M: Nicolas Ferre <nicolas.ferre@microchip.com>
M: Claudiu Beznea <claudiu.beznea@microchip.com>
S: Supported
F: drivers/net/ethernet/cadence/


@ -1268,6 +1268,9 @@ static int ksz8795_switch_init(struct ksz_device *dev)
return -ENOMEM;
}
/* set the real number of ports */
dev->ds->num_ports = dev->port_cnt;
return 0;
}


@ -1588,6 +1588,9 @@ static int ksz9477_switch_init(struct ksz_device *dev)
return -ENOMEM;
}
/* set the real number of ports */
dev->ds->num_ports = dev->port_cnt;
return 0;
}


@ -79,6 +79,7 @@ MODULE_DEVICE_TABLE(i2c, ksz9477_i2c_id);
static const struct of_device_id ksz9477_dt_ids[] = {
{ .compatible = "microchip,ksz9477" },
{ .compatible = "microchip,ksz9897" },
{ .compatible = "microchip,ksz9893" },
{ .compatible = "microchip,ksz9567" },
{},
};


@ -1700,7 +1700,7 @@ void hw_atl_rpfl3l4_ipv6_src_addr_set(struct aq_hw_s *aq_hw, u8 location,
for (i = 0; i < 4; ++i)
aq_hw_write_reg(aq_hw,
HW_ATL_RPF_L3_SRCA_ADR(location + i),
ipv6_src[i]);
ipv6_src[3 - i]);
}
void hw_atl_rpfl3l4_ipv6_dest_addr_set(struct aq_hw_s *aq_hw, u8 location,
@ -1711,7 +1711,7 @@ void hw_atl_rpfl3l4_ipv6_dest_addr_set(struct aq_hw_s *aq_hw, u8 location,
for (i = 0; i < 4; ++i)
aq_hw_write_reg(aq_hw,
HW_ATL_RPF_L3_DSTA_ADR(location + i),
ipv6_dest[i]);
ipv6_dest[3 - i]);
}
u32 hw_atl_sem_ram_get(struct aq_hw_s *self)


@ -1360,7 +1360,7 @@
*/
/* Register address for bitfield pif_rpf_l3_da0_i[31:0] */
#define HW_ATL_RPF_L3_DSTA_ADR(filter) (0x000053B0 + (filter) * 0x4)
#define HW_ATL_RPF_L3_DSTA_ADR(filter) (0x000053D0 + (filter) * 0x4)
/* Bitmask for bitfield l3_da0[1F:0] */
#define HW_ATL_RPF_L3_DSTA_MSK 0xFFFFFFFFu
/* Inverted bitmask for bitfield l3_da0[1F:0] */


@ -396,6 +396,7 @@ static void bnxt_free_vf_resources(struct bnxt *bp)
}
}
bp->pf.active_vfs = 0;
kfree(bp->pf.vf);
bp->pf.vf = NULL;
}
@ -835,7 +836,6 @@ void bnxt_sriov_disable(struct bnxt *bp)
bnxt_free_vf_resources(bp);
bp->pf.active_vfs = 0;
/* Reclaim all resources for the PF. */
rtnl_lock();
bnxt_restore_pf_fw_resources(bp);


@ -2821,11 +2821,13 @@ static void macb_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
{
struct macb *bp = netdev_priv(netdev);
wol->supported = 0;
wol->wolopts = 0;
if (bp->wol & MACB_WOL_HAS_MAGIC_PACKET)
if (bp->wol & MACB_WOL_HAS_MAGIC_PACKET) {
phylink_ethtool_get_wol(bp->phylink, wol);
wol->supported |= WAKE_MAGIC;
if (bp->wol & MACB_WOL_ENABLED)
wol->wolopts |= WAKE_MAGIC;
}
}
static int macb_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
@ -2833,9 +2835,13 @@ static int macb_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
struct macb *bp = netdev_priv(netdev);
int ret;
/* Pass the order to phylink layer */
ret = phylink_ethtool_set_wol(bp->phylink, wol);
if (!ret)
return 0;
/* Don't manage WoL on MAC if handled by the PHY
* or if there's a failure in talking to the PHY
*/
if (!ret || ret != -EOPNOTSUPP)
return ret;
if (!(bp->wol & MACB_WOL_HAS_MAGIC_PACKET) ||
(wol->wolopts & ~WAKE_MAGIC))
@ -4422,7 +4428,7 @@ static int macb_probe(struct platform_device *pdev)
bp->wol = 0;
if (of_get_property(np, "magic-packet", NULL))
bp->wol |= MACB_WOL_HAS_MAGIC_PACKET;
device_init_wakeup(&pdev->dev, bp->wol & MACB_WOL_HAS_MAGIC_PACKET);
device_set_wakeup_capable(&pdev->dev, bp->wol & MACB_WOL_HAS_MAGIC_PACKET);
spin_lock_init(&bp->lock);
@ -4598,10 +4604,10 @@ static int __maybe_unused macb_suspend(struct device *dev)
bp->pm_data.scrt2 = gem_readl_n(bp, ETHT, SCRT2_ETHT);
}
netif_carrier_off(netdev);
if (bp->ptp_info)
bp->ptp_info->ptp_remove(netdev);
pm_runtime_force_suspend(dev);
if (!device_may_wakeup(dev))
pm_runtime_force_suspend(dev);
return 0;
}
@ -4616,7 +4622,8 @@ static int __maybe_unused macb_resume(struct device *dev)
if (!netif_running(netdev))
return 0;
pm_runtime_force_resume(dev);
if (!device_may_wakeup(dev))
pm_runtime_force_resume(dev);
if (bp->wol & MACB_WOL_ENABLED) {
macb_writel(bp, IDR, MACB_BIT(WOL));
@ -4654,7 +4661,7 @@ static int __maybe_unused macb_runtime_suspend(struct device *dev)
struct net_device *netdev = dev_get_drvdata(dev);
struct macb *bp = netdev_priv(netdev);
if (!(device_may_wakeup(&bp->dev->dev))) {
if (!(device_may_wakeup(dev))) {
clk_disable_unprepare(bp->tx_clk);
clk_disable_unprepare(bp->hclk);
clk_disable_unprepare(bp->pclk);
@ -4670,7 +4677,7 @@ static int __maybe_unused macb_runtime_resume(struct device *dev)
struct net_device *netdev = dev_get_drvdata(dev);
struct macb *bp = netdev_priv(netdev);
if (!(device_may_wakeup(&bp->dev->dev))) {
if (!(device_may_wakeup(dev))) {
clk_prepare_enable(bp->pclk);
clk_prepare_enable(bp->hclk);
clk_prepare_enable(bp->tx_clk);


@ -1112,16 +1112,16 @@ static bool is_addr_all_mask(u8 *ipmask, int family)
struct in_addr *addr;
addr = (struct in_addr *)ipmask;
if (ntohl(addr->s_addr) == 0xffffffff)
if (addr->s_addr == htonl(0xffffffff))
return true;
} else if (family == AF_INET6) {
struct in6_addr *addr6;
addr6 = (struct in6_addr *)ipmask;
if (ntohl(addr6->s6_addr32[0]) == 0xffffffff &&
ntohl(addr6->s6_addr32[1]) == 0xffffffff &&
ntohl(addr6->s6_addr32[2]) == 0xffffffff &&
ntohl(addr6->s6_addr32[3]) == 0xffffffff)
if (addr6->s6_addr32[0] == htonl(0xffffffff) &&
addr6->s6_addr32[1] == htonl(0xffffffff) &&
addr6->s6_addr32[2] == htonl(0xffffffff) &&
addr6->s6_addr32[3] == htonl(0xffffffff))
return true;
}
return false;


@ -3493,7 +3493,7 @@ int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
drv_fw = &fw_info->fw_hdr;
/* Read the header of the firmware on the card */
ret = -t4_read_flash(adap, FLASH_FW_START,
ret = t4_read_flash(adap, FLASH_FW_START,
sizeof(*card_fw) / sizeof(uint32_t),
(uint32_t *)card_fw, 1);
if (ret == 0) {
@ -3522,8 +3522,8 @@ int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
should_install_fs_fw(adap, card_fw_usable,
be32_to_cpu(fs_fw->fw_ver),
be32_to_cpu(card_fw->fw_ver))) {
ret = -t4_fw_upgrade(adap, adap->mbox, fw_data,
fw_size, 0);
ret = t4_fw_upgrade(adap, adap->mbox, fw_data,
fw_size, 0);
if (ret != 0) {
dev_err(adap->pdev_dev,
"failed to install firmware: %d\n", ret);
@ -3554,7 +3554,7 @@ int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
FW_HDR_FW_VER_MICRO_G(c), FW_HDR_FW_VER_BUILD_G(c),
FW_HDR_FW_VER_MAJOR_G(k), FW_HDR_FW_VER_MINOR_G(k),
FW_HDR_FW_VER_MICRO_G(k), FW_HDR_FW_VER_BUILD_G(k));
ret = EINVAL;
ret = -EINVAL;
goto bye;
}


@ -266,7 +266,7 @@ static irqreturn_t enetc_msix(int irq, void *data)
/* disable interrupts */
enetc_wr_reg(v->rbier, 0);
for_each_set_bit(i, &v->tx_rings_map, v->count_tx_rings)
for_each_set_bit(i, &v->tx_rings_map, ENETC_MAX_NUM_TXQS)
enetc_wr_reg(v->tbier_base + ENETC_BDR_OFF(i), 0);
napi_schedule_irqoff(&v->napi);
@ -302,7 +302,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
/* enable interrupts */
enetc_wr_reg(v->rbier, ENETC_RBIER_RXTIE);
for_each_set_bit(i, &v->tx_rings_map, v->count_tx_rings)
for_each_set_bit(i, &v->tx_rings_map, ENETC_MAX_NUM_TXQS)
enetc_wr_reg(v->tbier_base + ENETC_BDR_OFF(i),
ENETC_TBIER_TXTIE);


@ -525,11 +525,6 @@ struct fec_enet_private {
unsigned int total_tx_ring_size;
unsigned int total_rx_ring_size;
unsigned long work_tx;
unsigned long work_rx;
unsigned long work_ts;
unsigned long work_mdio;
struct platform_device *pdev;
int dev_id;


@ -75,8 +75,6 @@ static void fec_enet_itr_coal_init(struct net_device *ndev);
#define DRIVER_NAME "fec"
#define FEC_ENET_GET_QUQUE(_x) ((_x == 0) ? 1 : ((_x == 1) ? 2 : 0))
/* Pause frame feild and FIFO threshold */
#define FEC_ENET_FCE (1 << 5)
#define FEC_ENET_RSEM_V 0x84
@ -1248,8 +1246,6 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
fep = netdev_priv(ndev);
queue_id = FEC_ENET_GET_QUQUE(queue_id);
txq = fep->tx_queue[queue_id];
/* get next bdp of dirty_tx */
nq = netdev_get_tx_queue(ndev, queue_id);
@ -1340,17 +1336,14 @@ skb_done:
writel(0, txq->bd.reg_desc_active);
}
static void
fec_enet_tx(struct net_device *ndev)
static void fec_enet_tx(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
u16 queue_id;
/* First process class A queue, then Class B and Best Effort queue */
for_each_set_bit(queue_id, &fep->work_tx, FEC_ENET_MAX_TX_QS) {
clear_bit(queue_id, &fep->work_tx);
fec_enet_tx_queue(ndev, queue_id);
}
return;
int i;
/* Make sure that AVB queues are processed first. */
for (i = fep->num_tx_queues - 1; i >= 0; i--)
fec_enet_tx_queue(ndev, i);
}
static int
@ -1426,7 +1419,6 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
#ifdef CONFIG_M532x
flush_cache_all();
#endif
queue_id = FEC_ENET_GET_QUQUE(queue_id);
rxq = fep->rx_queue[queue_id];
/* First, grab all of the stats for the incoming packet.
@ -1550,6 +1542,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
htons(ETH_P_8021Q),
vlan_tag);
skb_record_rx_queue(skb, queue_id);
napi_gro_receive(&fep->napi, skb);
if (is_copybreak) {
@ -1595,57 +1588,21 @@ rx_processing_done:
return pkt_received;
}
static int
fec_enet_rx(struct net_device *ndev, int budget)
static int fec_enet_rx(struct net_device *ndev, int budget)
{
int pkt_received = 0;
u16 queue_id;
struct fec_enet_private *fep = netdev_priv(ndev);
int i, done = 0;
for_each_set_bit(queue_id, &fep->work_rx, FEC_ENET_MAX_RX_QS) {
int ret;
/* Make sure that AVB queues are processed first. */
for (i = fep->num_rx_queues - 1; i >= 0; i--)
done += fec_enet_rx_queue(ndev, budget - done, i);
ret = fec_enet_rx_queue(ndev,
budget - pkt_received, queue_id);
if (ret < budget - pkt_received)
clear_bit(queue_id, &fep->work_rx);
pkt_received += ret;
}
return pkt_received;
return done;
}
static bool
fec_enet_collect_events(struct fec_enet_private *fep, uint int_events)
static bool fec_enet_collect_events(struct fec_enet_private *fep)
{
if (int_events == 0)
return false;
if (int_events & FEC_ENET_RXF_0)
fep->work_rx |= (1 << 2);
if (int_events & FEC_ENET_RXF_1)
fep->work_rx |= (1 << 0);
if (int_events & FEC_ENET_RXF_2)
fep->work_rx |= (1 << 1);
if (int_events & FEC_ENET_TXF_0)
fep->work_tx |= (1 << 2);
if (int_events & FEC_ENET_TXF_1)
fep->work_tx |= (1 << 0);
if (int_events & FEC_ENET_TXF_2)
fep->work_tx |= (1 << 1);
return true;
}
static irqreturn_t
fec_enet_interrupt(int irq, void *dev_id)
{
struct net_device *ndev = dev_id;
struct fec_enet_private *fep = netdev_priv(ndev);
uint int_events;
irqreturn_t ret = IRQ_NONE;
int_events = readl(fep->hwp + FEC_IEVENT);
@ -1653,9 +1610,18 @@ fec_enet_interrupt(int irq, void *dev_id)
int_events &= ~FEC_ENET_MII;
writel(int_events, fep->hwp + FEC_IEVENT);
fec_enet_collect_events(fep, int_events);
if ((fep->work_tx || fep->work_rx) && fep->link) {
return int_events != 0;
}
static irqreturn_t
fec_enet_interrupt(int irq, void *dev_id)
{
struct net_device *ndev = dev_id;
struct fec_enet_private *fep = netdev_priv(ndev);
irqreturn_t ret = IRQ_NONE;
if (fec_enet_collect_events(fep) && fep->link) {
ret = IRQ_HANDLED;
if (napi_schedule_prep(&fep->napi)) {
@ -1672,17 +1638,19 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
{
struct net_device *ndev = napi->dev;
struct fec_enet_private *fep = netdev_priv(ndev);
int pkts;
int done = 0;
pkts = fec_enet_rx(ndev, budget);
do {
done += fec_enet_rx(ndev, budget - done);
fec_enet_tx(ndev);
} while ((done < budget) && fec_enet_collect_events(fep));
fec_enet_tx(ndev);
if (pkts < budget) {
napi_complete_done(napi, pkts);
if (done < budget) {
napi_complete_done(napi, done);
writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
}
return pkts;
return done;
}
/* ------------------------------------------------------------------------- */


@ -4127,9 +4127,8 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset)
hns3_put_ring_config(priv);
hns3_dbg_uninit(handle);
out_netdev_free:
hns3_dbg_uninit(handle);
free_netdev(netdev);
}


@ -180,18 +180,21 @@ static void hns3_lb_check_skb_data(struct hns3_enet_ring *ring,
{
struct hns3_enet_tqp_vector *tqp_vector = ring->tqp_vector;
unsigned char *packet = skb->data;
u32 len = skb_headlen(skb);
u32 i;
for (i = 0; i < skb->len; i++)
len = min_t(u32, len, HNS3_NIC_LB_TEST_PACKET_SIZE);
for (i = 0; i < len; i++)
if (packet[i] != (unsigned char)(i & 0xff))
break;
/* The packet is correctly received */
if (i == skb->len)
if (i == HNS3_NIC_LB_TEST_PACKET_SIZE)
tqp_vector->rx_group.total_packets++;
else
print_hex_dump(KERN_ERR, "selftest:", DUMP_PREFIX_OFFSET, 16, 1,
skb->data, skb->len, true);
skb->data, len, true);
dev_kfree_skb_any(skb);
}


@ -9859,7 +9859,7 @@ retry:
set_bit(HCLGE_STATE_RST_HANDLING, &hdev->state);
hdev->reset_type = HNAE3_FLR_RESET;
ret = hclge_reset_prepare(hdev);
if (ret) {
if (ret || hdev->reset_pending) {
dev_err(&hdev->pdev->dev, "fail to prepare FLR, ret=%d\n",
ret);
if (hdev->reset_pending ||


@ -1793,6 +1793,11 @@ static int hclgevf_reset_prepare_wait(struct hclgevf_dev *hdev)
if (hdev->reset_type == HNAE3_VF_FUNC_RESET) {
hclgevf_build_send_msg(&send_msg, HCLGE_MBX_RESET, 0);
ret = hclgevf_send_mbx_msg(hdev, &send_msg, true, NULL, 0);
if (ret) {
dev_err(&hdev->pdev->dev,
"failed to assert VF reset, ret = %d\n", ret);
return ret;
}
hdev->rst_stats.vf_func_rst_cnt++;
}


@ -814,6 +814,8 @@ err_aeqs_init:
err_init_msix:
err_pfhwdev_alloc:
hinic_free_hwif(hwif);
if (err > 0)
err = -EIO;
return ERR_PTR(err);
}


@ -370,6 +370,53 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
MSG_NOT_RESP, timeout);
}
static void recv_mgmt_msg_work_handler(struct work_struct *work)
{
struct hinic_mgmt_msg_handle_work *mgmt_work =
container_of(work, struct hinic_mgmt_msg_handle_work, work);
struct hinic_pf_to_mgmt *pf_to_mgmt = mgmt_work->pf_to_mgmt;
struct pci_dev *pdev = pf_to_mgmt->hwif->pdev;
u8 *buf_out = pf_to_mgmt->mgmt_ack_buf;
struct hinic_mgmt_cb *mgmt_cb;
unsigned long cb_state;
u16 out_size = 0;
memset(buf_out, 0, MAX_PF_MGMT_BUF_SIZE);
if (mgmt_work->mod >= HINIC_MOD_MAX) {
dev_err(&pdev->dev, "Unknown MGMT MSG module = %d\n",
mgmt_work->mod);
kfree(mgmt_work->msg);
kfree(mgmt_work);
return;
}
mgmt_cb = &pf_to_mgmt->mgmt_cb[mgmt_work->mod];
cb_state = cmpxchg(&mgmt_cb->state,
HINIC_MGMT_CB_ENABLED,
HINIC_MGMT_CB_ENABLED | HINIC_MGMT_CB_RUNNING);
if ((cb_state == HINIC_MGMT_CB_ENABLED) && (mgmt_cb->cb))
mgmt_cb->cb(mgmt_cb->handle, mgmt_work->cmd,
mgmt_work->msg, mgmt_work->msg_len,
buf_out, &out_size);
else
dev_err(&pdev->dev, "No MGMT msg handler, mod: %d, cmd: %d\n",
mgmt_work->mod, mgmt_work->cmd);
mgmt_cb->state &= ~HINIC_MGMT_CB_RUNNING;
if (!mgmt_work->async_mgmt_to_pf)
/* MGMT sent sync msg, send the response */
msg_to_mgmt_async(pf_to_mgmt, mgmt_work->mod, mgmt_work->cmd,
buf_out, out_size, MGMT_RESP,
mgmt_work->msg_id);
kfree(mgmt_work->msg);
kfree(mgmt_work);
}
/**
* mgmt_recv_msg_handler - handler for message from mgmt cpu
* @pf_to_mgmt: PF to MGMT channel
@ -378,40 +425,34 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
static void mgmt_recv_msg_handler(struct hinic_pf_to_mgmt *pf_to_mgmt,
struct hinic_recv_msg *recv_msg)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
struct pci_dev *pdev = hwif->pdev;
u8 *buf_out = recv_msg->buf_out;
struct hinic_mgmt_cb *mgmt_cb;
unsigned long cb_state;
u16 out_size = 0;
struct hinic_mgmt_msg_handle_work *mgmt_work = NULL;
struct pci_dev *pdev = pf_to_mgmt->hwif->pdev;
if (recv_msg->mod >= HINIC_MOD_MAX) {
dev_err(&pdev->dev, "Unknown MGMT MSG module = %d\n",
recv_msg->mod);
mgmt_work = kzalloc(sizeof(*mgmt_work), GFP_KERNEL);
if (!mgmt_work) {
dev_err(&pdev->dev, "Allocate mgmt work memory failed\n");
return;
}
mgmt_cb = &pf_to_mgmt->mgmt_cb[recv_msg->mod];
if (recv_msg->msg_len) {
mgmt_work->msg = kzalloc(recv_msg->msg_len, GFP_KERNEL);
if (!mgmt_work->msg) {
dev_err(&pdev->dev, "Allocate mgmt msg memory failed\n");
kfree(mgmt_work);
return;
}
}
cb_state = cmpxchg(&mgmt_cb->state,
HINIC_MGMT_CB_ENABLED,
HINIC_MGMT_CB_ENABLED | HINIC_MGMT_CB_RUNNING);
mgmt_work->pf_to_mgmt = pf_to_mgmt;
mgmt_work->msg_len = recv_msg->msg_len;
memcpy(mgmt_work->msg, recv_msg->msg, recv_msg->msg_len);
mgmt_work->msg_id = recv_msg->msg_id;
mgmt_work->mod = recv_msg->mod;
mgmt_work->cmd = recv_msg->cmd;
mgmt_work->async_mgmt_to_pf = recv_msg->async_mgmt_to_pf;
if ((cb_state == HINIC_MGMT_CB_ENABLED) && (mgmt_cb->cb))
mgmt_cb->cb(mgmt_cb->handle, recv_msg->cmd,
recv_msg->msg, recv_msg->msg_len,
buf_out, &out_size);
else
dev_err(&pdev->dev, "No MGMT msg handler, mod: %d, cmd: %d\n",
recv_msg->mod, recv_msg->cmd);
mgmt_cb->state &= ~HINIC_MGMT_CB_RUNNING;
if (!recv_msg->async_mgmt_to_pf)
/* MGMT sent sync msg, send the response */
msg_to_mgmt_async(pf_to_mgmt, recv_msg->mod, recv_msg->cmd,
buf_out, out_size, MGMT_RESP,
recv_msg->msg_id);
INIT_WORK(&mgmt_work->work, recv_mgmt_msg_work_handler);
queue_work(pf_to_mgmt->workq, &mgmt_work->work);
}
/**
@ -546,6 +587,12 @@ static int alloc_msg_buf(struct hinic_pf_to_mgmt *pf_to_mgmt)
if (!pf_to_mgmt->sync_msg_buf)
return -ENOMEM;
pf_to_mgmt->mgmt_ack_buf = devm_kzalloc(&pdev->dev,
MAX_PF_MGMT_BUF_SIZE,
GFP_KERNEL);
if (!pf_to_mgmt->mgmt_ack_buf)
return -ENOMEM;
return 0;
}
@ -571,6 +618,11 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
return 0;
sema_init(&pf_to_mgmt->sync_msg_lock, 1);
pf_to_mgmt->workq = create_singlethread_workqueue("hinic_mgmt");
if (!pf_to_mgmt->workq) {
dev_err(&pdev->dev, "Failed to initialize MGMT workqueue\n");
return -ENOMEM;
}
pf_to_mgmt->sync_msg_id = 0;
err = alloc_msg_buf(pf_to_mgmt);
@ -605,4 +657,5 @@ void hinic_pf_to_mgmt_free(struct hinic_pf_to_mgmt *pf_to_mgmt)
hinic_aeq_unregister_hw_cb(&hwdev->aeqs, HINIC_MSG_FROM_MGMT_CPU);
hinic_api_cmd_free(pf_to_mgmt->cmd_chain);
destroy_workqueue(pf_to_mgmt->workq);
}


@ -119,6 +119,7 @@ struct hinic_pf_to_mgmt {
struct semaphore sync_msg_lock;
u16 sync_msg_id;
u8 *sync_msg_buf;
void *mgmt_ack_buf;
struct hinic_recv_msg recv_resp_msg_from_mgmt;
struct hinic_recv_msg recv_msg_from_mgmt;
@ -126,6 +127,21 @@ struct hinic_pf_to_mgmt {
struct hinic_api_cmd_chain *cmd_chain[HINIC_API_CMD_MAX];
struct hinic_mgmt_cb mgmt_cb[HINIC_MOD_MAX];
struct workqueue_struct *workq;
};
struct hinic_mgmt_msg_handle_work {
struct work_struct work;
struct hinic_pf_to_mgmt *pf_to_mgmt;
void *msg;
u16 msg_len;
enum hinic_mod_type mod;
u8 cmd;
u16 msg_id;
int async_mgmt_to_pf;
};
void hinic_register_mgmt_msg_cb(struct hinic_pf_to_mgmt *pf_to_mgmt,


@ -3959,7 +3959,7 @@ static void mvneta_mac_config(struct phylink_config *config, unsigned int mode,
/* When at 2.5G, the link partner can send frames with shortened
* preambles.
*/
if (state->speed == SPEED_2500)
if (state->interface == PHY_INTERFACE_MODE_2500BASEX)
new_ctrl4 |= MVNETA_GMAC4_SHORT_PREAMBLE_ENABLE;
if (pp->phy_interface != state->interface) {


@ -203,7 +203,7 @@ io_error:
static inline u16 gm_phy_read(struct sky2_hw *hw, unsigned port, u16 reg)
{
u16 v;
u16 v = 0;
__gm_phy_read(hw, port, reg, &v);
return v;
}


@ -29,6 +29,7 @@ struct mlx5e_dcbx {
bool manual_buffer;
u32 cable_len;
u32 xoff;
u16 port_buff_cell_sz;
};
#define MLX5E_MAX_DSCP (64)


@ -78,11 +78,26 @@ static const u32 mlx5e_ext_link_speed[MLX5E_EXT_LINK_MODES_NUMBER] = {
[MLX5E_400GAUI_8] = 400000,
};
bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev)
{
struct mlx5e_port_eth_proto eproto;
int err;
if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet))
return true;
err = mlx5_port_query_eth_proto(mdev, 1, true, &eproto);
if (err)
return false;
return !!eproto.cap;
}
static void mlx5e_port_get_speed_arr(struct mlx5_core_dev *mdev,
const u32 **arr, u32 *size,
bool force_legacy)
{
bool ext = force_legacy ? false : MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
bool ext = force_legacy ? false : mlx5e_ptys_ext_supported(mdev);
*size = ext ? ARRAY_SIZE(mlx5e_ext_link_speed) :
ARRAY_SIZE(mlx5e_link_speed);
@ -177,7 +192,7 @@ int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
bool ext;
int err;
ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
ext = mlx5e_ptys_ext_supported(mdev);
err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
if (err)
goto out;
@ -205,7 +220,7 @@ int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
int err;
int i;
ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
ext = mlx5e_ptys_ext_supported(mdev);
err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
if (err)
return err;


@ -54,7 +54,7 @@ int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
bool force_legacy);
bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev);
int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out);
int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in);
int mlx5e_port_query_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer);


@ -34,6 +34,7 @@
int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
struct mlx5e_port_buffer *port_buffer)
{
u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz;
struct mlx5_core_dev *mdev = priv->mdev;
int sz = MLX5_ST_SZ_BYTES(pbmc_reg);
u32 total_used = 0;
@ -57,11 +58,11 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
port_buffer->buffer[i].epsb =
MLX5_GET(bufferx_reg, buffer, epsb);
port_buffer->buffer[i].size =
MLX5_GET(bufferx_reg, buffer, size) << MLX5E_BUFFER_CELL_SHIFT;
MLX5_GET(bufferx_reg, buffer, size) * port_buff_cell_sz;
port_buffer->buffer[i].xon =
MLX5_GET(bufferx_reg, buffer, xon_threshold) << MLX5E_BUFFER_CELL_SHIFT;
MLX5_GET(bufferx_reg, buffer, xon_threshold) * port_buff_cell_sz;
port_buffer->buffer[i].xoff =
MLX5_GET(bufferx_reg, buffer, xoff_threshold) << MLX5E_BUFFER_CELL_SHIFT;
MLX5_GET(bufferx_reg, buffer, xoff_threshold) * port_buff_cell_sz;
total_used += port_buffer->buffer[i].size;
mlx5e_dbg(HW, priv, "buffer %d: size=%d, xon=%d, xoff=%d, epsb=%d, lossy=%d\n", i,
@ -73,7 +74,7 @@ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
}
port_buffer->port_buffer_size =
MLX5_GET(pbmc_reg, out, port_buffer_size) << MLX5E_BUFFER_CELL_SHIFT;
MLX5_GET(pbmc_reg, out, port_buffer_size) * port_buff_cell_sz;
port_buffer->spare_buffer_size =
port_buffer->port_buffer_size - total_used;
@ -88,9 +89,9 @@ out:
static int port_set_buffer(struct mlx5e_priv *priv,
struct mlx5e_port_buffer *port_buffer)
{
u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz;
struct mlx5_core_dev *mdev = priv->mdev;
int sz = MLX5_ST_SZ_BYTES(pbmc_reg);
void *buffer;
void *in;
int err;
int i;
@ -104,16 +105,18 @@ static int port_set_buffer(struct mlx5e_priv *priv,
goto out;
for (i = 0; i < MLX5E_MAX_BUFFER; i++) {
buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]);
void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]);
u64 size = port_buffer->buffer[i].size;
u64 xoff = port_buffer->buffer[i].xoff;
u64 xon = port_buffer->buffer[i].xon;
MLX5_SET(bufferx_reg, buffer, size,
port_buffer->buffer[i].size >> MLX5E_BUFFER_CELL_SHIFT);
MLX5_SET(bufferx_reg, buffer, lossy,
port_buffer->buffer[i].lossy);
MLX5_SET(bufferx_reg, buffer, xoff_threshold,
port_buffer->buffer[i].xoff >> MLX5E_BUFFER_CELL_SHIFT);
MLX5_SET(bufferx_reg, buffer, xon_threshold,
port_buffer->buffer[i].xon >> MLX5E_BUFFER_CELL_SHIFT);
do_div(size, port_buff_cell_sz);
do_div(xoff, port_buff_cell_sz);
do_div(xon, port_buff_cell_sz);
MLX5_SET(bufferx_reg, buffer, size, size);
MLX5_SET(bufferx_reg, buffer, lossy, port_buffer->buffer[i].lossy);
MLX5_SET(bufferx_reg, buffer, xoff_threshold, xoff);
MLX5_SET(bufferx_reg, buffer, xon_threshold, xon);
}
err = mlx5e_port_set_pbmc(mdev, in);
@ -143,7 +146,7 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
}
static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
u32 xoff, unsigned int max_mtu)
u32 xoff, unsigned int max_mtu, u16 port_buff_cell_sz)
{
int i;
@ -155,7 +158,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
}
if (port_buffer->buffer[i].size <
(xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT))) {
(xoff + max_mtu + port_buff_cell_sz)) {
pr_err("buffer_size[%d]=%d is not enough for lossless buffer\n",
i, port_buffer->buffer[i].size);
return -ENOMEM;
@ -175,6 +178,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
* @pfc_en: <input> current pfc configuration
* @buffer: <input> current prio to buffer mapping
* @xoff: <input> xoff value
* @port_buff_cell_sz: <input> port buffer cell_size
* @port_buffer: <output> port receive buffer configuration
* @change: <output>
*
@ -189,7 +193,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
* sets change to true if buffer configuration was modified.
*/
static int update_buffer_lossy(unsigned int max_mtu,
u8 pfc_en, u8 *buffer, u32 xoff,
u8 pfc_en, u8 *buffer, u32 xoff, u16 port_buff_cell_sz,
struct mlx5e_port_buffer *port_buffer,
bool *change)
{
@ -225,7 +229,7 @@ static int update_buffer_lossy(unsigned int max_mtu,
}
if (changed) {
err = update_xoff_threshold(port_buffer, xoff, max_mtu);
err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz);
if (err)
return err;
@ -262,6 +266,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
u32 *buffer_size,
u8 *prio2buffer)
{
u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz;
struct mlx5e_port_buffer port_buffer;
u32 xoff = calculate_xoff(priv, mtu);
bool update_prio2buffer = false;
@ -282,7 +287,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
if (change & MLX5E_PORT_BUFFER_CABLE_LEN) {
update_buffer = true;
err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
err = update_xoff_threshold(&port_buffer, xoff, max_mtu, port_buff_cell_sz);
if (err)
return err;
}
@ -292,7 +297,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
if (err)
return err;
err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff,
err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff, port_buff_cell_sz,
&port_buffer, &update_buffer);
if (err)
return err;
@ -304,7 +309,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
if (err)
return err;
err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer,
err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer, port_buff_cell_sz,
xoff, &port_buffer, &update_buffer);
if (err)
return err;
@ -329,7 +334,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
return -EINVAL;
update_buffer = true;
err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
err = update_xoff_threshold(&port_buffer, xoff, max_mtu, port_buff_cell_sz);
if (err)
return err;
}
@ -337,7 +342,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
/* Need to update buffer configuration if xoff value is changed */
if (!update_buffer && xoff != priv->dcbx.xoff) {
update_buffer = true;
err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
err = update_xoff_threshold(&port_buffer, xoff, max_mtu, port_buff_cell_sz);
if (err)
return err;
}


@ -36,7 +36,6 @@
#include "port.h"
#define MLX5E_MAX_BUFFER 8
#define MLX5E_BUFFER_CELL_SHIFT 7
#define MLX5E_DEFAULT_CABLE_LEN 7 /* 7 meters */
#define MLX5_BUFFER_SUPPORTED(mdev) (MLX5_CAP_GEN(mdev, pcam_reg) && \


@ -6,7 +6,6 @@
#include <linux/rculist.h>
#include <linux/rtnetlink.h>
#include <linux/workqueue.h>
#include <linux/rwlock.h>
#include <linux/spinlock.h>
#include <linux/notifier.h>
#include <net/netevent.h>


@ -1097,6 +1097,7 @@ mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg)
struct mlx5_ct_entry *entry = ptr;
mlx5_tc_ct_entry_del_rules(ct_priv, entry);
kfree(entry);
}
static void


@ -1217,6 +1217,24 @@ static int mlx5e_trust_initialize(struct mlx5e_priv *priv)
return 0;
}
#define MLX5E_BUFFER_CELL_SHIFT 7
static u16 mlx5e_query_port_buffers_cell_size(struct mlx5e_priv *priv)
{
struct mlx5_core_dev *mdev = priv->mdev;
u32 out[MLX5_ST_SZ_DW(sbcam_reg)] = {};
u32 in[MLX5_ST_SZ_DW(sbcam_reg)] = {};
if (!MLX5_CAP_GEN(mdev, sbcam_reg))
return (1 << MLX5E_BUFFER_CELL_SHIFT);
if (mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out),
MLX5_REG_SBCAM, 0, 0))
return (1 << MLX5E_BUFFER_CELL_SHIFT);
return MLX5_GET(sbcam_reg, out, cap_cell_size);
}
void mlx5e_dcbnl_initialize(struct mlx5e_priv *priv)
{
struct mlx5e_dcbx *dcbx = &priv->dcbx;
@ -1234,6 +1252,7 @@ void mlx5e_dcbnl_initialize(struct mlx5e_priv *priv)
if (priv->dcbx.mode == MLX5E_DCBX_PARAM_VER_OPER_HOST)
priv->dcbx.cap |= DCB_CAP_DCBX_HOST;
priv->dcbx.port_buff_cell_sz = mlx5e_query_port_buffers_cell_size(priv);
priv->dcbx.manual_buffer = false;
priv->dcbx.cable_len = MLX5E_DEFAULT_CABLE_LEN;


@ -200,7 +200,7 @@ static void mlx5e_ethtool_get_speed_arr(struct mlx5_core_dev *mdev,
struct ptys2ethtool_config **arr,
u32 *size)
{
bool ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
bool ext = mlx5e_ptys_ext_supported(mdev);
*arr = ext ? ptys2ext_ethtool_table : ptys2legacy_ethtool_table;
*size = ext ? ARRAY_SIZE(ptys2ext_ethtool_table) :
@ -883,7 +883,7 @@ static void get_lp_advertising(struct mlx5_core_dev *mdev, u32 eth_proto_lp,
struct ethtool_link_ksettings *link_ksettings)
{
unsigned long *lp_advertising = link_ksettings->link_modes.lp_advertising;
bool ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
bool ext = mlx5e_ptys_ext_supported(mdev);
ptys2ethtool_adver_link(lp_advertising, eth_proto_lp, ext);
}
@ -913,7 +913,7 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
__func__, err);
goto err_query_regs;
}
ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
ext = !!MLX5_GET_ETH_PROTO(ptys_reg, out, true, eth_proto_capability);
eth_proto_cap = MLX5_GET_ETH_PROTO(ptys_reg, out, ext,
eth_proto_capability);
eth_proto_admin = MLX5_GET_ETH_PROTO(ptys_reg, out, ext,
@ -1066,7 +1066,7 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
autoneg = link_ksettings->base.autoneg;
speed = link_ksettings->base.speed;
ext_supported = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
ext_supported = mlx5e_ptys_ext_supported(mdev);
ext = ext_requested(autoneg, adver, ext_supported);
if (!ext_supported && ext)
return -EOPNOTSUPP;


@ -3104,9 +3104,6 @@ int mlx5e_open(struct net_device *netdev)
mlx5_set_port_admin_status(priv->mdev, MLX5_PORT_UP);
mutex_unlock(&priv->state_lock);
if (mlx5_vxlan_allowed(priv->mdev->vxlan))
udp_tunnel_get_rx_info(netdev);
return err;
}
@ -5121,6 +5118,10 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
if (err)
goto err_destroy_flow_steering;
#ifdef CONFIG_MLX5_EN_ARFS
priv->netdev->rx_cpu_rmap = mlx5_eq_table_get_rmap(priv->mdev);
#endif
return 0;
err_destroy_flow_steering:
@ -5202,6 +5203,8 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
rtnl_lock();
if (netif_running(netdev))
mlx5e_open(netdev);
if (mlx5_vxlan_allowed(priv->mdev->vxlan))
udp_tunnel_get_rx_info(netdev);
netif_device_attach(netdev);
rtnl_unlock();
}
@ -5216,6 +5219,8 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv)
rtnl_lock();
if (netif_running(priv->netdev))
mlx5e_close(priv->netdev);
if (mlx5_vxlan_allowed(priv->mdev->vxlan))
udp_tunnel_drop_rx_info(priv->netdev);
netif_device_detach(priv->netdev);
rtnl_unlock();
@ -5288,10 +5293,6 @@ int mlx5e_netdev_init(struct net_device *netdev,
/* netdev init */
netif_carrier_off(netdev);
#ifdef CONFIG_MLX5_EN_ARFS
netdev->rx_cpu_rmap = mlx5_eq_table_get_rmap(mdev);
#endif
return 0;
err_free_cpumask:


@ -4670,9 +4670,10 @@ static bool is_flow_rule_duplicate_allowed(struct net_device *dev,
struct mlx5e_rep_priv *rpriv)
{
/* Offloaded flow rule is allowed to duplicate on non-uplink representor
* sharing tc block with other slaves of a lag device.
* sharing tc block with other slaves of a lag device. Rpriv can be NULL if this
* function is called from NIC mode.
*/
return netif_is_lag_port(dev) && rpriv->rep->vport != MLX5_VPORT_UPLINK;
return netif_is_lag_port(dev) && rpriv && rpriv->rep->vport != MLX5_VPORT_UPLINK;
}
int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv,
@ -4686,13 +4687,12 @@ int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv,
rcu_read_lock();
flow = rhashtable_lookup(tc_ht, &f->cookie, tc_ht_params);
rcu_read_unlock();
if (flow) {
/* Same flow rule offloaded to non-uplink representor sharing tc block,
* just return 0.
*/
if (is_flow_rule_duplicate_allowed(dev, rpriv) && flow->orig_dev != dev)
goto out;
goto rcu_unlock;
NL_SET_ERR_MSG_MOD(extack,
"flow cookie already exists, ignoring");
@ -4700,8 +4700,12 @@ int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv,
"flow cookie %lx already exists, ignoring\n",
f->cookie);
err = -EEXIST;
goto out;
goto rcu_unlock;
}
rcu_unlock:
rcu_read_unlock();
if (flow)
goto out;
trace_mlx5e_configure_flower(f);
err = mlx5e_tc_add_flow(priv, f, flags, dev, &flow);


@ -217,7 +217,6 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
}
/* Create ingress allow rule */
memset(spec, 0, sizeof(*spec));
spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
vport->ingress.allow_rule = mlx5_add_flow_rules(vport->ingress.acl, spec,


@ -293,7 +293,40 @@ static int mlx5_query_module_num(struct mlx5_core_dev *dev, int *module_num)
return 0;
}
static int mlx5_eeprom_page(int offset)
static int mlx5_query_module_id(struct mlx5_core_dev *dev, int module_num,
u8 *module_id)
{
u32 in[MLX5_ST_SZ_DW(mcia_reg)] = {};
u32 out[MLX5_ST_SZ_DW(mcia_reg)];
int err, status;
u8 *ptr;
MLX5_SET(mcia_reg, in, i2c_device_address, MLX5_I2C_ADDR_LOW);
MLX5_SET(mcia_reg, in, module, module_num);
MLX5_SET(mcia_reg, in, device_address, 0);
MLX5_SET(mcia_reg, in, page_number, 0);
MLX5_SET(mcia_reg, in, size, 1);
MLX5_SET(mcia_reg, in, l, 0);
err = mlx5_core_access_reg(dev, in, sizeof(in), out,
sizeof(out), MLX5_REG_MCIA, 0, 0);
if (err)
return err;
status = MLX5_GET(mcia_reg, out, status);
if (status) {
mlx5_core_err(dev, "query_mcia_reg failed: status: 0x%x\n",
status);
return -EIO;
}
ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
*module_id = ptr[0];
return 0;
}
static int mlx5_qsfp_eeprom_page(u16 offset)
{
if (offset < MLX5_EEPROM_PAGE_LENGTH)
/* Addresses between 0-255 - page 00 */
@ -307,7 +340,7 @@ static int mlx5_eeprom_page(int offset)
MLX5_EEPROM_HIGH_PAGE_LENGTH);
}
static int mlx5_eeprom_high_page_offset(int page_num)
static int mlx5_qsfp_eeprom_high_page_offset(int page_num)
{
if (!page_num) /* Page 0 always start from low page */
return 0;
@ -316,35 +349,62 @@ static int mlx5_eeprom_high_page_offset(int page_num)
return page_num * MLX5_EEPROM_HIGH_PAGE_LENGTH;
}
static void mlx5_qsfp_eeprom_params_set(u16 *i2c_addr, int *page_num, u16 *offset)
{
*i2c_addr = MLX5_I2C_ADDR_LOW;
*page_num = mlx5_qsfp_eeprom_page(*offset);
*offset -= mlx5_qsfp_eeprom_high_page_offset(*page_num);
}
static void mlx5_sfp_eeprom_params_set(u16 *i2c_addr, int *page_num, u16 *offset)
{
*i2c_addr = MLX5_I2C_ADDR_LOW;
*page_num = 0;
if (*offset < MLX5_EEPROM_PAGE_LENGTH)
return;
*i2c_addr = MLX5_I2C_ADDR_HIGH;
*offset -= MLX5_EEPROM_PAGE_LENGTH;
}
int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
u16 offset, u16 size, u8 *data)
{
int module_num, page_num, status, err;
int module_num, status, err, page_num = 0;
u32 in[MLX5_ST_SZ_DW(mcia_reg)] = {};
u32 out[MLX5_ST_SZ_DW(mcia_reg)];
u32 in[MLX5_ST_SZ_DW(mcia_reg)];
u16 i2c_addr;
void *ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
u16 i2c_addr = 0;
u8 module_id;
void *ptr;
err = mlx5_query_module_num(dev, &module_num);
if (err)
return err;
memset(in, 0, sizeof(in));
size = min_t(int, size, MLX5_EEPROM_MAX_BYTES);
err = mlx5_query_module_id(dev, module_num, &module_id);
if (err)
return err;
/* Get the page number related to the given offset */
page_num = mlx5_eeprom_page(offset);
/* Set the right offset according to the page number,
* For page_num > 0, relative offset is always >= 128 (high page).
*/
offset -= mlx5_eeprom_high_page_offset(page_num);
switch (module_id) {
case MLX5_MODULE_ID_SFP:
mlx5_sfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
break;
case MLX5_MODULE_ID_QSFP:
case MLX5_MODULE_ID_QSFP_PLUS:
case MLX5_MODULE_ID_QSFP28:
mlx5_qsfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
break;
default:
mlx5_core_err(dev, "Module ID not recognized: 0x%x\n", module_id);
return -EINVAL;
}
if (offset + size > MLX5_EEPROM_PAGE_LENGTH)
/* Cross pages read, read until offset 256 in low page */
size -= offset + size - MLX5_EEPROM_PAGE_LENGTH;
i2c_addr = MLX5_I2C_ADDR_LOW;
size = min_t(int, size, MLX5_EEPROM_MAX_BYTES);
MLX5_SET(mcia_reg, in, l, 0);
MLX5_SET(mcia_reg, in, module, module_num);
@ -365,6 +425,7 @@ int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
return -EIO;
}
ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
memcpy(data, ptr, size);
return size;


@ -1414,23 +1414,12 @@ static int mlxsw_pci_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
u16 num_pages;
int err;
mutex_init(&mlxsw_pci->cmd.lock);
init_waitqueue_head(&mlxsw_pci->cmd.wait);
mlxsw_pci->core = mlxsw_core;
mbox = mlxsw_cmd_mbox_alloc();
if (!mbox)
return -ENOMEM;
err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
if (err)
goto mbox_put;
err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
if (err)
goto err_out_mbox_alloc;
err = mlxsw_pci_sw_reset(mlxsw_pci, mlxsw_pci->id);
if (err)
goto err_sw_reset;
@ -1537,9 +1526,6 @@ err_query_fw:
mlxsw_pci_free_irq_vectors(mlxsw_pci);
err_alloc_irq:
err_sw_reset:
mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
err_out_mbox_alloc:
mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
mbox_put:
mlxsw_cmd_mbox_free(mbox);
return err;
@ -1553,8 +1539,6 @@ static void mlxsw_pci_fini(void *bus_priv)
mlxsw_pci_aqs_fini(mlxsw_pci);
mlxsw_pci_fw_area_fini(mlxsw_pci);
mlxsw_pci_free_irq_vectors(mlxsw_pci);
mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
}
static struct mlxsw_pci_queue *
@ -1776,6 +1760,37 @@ static const struct mlxsw_bus mlxsw_pci_bus = {
.features = MLXSW_BUS_F_TXRX | MLXSW_BUS_F_RESET,
};
static int mlxsw_pci_cmd_init(struct mlxsw_pci *mlxsw_pci)
{
int err;
mutex_init(&mlxsw_pci->cmd.lock);
init_waitqueue_head(&mlxsw_pci->cmd.wait);
err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
if (err)
goto err_in_mbox_alloc;
err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
if (err)
goto err_out_mbox_alloc;
return 0;
err_out_mbox_alloc:
mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
err_in_mbox_alloc:
mutex_destroy(&mlxsw_pci->cmd.lock);
return err;
}
static void mlxsw_pci_cmd_fini(struct mlxsw_pci *mlxsw_pci)
{
mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
mutex_destroy(&mlxsw_pci->cmd.lock);
}
static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
const char *driver_name = pdev->driver->name;
@ -1831,6 +1846,10 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
mlxsw_pci->pdev = pdev;
pci_set_drvdata(pdev, mlxsw_pci);
err = mlxsw_pci_cmd_init(mlxsw_pci);
if (err)
goto err_pci_cmd_init;
mlxsw_pci->bus_info.device_kind = driver_name;
mlxsw_pci->bus_info.device_name = pci_name(mlxsw_pci->pdev);
mlxsw_pci->bus_info.dev = &pdev->dev;
@ -1848,6 +1867,8 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return 0;
err_bus_device_register:
mlxsw_pci_cmd_fini(mlxsw_pci);
err_pci_cmd_init:
iounmap(mlxsw_pci->hw_addr);
err_ioremap:
err_pci_resource_len_check:
@ -1865,6 +1886,7 @@ static void mlxsw_pci_remove(struct pci_dev *pdev)
struct mlxsw_pci *mlxsw_pci = pci_get_drvdata(pdev);
mlxsw_core_bus_device_unregister(mlxsw_pci->core, false);
mlxsw_pci_cmd_fini(mlxsw_pci);
iounmap(mlxsw_pci->hw_addr);
pci_release_regions(mlxsw_pci->pdev);
pci_disable_device(mlxsw_pci->pdev);


@ -6262,7 +6262,7 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb,
}
fib_work = kzalloc(sizeof(*fib_work), GFP_ATOMIC);
if (WARN_ON(!fib_work))
if (!fib_work)
return NOTIFY_BAD;
fib_work->mlxsw_sp = router->mlxsw_sp;

View File

@ -468,12 +468,18 @@ static void ionic_get_ringparam(struct net_device *netdev,
ring->rx_pending = lif->nrxq_descs;
}
static void ionic_set_ringsize(struct ionic_lif *lif, void *arg)
{
struct ethtool_ringparam *ring = arg;
lif->ntxq_descs = ring->tx_pending;
lif->nrxq_descs = ring->rx_pending;
}
static int ionic_set_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring)
{
struct ionic_lif *lif = netdev_priv(netdev);
bool running;
int err;
if (ring->rx_mini_pending || ring->rx_jumbo_pending) {
netdev_info(netdev, "Changing jumbo or mini descriptors not supported\n");
@ -491,22 +497,7 @@ static int ionic_set_ringparam(struct net_device *netdev,
ring->rx_pending == lif->nrxq_descs)
return 0;
err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET);
if (err)
return err;
running = test_bit(IONIC_LIF_F_UP, lif->state);
if (running)
ionic_stop(netdev);
lif->ntxq_descs = ring->tx_pending;
lif->nrxq_descs = ring->rx_pending;
if (running)
ionic_open(netdev);
clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
return 0;
return ionic_reset_queues(lif, ionic_set_ringsize, ring);
}
static void ionic_get_channels(struct net_device *netdev,
@ -521,12 +512,17 @@ static void ionic_get_channels(struct net_device *netdev,
ch->combined_count = lif->nxqs;
}
static void ionic_set_queuecount(struct ionic_lif *lif, void *arg)
{
struct ethtool_channels *ch = arg;
lif->nxqs = ch->combined_count;
}
static int ionic_set_channels(struct net_device *netdev,
struct ethtool_channels *ch)
{
struct ionic_lif *lif = netdev_priv(netdev);
bool running;
int err;
if (!ch->combined_count || ch->other_count ||
ch->rx_count || ch->tx_count)
@ -535,21 +531,7 @@ static int ionic_set_channels(struct net_device *netdev,
if (ch->combined_count == lif->nxqs)
return 0;
err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET);
if (err)
return err;
running = test_bit(IONIC_LIF_F_UP, lif->state);
if (running)
ionic_stop(netdev);
lif->nxqs = ch->combined_count;
if (running)
ionic_open(netdev);
clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
return 0;
return ionic_reset_queues(lif, ionic_set_queuecount, ch);
}
static u32 ionic_get_priv_flags(struct net_device *netdev)


@ -1313,7 +1313,7 @@ static int ionic_change_mtu(struct net_device *netdev, int new_mtu)
return err;
netdev->mtu = new_mtu;
err = ionic_reset_queues(lif);
err = ionic_reset_queues(lif, NULL, NULL);
return err;
}
@ -1325,7 +1325,7 @@ static void ionic_tx_timeout_work(struct work_struct *ws)
netdev_info(lif->netdev, "Tx Timeout recovery\n");
rtnl_lock();
ionic_reset_queues(lif);
ionic_reset_queues(lif, NULL, NULL);
rtnl_unlock();
}
@ -1673,6 +1673,14 @@ int ionic_open(struct net_device *netdev)
if (err)
goto err_out;
err = netif_set_real_num_tx_queues(netdev, lif->nxqs);
if (err)
goto err_txrx_deinit;
err = netif_set_real_num_rx_queues(netdev, lif->nxqs);
if (err)
goto err_txrx_deinit;
/* don't start the queues until we have link */
if (netif_carrier_ok(netdev)) {
err = ionic_start_queues(lif);
@ -1980,7 +1988,7 @@ static const struct net_device_ops ionic_netdev_ops = {
.ndo_get_vf_stats = ionic_get_vf_stats,
};
int ionic_reset_queues(struct ionic_lif *lif)
int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg)
{
bool running;
int err = 0;
@ -1993,12 +2001,19 @@ int ionic_reset_queues(struct ionic_lif *lif)
if (running) {
netif_device_detach(lif->netdev);
err = ionic_stop(lif->netdev);
if (err)
goto reset_out;
}
if (!err && running) {
ionic_open(lif->netdev);
if (cb)
cb(lif, arg);
if (running) {
err = ionic_open(lif->netdev);
netif_device_attach(lif->netdev);
}
reset_out:
clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
return err;


@ -248,6 +248,8 @@ static inline u32 ionic_coal_hw_to_usec(struct ionic *ionic, u32 units)
return (units * div) / mult;
}
typedef void (*ionic_reset_cb)(struct ionic_lif *lif, void *arg);
void ionic_link_status_check_request(struct ionic_lif *lif);
void ionic_get_stats64(struct net_device *netdev,
struct rtnl_link_stats64 *ns);
@ -267,7 +269,7 @@ int ionic_lif_rss_config(struct ionic_lif *lif, u16 types,
int ionic_open(struct net_device *netdev);
int ionic_stop(struct net_device *netdev);
int ionic_reset_queues(struct ionic_lif *lif);
int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg);
static inline void debug_stats_txq_post(struct ionic_qcq *qcq,
struct ionic_txq_desc *desc, bool dbell)


@ -876,6 +876,8 @@ struct qed_dev {
struct qed_dbg_feature dbg_features[DBG_FEATURE_NUM];
u8 engine_for_debug;
bool disable_ilt_dump;
bool dbg_bin_dump;
DECLARE_HASHTABLE(connections, 10);
const struct firmware *firmware;


@ -7506,6 +7506,12 @@ static enum dbg_status format_feature(struct qed_hwfn *p_hwfn,
if (p_hwfn->cdev->print_dbg_data)
qed_dbg_print_feature(text_buf, text_size_bytes);
/* Just return the original binary buffer if requested */
if (p_hwfn->cdev->dbg_bin_dump) {
vfree(text_buf);
return DBG_STATUS_OK;
}
/* Free the old dump_buf and point the dump_buf to the newly allocagted
* and formatted text buffer.
*/
@ -7733,7 +7739,9 @@ int qed_dbg_mcp_trace_size(struct qed_dev *cdev)
#define REGDUMP_HEADER_SIZE_SHIFT 0
#define REGDUMP_HEADER_SIZE_MASK 0xffffff
#define REGDUMP_HEADER_FEATURE_SHIFT 24
#define REGDUMP_HEADER_FEATURE_MASK 0x3f
#define REGDUMP_HEADER_FEATURE_MASK 0x1f
#define REGDUMP_HEADER_BIN_DUMP_SHIFT 29
#define REGDUMP_HEADER_BIN_DUMP_MASK 0x1
#define REGDUMP_HEADER_OMIT_ENGINE_SHIFT 30
#define REGDUMP_HEADER_OMIT_ENGINE_MASK 0x1
#define REGDUMP_HEADER_ENGINE_SHIFT 31
@ -7771,6 +7779,7 @@ static u32 qed_calc_regdump_header(struct qed_dev *cdev,
feature, feature_size);
SET_FIELD(res, REGDUMP_HEADER_FEATURE, feature);
SET_FIELD(res, REGDUMP_HEADER_BIN_DUMP, 1);
SET_FIELD(res, REGDUMP_HEADER_OMIT_ENGINE, omit_engine);
SET_FIELD(res, REGDUMP_HEADER_ENGINE, engine);
@ -7794,6 +7803,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer)
omit_engine = 1;
mutex_lock(&qed_dbg_lock);
cdev->dbg_bin_dump = true;
org_engine = qed_get_debug_engine(cdev);
for (cur_engine = 0; cur_engine < cdev->num_hwfns; cur_engine++) {
@ -7931,6 +7941,10 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer)
DP_ERR(cdev, "qed_dbg_mcp_trace failed. rc = %d\n", rc);
}
/* Re-populate nvm attribute info */
qed_mcp_nvm_info_free(p_hwfn);
qed_mcp_nvm_info_populate(p_hwfn);
/* nvm cfg1 */
rc = qed_dbg_nvm_image(cdev,
(u8 *)buffer + offset +
@ -7993,6 +8007,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer)
QED_NVM_IMAGE_MDUMP, "QED_NVM_IMAGE_MDUMP", rc);
}
cdev->dbg_bin_dump = false;
mutex_unlock(&qed_dbg_lock);
return 0;


@ -4472,12 +4472,6 @@ static int qed_get_dev_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
return 0;
}
static void qed_nvm_info_free(struct qed_hwfn *p_hwfn)
{
kfree(p_hwfn->nvm_info.image_att);
p_hwfn->nvm_info.image_att = NULL;
}
static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
void __iomem *p_regview,
void __iomem *p_doorbells,
@ -4562,7 +4556,7 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
return rc;
err3:
if (IS_LEAD_HWFN(p_hwfn))
qed_nvm_info_free(p_hwfn);
qed_mcp_nvm_info_free(p_hwfn);
err2:
if (IS_LEAD_HWFN(p_hwfn))
qed_iov_free_hw_info(p_hwfn->cdev);
@ -4623,7 +4617,7 @@ int qed_hw_prepare(struct qed_dev *cdev,
if (rc) {
if (IS_PF(cdev)) {
qed_init_free(p_hwfn);
qed_nvm_info_free(p_hwfn);
qed_mcp_nvm_info_free(p_hwfn);
qed_mcp_free(p_hwfn);
qed_hw_hwfn_free(p_hwfn);
}
@ -4657,7 +4651,7 @@ void qed_hw_remove(struct qed_dev *cdev)
qed_iov_free_hw_info(cdev);
qed_nvm_info_free(p_hwfn);
qed_mcp_nvm_info_free(p_hwfn);
}
static void qed_chain_free_next_ptr(struct qed_dev *cdev,


@ -3280,6 +3280,13 @@ err0:
return rc;
}
void qed_mcp_nvm_info_free(struct qed_hwfn *p_hwfn)
{
kfree(p_hwfn->nvm_info.image_att);
p_hwfn->nvm_info.image_att = NULL;
p_hwfn->nvm_info.valid = false;
}
int
qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn,
enum qed_nvm_images image_id,


@ -1220,6 +1220,13 @@ void qed_mcp_read_ufp_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
*/
int qed_mcp_nvm_info_populate(struct qed_hwfn *p_hwfn);
/**
* @brief Delete nvm info shadow in the given hardware function
*
* @param p_hwfn
*/
void qed_mcp_nvm_info_free(struct qed_hwfn *p_hwfn);
/**
* @brief Get the engine affinity configuration.
*


@ -47,15 +47,23 @@ static int rmnet_unregister_real_device(struct net_device *real_dev)
return 0;
}
static int rmnet_register_real_device(struct net_device *real_dev)
static int rmnet_register_real_device(struct net_device *real_dev,
struct netlink_ext_ack *extack)
{
struct rmnet_port *port;
int rc, entry;
ASSERT_RTNL();
if (rmnet_is_real_dev_registered(real_dev))
if (rmnet_is_real_dev_registered(real_dev)) {
port = rmnet_get_port_rtnl(real_dev);
if (port->rmnet_mode != RMNET_EPMODE_VND) {
NL_SET_ERR_MSG_MOD(extack, "bridge device already exists");
return -EINVAL;
}
return 0;
}
port = kzalloc(sizeof(*port), GFP_KERNEL);
if (!port)
@ -133,7 +141,7 @@ static int rmnet_newlink(struct net *src_net, struct net_device *dev,
mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]);
err = rmnet_register_real_device(real_dev);
err = rmnet_register_real_device(real_dev, extack);
if (err)
goto err0;
@ -422,7 +430,7 @@ int rmnet_add_bridge(struct net_device *rmnet_dev,
}
if (port->rmnet_mode != RMNET_EPMODE_VND) {
NL_SET_ERR_MSG_MOD(extack, "bridge device already exists");
NL_SET_ERR_MSG_MOD(extack, "more than one bridge dev attached");
return -EINVAL;
}
@ -433,7 +441,7 @@ int rmnet_add_bridge(struct net_device *rmnet_dev,
return -EBUSY;
}
err = rmnet_register_real_device(slave_dev);
err = rmnet_register_real_device(slave_dev, extack);
if (err)
return -EBUSY;


@ -500,6 +500,13 @@ static int gsi_channel_stop_command(struct gsi_channel *channel)
int ret;
state = gsi_channel_state(channel);
/* Channel could have entered STOPPED state since last call
* if it timed out. If so, we're done.
*/
if (state == GSI_CHANNEL_STATE_STOPPED)
return 0;
if (state != GSI_CHANNEL_STATE_STARTED &&
state != GSI_CHANNEL_STATE_STOP_IN_PROC)
return -EINVAL;
@ -789,20 +796,11 @@ int gsi_channel_start(struct gsi *gsi, u32 channel_id)
int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
{
struct gsi_channel *channel = &gsi->channel[channel_id];
enum gsi_channel_state state;
u32 retries;
int ret;
gsi_channel_freeze(channel);
/* Channel could have entered STOPPED state since last call if the
* STOP command timed out. We won't stop a channel if stopping it
* was successful previously (so we still want the freeze above).
*/
state = gsi_channel_state(channel);
if (state == GSI_CHANNEL_STATE_STOPPED)
return 0;
/* RX channels might require a little time to enter STOPPED state */
retries = channel->toward_ipa ? 0 : GSI_CHANNEL_STOP_RX_RETRIES;


@ -586,6 +586,21 @@ u32 ipa_cmd_tag_process_count(void)
return 4;
}
void ipa_cmd_tag_process(struct ipa *ipa)
{
u32 count = ipa_cmd_tag_process_count();
struct gsi_trans *trans;
trans = ipa_cmd_trans_alloc(ipa, count);
if (trans) {
ipa_cmd_tag_process_add(trans);
gsi_trans_commit_wait(trans);
} else {
dev_err(&ipa->pdev->dev,
"error allocating %u entry tag transaction\n", count);
}
}
static struct ipa_cmd_info *
ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count)
{


@ -171,6 +171,14 @@ void ipa_cmd_tag_process_add(struct gsi_trans *trans);
*/
u32 ipa_cmd_tag_process_count(void);
/**
* ipa_cmd_tag_process() - Perform a tag process
*
* @Return: The number of elements to allocate in a transaction
* to hold tag process commands
*/
void ipa_cmd_tag_process(struct ipa *ipa);
/**
* ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint
* @ipa: IPA pointer


@ -44,7 +44,6 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
.endpoint = {
.seq_type = IPA_SEQ_INVALID,
.config = {
.checksum = true,
.aggregation = true,
.status_enable = true,
.rx = {


@ -1450,6 +1450,8 @@ void ipa_endpoint_suspend(struct ipa *ipa)
if (ipa->modem_netdev)
ipa_modem_suspend(ipa->modem_netdev);
ipa_cmd_tag_process(ipa);
ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]);
ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]);
}


@ -6,6 +6,7 @@
#include <linux/types.h>
#include "ipa_gsi.h"
#include "gsi_trans.h"
#include "ipa.h"
#include "ipa_endpoint.h"


@ -8,7 +8,9 @@
#include <linux/types.h>
struct gsi;
struct gsi_trans;
struct ipa_gsi_endpoint_data;
/**
* ipa_gsi_trans_complete() - GSI transaction completion callback


@ -119,7 +119,7 @@ struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
sizeof_field(struct ipa_driver_init_complete_rsp,
rsp),
.tlv_type = 0x02,
.elem_size = offsetof(struct ipa_driver_init_complete_rsp,
.offset = offsetof(struct ipa_driver_init_complete_rsp,
rsp),
.ei_array = qmi_response_type_v01_ei,
},
@ -137,7 +137,7 @@ struct qmi_elem_info ipa_init_complete_ind_ei[] = {
sizeof_field(struct ipa_init_complete_ind,
status),
.tlv_type = 0x02,
.elem_size = offsetof(struct ipa_init_complete_ind,
.offset = offsetof(struct ipa_init_complete_ind,
status),
.ei_array = qmi_response_type_v01_ei,
},
@ -218,7 +218,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
sizeof_field(struct ipa_init_modem_driver_req,
platform_type_valid),
.tlv_type = 0x10,
.elem_size = offsetof(struct ipa_init_modem_driver_req,
.offset = offsetof(struct ipa_init_modem_driver_req,
platform_type_valid),
},
{


@ -4052,9 +4052,8 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
return err;
netdev_lockdep_set_classes(dev);
lockdep_set_class_and_subclass(&dev->addr_list_lock,
&macsec_netdev_addr_lock_key,
dev->lower_level);
lockdep_set_class(&dev->addr_list_lock,
&macsec_netdev_addr_lock_key);
err = netdev_upper_dev_link(real_dev, dev, extack);
if (err < 0)


@ -880,9 +880,8 @@ static struct lock_class_key macvlan_netdev_addr_lock_key;
static void macvlan_set_lockdep_class(struct net_device *dev)
{
netdev_lockdep_set_classes(dev);
lockdep_set_class_and_subclass(&dev->addr_list_lock,
&macvlan_netdev_addr_lock_key,
dev->lower_level);
lockdep_set_class(&dev->addr_list_lock,
&macvlan_netdev_addr_lock_key);
}
static int macvlan_init(struct net_device *dev)


@ -62,6 +62,7 @@
#include <net/rtnetlink.h>
#include <net/sock.h>
#include <net/xdp.h>
#include <net/ip_tunnels.h>
#include <linux/seq_file.h>
#include <linux/uio.h>
#include <linux/skb_array.h>
@ -1351,6 +1352,7 @@ static void tun_net_init(struct net_device *dev)
switch (tun->flags & TUN_TYPE_MASK) {
case IFF_TUN:
dev->netdev_ops = &tun_netdev_ops;
dev->header_ops = &ip_tunnel_header_ops;
/* Point-to-Point TUN Device */
dev->hard_header_len = 0;


@ -1370,6 +1370,7 @@ static const struct usb_device_id products[] = {
{QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)}, /* SIMCom 7100E, 7230E, 7600E ++ */
{QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */
{QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */
{QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)}, /* Quectel EG95 */
{QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */
{QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */
{QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */


@ -1287,11 +1287,14 @@ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf)
/* Init all registers */
ret = smsc95xx_reset(dev);
if (ret)
goto free_pdata;
/* detect device revision as different features may be available */
ret = smsc95xx_read_reg(dev, ID_REV, &val);
if (ret < 0)
return ret;
goto free_pdata;
val >>= 16;
pdata->chip_id = val;
pdata->mdix_ctrl = get_mdix_status(dev->net);
@ -1317,6 +1320,10 @@ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf)
schedule_delayed_work(&pdata->carrier_check, CARRIER_CHECK_DELAY);
return 0;
free_pdata:
kfree(pdata);
return ret;
}
static void smsc95xx_unbind(struct usbnet *dev, struct usb_interface *intf)


@ -303,7 +303,6 @@ static void lapbeth_setup(struct net_device *dev)
dev->netdev_ops = &lapbeth_netdev_ops;
dev->needs_free_netdev = true;
dev->type = ARPHRD_X25;
dev->hard_header_len = 3;
dev->mtu = 1000;
dev->addr_len = 0;
}
@ -324,6 +323,14 @@ static int lapbeth_new_device(struct net_device *dev)
if (!ndev)
goto out;
/* When transmitting data:
* first this driver removes a pseudo header of 1 byte,
* then the lapb module prepends an LAPB header of at most 3 bytes,
* then this driver prepends a length field of 2 bytes,
* then the underlying Ethernet device prepends its own header.
*/
ndev->hard_header_len = -1 + 3 + 2 + dev->hard_header_len;
lapbeth = netdev_priv(ndev);
lapbeth->axdev = ndev;

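A quick sanity check of the headroom arithmetic in the new comment, assuming the underlying device is plain Ethernet (so dev->hard_header_len is the usual 14 bytes); the worked number below is illustrative, not part of the patch:

        /* pseudo header removed, LAPB header, length field, Ethernet header */
        ndev->hard_header_len = -1 + 3 + 2 + 14;        /* = 18 bytes of headroom */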

@ -262,6 +262,7 @@ static void wg_setup(struct net_device *dev)
max(sizeof(struct ipv6hdr), sizeof(struct iphdr));
dev->netdev_ops = &netdev_ops;
dev->header_ops = &ip_tunnel_header_ops;
dev->hard_header_len = 0;
dev->addr_len = 0;
dev->needed_headroom = DATA_PACKET_HEAD_ROOM;


@ -11,6 +11,7 @@
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <net/ip_tunnels.h>
struct wg_device;
struct wg_peer;
@ -65,25 +66,9 @@ struct packet_cb {
#define PACKET_CB(skb) ((struct packet_cb *)((skb)->cb))
#define PACKET_PEER(skb) (PACKET_CB(skb)->keypair->entry.peer)
/* Returns either the correct skb->protocol value, or 0 if invalid. */
static inline __be16 wg_examine_packet_protocol(struct sk_buff *skb)
{
if (skb_network_header(skb) >= skb->head &&
(skb_network_header(skb) + sizeof(struct iphdr)) <=
skb_tail_pointer(skb) &&
ip_hdr(skb)->version == 4)
return htons(ETH_P_IP);
if (skb_network_header(skb) >= skb->head &&
(skb_network_header(skb) + sizeof(struct ipv6hdr)) <=
skb_tail_pointer(skb) &&
ipv6_hdr(skb)->version == 6)
return htons(ETH_P_IPV6);
return 0;
}
static inline bool wg_check_packet_protocol(struct sk_buff *skb)
{
__be16 real_protocol = wg_examine_packet_protocol(skb);
__be16 real_protocol = ip_tunnel_parse_protocol(skb);
return real_protocol && skb->protocol == real_protocol;
}

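The new ip_tunnel_parse_protocol() helper (declared in ip_tunnels.h further below, and also wired up to tun via ip_tunnel_header_ops) centralizes the version-nibble check that wireguard open-coded in the wg_examine_packet_protocol() lines removed above. A minimal sketch of the equivalent logic, written out only for illustration and based directly on the removed code (kernel skbuff/ip headers assumed):

        static __be16 example_parse_l3_protocol(const struct sk_buff *skb)
        {
                /* Enough room for an IPv4 header and a version nibble of 4? */
                if (skb_network_header(skb) >= skb->head &&
                    skb_network_header(skb) + sizeof(struct iphdr) <= skb_tail_pointer(skb) &&
                    ip_hdr(skb)->version == 4)
                        return htons(ETH_P_IP);

                /* Otherwise try IPv6 the same way. */
                if (skb_network_header(skb) >= skb->head &&
                    skb_network_header(skb) + sizeof(struct ipv6hdr) <= skb_tail_pointer(skb) &&
                    ipv6_hdr(skb)->version == 6)
                        return htons(ETH_P_IPV6);

                return 0;       /* not a recognizable L3 packet */
        }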

@ -387,7 +387,7 @@ static void wg_packet_consume_data_done(struct wg_peer *peer,
*/
skb->ip_summed = CHECKSUM_UNNECESSARY;
skb->csum_level = ~0; /* All levels */
skb->protocol = wg_examine_packet_protocol(skb);
skb->protocol = ip_tunnel_parse_protocol(skb);
if (skb->protocol == htons(ETH_P_IP)) {
len = ntohs(ip_hdr(skb)->tot_len);
if (unlikely(len < sizeof(struct iphdr)))


@ -33,7 +33,7 @@ int netns_bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr);
int netns_bpf_prog_attach(const union bpf_attr *attr,
struct bpf_prog *prog);
int netns_bpf_prog_detach(const union bpf_attr *attr);
int netns_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype);
int netns_bpf_link_create(const union bpf_attr *attr,
struct bpf_prog *prog);
#else
@ -49,7 +49,8 @@ static inline int netns_bpf_prog_attach(const union bpf_attr *attr,
return -EOPNOTSUPP;
}
static inline int netns_bpf_prog_detach(const union bpf_attr *attr)
static inline int netns_bpf_prog_detach(const union bpf_attr *attr,
enum bpf_prog_type ptype)
{
return -EOPNOTSUPP;
}


@ -1543,13 +1543,16 @@ static inline void bpf_map_offload_map_free(struct bpf_map *map)
#endif /* CONFIG_NET && CONFIG_BPF_SYSCALL */
#if defined(CONFIG_BPF_STREAM_PARSER)
int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog, u32 which);
int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog,
struct bpf_prog *old, u32 which);
int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog);
int sock_map_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype);
void sock_map_unhash(struct sock *sk);
void sock_map_close(struct sock *sk, long timeout);
#else
static inline int sock_map_prog_update(struct bpf_map *map,
struct bpf_prog *prog, u32 which)
struct bpf_prog *prog,
struct bpf_prog *old, u32 which)
{
return -EOPNOTSUPP;
}
@ -1559,6 +1562,12 @@ static inline int sock_map_get_from_fd(const union bpf_attr *attr,
{
return -EINVAL;
}
static inline int sock_map_prog_detach(const union bpf_attr *attr,
enum bpf_prog_type ptype)
{
return -EOPNOTSUPP;
}
#endif /* CONFIG_BPF_STREAM_PARSER */
#if defined(CONFIG_INET) && defined(CONFIG_BPF_SYSCALL)


@ -82,6 +82,11 @@ static inline bool btf_type_is_int(const struct btf_type *t)
return BTF_INFO_KIND(t->info) == BTF_KIND_INT;
}
static inline bool btf_type_is_small_int(const struct btf_type *t)
{
return btf_type_is_int(t) && t->size <= sizeof(u64);
}
static inline bool btf_type_is_enum(const struct btf_type *t)
{
return BTF_INFO_KIND(t->info) == BTF_KIND_ENUM;


@ -790,7 +790,9 @@ struct sock_cgroup_data {
union {
#ifdef __LITTLE_ENDIAN
struct {
u8 is_data;
u8 is_data : 1;
u8 no_refcnt : 1;
u8 unused : 6;
u8 padding;
u16 prioidx;
u32 classid;
@ -800,7 +802,9 @@ struct sock_cgroup_data {
u32 classid;
u16 prioidx;
u8 padding;
u8 is_data;
u8 unused : 6;
u8 no_refcnt : 1;
u8 is_data : 1;
} __packed;
#endif
u64 val;


@ -822,6 +822,7 @@ extern spinlock_t cgroup_sk_update_lock;
void cgroup_sk_alloc_disable(void);
void cgroup_sk_alloc(struct sock_cgroup_data *skcd);
void cgroup_sk_clone(struct sock_cgroup_data *skcd);
void cgroup_sk_free(struct sock_cgroup_data *skcd);
static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
@ -835,7 +836,7 @@ static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
*/
v = READ_ONCE(skcd->val);
if (v & 1)
if (v & 3)
return &cgrp_dfl_root.cgrp;
return (struct cgroup *)(unsigned long)v ?: &cgrp_dfl_root.cgrp;
@ -847,6 +848,7 @@ static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
#else /* CONFIG_CGROUP_DATA */
static inline void cgroup_sk_alloc(struct sock_cgroup_data *skcd) {}
static inline void cgroup_sk_clone(struct sock_cgroup_data *skcd) {}
static inline void cgroup_sk_free(struct sock_cgroup_data *skcd) {}
#endif /* CONFIG_CGROUP_DATA */


@ -85,4 +85,5 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs);
int dma_direct_supported(struct device *dev, u64 mask);
bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr);
#endif /* _LINUX_DMA_DIRECT_H */


@ -461,6 +461,7 @@ int dma_set_mask(struct device *dev, u64 mask);
int dma_set_coherent_mask(struct device *dev, u64 mask);
u64 dma_get_required_mask(struct device *dev);
size_t dma_max_mapping_size(struct device *dev);
bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
unsigned long dma_get_merge_boundary(struct device *dev);
#else /* CONFIG_HAS_DMA */
static inline dma_addr_t dma_map_page_attrs(struct device *dev,
@ -571,6 +572,10 @@ static inline size_t dma_max_mapping_size(struct device *dev)
{
return 0;
}
static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
{
return false;
}
static inline unsigned long dma_get_merge_boundary(struct device *dev)
{
return 0;


@ -3333,13 +3333,17 @@ struct ieee80211_multiple_bssid_configuration {
#define WLAN_AKM_SUITE_TDLS SUITE(0x000FAC, 7)
#define WLAN_AKM_SUITE_SAE SUITE(0x000FAC, 8)
#define WLAN_AKM_SUITE_FT_OVER_SAE SUITE(0x000FAC, 9)
#define WLAN_AKM_SUITE_AP_PEER_KEY SUITE(0x000FAC, 10)
#define WLAN_AKM_SUITE_8021X_SUITE_B SUITE(0x000FAC, 11)
#define WLAN_AKM_SUITE_8021X_SUITE_B_192 SUITE(0x000FAC, 12)
#define WLAN_AKM_SUITE_FT_8021X_SHA384 SUITE(0x000FAC, 13)
#define WLAN_AKM_SUITE_FILS_SHA256 SUITE(0x000FAC, 14)
#define WLAN_AKM_SUITE_FILS_SHA384 SUITE(0x000FAC, 15)
#define WLAN_AKM_SUITE_FT_FILS_SHA256 SUITE(0x000FAC, 16)
#define WLAN_AKM_SUITE_FT_FILS_SHA384 SUITE(0x000FAC, 17)
#define WLAN_AKM_SUITE_OWE SUITE(0x000FAC, 18)
#define WLAN_AKM_SUITE_FT_PSK_SHA384 SUITE(0x000FAC, 19)
#define WLAN_AKM_SUITE_PSK_SHA384 SUITE(0x000FAC, 20)
#define WLAN_MAX_KEY_LEN 32


@ -25,6 +25,8 @@
#define VLAN_ETH_DATA_LEN 1500 /* Max. octets in payload */
#define VLAN_ETH_FRAME_LEN 1518 /* Max. octets in frame sans FCS */
#define VLAN_MAX_DEPTH 8 /* Max. number of nested VLAN tags parsed */
/*
* struct vlan_hdr - vlan header
* @h_vlan_TCI: priority and VLAN ID
@ -577,10 +579,10 @@ static inline int vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci)
* Returns the EtherType of the packet, regardless of whether it is
* vlan encapsulated (normal or hardware accelerated) or not.
*/
static inline __be16 __vlan_get_protocol(struct sk_buff *skb, __be16 type,
static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
int *depth)
{
unsigned int vlan_depth = skb->mac_len;
unsigned int vlan_depth = skb->mac_len, parse_depth = VLAN_MAX_DEPTH;
/* if type is 802.1Q/AD then the header should already be
* present at mac_len - VLAN_HLEN (if mac_len > 0), or at
@ -595,13 +597,12 @@ static inline __be16 __vlan_get_protocol(struct sk_buff *skb, __be16 type,
vlan_depth = ETH_HLEN;
}
do {
struct vlan_hdr *vh;
struct vlan_hdr vhdr, *vh;
if (unlikely(!pskb_may_pull(skb,
vlan_depth + VLAN_HLEN)))
vh = skb_header_pointer(skb, vlan_depth, sizeof(vhdr), &vhdr);
if (unlikely(!vh || !--parse_depth))
return 0;
vh = (struct vlan_hdr *)(skb->data + vlan_depth);
type = vh->h_vlan_encapsulated_proto;
vlan_depth += VLAN_HLEN;
} while (eth_type_vlan(type));
@ -620,11 +621,25 @@ static inline __be16 __vlan_get_protocol(struct sk_buff *skb, __be16 type,
* Returns the EtherType of the packet, regardless of whether it is
* vlan encapsulated (normal or hardware accelerated) or not.
*/
static inline __be16 vlan_get_protocol(struct sk_buff *skb)
static inline __be16 vlan_get_protocol(const struct sk_buff *skb)
{
return __vlan_get_protocol(skb, skb->protocol, NULL);
}
/* A getter for the SKB protocol field which will handle VLAN tags consistently
* whether VLAN acceleration is enabled or not.
*/
static inline __be16 skb_protocol(const struct sk_buff *skb, bool skip_vlan)
{
if (!skip_vlan)
/* VLAN acceleration strips the VLAN header from the skb and
* moves it to skb->vlan_proto
*/
return skb_vlan_tag_present(skb) ? skb->vlan_proto : skb->protocol;
return vlan_get_protocol(skb);
}
static inline void vlan_set_encap_proto(struct sk_buff *skb,
struct vlan_hdr *vhdr)
{

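The new skb_protocol() getter replaces open-coded checks such as the tc_skb_protocol() helper removed later in this series, and handles both VLAN acceleration and nested (Q-in-Q) tags through the bounded __vlan_get_protocol() walk above. A minimal, hypothetical caller, only to illustrate the intended use (the function name is made up; usual skbuff/if_vlan headers assumed):

        static int example_l3_family(const struct sk_buff *skb)
        {
                /* skip_vlan == true: look through any number of VLAN tags,
                 * accelerated or on the wire, and report the inner EtherType.
                 */
                switch (skb_protocol(skb, true)) {
                case htons(ETH_P_IP):
                        return AF_INET;
                case htons(ETH_P_IPV6):
                        return AF_INET6;
                default:
                        return AF_UNSPEC;
                }
        }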

@ -147,6 +147,7 @@ enum {
MLX5_REG_MCDA = 0x9063,
MLX5_REG_MCAM = 0x907f,
MLX5_REG_MIRC = 0x9162,
MLX5_REG_SBCAM = 0xB01F,
MLX5_REG_RESOURCE_DUMP = 0xC000,
};


@ -9960,6 +9960,34 @@ struct mlx5_ifc_pptb_reg_bits {
u8 untagged_buff[0x4];
};
struct mlx5_ifc_sbcam_reg_bits {
u8 reserved_at_0[0x8];
u8 feature_group[0x8];
u8 reserved_at_10[0x8];
u8 access_reg_group[0x8];
u8 reserved_at_20[0x20];
u8 sb_access_reg_cap_mask[4][0x20];
u8 reserved_at_c0[0x80];
u8 sb_feature_cap_mask[4][0x20];
u8 reserved_at_1c0[0x40];
u8 cap_total_buffer_size[0x20];
u8 cap_cell_size[0x10];
u8 cap_max_pg_buffers[0x8];
u8 cap_num_pool_supported[0x8];
u8 reserved_at_240[0x8];
u8 cap_sbsr_stat_size[0x8];
u8 cap_max_tclass_data[0x8];
u8 cap_max_cpu_ingress_tclass_sb[0x8];
};
struct mlx5_ifc_pbmc_reg_bits {
u8 reserved_at_0[0x8];
u8 local_port[0x8];


@ -430,6 +430,19 @@ static inline void psock_set_prog(struct bpf_prog **pprog,
bpf_prog_put(prog);
}
static inline int psock_replace_prog(struct bpf_prog **pprog,
struct bpf_prog *prog,
struct bpf_prog *old)
{
if (cmpxchg(pprog, old, prog) != old)
return -ENOENT;
if (old)
bpf_prog_put(old);
return 0;
}
static inline void psock_progs_drop(struct sk_psock_progs *progs)
{
psock_set_prog(&progs->msg_parser, NULL);

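psock_replace_prog() above is a plain compare-and-swap: the caller names the program it believes is installed, and the update only happens if that is still the case. A standalone sketch of the same pattern using C11 atomics (local names, not kernel API):

        #include <errno.h>
        #include <stdatomic.h>

        /* Replace *slot with new_val only if it still holds old_val. */
        static int replace_if_current(_Atomic(void *) *slot, void *new_val, void *old_val)
        {
                void *expected = old_val;

                if (!atomic_compare_exchange_strong(slot, &expected, new_val))
                        return -ENOENT;         /* someone else swapped it first */
                return 0;                       /* caller may now release old_val */
        }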

@ -400,7 +400,15 @@ static inline struct neighbour *dst_neigh_lookup(const struct dst_entry *dst, co
static inline struct neighbour *dst_neigh_lookup_skb(const struct dst_entry *dst,
struct sk_buff *skb)
{
struct neighbour *n = dst->ops->neigh_lookup(dst, skb, NULL);
struct neighbour *n = NULL;
/* The packets from tunnel devices (eg bareudp) may have only
* metadata in the dst pointer of skb. Hence a pointer check of
* neigh_lookup is needed.
*/
if (dst->ops->neigh_lookup)
n = dst->ops->neigh_lookup(dst, skb, NULL);
return IS_ERR(n) ? NULL : n;
}


@ -372,7 +372,8 @@ flow_dissector_init_keys(struct flow_dissector_key_control *key_control,
}
#ifdef CONFIG_BPF_SYSCALL
int flow_dissector_bpf_prog_attach(struct net *net, struct bpf_prog *prog);
int flow_dissector_bpf_prog_attach_check(struct net *net,
struct bpf_prog *prog);
#endif /* CONFIG_BPF_SYSCALL */
#endif


@ -35,13 +35,6 @@ struct genl_info;
* do additional, common, filtering and return an error
* @post_doit: called after an operation's doit callback, it may
* undo operations done by pre_doit, for example release locks
* @mcast_bind: a socket bound to the given multicast group (which
* is given as the offset into the groups array)
* @mcast_unbind: a socket was unbound from the given multicast group.
* Note that unbind() will not be called symmetrically if the
* generic netlink family is removed while there are still open
* sockets.
* @attrbuf: buffer to store parsed attributes (private)
* @mcgrps: multicast groups used by this family
* @n_mcgrps: number of multicast groups
* @mcgrp_offset: starting number of multicast group IDs in this family
@ -64,9 +57,6 @@ struct genl_family {
void (*post_doit)(const struct genl_ops *ops,
struct sk_buff *skb,
struct genl_info *info);
int (*mcast_bind)(struct net *net, int group);
void (*mcast_unbind)(struct net *net, int group);
struct nlattr ** attrbuf; /* private */
const struct genl_ops * ops;
const struct genl_multicast_group *mcgrps;
unsigned int n_ops;


@ -4,6 +4,7 @@
#include <linux/ip.h>
#include <linux/skbuff.h>
#include <linux/if_vlan.h>
#include <net/inet_sock.h>
#include <net/dsfield.h>
@ -172,7 +173,7 @@ static inline void ipv6_copy_dscp(unsigned int dscp, struct ipv6hdr *inner)
static inline int INET_ECN_set_ce(struct sk_buff *skb)
{
switch (skb->protocol) {
switch (skb_protocol(skb, true)) {
case cpu_to_be16(ETH_P_IP):
if (skb_network_header(skb) + sizeof(struct iphdr) <=
skb_tail_pointer(skb))
@ -191,7 +192,7 @@ static inline int INET_ECN_set_ce(struct sk_buff *skb)
static inline int INET_ECN_set_ect1(struct sk_buff *skb)
{
switch (skb->protocol) {
switch (skb_protocol(skb, true)) {
case cpu_to_be16(ETH_P_IP):
if (skb_network_header(skb) + sizeof(struct iphdr) <=
skb_tail_pointer(skb))
@ -272,12 +273,16 @@ static inline int IP_ECN_decapsulate(const struct iphdr *oiph,
{
__u8 inner;
if (skb->protocol == htons(ETH_P_IP))
switch (skb_protocol(skb, true)) {
case htons(ETH_P_IP):
inner = ip_hdr(skb)->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
break;
case htons(ETH_P_IPV6):
inner = ipv6_get_dsfield(ipv6_hdr(skb));
else
break;
default:
return 0;
}
return INET_ECN_decapsulate(skb, oiph->tos, inner);
}
@ -287,12 +292,16 @@ static inline int IP6_ECN_decapsulate(const struct ipv6hdr *oipv6h,
{
__u8 inner;
if (skb->protocol == htons(ETH_P_IP))
switch (skb_protocol(skb, true)) {
case htons(ETH_P_IP):
inner = ip_hdr(skb)->tos;
else if (skb->protocol == htons(ETH_P_IPV6))
break;
case htons(ETH_P_IPV6):
inner = ipv6_get_dsfield(ipv6_hdr(skb));
else
break;
default:
return 0;
}
return INET_ECN_decapsulate(skb, ipv6_get_dsfield(oipv6h), inner);
}


@ -290,6 +290,9 @@ int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
struct ip_tunnel_parm *p, __u32 fwmark);
void ip_tunnel_setup(struct net_device *dev, unsigned int net_id);
extern const struct header_ops ip_tunnel_header_ops;
__be16 ip_tunnel_parse_protocol(const struct sk_buff *skb);
struct ip_tunnel_encap_ops {
size_t (*encap_hlen)(struct ip_tunnel_encap *e);
int (*build_header)(struct sk_buff *skb, struct ip_tunnel_encap *e,


@ -9,10 +9,13 @@
#include <linux/bpf-netns.h>
struct bpf_prog;
struct bpf_prog_array;
struct netns_bpf {
struct bpf_prog __rcu *progs[MAX_NETNS_BPF_ATTACH_TYPE];
struct bpf_link *links[MAX_NETNS_BPF_ATTACH_TYPE];
/* Array of programs to run compiled from progs or links */
struct bpf_prog_array __rcu *run_array[MAX_NETNS_BPF_ATTACH_TYPE];
struct bpf_prog *progs[MAX_NETNS_BPF_ATTACH_TYPE];
struct list_head links[MAX_NETNS_BPF_ATTACH_TYPE];
};
#endif /* __NETNS_BPF_H__ */


@ -136,17 +136,6 @@ static inline void qdisc_run(struct Qdisc *q)
}
}
static inline __be16 tc_skb_protocol(const struct sk_buff *skb)
{
/* We need to take extra care in case the skb came via
* vlan accelerated path. In that case, use skb->vlan_proto
* as the original vlan header was already stripped.
*/
if (skb_vlan_tag_present(skb))
return skb->vlan_proto;
return skb->protocol;
}
/* Calculate maximal size of packet seen by hard_start_xmit
routine of this device.
*/


@ -533,7 +533,8 @@ enum sk_pacing {
* be copied.
*/
#define SK_USER_DATA_NOCOPY 1UL
#define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY)
#define SK_USER_DATA_BPF 2UL /* Managed by BPF */
#define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF)
/**
* sk_user_data_is_nocopy - Test if sk_user_data pointer must not be copied


@ -40,7 +40,7 @@ struct xsk_buff_pool {
u32 headroom;
u32 chunk_size;
u32 frame_len;
bool cheap_dma;
bool dma_need_sync;
bool unaligned;
void *addrs;
struct device *dev;
@ -80,7 +80,7 @@ static inline dma_addr_t xp_get_frame_dma(struct xdp_buff_xsk *xskb)
void xp_dma_sync_for_cpu_slow(struct xdp_buff_xsk *xskb);
static inline void xp_dma_sync_for_cpu(struct xdp_buff_xsk *xskb)
{
if (xskb->pool->cheap_dma)
if (!xskb->pool->dma_need_sync)
return;
xp_dma_sync_for_cpu_slow(xskb);
@ -91,7 +91,7 @@ void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma,
static inline void xp_dma_sync_for_device(struct xsk_buff_pool *pool,
dma_addr_t dma, size_t size)
{
if (pool->cheap_dma)
if (!pool->dma_need_sync)
return;
xp_dma_sync_for_device_slow(pool, dma, size);

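The renamed field records, presumably once at DMA-map time, what the new dma_need_sync() API (added in the dma-mapping.h hunk above) reports, so the per-packet fast paths here can skip syncs entirely on cache-coherent setups. A minimal sketch of that pattern with illustrative driver-side wrappers around the real DMA calls (linux/dma-mapping.h assumed):

        /* Setup path: ask the DMA layer once whether this mapping needs syncs. */
        static void example_map_buffer(struct device *dev, dma_addr_t dma, bool *need_sync)
        {
                *need_sync = dma_need_sync(dev, dma);
        }

        /* Hot path: only pay for cache maintenance when the mapping requires it. */
        static void example_rx_buffer(struct device *dev, dma_addr_t dma, size_t len,
                                      bool need_sync)
        {
                if (need_sync)
                        dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
        }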

@ -3171,13 +3171,12 @@ union bpf_attr {
* int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
* Description
* Copy *size* bytes from *data* into a ring buffer *ringbuf*.
* If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
* new data availability is sent.
* IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
* new data availability is sent unconditionally.
* If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
* of new data availability is sent.
* If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
* of new data availability is sent unconditionally.
* Return
* 0, on success;
* < 0, on error.
* 0 on success, or a negative error in case of failure.
*
* void *bpf_ringbuf_reserve(void *ringbuf, u64 size, u64 flags)
* Description
@ -3189,20 +3188,20 @@ union bpf_attr {
* void bpf_ringbuf_submit(void *data, u64 flags)
* Description
* Submit reserved ring buffer sample, pointed to by *data*.
* If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
* new data availability is sent.
* IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
* new data availability is sent unconditionally.
* If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
* of new data availability is sent.
* If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
* of new data availability is sent unconditionally.
* Return
* Nothing. Always succeeds.
*
* void bpf_ringbuf_discard(void *data, u64 flags)
* Description
* Discard reserved ring buffer sample, pointed to by *data*.
* If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
* new data availability is sent.
* IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
* new data availability is sent unconditionally.
* If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
* of new data availability is sent.
* If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
* of new data availability is sent unconditionally.
* Return
* Nothing. Always succeeds.
*
@ -3210,16 +3209,18 @@ union bpf_attr {
* Description
* Query various characteristics of provided ring buffer. What
* exactly is queries is determined by *flags*:
* - BPF_RB_AVAIL_DATA - amount of data not yet consumed;
* - BPF_RB_RING_SIZE - the size of ring buffer;
* - BPF_RB_CONS_POS - consumer position (can wrap around);
* - BPF_RB_PROD_POS - producer(s) position (can wrap around);
* Data returned is just a momentary snapshots of actual values
*
* * **BPF_RB_AVAIL_DATA**: Amount of data not yet consumed.
* * **BPF_RB_RING_SIZE**: The size of ring buffer.
* * **BPF_RB_CONS_POS**: Consumer position (can wrap around).
* * **BPF_RB_PROD_POS**: Producer(s) position (can wrap around).
*
* Data returned is just a momentary snapshot of actual values
* and could be inaccurate, so this facility should be used to
* power heuristics and for reporting, not to make 100% correct
* calculation.
* Return
* Requested value, or 0, if flags are not recognized.
* Requested value, or 0, if *flags* are not recognized.
*
* int bpf_csum_level(struct sk_buff *skb, u64 level)
* Description

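A minimal BPF-side sketch of the reserve/submit flow the documentation above describes, assuming libbpf's bpf_helpers.h and a clang BPF target; the map name, tracepoint, and payload are illustrative only:

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        struct {
                __uint(type, BPF_MAP_TYPE_RINGBUF);
                __uint(max_entries, 4096);      /* must be a power of two, page aligned */
        } events SEC(".maps");

        SEC("tracepoint/syscalls/sys_enter_execve")
        int log_execve(void *ctx)
        {
                __u64 *slot;

                slot = bpf_ringbuf_reserve(&events, sizeof(*slot), 0);
                if (!slot)
                        return 0;       /* ring buffer full */

                *slot = bpf_ktime_get_ns();
                /* BPF_RB_NO_WAKEUP: do not notify the consumer for this sample. */
                bpf_ringbuf_submit(slot, BPF_RB_NO_WAKEUP);
                return 0;
        }

        char LICENSE[] SEC("license") = "GPL";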

@ -3746,7 +3746,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
return false;
t = btf_type_skip_modifiers(btf, t->type, NULL);
if (!btf_type_is_int(t)) {
if (!btf_type_is_small_int(t)) {
bpf_log(log,
"ret type %s not allowed for fmod_ret\n",
btf_kind_str[BTF_INFO_KIND(t->info)]);
@ -3768,7 +3768,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
/* skip modifiers */
while (btf_type_is_modifier(t))
t = btf_type_by_id(btf, t->type);
if (btf_type_is_int(t) || btf_type_is_enum(t))
if (btf_type_is_small_int(t) || btf_type_is_enum(t))
/* accessing a scalar */
return true;
if (!btf_type_is_ptr(t)) {


@ -19,18 +19,21 @@ struct bpf_netns_link {
* with netns_bpf_mutex held.
*/
struct net *net;
struct list_head node; /* node in list of links attached to net */
};
/* Protects updates to netns_bpf */
DEFINE_MUTEX(netns_bpf_mutex);
/* Must be called with netns_bpf_mutex held. */
static void __net_exit bpf_netns_link_auto_detach(struct bpf_link *link)
static void netns_bpf_run_array_detach(struct net *net,
enum netns_bpf_attach_type type)
{
struct bpf_netns_link *net_link =
container_of(link, struct bpf_netns_link, link);
struct bpf_prog_array *run_array;
net_link->net = NULL;
run_array = rcu_replace_pointer(net->bpf.run_array[type], NULL,
lockdep_is_held(&netns_bpf_mutex));
bpf_prog_array_free(run_array);
}
static void bpf_netns_link_release(struct bpf_link *link)
@ -40,22 +43,18 @@ static void bpf_netns_link_release(struct bpf_link *link)
enum netns_bpf_attach_type type = net_link->netns_type;
struct net *net;
/* Link auto-detached by dying netns. */
if (!net_link->net)
return;
mutex_lock(&netns_bpf_mutex);
/* Recheck after potential sleep. We can race with cleanup_net
* here, but if we see a non-NULL struct net pointer pre_exit
* has not happened yet and will block on netns_bpf_mutex.
/* We can race with cleanup_net, but if we see a non-NULL
* struct net pointer, pre_exit has not run yet and wait for
* netns_bpf_mutex.
*/
net = net_link->net;
if (!net)
goto out_unlock;
net->bpf.links[type] = NULL;
RCU_INIT_POINTER(net->bpf.progs[type], NULL);
netns_bpf_run_array_detach(net, type);
list_del(&net_link->node);
out_unlock:
mutex_unlock(&netns_bpf_mutex);
@ -76,6 +75,7 @@ static int bpf_netns_link_update_prog(struct bpf_link *link,
struct bpf_netns_link *net_link =
container_of(link, struct bpf_netns_link, link);
enum netns_bpf_attach_type type = net_link->netns_type;
struct bpf_prog_array *run_array;
struct net *net;
int ret = 0;
@ -93,8 +93,11 @@ static int bpf_netns_link_update_prog(struct bpf_link *link,
goto out_unlock;
}
run_array = rcu_dereference_protected(net->bpf.run_array[type],
lockdep_is_held(&netns_bpf_mutex));
WRITE_ONCE(run_array->items[0].prog, new_prog);
old_prog = xchg(&link->prog, new_prog);
rcu_assign_pointer(net->bpf.progs[type], new_prog);
bpf_prog_put(old_prog);
out_unlock:
@ -142,14 +145,38 @@ static const struct bpf_link_ops bpf_netns_link_ops = {
.show_fdinfo = bpf_netns_link_show_fdinfo,
};
/* Must be called with netns_bpf_mutex held. */
static int __netns_bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr,
struct net *net,
enum netns_bpf_attach_type type)
{
__u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);
struct bpf_prog_array *run_array;
u32 prog_cnt = 0, flags = 0;
run_array = rcu_dereference_protected(net->bpf.run_array[type],
lockdep_is_held(&netns_bpf_mutex));
if (run_array)
prog_cnt = bpf_prog_array_length(run_array);
if (copy_to_user(&uattr->query.attach_flags, &flags, sizeof(flags)))
return -EFAULT;
if (copy_to_user(&uattr->query.prog_cnt, &prog_cnt, sizeof(prog_cnt)))
return -EFAULT;
if (!attr->query.prog_cnt || !prog_ids || !prog_cnt)
return 0;
return bpf_prog_array_copy_to_user(run_array, prog_ids,
attr->query.prog_cnt);
}
int netns_bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr)
{
__u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);
u32 prog_id, prog_cnt = 0, flags = 0;
enum netns_bpf_attach_type type;
struct bpf_prog *attached;
struct net *net;
int ret;
if (attr->query.query_flags)
return -EINVAL;
@ -162,36 +189,25 @@ int netns_bpf_prog_query(const union bpf_attr *attr,
if (IS_ERR(net))
return PTR_ERR(net);
rcu_read_lock();
attached = rcu_dereference(net->bpf.progs[type]);
if (attached) {
prog_cnt = 1;
prog_id = attached->aux->id;
}
rcu_read_unlock();
mutex_lock(&netns_bpf_mutex);
ret = __netns_bpf_prog_query(attr, uattr, net, type);
mutex_unlock(&netns_bpf_mutex);
put_net(net);
if (copy_to_user(&uattr->query.attach_flags, &flags, sizeof(flags)))
return -EFAULT;
if (copy_to_user(&uattr->query.prog_cnt, &prog_cnt, sizeof(prog_cnt)))
return -EFAULT;
if (!attr->query.prog_cnt || !prog_ids || !prog_cnt)
return 0;
if (copy_to_user(prog_ids, &prog_id, sizeof(u32)))
return -EFAULT;
return 0;
return ret;
}
int netns_bpf_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
struct bpf_prog_array *run_array;
enum netns_bpf_attach_type type;
struct bpf_prog *attached;
struct net *net;
int ret;
if (attr->target_fd || attr->attach_flags || attr->replace_bpf_fd)
return -EINVAL;
type = to_netns_bpf_attach_type(attr->attach_type);
if (type < 0)
return -EINVAL;
@ -200,19 +216,47 @@ int netns_bpf_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
mutex_lock(&netns_bpf_mutex);
/* Attaching prog directly is not compatible with links */
if (net->bpf.links[type]) {
if (!list_empty(&net->bpf.links[type])) {
ret = -EEXIST;
goto out_unlock;
}
switch (type) {
case NETNS_BPF_FLOW_DISSECTOR:
ret = flow_dissector_bpf_prog_attach(net, prog);
ret = flow_dissector_bpf_prog_attach_check(net, prog);
break;
default:
ret = -EINVAL;
break;
}
if (ret)
goto out_unlock;
attached = net->bpf.progs[type];
if (attached == prog) {
/* The same program cannot be attached twice */
ret = -EINVAL;
goto out_unlock;
}
run_array = rcu_dereference_protected(net->bpf.run_array[type],
lockdep_is_held(&netns_bpf_mutex));
if (run_array) {
WRITE_ONCE(run_array->items[0].prog, prog);
} else {
run_array = bpf_prog_array_alloc(1, GFP_KERNEL);
if (!run_array) {
ret = -ENOMEM;
goto out_unlock;
}
run_array->items[0].prog = prog;
rcu_assign_pointer(net->bpf.run_array[type], run_array);
}
net->bpf.progs[type] = prog;
if (attached)
bpf_prog_put(attached);
out_unlock:
mutex_unlock(&netns_bpf_mutex);
@ -221,63 +265,74 @@ out_unlock:
/* Must be called with netns_bpf_mutex held. */
static int __netns_bpf_prog_detach(struct net *net,
enum netns_bpf_attach_type type)
enum netns_bpf_attach_type type,
struct bpf_prog *old)
{
struct bpf_prog *attached;
/* Progs attached via links cannot be detached */
if (net->bpf.links[type])
if (!list_empty(&net->bpf.links[type]))
return -EINVAL;
attached = rcu_dereference_protected(net->bpf.progs[type],
lockdep_is_held(&netns_bpf_mutex));
if (!attached)
attached = net->bpf.progs[type];
if (!attached || attached != old)
return -ENOENT;
RCU_INIT_POINTER(net->bpf.progs[type], NULL);
netns_bpf_run_array_detach(net, type);
net->bpf.progs[type] = NULL;
bpf_prog_put(attached);
return 0;
}
int netns_bpf_prog_detach(const union bpf_attr *attr)
int netns_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
{
enum netns_bpf_attach_type type;
struct bpf_prog *prog;
int ret;
if (attr->target_fd)
return -EINVAL;
type = to_netns_bpf_attach_type(attr->attach_type);
if (type < 0)
return -EINVAL;
prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
if (IS_ERR(prog))
return PTR_ERR(prog);
mutex_lock(&netns_bpf_mutex);
ret = __netns_bpf_prog_detach(current->nsproxy->net_ns, type);
ret = __netns_bpf_prog_detach(current->nsproxy->net_ns, type, prog);
mutex_unlock(&netns_bpf_mutex);
bpf_prog_put(prog);
return ret;
}
static int netns_bpf_link_attach(struct net *net, struct bpf_link *link,
enum netns_bpf_attach_type type)
{
struct bpf_prog *prog;
struct bpf_netns_link *net_link =
container_of(link, struct bpf_netns_link, link);
struct bpf_prog_array *run_array;
int err;
mutex_lock(&netns_bpf_mutex);
/* Allow attaching only one prog or link for now */
if (net->bpf.links[type]) {
if (!list_empty(&net->bpf.links[type])) {
err = -E2BIG;
goto out_unlock;
}
/* Links are not compatible with attaching prog directly */
prog = rcu_dereference_protected(net->bpf.progs[type],
lockdep_is_held(&netns_bpf_mutex));
if (prog) {
if (net->bpf.progs[type]) {
err = -EEXIST;
goto out_unlock;
}
switch (type) {
case NETNS_BPF_FLOW_DISSECTOR:
err = flow_dissector_bpf_prog_attach(net, link->prog);
err = flow_dissector_bpf_prog_attach_check(net, link->prog);
break;
default:
err = -EINVAL;
@ -286,7 +341,15 @@ static int netns_bpf_link_attach(struct net *net, struct bpf_link *link,
if (err)
goto out_unlock;
net->bpf.links[type] = link;
run_array = bpf_prog_array_alloc(1, GFP_KERNEL);
if (!run_array) {
err = -ENOMEM;
goto out_unlock;
}
run_array->items[0].prog = link->prog;
rcu_assign_pointer(net->bpf.run_array[type], run_array);
list_add_tail(&net_link->node, &net->bpf.links[type]);
out_unlock:
mutex_unlock(&netns_bpf_mutex);
@ -345,23 +408,34 @@ out_put_net:
return err;
}
static int __net_init netns_bpf_pernet_init(struct net *net)
{
int type;
for (type = 0; type < MAX_NETNS_BPF_ATTACH_TYPE; type++)
INIT_LIST_HEAD(&net->bpf.links[type]);
return 0;
}
static void __net_exit netns_bpf_pernet_pre_exit(struct net *net)
{
enum netns_bpf_attach_type type;
struct bpf_link *link;
struct bpf_netns_link *net_link;
mutex_lock(&netns_bpf_mutex);
for (type = 0; type < MAX_NETNS_BPF_ATTACH_TYPE; type++) {
link = net->bpf.links[type];
if (link)
bpf_netns_link_auto_detach(link);
else
__netns_bpf_prog_detach(net, type);
netns_bpf_run_array_detach(net, type);
list_for_each_entry(net_link, &net->bpf.links[type], node)
net_link->net = NULL; /* auto-detach link */
if (net->bpf.progs[type])
bpf_prog_put(net->bpf.progs[type]);
}
mutex_unlock(&netns_bpf_mutex);
}
static struct pernet_operations netns_bpf_pernet_ops __net_initdata = {
.init = netns_bpf_pernet_init,
.pre_exit = netns_bpf_pernet_pre_exit,
};

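For reference, the __netns_bpf_prog_query() path above is what ends up servicing a user-space BPF_PROG_QUERY against a network-namespace target such as the flow dissector attach point. A hypothetical libbpf-based caller, shown only as a sketch (the netns path and error handling are illustrative):

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <bpf/bpf.h>

        int main(void)
        {
                __u32 prog_ids[1], prog_cnt = 1, attach_flags = 0;
                int netns_fd = open("/proc/self/ns/net", O_RDONLY);

                if (netns_fd < 0)
                        return 1;
                if (bpf_prog_query(netns_fd, BPF_FLOW_DISSECTOR, 0, &attach_flags,
                                   prog_ids, &prog_cnt))
                        perror("bpf_prog_query");
                else
                        printf("%u flow dissector prog(s) attached\n", prog_cnt);
                close(netns_fd);
                return 0;
        }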

@ -20,11 +20,14 @@ static struct reuseport_array *reuseport_array(struct bpf_map *map)
/* The caller must hold the reuseport_lock */
void bpf_sk_reuseport_detach(struct sock *sk)
{
struct sock __rcu **socks;
uintptr_t sk_user_data;
write_lock_bh(&sk->sk_callback_lock);
socks = sk->sk_user_data;
if (socks) {
sk_user_data = (uintptr_t)sk->sk_user_data;
if (sk_user_data & SK_USER_DATA_BPF) {
struct sock __rcu **socks;
socks = (void *)(sk_user_data & SK_USER_DATA_PTRMASK);
WRITE_ONCE(sk->sk_user_data, NULL);
/*
* Do not move this NULL assignment outside of
@ -252,6 +255,7 @@ int bpf_fd_reuseport_array_update_elem(struct bpf_map *map, void *key,
struct sock *free_osk = NULL, *osk, *nsk;
struct sock_reuseport *reuse;
u32 index = *(u32 *)key;
uintptr_t sk_user_data;
struct socket *socket;
int err, fd;
@ -305,7 +309,9 @@ int bpf_fd_reuseport_array_update_elem(struct bpf_map *map, void *key,
if (err)
goto put_file_unlock;
WRITE_ONCE(nsk->sk_user_data, &array->ptrs[index]);
sk_user_data = (uintptr_t)&array->ptrs[index] | SK_USER_DATA_NOCOPY |
SK_USER_DATA_BPF;
WRITE_ONCE(nsk->sk_user_data, (void *)sk_user_data);
rcu_assign_pointer(array->ptrs[index], nsk);
free_osk = osk;
err = 0;

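The reuseport fix stores ownership flags in the low bits of sk_user_data and masks them off with SK_USER_DATA_PTRMASK whenever the pointer itself is needed. A standalone sketch of that tagging scheme, with local names whose values mirror SK_USER_DATA_NOCOPY and SK_USER_DATA_BPF:

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define DATA_NOCOPY     1UL                     /* mirrors SK_USER_DATA_NOCOPY */
        #define DATA_BPF        2UL                     /* mirrors SK_USER_DATA_BPF */
        #define DATA_PTRMASK    (~(DATA_NOCOPY | DATA_BPF))

        int main(void)
        {
                static long backing;                    /* stands in for &array->ptrs[index] */
                uintptr_t tagged = (uintptr_t)&backing | DATA_NOCOPY | DATA_BPF;

                /* The pointer is word aligned, so its low bits are free for flags. */
                void *orig = (void *)(tagged & DATA_PTRMASK);
                bool bpf_managed = tagged & DATA_BPF;

                printf("original %p, recovered %p, bpf managed: %d\n",
                       (void *)&backing, orig, bpf_managed);
                return 0;
        }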

@ -132,15 +132,6 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
{
struct bpf_ringbuf *rb;
if (!data_sz || !PAGE_ALIGNED(data_sz))
return ERR_PTR(-EINVAL);
#ifdef CONFIG_64BIT
/* on 32-bit arch, it's impossible to overflow record's hdr->pgoff */
if (data_sz > RINGBUF_MAX_DATA_SZ)
return ERR_PTR(-E2BIG);
#endif
rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
if (!rb)
return ERR_PTR(-ENOMEM);
@ -166,9 +157,16 @@ static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
return ERR_PTR(-EINVAL);
if (attr->key_size || attr->value_size ||
attr->max_entries == 0 || !PAGE_ALIGNED(attr->max_entries))
!is_power_of_2(attr->max_entries) ||
!PAGE_ALIGNED(attr->max_entries))
return ERR_PTR(-EINVAL);
#ifdef CONFIG_64BIT
/* on 32-bit arch, it's impossible to overflow record's hdr->pgoff */
if (attr->max_entries > RINGBUF_MAX_DATA_SZ)
return ERR_PTR(-E2BIG);
#endif
rb_map = kzalloc(sizeof(*rb_map), GFP_USER);
if (!rb_map)
return ERR_PTR(-ENOMEM);

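With this change the size constraint is enforced at map creation time: max_entries must be non-zero, a power of two, and page aligned. A small user-space helper performing the same check before attempting to create a BPF_MAP_TYPE_RINGBUF map (the function name is made up):

        #include <stdbool.h>
        #include <unistd.h>

        static bool ringbuf_size_ok(unsigned long max_entries)
        {
                unsigned long page = (unsigned long)sysconf(_SC_PAGESIZE);

                return max_entries &&
                       (max_entries & (max_entries - 1)) == 0 &&        /* power of two */
                       (max_entries % page) == 0;                       /* page aligned */
        }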

@ -2121,7 +2121,7 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
!bpf_capable())
return -EPERM;
if (is_net_admin_prog_type(type) && !capable(CAP_NET_ADMIN))
if (is_net_admin_prog_type(type) && !capable(CAP_NET_ADMIN) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (is_perfmon_prog_type(type) && !perfmon_capable())
return -EPERM;
@ -2893,13 +2893,11 @@ static int bpf_prog_detach(const union bpf_attr *attr)
switch (ptype) {
case BPF_PROG_TYPE_SK_MSG:
case BPF_PROG_TYPE_SK_SKB:
return sock_map_get_from_fd(attr, NULL);
return sock_map_prog_detach(attr, ptype);
case BPF_PROG_TYPE_LIRC_MODE2:
return lirc_prog_detach(attr);
case BPF_PROG_TYPE_FLOW_DISSECTOR:
if (!capable(CAP_NET_ADMIN))
return -EPERM;
return netns_bpf_prog_detach(attr);
return netns_bpf_prog_detach(attr, ptype);
case BPF_PROG_TYPE_CGROUP_DEVICE:
case BPF_PROG_TYPE_CGROUP_SKB:
case BPF_PROG_TYPE_CGROUP_SOCK:


@ -399,8 +399,7 @@ static bool reg_type_not_null(enum bpf_reg_type type)
return type == PTR_TO_SOCKET ||
type == PTR_TO_TCP_SOCK ||
type == PTR_TO_MAP_VALUE ||
type == PTR_TO_SOCK_COMMON ||
type == PTR_TO_BTF_ID;
type == PTR_TO_SOCK_COMMON;
}
static bool reg_type_may_be_null(enum bpf_reg_type type)
@ -9801,7 +9800,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
int i, j, subprog_start, subprog_end = 0, len, subprog;
struct bpf_insn *insn;
void *old_bpf_func;
int err;
int err, num_exentries;
if (env->subprog_cnt <= 1)
return 0;
@ -9876,6 +9875,14 @@ static int jit_subprogs(struct bpf_verifier_env *env)
func[i]->aux->nr_linfo = prog->aux->nr_linfo;
func[i]->aux->jited_linfo = prog->aux->jited_linfo;
func[i]->aux->linfo_idx = env->subprog_info[i].linfo_idx;
num_exentries = 0;
insn = func[i]->insnsi;
for (j = 0; j < func[i]->len; j++, insn++) {
if (BPF_CLASS(insn->code) == BPF_LDX &&
BPF_MODE(insn->code) == BPF_PROBE_MEM)
num_exentries++;
}
func[i]->aux->num_exentries = num_exentries;
func[i] = bpf_int_jit_compile(func[i]);
if (!func[i]->jited) {
err = -ENOTSUPP;


@ -6439,18 +6439,8 @@ void cgroup_sk_alloc_disable(void)
void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
{
if (cgroup_sk_alloc_disabled)
return;
/* Socket clone path */
if (skcd->val) {
/*
* We might be cloning a socket which is left in an empty
* cgroup and the cgroup might have already been rmdir'd.
* Don't use cgroup_get_live().
*/
cgroup_get(sock_cgroup_ptr(skcd));
cgroup_bpf_get(sock_cgroup_ptr(skcd));
if (cgroup_sk_alloc_disabled) {
skcd->no_refcnt = 1;
return;
}
@ -6475,10 +6465,27 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
rcu_read_unlock();
}
void cgroup_sk_clone(struct sock_cgroup_data *skcd)
{
if (skcd->val) {
if (skcd->no_refcnt)
return;
/*
* We might be cloning a socket which is left in an empty
* cgroup and the cgroup might have already been rmdir'd.
* Don't use cgroup_get_live().
*/
cgroup_get(sock_cgroup_ptr(skcd));
cgroup_bpf_get(sock_cgroup_ptr(skcd));
}
}
void cgroup_sk_free(struct sock_cgroup_data *skcd)
{
struct cgroup *cgrp = sock_cgroup_ptr(skcd);
if (skcd->no_refcnt)
return;
cgroup_bpf_put(cgrp);
cgroup_put(cgrp);
}

Some files were not shown because too many files have changed in this diff.