Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

  1) Fix jmp to 1st instruction in x64 JIT, from Alexei Starovoitov.

  2) Several kTLS fixes in the mlx5 driver, from Tariq Toukan.

  3) Fix severe performance regression due to lack of SKB coalescing of
     fragments during local delivery, from Guillaume Nault.

  4) Error path memory leak in sch_taprio, from Ivan Khoronzhuk.

  5) Fix batched events in skbedit packet action, from Roman Mashak.

  6) Propagate VLAN TX offload to hw_enc_features in bond and team
     drivers, from Yue Haibing.

  7) RXRPC local endpoint refcounting fix and read after free in
     rxrpc_queue_local(), from David Howells.

  8) Fix endian bug in ibmveth multicast list handling, from Thomas
     Falcon.

  9) Oops, make nlmsg_parse() wrap around the correct function,
     __nlmsg_parse() rather than __nla_parse() (see the sketch after
     this list). Fix from David Ahern.

 10) Memleak in sctp_send_reset_streams(), from Zheng Bin.

 11) Fix memory leak in cxgb4, from Wenwen Wang.

 12) Yet another race in AF_PACKET, from Eric Dumazet.

 13) Fix false detection of retransmit failures in tipc, from Tuong
     Lien.

 14) Use after free in ravb_tstamp_skb, from Tho Vu.
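
A hedged sketch of the wrapper relationship item 9 refers to, with
simplified signatures; it mirrors the include/net/netlink.h style but is
illustrative rather than the exact upstream code:

    /* Simplified rendition: the wrapper strips the family header and
     * defers to __nlmsg_parse(); the bug was calling __nla_parse()
     * directly instead.
     */
    static inline int nlmsg_parse(const struct nlmsghdr *nlh, int hdrlen,
                                  struct nlattr *tb[], int maxtype,
                                  const struct nla_policy *policy,
                                  struct netlink_ext_ack *extack)
    {
            return __nlmsg_parse(nlh, hdrlen, tb, maxtype, policy,
                                 NL_VALIDATE_STRICT, extack);
    }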

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (101 commits)
  ravb: Fix use-after-free ravb_tstamp_skb
  netfilter: nf_tables: map basechain priority to hardware priority
  net: sched: use major priority number as hardware priority
  wimax/i2400m: fix a memory leak bug
  net: cavium: fix driver name
  ibmvnic: Unmap DMA address of TX descriptor buffers after use
  bnxt_en: Fix to include flow direction in L2 key
  bnxt_en: Use correct src_fid to determine direction of the flow
  bnxt_en: Suppress HWRM errors for HWRM_NVM_GET_VARIABLE command
  bnxt_en: Fix handling FRAG_ERR when NVM_INSTALL_UPDATE cmd fails
  bnxt_en: Improve RX doorbell sequence.
  bnxt_en: Fix VNIC clearing logic for 57500 chips.
  net: kalmia: fix memory leaks
  cx82310_eth: fix a memory leak bug
  bnx2x: Fix VF's VLAN reconfiguration in reload.
  Bluetooth: Add debug setting for changing minimum encryption key size
  tipc: fix false detection of retransmit failures
  lan78xx: Fix memory leaks
  MAINTAINERS: r8169: Update path to the driver
  MAINTAINERS: PHY LIBRARY: Update files in the record
  ...
Linus Torvalds 2019-08-19 10:00:01 -07:00
commit 06821504fd
125 changed files with 1155 additions and 687 deletions


@ -39,7 +39,6 @@ Table : Subdirectories in /proc/sys/net
802       E802 protocol         ax25       AX25
ethernet  Ethernet protocol     rose       X.25 PLP layer
ipv4      IP version 4          x25        X.25 protocol
ipx       IPX                   token-ring IBM token ring
bridge    Bridging              decnet     DEC net
ipv6      IP version 6          tipc       TIPC
========= =================== = ========== ==================
@ -401,33 +400,7 @@ interface.
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.
5. IPX
------
The IPX protocol has no tunable values in proc/sys/net.
The IPX protocol does, however, provide proc/net/ipx. This lists each IPX
socket giving the local and remote addresses in Novell format (that is
network:node:port). In accordance with the strange Novell tradition,
everything but the port is in hex. Not_Connected is displayed for sockets that
are not tied to a specific remote address. The Tx and Rx queue sizes indicate
the number of bytes pending for transmission and reception. The state
indicates the state the socket is in and the uid is the owning uid of the
socket.
The /proc/net/ipx_interface file lists all IPX interfaces. For each interface
it gives the network number, the node number, and indicates if the network is
the primary network. It also indicates which device it is bound to (or
Internal for internal networks) and the Frame Type if appropriate. Linux
supports 802.3, 802.2, 802.2 SNAP and DIX (Blue Book) ethernet framing for
IPX.
The /proc/net/ipx_route table holds a list of IPX routes. For each route it
gives the destination network, the router node (or Directly) and the network
address of the router (or Connected) for internal networks.
6. TIPC
5. TIPC
-------
tipc_rmem


@ -506,21 +506,3 @@ Drivers should ignore the changes to TLS the device feature flags.
These flags will be acted upon accordingly by the core ``ktls`` code.
TLS device feature flags only control adding of new TLS connection
offloads, old connections will remain active after flags are cleared.
Known bugs
==========
skb_orphan() leaks clear text
-----------------------------
Currently drivers depend on the :c:member:`sk` member of
:c:type:`struct sk_buff <sk_buff>` to identify segments requiring
encryption. Any operation which removes or does not preserve the socket
association such as :c:func:`skb_orphan` or :c:func:`skb_clone`
will cause the driver to miss the packets and lead to clear text leaks.
Redirects leak clear text
-------------------------
In the RX direction, if segment has already been decrypted by the device
and it gets redirected or mirrored - clear text will be transmitted out.
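
The skb_orphan() paragraph above can be made concrete with a small, hedged
sketch; the helper name follows include/net/tls.h of this era, while the
wrapper function itself is hypothetical and not taken from any driver:

    #include <linux/skbuff.h>
    #include <net/tls.h>

    /* TX paths typically key TLS offload off the skb's socket; if
     * skb_orphan() or skb_clone() dropped the socket association this
     * returns false and the segment is sent as clear text -- the leak
     * described above.
     */
    static bool skb_needs_tls_offload(struct sk_buff *skb)
    {
            return skb->sk && tls_is_sk_tx_device_offloaded(skb->sk);
    }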


@ -204,8 +204,8 @@ Ethernet device, which instead of receiving packets from a physical
media, receives them from user space program and instead of sending
packets via physical media sends them to the user space program.
Let's say that you configured IPX on the tap0, then whenever
the kernel sends an IPX packet to tap0, it is passed to the application
Let's say that you configured IPv6 on the tap0, then whenever
the kernel sends an IPv6 packet to tap0, it is passed to the application
(VTun for example). The application encrypts, compresses and sends it to
the other side over TCP or UDP. The application on the other side decompresses
and decrypts the data received and writes the packet to the TAP device,
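
For reference, a minimal user-space sketch of the TAP side of the flow
described above (the device name, flag choice and omitted error reporting
are illustrative assumptions, not part of this patch set):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    /* Attach to a TAP device: frames the kernel routes to it are then
     * read() from fd, and frames written to fd are injected as if they
     * had arrived on the interface.
     */
    static int open_tap(const char *name)
    {
            struct ifreq ifr;
            int fd = open("/dev/net/tun", O_RDWR);

            if (fd < 0)
                    return -1;
            memset(&ifr, 0, sizeof(ifr));
            ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
            strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
            if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }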


@ -183,7 +183,7 @@ M: Realtek linux nic maintainers <nic_swsd@realtek.com>
M: Heiner Kallweit <hkallweit1@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/realtek/r8169.c
F: drivers/net/ethernet/realtek/r8169*
8250/16?50 (AND CLONE UARTS) SERIAL DRIVER
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
@ -6065,7 +6065,7 @@ M: Florian Fainelli <f.fainelli@gmail.com>
M: Heiner Kallweit <hkallweit1@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-bus-mdio
F: Documentation/ABI/testing/sysfs-class-net-phydev
F: Documentation/devicetree/bindings/net/ethernet-phy.yaml
F: Documentation/devicetree/bindings/net/mdio*
F: Documentation/networking/phy.rst


@ -390,8 +390,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
emit_prologue(&prog, bpf_prog->aux->stack_depth,
bpf_prog_was_classic(bpf_prog));
addrs[0] = prog - temp;
for (i = 0; i < insn_cnt; i++, insn++) {
for (i = 1; i <= insn_cnt; i++, insn++) {
const s32 imm32 = insn->imm;
u32 dst_reg = insn->dst_reg;
u32 src_reg = insn->src_reg;
@ -1105,7 +1106,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
extra_pass = true;
goto skip_init_addrs;
}
addrs = kmalloc_array(prog->len, sizeof(*addrs), GFP_KERNEL);
addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
if (!addrs) {
prog = orig_prog;
goto out_addrs;
@ -1115,7 +1116,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
* Before first pass, make a rough estimation of addrs[]
* each BPF instruction is translated to less than 64 bytes
*/
for (proglen = 0, i = 0; i < prog->len; i++) {
for (proglen = 0, i = 0; i <= prog->len; i++) {
proglen += 64;
addrs[i] = proglen;
}
@ -1180,7 +1181,7 @@ out_image:
if (!image || !prog->is_func || extra_pass) {
if (image)
bpf_prog_fill_jited_linfo(prog, addrs);
bpf_prog_fill_jited_linfo(prog, addrs + 1);
out_addrs:
kfree(addrs);
kfree(jit_data);
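
The hunk above adds one extra addrs[] slot and shifts indexing so that
addrs[0] can hold the prologue end. A hedged, simplified helper showing
why that slot matters (the name is illustrative, not the kernel's):

    /* addrs[0] is the offset right after the prologue and addrs[i] the
     * end of the i-th translated instruction, so addrs[i + off] is the
     * start of the BPF jump target i + off + 1. A jump back to the very
     * first instruction therefore reads addrs[0], the slot this patch
     * adds and initializes.
     */
    static int jit_jmp_displacement(const int *addrs, int i, int off)
    {
            return addrs[i + off] - addrs[i];
    }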


@ -99,6 +99,27 @@ static int qca_send_reset(struct hci_dev *hdev)
return 0;
}
int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
{
struct sk_buff *skb;
int err;
bt_dev_dbg(hdev, "QCA pre shutdown cmd");
skb = __hci_cmd_sync(hdev, QCA_PRE_SHUTDOWN_CMD, 0,
NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
bt_dev_err(hdev, "QCA preshutdown_cmd failed (%d)", err);
return err;
}
kfree_skb(skb);
return 0;
}
EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
static void qca_tlv_check_data(struct rome_config *config,
const struct firmware *fw)
{
@ -119,6 +140,7 @@ static void qca_tlv_check_data(struct rome_config *config,
BT_DBG("Length\t\t : %d bytes", length);
config->dnld_mode = ROME_SKIP_EVT_NONE;
config->dnld_type = ROME_SKIP_EVT_NONE;
switch (config->type) {
case TLV_TYPE_PATCH:
@ -268,7 +290,7 @@ static int qca_inject_cmd_complete_event(struct hci_dev *hdev)
evt = skb_put(skb, sizeof(*evt));
evt->ncmd = 1;
evt->opcode = QCA_HCI_CC_OPCODE;
evt->opcode = cpu_to_le16(QCA_HCI_CC_OPCODE);
skb_put_u8(skb, QCA_HCI_CC_SUCCESS);
@ -323,7 +345,7 @@ static int qca_download_firmware(struct hci_dev *hdev,
*/
if (config->dnld_type == ROME_SKIP_EVT_VSE_CC ||
config->dnld_type == ROME_SKIP_EVT_VSE)
return qca_inject_cmd_complete_event(hdev);
ret = qca_inject_cmd_complete_event(hdev);
out:
release_firmware(fw);
@ -388,6 +410,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
return err;
}
/* Give the controller some time to get ready to receive the NVM */
msleep(10);
/* Download NVM configuration */
config.type = TLV_TYPE_NVM;
if (firmware_name)


@ -13,6 +13,7 @@
#define EDL_PATCH_TLV_REQ_CMD (0x1E)
#define EDL_NVM_ACCESS_SET_REQ_CMD (0x01)
#define MAX_SIZE_PER_TLV_SEGMENT (243)
#define QCA_PRE_SHUTDOWN_CMD (0xFC08)
#define EDL_CMD_REQ_RES_EVT (0x00)
#define EDL_PATCH_VER_RES_EVT (0x19)
@ -135,6 +136,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
const char *firmware_name);
int qca_read_soc_version(struct hci_dev *hdev, u32 *soc_version);
int qca_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
int qca_send_pre_shutdown_cmd(struct hci_dev *hdev);
static inline bool qca_is_wcn399x(enum qca_btsoc_type soc_type)
{
return soc_type == QCA_WCN3990 || soc_type == QCA_WCN3998;
@ -167,4 +169,9 @@ static inline bool qca_is_wcn399x(enum qca_btsoc_type soc_type)
{
return false;
}
static inline int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
{
return -EOPNOTSUPP;
}
#endif


@ -2762,8 +2762,10 @@ static int btusb_mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
fw_size = fw->size;
/* The size of patch header is 30 bytes, should be skip */
if (fw_size < 30)
if (fw_size < 30) {
err = -EINVAL;
goto err_release_fw;
}
fw_size -= 30;
fw_ptr += 30;


@ -705,7 +705,7 @@ static void device_want_to_sleep(struct hci_uart *hu)
unsigned long flags;
struct qca_data *qca = hu->priv;
BT_DBG("hu %p want to sleep", hu);
BT_DBG("hu %p want to sleep in %d state", hu, qca->rx_ibs_state);
spin_lock_irqsave(&qca->hci_ibs_lock, flags);
@ -720,7 +720,7 @@ static void device_want_to_sleep(struct hci_uart *hu)
break;
case HCI_IBS_RX_ASLEEP:
/* Fall through */
break;
default:
/* Any other state is illegal */
@ -912,7 +912,7 @@ static int qca_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
if (hdr->evt == HCI_EV_VENDOR)
complete(&qca->drop_ev_comp);
kfree(skb);
kfree_skb(skb);
return 0;
}
@ -1386,6 +1386,9 @@ static int qca_power_off(struct hci_dev *hdev)
{
struct hci_uart *hu = hci_get_drvdata(hdev);
/* Perform pre shutdown command */
qca_send_pre_shutdown_cmd(hdev);
qca_power_shutdown(hu);
return 0;
}


@ -1126,6 +1126,8 @@ static void bond_compute_features(struct bonding *bond)
done:
bond_dev->vlan_features = vlan_features;
bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_STAG_TX |
NETIF_F_GSO_UDP_L4;
bond_dev->mpls_features = mpls_features;
bond_dev->gso_max_segs = gso_max_segs;


@ -1223,12 +1223,8 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
{
struct sja1105_private *priv = ds->priv;
struct device *dev = ds->dev;
u16 rx_vid, tx_vid;
int i;
rx_vid = dsa_8021q_rx_vid(ds, port);
tx_vid = dsa_8021q_tx_vid(ds, port);
for (i = 0; i < SJA1105_MAX_L2_LOOKUP_COUNT; i++) {
struct sja1105_l2_lookup_entry l2_lookup = {0};
u8 macaddr[ETH_ALEN];


@ -3057,12 +3057,13 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
/* if VF indicate to PF this function is going down (PF will delete sp
* elements and clear initializations
*/
if (IS_VF(bp))
if (IS_VF(bp)) {
bnx2x_clear_vlan_info(bp);
bnx2x_vfpf_close_vf(bp);
else if (unload_mode != UNLOAD_RECOVERY)
} else if (unload_mode != UNLOAD_RECOVERY) {
/* if this is a normal/close unload need to clean up chip*/
bnx2x_chip_cleanup(bp, unload_mode, keep_link);
else {
} else {
/* Send the UNLOAD_REQUEST to the MCP */
bnx2x_send_unload_req(bp, unload_mode);


@ -425,6 +425,8 @@ void bnx2x_set_reset_global(struct bnx2x *bp);
void bnx2x_disable_close_the_gate(struct bnx2x *bp);
int bnx2x_init_hw_func_cnic(struct bnx2x *bp);
void bnx2x_clear_vlan_info(struct bnx2x *bp);
/**
* bnx2x_sp_event - handle ramrods completion.
*


@ -8482,11 +8482,21 @@ int bnx2x_set_vlan_one(struct bnx2x *bp, u16 vlan,
return rc;
}
void bnx2x_clear_vlan_info(struct bnx2x *bp)
{
struct bnx2x_vlan_entry *vlan;
/* Mark that hw forgot all entries */
list_for_each_entry(vlan, &bp->vlan_reg, link)
vlan->hw = false;
bp->vlan_cnt = 0;
}
static int bnx2x_del_all_vlans(struct bnx2x *bp)
{
struct bnx2x_vlan_mac_obj *vlan_obj = &bp->sp_objs[0].vlan_obj;
unsigned long ramrod_flags = 0, vlan_flags = 0;
struct bnx2x_vlan_entry *vlan;
int rc;
__set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
@ -8495,10 +8505,7 @@ static int bnx2x_del_all_vlans(struct bnx2x *bp)
if (rc)
return rc;
/* Mark that hw forgot all entries */
list_for_each_entry(vlan, &bp->vlan_reg, link)
vlan->hw = false;
bp->vlan_cnt = 0;
bnx2x_clear_vlan_info(bp);
return 0;
}


@ -2021,9 +2021,9 @@ static void __bnxt_poll_work_done(struct bnxt *bp, struct bnxt_napi *bnapi)
if (bnapi->events & BNXT_RX_EVENT) {
struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
if (bnapi->events & BNXT_AGG_EVENT)
bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
}
bnapi->events = 0;
}
@ -5064,6 +5064,7 @@ static void bnxt_set_db(struct bnxt *bp, struct bnxt_db_info *db, u32 ring_type,
static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
{
bool agg_rings = !!(bp->flags & BNXT_FLAG_AGG_RINGS);
int i, rc = 0;
u32 type;
@ -5139,7 +5140,9 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
if (rc)
goto err_out;
bnxt_set_db(bp, &rxr->rx_db, type, map_idx, ring->fw_ring_id);
bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
/* If we have agg rings, post agg buffers first. */
if (!agg_rings)
bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
bp->grp_info[map_idx].rx_fw_ring_id = ring->fw_ring_id;
if (bp->flags & BNXT_FLAG_CHIP_P5) {
struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
@ -5158,7 +5161,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
}
}
if (bp->flags & BNXT_FLAG_AGG_RINGS) {
if (agg_rings) {
type = HWRM_RING_ALLOC_AGG;
for (i = 0; i < bp->rx_nr_rings; i++) {
struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
@ -5174,6 +5177,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
bnxt_set_db(bp, &rxr->rx_agg_db, type, map_idx,
ring->fw_ring_id);
bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
bp->grp_info[grp_idx].agg_fw_ring_id = ring->fw_ring_id;
}
}
@ -7016,19 +7020,29 @@ static void bnxt_hwrm_clear_vnic_rss(struct bnxt *bp)
bnxt_hwrm_vnic_set_rss(bp, i, false);
}
static void bnxt_hwrm_resource_free(struct bnxt *bp, bool close_path,
bool irq_re_init)
static void bnxt_clear_vnic(struct bnxt *bp)
{
if (bp->vnic_info) {
bnxt_hwrm_clear_vnic_filter(bp);
if (!bp->vnic_info)
return;
bnxt_hwrm_clear_vnic_filter(bp);
if (!(bp->flags & BNXT_FLAG_CHIP_P5)) {
/* clear all RSS setting before free vnic ctx */
bnxt_hwrm_clear_vnic_rss(bp);
bnxt_hwrm_vnic_ctx_free(bp);
/* before free the vnic, undo the vnic tpa settings */
if (bp->flags & BNXT_FLAG_TPA)
bnxt_set_tpa(bp, false);
bnxt_hwrm_vnic_free(bp);
}
/* before free the vnic, undo the vnic tpa settings */
if (bp->flags & BNXT_FLAG_TPA)
bnxt_set_tpa(bp, false);
bnxt_hwrm_vnic_free(bp);
if (bp->flags & BNXT_FLAG_CHIP_P5)
bnxt_hwrm_vnic_ctx_free(bp);
}
static void bnxt_hwrm_resource_free(struct bnxt *bp, bool close_path,
bool irq_re_init)
{
bnxt_clear_vnic(bp);
bnxt_hwrm_ring_free(bp, close_path);
bnxt_hwrm_ring_grp_free(bp);
if (irq_re_init) {


@ -98,10 +98,13 @@ static int bnxt_hwrm_nvm_req(struct bnxt *bp, u32 param_id, void *msg,
if (idx)
req->dimensions = cpu_to_le16(1);
if (req->req_type == cpu_to_le16(HWRM_NVM_SET_VARIABLE))
if (req->req_type == cpu_to_le16(HWRM_NVM_SET_VARIABLE)) {
memcpy(data_addr, buf, bytesize);
rc = hwrm_send_message(bp, msg, msg_len, HWRM_CMD_TIMEOUT);
rc = hwrm_send_message(bp, msg, msg_len, HWRM_CMD_TIMEOUT);
} else {
rc = hwrm_send_message_silent(bp, msg, msg_len,
HWRM_CMD_TIMEOUT);
}
if (!rc && req->req_type == cpu_to_le16(HWRM_NVM_GET_VARIABLE))
memcpy(buf, data_addr, bytesize);


@ -2016,21 +2016,19 @@ static int bnxt_flash_package_from_file(struct net_device *dev,
mutex_lock(&bp->hwrm_cmd_lock);
hwrm_err = _hwrm_send_message(bp, &install, sizeof(install),
INSTALL_PACKAGE_TIMEOUT);
if (hwrm_err)
goto flash_pkg_exit;
if (resp->error_code) {
if (hwrm_err) {
u8 error_code = ((struct hwrm_err_output *)resp)->cmd_err;
if (error_code == NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) {
if (resp->error_code && error_code ==
NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) {
install.flags |= cpu_to_le16(
NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG);
hwrm_err = _hwrm_send_message(bp, &install,
sizeof(install),
INSTALL_PACKAGE_TIMEOUT);
if (hwrm_err)
goto flash_pkg_exit;
}
if (hwrm_err)
goto flash_pkg_exit;
}
if (resp->result) {


@ -1236,7 +1236,7 @@ static int __bnxt_tc_del_flow(struct bnxt *bp,
static void bnxt_tc_set_flow_dir(struct bnxt *bp, struct bnxt_tc_flow *flow,
u16 src_fid)
{
flow->dir = (bp->pf.fw_fid == src_fid) ? BNXT_DIR_RX : BNXT_DIR_TX;
flow->l2_key.dir = (bp->pf.fw_fid == src_fid) ? BNXT_DIR_RX : BNXT_DIR_TX;
}
static void bnxt_tc_set_src_fid(struct bnxt *bp, struct bnxt_tc_flow *flow,
@ -1285,9 +1285,7 @@ static int bnxt_tc_add_flow(struct bnxt *bp, u16 src_fid,
goto free_node;
bnxt_tc_set_src_fid(bp, flow, src_fid);
if (bp->fw_cap & BNXT_FW_CAP_OVS_64BIT_HANDLE)
bnxt_tc_set_flow_dir(bp, flow, src_fid);
bnxt_tc_set_flow_dir(bp, flow, flow->src_fid);
if (!bnxt_tc_can_offload(bp, flow)) {
rc = -EOPNOTSUPP;
@ -1407,7 +1405,7 @@ static void bnxt_fill_cfa_stats_req(struct bnxt *bp,
* 2. 15th bit of flow_handle must specify the flow
* direction (TX/RX).
*/
if (flow_node->flow.dir == BNXT_DIR_RX)
if (flow_node->flow.l2_key.dir == BNXT_DIR_RX)
handle = CFA_FLOW_INFO_REQ_FLOW_HANDLE_DIR_RX |
CFA_FLOW_INFO_REQ_FLOW_HANDLE_MAX_MASK;
else


@ -23,6 +23,9 @@ struct bnxt_tc_l2_key {
__be16 inner_vlan_tci;
__be16 ether_type;
u8 num_vlans;
u8 dir;
#define BNXT_DIR_RX 1
#define BNXT_DIR_TX 0
};
struct bnxt_tc_l3_key {
@ -98,9 +101,6 @@ struct bnxt_tc_flow {
/* flow applicable to pkts ingressing on this fid */
u16 src_fid;
u8 dir;
#define BNXT_DIR_RX 1
#define BNXT_DIR_TX 0
struct bnxt_tc_l2_key l2_key;
struct bnxt_tc_l2_key l2_mask;
struct bnxt_tc_l3_key l3_key;


@ -10,7 +10,7 @@
#include "cavium_ptp.h"
#define DRV_NAME "Cavium PTP Driver"
#define DRV_NAME "cavium_ptp"
#define PCI_DEVICE_ID_CAVIUM_PTP 0xA00C
#define PCI_DEVICE_ID_CAVIUM_RST 0xA00E


@ -237,8 +237,10 @@ int octeon_setup_iq(struct octeon_device *oct,
}
oct->num_iqs++;
if (oct->fn_list.enable_io_queues(oct))
if (oct->fn_list.enable_io_queues(oct)) {
octeon_delete_instr_queue(oct, iq_no);
return 1;
}
return 0;
}


@ -3236,8 +3236,10 @@ static ssize_t blocked_fl_write(struct file *filp, const char __user *ubuf,
return -ENOMEM;
err = bitmap_parse_user(ubuf, count, t, adap->sge.egr_sz);
if (err)
if (err) {
kvfree(t);
return err;
}
bitmap_copy(adap->sge.blocked_fl, t, adap->sge.egr_sz);
kvfree(t);


@ -167,7 +167,7 @@ struct nps_enet_priv {
};
/**
* nps_reg_set - Sets ENET register with provided value.
* nps_enet_reg_set - Sets ENET register with provided value.
* @priv: Pointer to EZchip ENET private data structure.
* @reg: Register offset from base address.
* @value: Value to set in register.
@ -179,7 +179,7 @@ static inline void nps_enet_reg_set(struct nps_enet_priv *priv,
}
/**
* nps_reg_get - Gets value of specified ENET register.
* nps_enet_reg_get - Gets value of specified ENET register.
* @priv: Pointer to EZchip ENET private data structure.
* @reg: Register offset from base address.
*


@ -1605,7 +1605,7 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
struct net_device *netdev;
struct ibmveth_adapter *adapter;
unsigned char *mac_addr_p;
unsigned int *mcastFilterSize_p;
__be32 *mcastFilterSize_p;
long ret;
unsigned long ret_attr;
@ -1627,8 +1627,9 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
return -EINVAL;
}
mcastFilterSize_p = (unsigned int *)vio_get_attribute(dev,
VETH_MCAST_FILTER_SIZE, NULL);
mcastFilterSize_p = (__be32 *)vio_get_attribute(dev,
VETH_MCAST_FILTER_SIZE,
NULL);
if (!mcastFilterSize_p) {
dev_err(&dev->dev, "Can't find VETH_MCAST_FILTER_SIZE "
"attribute\n");
@ -1645,7 +1646,7 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
adapter->vdev = dev;
adapter->netdev = netdev;
adapter->mcastFilterSize = *mcastFilterSize_p;
adapter->mcastFilterSize = be32_to_cpu(*mcastFilterSize_p);
adapter->pool_config = 0;
netif_napi_add(netdev, &adapter->napi, ibmveth_poll, 16);


@ -1568,6 +1568,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
lpar_rc = send_subcrq_indirect(adapter, handle_array[queue_num],
(u64)tx_buff->indir_dma,
(u64)num_entries);
dma_unmap_single(dev, tx_buff->indir_dma,
sizeof(tx_buff->indir_arr), DMA_TO_DEVICE);
} else {
tx_buff->num_entries = num_entries;
lpar_rc = send_subcrq(adapter, handle_array[queue_num],
@ -2788,7 +2790,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
union sub_crq *next;
int index;
int i, j;
u8 *first;
restart_loop:
while (pending_scrq(adapter, scrq)) {
@ -2818,14 +2819,6 @@ restart_loop:
txbuff->data_dma[j] = 0;
}
/* if sub_crq was sent indirectly */
first = &txbuff->indir_arr[0].generic.first;
if (*first == IBMVNIC_CRQ_CMD) {
dma_unmap_single(dev, txbuff->indir_dma,
sizeof(txbuff->indir_arr),
DMA_TO_DEVICE);
*first = 0;
}
if (txbuff->last_frag) {
dev_kfree_skb_any(txbuff->skb);


@ -7897,11 +7897,8 @@ static void ixgbe_service_task(struct work_struct *work)
return;
}
if (ixgbe_check_fw_error(adapter)) {
if (!test_bit(__IXGBE_DOWN, &adapter->state)) {
rtnl_lock();
if (!test_bit(__IXGBE_DOWN, &adapter->state))
unregister_netdev(adapter->netdev);
rtnl_unlock();
}
ixgbe_service_event_complete(adapter);
return;
}


@ -1187,7 +1187,7 @@ int mlx4_en_config_rss_steer(struct mlx4_en_priv *priv)
err = mlx4_qp_alloc(mdev->dev, priv->base_qpn, rss_map->indir_qp);
if (err) {
en_err(priv, "Failed to allocate RSS indirection QP\n");
goto rss_err;
goto qp_alloc_err;
}
rss_map->indir_qp->event = mlx4_en_sqp_event;
@ -1241,6 +1241,7 @@ indir_err:
MLX4_QP_STATE_RST, NULL, 0, 0, rss_map->indir_qp);
mlx4_qp_remove(mdev->dev, rss_map->indir_qp);
mlx4_qp_free(mdev->dev, rss_map->indir_qp);
qp_alloc_err:
kfree(rss_map->indir_qp);
rss_map->indir_qp = NULL;
rss_err:


@ -184,8 +184,13 @@ static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
struct mlx5e_tx_wqe {
struct mlx5_wqe_ctrl_seg ctrl;
struct mlx5_wqe_eth_seg eth;
struct mlx5_wqe_data_seg data[0];
union {
struct {
struct mlx5_wqe_eth_seg eth;
struct mlx5_wqe_data_seg data[0];
};
u8 tls_progress_params_ctx[0];
};
};
struct mlx5e_rx_wqe_ll {
@ -1100,6 +1105,8 @@ u32 mlx5e_ethtool_get_rxfh_key_size(struct mlx5e_priv *priv);
u32 mlx5e_ethtool_get_rxfh_indir_size(struct mlx5e_priv *priv);
int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv,
struct ethtool_ts_info *info);
int mlx5e_ethtool_flash_device(struct mlx5e_priv *priv,
struct ethtool_flash *flash);
void mlx5e_ethtool_get_pauseparam(struct mlx5e_priv *priv,
struct ethtool_pauseparam *pauseparam);
int mlx5e_ethtool_set_pauseparam(struct mlx5e_priv *priv,


@ -76,26 +76,21 @@ static int mlx5e_tx_reporter_err_cqe_recover(struct mlx5e_txqsq *sq)
u8 state;
int err;
if (!test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state))
return 0;
err = mlx5_core_query_sq_state(mdev, sq->sqn, &state);
if (err) {
netdev_err(dev, "Failed to query SQ 0x%x state. err = %d\n",
sq->sqn, err);
return err;
goto out;
}
if (state != MLX5_SQC_STATE_ERR) {
netdev_err(dev, "SQ 0x%x not in ERROR state\n", sq->sqn);
return -EINVAL;
}
if (state != MLX5_SQC_STATE_ERR)
goto out;
mlx5e_tx_disable_queue(sq->txq);
err = mlx5e_wait_for_sq_flush(sq);
if (err)
return err;
goto out;
/* At this point, no new packets will arrive from the stack as TXQ is
* marked with QUEUE_STATE_DRV_XOFF. In addition, NAPI cleared all
@ -104,13 +99,17 @@ static int mlx5e_tx_reporter_err_cqe_recover(struct mlx5e_txqsq *sq)
err = mlx5e_sq_to_ready(sq, state);
if (err)
return err;
goto out;
mlx5e_reset_txqsq_cc_pc(sq);
sq->stats->recover++;
clear_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state);
mlx5e_activate_txqsq(sq);
return 0;
out:
clear_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state);
return err;
}
static int mlx5_tx_health_report(struct devlink_health_reporter *tx_reporter,


@ -143,7 +143,10 @@ void mlx5e_activate_xsk(struct mlx5e_channel *c)
{
set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
/* TX queue is created active. */
spin_lock(&c->xskicosq_lock);
mlx5e_trigger_irq(&c->xskicosq);
spin_unlock(&c->xskicosq_lock);
}
void mlx5e_deactivate_xsk(struct mlx5e_channel *c)


@ -11,12 +11,14 @@
#include "accel/tls.h"
#define MLX5E_KTLS_STATIC_UMR_WQE_SZ \
(sizeof(struct mlx5e_umr_wqe) + MLX5_ST_SZ_BYTES(tls_static_params))
(offsetof(struct mlx5e_umr_wqe, tls_static_params_ctx) + \
MLX5_ST_SZ_BYTES(tls_static_params))
#define MLX5E_KTLS_STATIC_WQEBBS \
(DIV_ROUND_UP(MLX5E_KTLS_STATIC_UMR_WQE_SZ, MLX5_SEND_WQE_BB))
#define MLX5E_KTLS_PROGRESS_WQE_SZ \
(sizeof(struct mlx5e_tx_wqe) + MLX5_ST_SZ_BYTES(tls_progress_params))
(offsetof(struct mlx5e_tx_wqe, tls_progress_params_ctx) + \
MLX5_ST_SZ_BYTES(tls_progress_params))
#define MLX5E_KTLS_PROGRESS_WQEBBS \
(DIV_ROUND_UP(MLX5E_KTLS_PROGRESS_WQE_SZ, MLX5_SEND_WQE_BB))
#define MLX5E_KTLS_MAX_DUMP_WQEBBS 2


@ -69,7 +69,7 @@ build_static_params(struct mlx5e_umr_wqe *wqe, u16 pc, u32 sqn,
cseg->qpn_ds = cpu_to_be32((sqn << MLX5_WQE_CTRL_QPN_SHIFT) |
STATIC_PARAMS_DS_CNT);
cseg->fm_ce_se = fence ? MLX5_FENCE_MODE_INITIATOR_SMALL : 0;
cseg->imm = cpu_to_be32(priv_tx->tisn);
cseg->tisn = cpu_to_be32(priv_tx->tisn << 8);
ucseg->flags = MLX5_UMR_INLINE;
ucseg->bsf_octowords = cpu_to_be16(MLX5_ST_SZ_BYTES(tls_static_params) / 16);
@ -80,7 +80,7 @@ build_static_params(struct mlx5e_umr_wqe *wqe, u16 pc, u32 sqn,
static void
fill_progress_params_ctx(void *ctx, struct mlx5e_ktls_offload_context_tx *priv_tx)
{
MLX5_SET(tls_progress_params, ctx, pd, priv_tx->tisn);
MLX5_SET(tls_progress_params, ctx, tisn, priv_tx->tisn);
MLX5_SET(tls_progress_params, ctx, record_tracker_state,
MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_START);
MLX5_SET(tls_progress_params, ctx, auth_state,
@ -104,7 +104,7 @@ build_progress_params(struct mlx5e_tx_wqe *wqe, u16 pc, u32 sqn,
PROGRESS_PARAMS_DS_CNT);
cseg->fm_ce_se = fence ? MLX5_FENCE_MODE_INITIATOR_SMALL : 0;
fill_progress_params_ctx(wqe->data, priv_tx);
fill_progress_params_ctx(wqe->tls_progress_params_ctx, priv_tx);
}
static void tx_fill_wi(struct mlx5e_txqsq *sq,
@ -278,7 +278,7 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, struct sk_buff *skb,
cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_DUMP);
cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
cseg->imm = cpu_to_be32(tisn);
cseg->tisn = cpu_to_be32(tisn << 8);
cseg->fm_ce_se = first ? MLX5_FENCE_MODE_INITIATOR_SMALL : 0;
eseg->inline_hdr.sz = cpu_to_be16(ihs);
@ -434,7 +434,7 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
priv_tx->expected_seq = seq + datalen;
cseg = &(*wqe)->ctrl;
cseg->imm = cpu_to_be32(priv_tx->tisn);
cseg->tisn = cpu_to_be32(priv_tx->tisn << 8);
stats->tls_encrypted_packets += skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1;
stats->tls_encrypted_bytes += datalen;


@ -437,12 +437,6 @@ arfs_hash_bucket(struct arfs_table *arfs_t, __be16 src_port,
return &arfs_t->rules_hash[bucket_idx];
}
static u8 arfs_get_ip_proto(const struct sk_buff *skb)
{
return (skb->protocol == htons(ETH_P_IP)) ?
ip_hdr(skb)->protocol : ipv6_hdr(skb)->nexthdr;
}
static struct arfs_table *arfs_get_table(struct mlx5e_arfs_tables *arfs,
u8 ip_proto, __be16 etype)
{
@ -602,31 +596,9 @@ out:
arfs_may_expire_flow(priv);
}
/* return L4 destination port from ip4/6 packets */
static __be16 arfs_get_dst_port(const struct sk_buff *skb)
{
char *transport_header;
transport_header = skb_transport_header(skb);
if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
return ((struct tcphdr *)transport_header)->dest;
return ((struct udphdr *)transport_header)->dest;
}
/* return L4 source port from ip4/6 packets */
static __be16 arfs_get_src_port(const struct sk_buff *skb)
{
char *transport_header;
transport_header = skb_transport_header(skb);
if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
return ((struct tcphdr *)transport_header)->source;
return ((struct udphdr *)transport_header)->source;
}
static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
struct arfs_table *arfs_t,
const struct sk_buff *skb,
const struct flow_keys *fk,
u16 rxq, u32 flow_id)
{
struct arfs_rule *rule;
@ -641,19 +613,19 @@ static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
INIT_WORK(&rule->arfs_work, arfs_handle_work);
tuple = &rule->tuple;
tuple->etype = skb->protocol;
tuple->etype = fk->basic.n_proto;
tuple->ip_proto = fk->basic.ip_proto;
if (tuple->etype == htons(ETH_P_IP)) {
tuple->src_ipv4 = ip_hdr(skb)->saddr;
tuple->dst_ipv4 = ip_hdr(skb)->daddr;
tuple->src_ipv4 = fk->addrs.v4addrs.src;
tuple->dst_ipv4 = fk->addrs.v4addrs.dst;
} else {
memcpy(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
memcpy(&tuple->src_ipv6, &fk->addrs.v6addrs.src,
sizeof(struct in6_addr));
memcpy(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
memcpy(&tuple->dst_ipv6, &fk->addrs.v6addrs.dst,
sizeof(struct in6_addr));
}
tuple->ip_proto = arfs_get_ip_proto(skb);
tuple->src_port = arfs_get_src_port(skb);
tuple->dst_port = arfs_get_dst_port(skb);
tuple->src_port = fk->ports.src;
tuple->dst_port = fk->ports.dst;
rule->flow_id = flow_id;
rule->filter_id = priv->fs.arfs.last_filter_id++ % RPS_NO_FILTER;
@ -664,37 +636,33 @@ static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
return rule;
}
static bool arfs_cmp_ips(struct arfs_tuple *tuple,
const struct sk_buff *skb)
static bool arfs_cmp(const struct arfs_tuple *tuple, const struct flow_keys *fk)
{
if (tuple->etype == htons(ETH_P_IP) &&
tuple->src_ipv4 == ip_hdr(skb)->saddr &&
tuple->dst_ipv4 == ip_hdr(skb)->daddr)
return true;
if (tuple->etype == htons(ETH_P_IPV6) &&
(!memcmp(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
sizeof(struct in6_addr))) &&
(!memcmp(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
sizeof(struct in6_addr))))
return true;
if (tuple->src_port != fk->ports.src || tuple->dst_port != fk->ports.dst)
return false;
if (tuple->etype != fk->basic.n_proto)
return false;
if (tuple->etype == htons(ETH_P_IP))
return tuple->src_ipv4 == fk->addrs.v4addrs.src &&
tuple->dst_ipv4 == fk->addrs.v4addrs.dst;
if (tuple->etype == htons(ETH_P_IPV6))
return !memcmp(&tuple->src_ipv6, &fk->addrs.v6addrs.src,
sizeof(struct in6_addr)) &&
!memcmp(&tuple->dst_ipv6, &fk->addrs.v6addrs.dst,
sizeof(struct in6_addr));
return false;
}
static struct arfs_rule *arfs_find_rule(struct arfs_table *arfs_t,
const struct sk_buff *skb)
const struct flow_keys *fk)
{
struct arfs_rule *arfs_rule;
struct hlist_head *head;
__be16 src_port = arfs_get_src_port(skb);
__be16 dst_port = arfs_get_dst_port(skb);
head = arfs_hash_bucket(arfs_t, src_port, dst_port);
head = arfs_hash_bucket(arfs_t, fk->ports.src, fk->ports.dst);
hlist_for_each_entry(arfs_rule, head, hlist) {
if (arfs_rule->tuple.src_port == src_port &&
arfs_rule->tuple.dst_port == dst_port &&
arfs_cmp_ips(&arfs_rule->tuple, skb)) {
if (arfs_cmp(&arfs_rule->tuple, fk))
return arfs_rule;
}
}
return NULL;
@ -707,20 +675,24 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
struct arfs_table *arfs_t;
struct arfs_rule *arfs_rule;
struct flow_keys fk;
if (skb->protocol != htons(ETH_P_IP) &&
skb->protocol != htons(ETH_P_IPV6))
if (!skb_flow_dissect_flow_keys(skb, &fk, 0))
return -EPROTONOSUPPORT;
if (fk.basic.n_proto != htons(ETH_P_IP) &&
fk.basic.n_proto != htons(ETH_P_IPV6))
return -EPROTONOSUPPORT;
if (skb->encapsulation)
return -EPROTONOSUPPORT;
arfs_t = arfs_get_table(arfs, arfs_get_ip_proto(skb), skb->protocol);
arfs_t = arfs_get_table(arfs, fk.basic.ip_proto, fk.basic.n_proto);
if (!arfs_t)
return -EPROTONOSUPPORT;
spin_lock_bh(&arfs->arfs_lock);
arfs_rule = arfs_find_rule(arfs_t, skb);
arfs_rule = arfs_find_rule(arfs_t, &fk);
if (arfs_rule) {
if (arfs_rule->rxq == rxq_index) {
spin_unlock_bh(&arfs->arfs_lock);
@ -728,8 +700,7 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
}
arfs_rule->rxq = rxq_index;
} else {
arfs_rule = arfs_alloc_rule(priv, arfs_t, skb,
rxq_index, flow_id);
arfs_rule = arfs_alloc_rule(priv, arfs_t, &fk, rxq_index, flow_id);
if (!arfs_rule) {
spin_unlock_bh(&arfs->arfs_lock);
return -ENOMEM;


@ -1081,6 +1081,14 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
link_modes = autoneg == AUTONEG_ENABLE ? ethtool2ptys_adver_func(adver) :
mlx5e_port_speed2linkmodes(mdev, speed, !ext);
if ((link_modes & MLX5E_PROT_MASK(MLX5E_56GBASE_R4)) &&
autoneg != AUTONEG_ENABLE) {
netdev_err(priv->netdev, "%s: 56G link speed requires autoneg enabled\n",
__func__);
err = -EINVAL;
goto out;
}
link_modes = link_modes & eproto.cap;
if (!link_modes) {
netdev_err(priv->netdev, "%s: Not supported link mode(s) requested",
@ -1338,6 +1346,9 @@ int mlx5e_ethtool_set_pauseparam(struct mlx5e_priv *priv,
struct mlx5_core_dev *mdev = priv->mdev;
int err;
if (!MLX5_CAP_GEN(mdev, vport_group_manager))
return -EOPNOTSUPP;
if (pauseparam->autoneg)
return -EINVAL;
@ -1679,6 +1690,40 @@ static int mlx5e_get_module_eeprom(struct net_device *netdev,
return 0;
}
int mlx5e_ethtool_flash_device(struct mlx5e_priv *priv,
struct ethtool_flash *flash)
{
struct mlx5_core_dev *mdev = priv->mdev;
struct net_device *dev = priv->netdev;
const struct firmware *fw;
int err;
if (flash->region != ETHTOOL_FLASH_ALL_REGIONS)
return -EOPNOTSUPP;
err = request_firmware_direct(&fw, flash->data, &dev->dev);
if (err)
return err;
dev_hold(dev);
rtnl_unlock();
err = mlx5_firmware_flash(mdev, fw, NULL);
release_firmware(fw);
rtnl_lock();
dev_put(dev);
return err;
}
static int mlx5e_flash_device(struct net_device *dev,
struct ethtool_flash *flash)
{
struct mlx5e_priv *priv = netdev_priv(dev);
return mlx5e_ethtool_flash_device(priv, flash);
}
static int set_pflag_cqe_based_moder(struct net_device *netdev, bool enable,
bool is_rx_cq)
{
@ -1961,6 +2006,7 @@ const struct ethtool_ops mlx5e_ethtool_ops = {
.set_wol = mlx5e_set_wol,
.get_module_info = mlx5e_get_module_info,
.get_module_eeprom = mlx5e_get_module_eeprom,
.flash_device = mlx5e_flash_device,
.get_priv_flags = mlx5e_get_priv_flags,
.set_priv_flags = mlx5e_set_priv_flags,
.self_test = mlx5e_self_test,


@ -1321,7 +1321,6 @@ err_free_txqsq:
void mlx5e_activate_txqsq(struct mlx5e_txqsq *sq)
{
sq->txq = netdev_get_tx_queue(sq->channel->netdev, sq->txq_ix);
clear_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state);
set_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
netdev_tx_reset_queue(sq->txq);
netif_tx_start_queue(sq->txq);


@ -1480,7 +1480,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec,
struct flow_cls_offload *f,
struct net_device *filter_dev,
u8 *match_level, u8 *tunnel_match_level)
u8 *inner_match_level, u8 *outer_match_level)
{
struct netlink_ext_ack *extack = f->common.extack;
void *headers_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
@ -1495,8 +1495,9 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
struct flow_dissector *dissector = rule->match.dissector;
u16 addr_type = 0;
u8 ip_proto = 0;
u8 *match_level;
*match_level = MLX5_MATCH_NONE;
match_level = outer_match_level;
if (dissector->used_keys &
~(BIT(FLOW_DISSECTOR_KEY_META) |
@ -1524,12 +1525,14 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
}
if (mlx5e_get_tc_tun(filter_dev)) {
if (parse_tunnel_attr(priv, spec, f, filter_dev, tunnel_match_level))
if (parse_tunnel_attr(priv, spec, f, filter_dev,
outer_match_level))
return -EOPNOTSUPP;
/* In decap flow, header pointers should point to the inner
/* At this point, header pointers should point to the inner
* headers, outer header were already set by parse_tunnel_attr
*/
match_level = inner_match_level;
headers_c = get_match_headers_criteria(MLX5_FLOW_CONTEXT_ACTION_DECAP,
spec);
headers_v = get_match_headers_value(MLX5_FLOW_CONTEXT_ACTION_DECAP,
@ -1831,35 +1834,41 @@ static int parse_cls_flower(struct mlx5e_priv *priv,
struct flow_cls_offload *f,
struct net_device *filter_dev)
{
u8 inner_match_level, outer_match_level, non_tunnel_match_level;
struct netlink_ext_ack *extack = f->common.extack;
struct mlx5_core_dev *dev = priv->mdev;
struct mlx5_eswitch *esw = dev->priv.eswitch;
struct mlx5e_rep_priv *rpriv = priv->ppriv;
u8 match_level, tunnel_match_level = MLX5_MATCH_NONE;
struct mlx5_eswitch_rep *rep;
int err;
err = __parse_cls_flower(priv, spec, f, filter_dev, &match_level, &tunnel_match_level);
inner_match_level = MLX5_MATCH_NONE;
outer_match_level = MLX5_MATCH_NONE;
err = __parse_cls_flower(priv, spec, f, filter_dev, &inner_match_level,
&outer_match_level);
non_tunnel_match_level = (inner_match_level == MLX5_MATCH_NONE) ?
outer_match_level : inner_match_level;
if (!err && (flow->flags & MLX5E_TC_FLOW_ESWITCH)) {
rep = rpriv->rep;
if (rep->vport != MLX5_VPORT_UPLINK &&
(esw->offloads.inline_mode != MLX5_INLINE_MODE_NONE &&
esw->offloads.inline_mode < match_level)) {
esw->offloads.inline_mode < non_tunnel_match_level)) {
NL_SET_ERR_MSG_MOD(extack,
"Flow is not offloaded due to min inline setting");
netdev_warn(priv->netdev,
"Flow is not offloaded due to min inline setting, required %d actual %d\n",
match_level, esw->offloads.inline_mode);
non_tunnel_match_level, esw->offloads.inline_mode);
return -EOPNOTSUPP;
}
}
if (flow->flags & MLX5E_TC_FLOW_ESWITCH) {
flow->esw_attr->match_level = match_level;
flow->esw_attr->tunnel_match_level = tunnel_match_level;
flow->esw_attr->inner_match_level = inner_match_level;
flow->esw_attr->outer_match_level = outer_match_level;
} else {
flow->nic_attr->match_level = match_level;
flow->nic_attr->match_level = non_tunnel_match_level;
}
return err;
@ -3158,7 +3167,7 @@ mlx5e_flow_esw_attr_init(struct mlx5_esw_flow_attr *esw_attr,
esw_attr->parse_attr = parse_attr;
esw_attr->chain = f->common.chain_index;
esw_attr->prio = TC_H_MAJ(f->common.prio) >> 16;
esw_attr->prio = f->common.prio;
esw_attr->in_rep = in_rep;
esw_attr->in_mdev = in_mdev;


@ -377,8 +377,8 @@ struct mlx5_esw_flow_attr {
struct mlx5_termtbl_handle *termtbl;
} dests[MLX5_MAX_FLOW_FWD_VPORTS];
u32 mod_hdr_id;
u8 match_level;
u8 tunnel_match_level;
u8 inner_match_level;
u8 outer_match_level;
struct mlx5_fc *counter;
u32 chain;
u16 prio;


@ -207,14 +207,10 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
mlx5_eswitch_set_rule_source_port(esw, spec, attr);
if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_DECAP) {
if (attr->tunnel_match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
if (attr->match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_INNER_HEADERS;
} else if (attr->match_level != MLX5_MATCH_NONE) {
if (attr->outer_match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
}
if (attr->inner_match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_INNER_HEADERS;
if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
flow_act.modify_id = attr->mod_hdr_id;
@ -290,7 +286,7 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
mlx5_eswitch_set_rule_source_port(esw, spec, attr);
spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
if (attr->match_level != MLX5_MATCH_NONE)
if (attr->outer_match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
rule = mlx5_add_flow_rules(fast_fdb, spec, &flow_act, dest, i);


@ -122,6 +122,14 @@ static int mlx5i_get_ts_info(struct net_device *netdev,
return mlx5e_ethtool_get_ts_info(priv, info);
}
static int mlx5i_flash_device(struct net_device *netdev,
struct ethtool_flash *flash)
{
struct mlx5e_priv *priv = mlx5i_epriv(netdev);
return mlx5e_ethtool_flash_device(priv, flash);
}
enum mlx5_ptys_width {
MLX5_PTYS_WIDTH_1X = 1 << 0,
MLX5_PTYS_WIDTH_2X = 1 << 1,
@ -233,6 +241,7 @@ const struct ethtool_ops mlx5i_ethtool_ops = {
.get_ethtool_stats = mlx5i_get_ethtool_stats,
.get_ringparam = mlx5i_get_ringparam,
.set_ringparam = mlx5i_set_ringparam,
.flash_device = mlx5i_flash_device,
.get_channels = mlx5i_get_channels,
.set_channels = mlx5i_set_channels,
.get_coalesce = mlx5i_get_coalesce,


@ -27,6 +27,7 @@ int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
case 128:
general_obj_key_size =
MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_128;
key_p += sz_bytes;
break;
case 256:
general_obj_key_size =


@ -471,7 +471,7 @@ int mlxsw_sp_acl_rulei_commit(struct mlxsw_sp_acl_rule_info *rulei)
void mlxsw_sp_acl_rulei_priority(struct mlxsw_sp_acl_rule_info *rulei,
unsigned int priority)
{
rulei->priority = priority >> 16;
rulei->priority = priority;
}
void mlxsw_sp_acl_rulei_keymask_u32(struct mlxsw_sp_acl_rule_info *rulei,
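
Several hunks in this series (mlxsw above; ocelot, nfp and stmmac below)
touch the same encoding: a tc filter priority travels in the upper 16 bits
of a 32-bit handle, and after this change the core hands drivers the major
number directly. A hedged sketch of how the major number relates to the
full handle (names illustrative, not taken from these drivers):

    #include <linux/types.h>
    #include <linux/pkt_sched.h>    /* TC_H_MAJ() */

    static inline u16 tc_prio_major(u32 prio)
    {
            return TC_H_MAJ(prio) >> 16;
    }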


@ -29,7 +29,7 @@
struct mlxsw_sp_ptp_state {
struct mlxsw_sp *mlxsw_sp;
struct rhashtable unmatched_ht;
struct rhltable unmatched_ht;
spinlock_t unmatched_lock; /* protects the HT */
struct delayed_work ht_gc_dw;
u32 gc_cycle;
@ -45,7 +45,7 @@ struct mlxsw_sp1_ptp_key {
struct mlxsw_sp1_ptp_unmatched {
struct mlxsw_sp1_ptp_key key;
struct rhash_head ht_node;
struct rhlist_head ht_node;
struct rcu_head rcu;
struct sk_buff *skb;
u64 timestamp;
@ -359,7 +359,7 @@ static int mlxsw_sp_ptp_parse(struct sk_buff *skb,
/* Returns NULL on successful insertion, a pointer on conflict, or an ERR_PTR on
* error.
*/
static struct mlxsw_sp1_ptp_unmatched *
static int
mlxsw_sp1_ptp_unmatched_save(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp1_ptp_key key,
struct sk_buff *skb,
@ -368,41 +368,51 @@ mlxsw_sp1_ptp_unmatched_save(struct mlxsw_sp *mlxsw_sp,
int cycles = MLXSW_SP1_PTP_HT_GC_TIMEOUT / MLXSW_SP1_PTP_HT_GC_INTERVAL;
struct mlxsw_sp_ptp_state *ptp_state = mlxsw_sp->ptp_state;
struct mlxsw_sp1_ptp_unmatched *unmatched;
struct mlxsw_sp1_ptp_unmatched *conflict;
int err;
unmatched = kzalloc(sizeof(*unmatched), GFP_ATOMIC);
if (!unmatched)
return ERR_PTR(-ENOMEM);
return -ENOMEM;
unmatched->key = key;
unmatched->skb = skb;
unmatched->timestamp = timestamp;
unmatched->gc_cycle = mlxsw_sp->ptp_state->gc_cycle + cycles;
conflict = rhashtable_lookup_get_insert_fast(&ptp_state->unmatched_ht,
&unmatched->ht_node,
mlxsw_sp1_ptp_unmatched_ht_params);
if (conflict)
err = rhltable_insert(&ptp_state->unmatched_ht, &unmatched->ht_node,
mlxsw_sp1_ptp_unmatched_ht_params);
if (err)
kfree(unmatched);
return conflict;
return err;
}
static struct mlxsw_sp1_ptp_unmatched *
mlxsw_sp1_ptp_unmatched_lookup(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp1_ptp_key key)
struct mlxsw_sp1_ptp_key key, int *p_length)
{
return rhashtable_lookup(&mlxsw_sp->ptp_state->unmatched_ht, &key,
mlxsw_sp1_ptp_unmatched_ht_params);
struct mlxsw_sp1_ptp_unmatched *unmatched, *last = NULL;
struct rhlist_head *tmp, *list;
int length = 0;
list = rhltable_lookup(&mlxsw_sp->ptp_state->unmatched_ht, &key,
mlxsw_sp1_ptp_unmatched_ht_params);
rhl_for_each_entry_rcu(unmatched, tmp, list, ht_node) {
last = unmatched;
length++;
}
*p_length = length;
return last;
}
static int
mlxsw_sp1_ptp_unmatched_remove(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp1_ptp_unmatched *unmatched)
{
return rhashtable_remove_fast(&mlxsw_sp->ptp_state->unmatched_ht,
&unmatched->ht_node,
mlxsw_sp1_ptp_unmatched_ht_params);
return rhltable_remove(&mlxsw_sp->ptp_state->unmatched_ht,
&unmatched->ht_node,
mlxsw_sp1_ptp_unmatched_ht_params);
}
/* This function is called in the following scenarios:
@ -489,75 +499,38 @@ static void mlxsw_sp1_ptp_got_piece(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp1_ptp_key key,
struct sk_buff *skb, u64 timestamp)
{
struct mlxsw_sp1_ptp_unmatched *unmatched, *conflict;
struct mlxsw_sp1_ptp_unmatched *unmatched;
int length;
int err;
rcu_read_lock();
unmatched = mlxsw_sp1_ptp_unmatched_lookup(mlxsw_sp, key);
spin_lock(&mlxsw_sp->ptp_state->unmatched_lock);
if (unmatched) {
/* There was an unmatched entry when we looked, but it may have
* been removed before we took the lock.
*/
err = mlxsw_sp1_ptp_unmatched_remove(mlxsw_sp, unmatched);
if (err)
unmatched = NULL;
}
if (!unmatched) {
/* We have no unmatched entry, but one may have been added after
* we looked, but before we took the lock.
*/
unmatched = mlxsw_sp1_ptp_unmatched_save(mlxsw_sp, key,
skb, timestamp);
if (IS_ERR(unmatched)) {
if (skb)
mlxsw_sp1_ptp_packet_finish(mlxsw_sp, skb,
key.local_port,
key.ingress, NULL);
unmatched = NULL;
} else if (unmatched) {
/* Save just told us, under lock, that the entry is
* there, so this has to work.
*/
err = mlxsw_sp1_ptp_unmatched_remove(mlxsw_sp,
unmatched);
WARN_ON_ONCE(err);
}
}
/* If unmatched is non-NULL here, it comes either from the lookup, or
* from the save attempt above. In either case the entry was removed
* from the hash table. If unmatched is NULL, a new unmatched entry was
* added to the hash table, and there was no conflict.
*/
unmatched = mlxsw_sp1_ptp_unmatched_lookup(mlxsw_sp, key, &length);
if (skb && unmatched && unmatched->timestamp) {
unmatched->skb = skb;
} else if (timestamp && unmatched && unmatched->skb) {
unmatched->timestamp = timestamp;
} else if (unmatched) {
/* unmatched holds an older entry of the same type: either an
* skb if we are handling skb, or a timestamp if we are handling
* timestamp. We can't match that up, so save what we have.
} else {
/* Either there is no entry to match, or one that is there is
* incompatible.
*/
conflict = mlxsw_sp1_ptp_unmatched_save(mlxsw_sp, key,
skb, timestamp);
if (IS_ERR(conflict)) {
if (skb)
mlxsw_sp1_ptp_packet_finish(mlxsw_sp, skb,
key.local_port,
key.ingress, NULL);
} else {
/* Above, we removed an object with this key from the
* hash table, under lock, so conflict can not be a
* valid pointer.
*/
WARN_ON_ONCE(conflict);
}
if (length < 100)
err = mlxsw_sp1_ptp_unmatched_save(mlxsw_sp, key,
skb, timestamp);
else
err = -E2BIG;
if (err && skb)
mlxsw_sp1_ptp_packet_finish(mlxsw_sp, skb,
key.local_port,
key.ingress, NULL);
unmatched = NULL;
}
if (unmatched) {
err = mlxsw_sp1_ptp_unmatched_remove(mlxsw_sp, unmatched);
WARN_ON_ONCE(err);
}
spin_unlock(&mlxsw_sp->ptp_state->unmatched_lock);
@ -669,9 +642,8 @@ mlxsw_sp1_ptp_ht_gc_collect(struct mlxsw_sp_ptp_state *ptp_state,
local_bh_disable();
spin_lock(&ptp_state->unmatched_lock);
err = rhashtable_remove_fast(&ptp_state->unmatched_ht,
&unmatched->ht_node,
mlxsw_sp1_ptp_unmatched_ht_params);
err = rhltable_remove(&ptp_state->unmatched_ht, &unmatched->ht_node,
mlxsw_sp1_ptp_unmatched_ht_params);
spin_unlock(&ptp_state->unmatched_lock);
if (err)
@ -702,7 +674,7 @@ static void mlxsw_sp1_ptp_ht_gc(struct work_struct *work)
ptp_state = container_of(dwork, struct mlxsw_sp_ptp_state, ht_gc_dw);
gc_cycle = ptp_state->gc_cycle++;
rhashtable_walk_enter(&ptp_state->unmatched_ht, &iter);
rhltable_walk_enter(&ptp_state->unmatched_ht, &iter);
rhashtable_walk_start(&iter);
while ((obj = rhashtable_walk_next(&iter))) {
if (IS_ERR(obj))
@ -855,8 +827,8 @@ struct mlxsw_sp_ptp_state *mlxsw_sp1_ptp_init(struct mlxsw_sp *mlxsw_sp)
spin_lock_init(&ptp_state->unmatched_lock);
err = rhashtable_init(&ptp_state->unmatched_ht,
&mlxsw_sp1_ptp_unmatched_ht_params);
err = rhltable_init(&ptp_state->unmatched_ht,
&mlxsw_sp1_ptp_unmatched_ht_params);
if (err)
goto err_hashtable_init;
@ -891,7 +863,7 @@ err_fifo_clr:
err_mtptpt1_set:
mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP0, 0);
err_mtptpt_set:
rhashtable_destroy(&ptp_state->unmatched_ht);
rhltable_destroy(&ptp_state->unmatched_ht);
err_hashtable_init:
kfree(ptp_state);
return ERR_PTR(err);
@ -906,8 +878,8 @@ void mlxsw_sp1_ptp_fini(struct mlxsw_sp_ptp_state *ptp_state)
mlxsw_sp1_ptp_set_fifo_clr_on_trap(mlxsw_sp, false);
mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP1, 0);
mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP0, 0);
rhashtable_free_and_destroy(&ptp_state->unmatched_ht,
&mlxsw_sp1_ptp_unmatched_free_fn, NULL);
rhltable_free_and_destroy(&ptp_state->unmatched_ht,
&mlxsw_sp1_ptp_unmatched_free_fn, NULL);
kfree(ptp_state);
}


@ -13,12 +13,6 @@ struct ocelot_port_block {
struct ocelot_port *port;
};
static u16 get_prio(u32 prio)
{
/* prio starts from 0x1000 while the ids starts from 0 */
return prio >> 16;
}
static int ocelot_flower_parse_action(struct flow_cls_offload *f,
struct ocelot_ace_rule *rule)
{
@ -168,7 +162,7 @@ static int ocelot_flower_parse(struct flow_cls_offload *f,
}
finished_key_parsing:
ocelot_rule->prio = get_prio(f->common.prio);
ocelot_rule->prio = f->common.prio;
ocelot_rule->id = f->cookie;
return ocelot_flower_parse_action(f, ocelot_rule);
}
@ -218,7 +212,7 @@ static int ocelot_flower_destroy(struct flow_cls_offload *f,
struct ocelot_ace_rule rule;
int ret;
rule.prio = get_prio(f->common.prio);
rule.prio = f->common.prio;
rule.port = port_block->port;
rule.id = f->cookie;
@ -236,7 +230,7 @@ static int ocelot_flower_stats_update(struct flow_cls_offload *f,
struct ocelot_ace_rule rule;
int ret;
rule.prio = get_prio(f->common.prio);
rule.prio = f->common.prio;
rule.port = port_block->port;
rule.id = f->cookie;
ret = ocelot_ace_rule_stats_update(&rule);


@ -3919,7 +3919,7 @@ static int myri10ge_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
* setup (if available). */
status = myri10ge_request_irq(mgp);
if (status != 0)
goto abort_with_firmware;
goto abort_with_slices;
myri10ge_free_irq(mgp);
/* Save configuration space to be restored if the


@ -93,7 +93,7 @@ nfp_flower_install_rate_limiter(struct nfp_app *app, struct net_device *netdev,
return -EOPNOTSUPP;
}
if (flow->common.prio != (1 << 16)) {
if (flow->common.prio != 1) {
NL_SET_ERR_MSG_MOD(extack, "unsupported offload: qos rate limit offload requires highest priority");
return -EOPNOTSUPP;
}


@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Renesas Ethernet AVB device driver
*
* Copyright (C) 2014-2015 Renesas Electronics Corporation
* Copyright (C) 2014-2019 Renesas Electronics Corporation
* Copyright (C) 2015 Renesas Solutions Corp.
* Copyright (C) 2015-2016 Cogent Embedded, Inc. <source@cogentembedded.com>
*
@ -513,7 +513,10 @@ static void ravb_get_tx_tstamp(struct net_device *ndev)
kfree(ts_skb);
if (tag == tfa_tag) {
skb_tstamp_tx(skb, &shhwtstamps);
dev_consume_skb_any(skb);
break;
} else {
dev_kfree_skb_any(skb);
}
}
ravb_modify(ndev, TCCR, TCCR_TFR, TCCR_TFR);
@ -1564,7 +1567,7 @@ static netdev_tx_t ravb_start_xmit(struct sk_buff *skb, struct net_device *ndev)
}
goto unmap;
}
ts_skb->skb = skb;
ts_skb->skb = skb_get(skb);
ts_skb->tag = priv->ts_skb_tag++;
priv->ts_skb_tag &= 0x3ff;
list_add_tail(&ts_skb->list, &priv->ts_skb_list);
@ -1693,6 +1696,7 @@ static int ravb_close(struct net_device *ndev)
/* Clear the timestamp list */
list_for_each_entry_safe(ts_skb, ts_skb2, &priv->ts_skb_list, list) {
list_del(&ts_skb->list);
kfree_skb(ts_skb->skb);
kfree(ts_skb);
}


@ -94,7 +94,7 @@ static int tc_fill_entry(struct stmmac_priv *priv,
struct stmmac_tc_entry *entry, *frag = NULL;
struct tc_u32_sel *sel = cls->knode.sel;
u32 off, data, mask, real_off, rem;
u32 prio = cls->common.prio;
u32 prio = cls->common.prio << 16;
int ret;
/* Only 1 match per entry */


@ -1504,7 +1504,7 @@ tc35815_rx(struct net_device *dev, int limit)
pci_unmap_single(lp->pci_dev,
lp->rx_skbs[cur_bd].skb_dma,
RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
if (!HAVE_DMA_RXALIGN(lp) && NET_IP_ALIGN)
if (!HAVE_DMA_RXALIGN(lp) && NET_IP_ALIGN != 0)
memmove(skb->data, skb->data - NET_IP_ALIGN,
pkt_len);
data = skb_put(skb, pkt_len);


@ -371,9 +371,10 @@ tsi108_stat_carry_one(int carry, int carry_bit, int carry_shift,
static void tsi108_stat_carry(struct net_device *dev)
{
struct tsi108_prv_data *data = netdev_priv(dev);
unsigned long flags;
u32 carry1, carry2;
spin_lock_irq(&data->misclock);
spin_lock_irqsave(&data->misclock, flags);
carry1 = TSI_READ(TSI108_STAT_CARRY1);
carry2 = TSI_READ(TSI108_STAT_CARRY2);
@ -441,7 +442,7 @@ static void tsi108_stat_carry(struct net_device *dev)
TSI108_STAT_TXPAUSEDROP_CARRY,
&data->tx_pause_drop);
spin_unlock_irq(&data->misclock);
spin_unlock_irqrestore(&data->misclock, flags);
}
/* Read a stat counter atomically with respect to carries.


@ -1239,12 +1239,15 @@ static void netvsc_get_stats64(struct net_device *net,
struct rtnl_link_stats64 *t)
{
struct net_device_context *ndev_ctx = netdev_priv(net);
struct netvsc_device *nvdev = rcu_dereference_rtnl(ndev_ctx->nvdev);
struct netvsc_device *nvdev;
struct netvsc_vf_pcpu_stats vf_tot;
int i;
rcu_read_lock();
nvdev = rcu_dereference(ndev_ctx->nvdev);
if (!nvdev)
return;
goto out;
netdev_stats_to_stats64(t, &net->stats);
@ -1283,6 +1286,8 @@ static void netvsc_get_stats64(struct net_device *net,
t->rx_packets += packets;
t->multicast += multicast;
}
out:
rcu_read_unlock();
}
static int netvsc_set_mac_addr(struct net_device *ndev, void *p)


@ -73,46 +73,47 @@ static void nsim_dev_port_debugfs_exit(struct nsim_dev_port *nsim_dev_port)
debugfs_remove_recursive(nsim_dev_port->ddir);
}
static struct net *nsim_devlink_net(struct devlink *devlink)
{
return &init_net;
}
static u64 nsim_dev_ipv4_fib_resource_occ_get(void *priv)
{
struct nsim_dev *nsim_dev = priv;
struct net *net = priv;
return nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV4_FIB, false);
return nsim_fib_get_val(net, NSIM_RESOURCE_IPV4_FIB, false);
}
static u64 nsim_dev_ipv4_fib_rules_res_occ_get(void *priv)
{
struct nsim_dev *nsim_dev = priv;
struct net *net = priv;
return nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV4_FIB_RULES, false);
return nsim_fib_get_val(net, NSIM_RESOURCE_IPV4_FIB_RULES, false);
}
static u64 nsim_dev_ipv6_fib_resource_occ_get(void *priv)
{
struct nsim_dev *nsim_dev = priv;
struct net *net = priv;
return nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV6_FIB, false);
return nsim_fib_get_val(net, NSIM_RESOURCE_IPV6_FIB, false);
}
static u64 nsim_dev_ipv6_fib_rules_res_occ_get(void *priv)
{
struct nsim_dev *nsim_dev = priv;
struct net *net = priv;
return nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV6_FIB_RULES, false);
return nsim_fib_get_val(net, NSIM_RESOURCE_IPV6_FIB_RULES, false);
}
static int nsim_dev_resources_register(struct devlink *devlink)
{
struct nsim_dev *nsim_dev = devlink_priv(devlink);
struct devlink_resource_size_params params = {
.size_max = (u64)-1,
.size_granularity = 1,
.unit = DEVLINK_RESOURCE_UNIT_ENTRY
};
struct net *net = nsim_devlink_net(devlink);
int err;
u64 n;
@ -126,8 +127,7 @@ static int nsim_dev_resources_register(struct devlink *devlink)
goto out;
}
n = nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV4_FIB, true);
n = nsim_fib_get_val(net, NSIM_RESOURCE_IPV4_FIB, true);
err = devlink_resource_register(devlink, "fib", n,
NSIM_RESOURCE_IPV4_FIB,
NSIM_RESOURCE_IPV4, &params);
@ -136,8 +136,7 @@ static int nsim_dev_resources_register(struct devlink *devlink)
return err;
}
n = nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV4_FIB_RULES, true);
n = nsim_fib_get_val(net, NSIM_RESOURCE_IPV4_FIB_RULES, true);
err = devlink_resource_register(devlink, "fib-rules", n,
NSIM_RESOURCE_IPV4_FIB_RULES,
NSIM_RESOURCE_IPV4, &params);
@ -156,8 +155,7 @@ static int nsim_dev_resources_register(struct devlink *devlink)
goto out;
}
n = nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV6_FIB, true);
n = nsim_fib_get_val(net, NSIM_RESOURCE_IPV6_FIB, true);
err = devlink_resource_register(devlink, "fib", n,
NSIM_RESOURCE_IPV6_FIB,
NSIM_RESOURCE_IPV6, &params);
@ -166,8 +164,7 @@ static int nsim_dev_resources_register(struct devlink *devlink)
return err;
}
n = nsim_fib_get_val(nsim_dev->fib_data,
NSIM_RESOURCE_IPV6_FIB_RULES, true);
n = nsim_fib_get_val(net, NSIM_RESOURCE_IPV6_FIB_RULES, true);
err = devlink_resource_register(devlink, "fib-rules", n,
NSIM_RESOURCE_IPV6_FIB_RULES,
NSIM_RESOURCE_IPV6, &params);
@ -179,19 +176,19 @@ static int nsim_dev_resources_register(struct devlink *devlink)
devlink_resource_occ_get_register(devlink,
NSIM_RESOURCE_IPV4_FIB,
nsim_dev_ipv4_fib_resource_occ_get,
nsim_dev);
net);
devlink_resource_occ_get_register(devlink,
NSIM_RESOURCE_IPV4_FIB_RULES,
nsim_dev_ipv4_fib_rules_res_occ_get,
nsim_dev);
net);
devlink_resource_occ_get_register(devlink,
NSIM_RESOURCE_IPV6_FIB,
nsim_dev_ipv6_fib_resource_occ_get,
nsim_dev);
net);
devlink_resource_occ_get_register(devlink,
NSIM_RESOURCE_IPV6_FIB_RULES,
nsim_dev_ipv6_fib_rules_res_occ_get,
nsim_dev);
net);
out:
return err;
}
@ -199,11 +196,11 @@ out:
static int nsim_dev_reload(struct devlink *devlink,
struct netlink_ext_ack *extack)
{
struct nsim_dev *nsim_dev = devlink_priv(devlink);
enum nsim_resource_id res_ids[] = {
NSIM_RESOURCE_IPV4_FIB, NSIM_RESOURCE_IPV4_FIB_RULES,
NSIM_RESOURCE_IPV6_FIB, NSIM_RESOURCE_IPV6_FIB_RULES
};
struct net *net = nsim_devlink_net(devlink);
int i;
for (i = 0; i < ARRAY_SIZE(res_ids); ++i) {
@ -212,8 +209,7 @@ static int nsim_dev_reload(struct devlink *devlink,
err = devlink_resource_size_get(devlink, res_ids[i], &val);
if (!err) {
err = nsim_fib_set_max(nsim_dev->fib_data,
res_ids[i], val, extack);
err = nsim_fib_set_max(net, res_ids[i], val, extack);
if (err)
return err;
}
@ -285,15 +281,9 @@ nsim_dev_create(struct nsim_bus_dev *nsim_bus_dev, unsigned int port_count)
mutex_init(&nsim_dev->port_list_lock);
nsim_dev->fw_update_status = true;
nsim_dev->fib_data = nsim_fib_create();
if (IS_ERR(nsim_dev->fib_data)) {
err = PTR_ERR(nsim_dev->fib_data);
goto err_devlink_free;
}
err = nsim_dev_resources_register(devlink);
if (err)
goto err_fib_destroy;
goto err_devlink_free;
err = devlink_register(devlink, &nsim_bus_dev->dev);
if (err)
@ -315,8 +305,6 @@ err_dl_unregister:
devlink_unregister(devlink);
err_resources_unregister:
devlink_resources_unregister(devlink, NULL);
err_fib_destroy:
nsim_fib_destroy(nsim_dev->fib_data);
err_devlink_free:
devlink_free(devlink);
return ERR_PTR(err);
@ -330,7 +318,6 @@ static void nsim_dev_destroy(struct nsim_dev *nsim_dev)
nsim_dev_debugfs_exit(nsim_dev);
devlink_unregister(devlink);
devlink_resources_unregister(devlink, NULL);
nsim_fib_destroy(nsim_dev->fib_data);
mutex_destroy(&nsim_dev->port_list_lock);
devlink_free(devlink);
}


@ -18,6 +18,7 @@
#include <net/ip_fib.h>
#include <net/ip6_fib.h>
#include <net/fib_rules.h>
#include <net/netns/generic.h>
#include "netdevsim.h"
@ -32,14 +33,15 @@ struct nsim_per_fib_data {
};
struct nsim_fib_data {
struct notifier_block fib_nb;
struct nsim_per_fib_data ipv4;
struct nsim_per_fib_data ipv6;
};
u64 nsim_fib_get_val(struct nsim_fib_data *fib_data,
enum nsim_resource_id res_id, bool max)
static unsigned int nsim_fib_net_id;
u64 nsim_fib_get_val(struct net *net, enum nsim_resource_id res_id, bool max)
{
struct nsim_fib_data *fib_data = net_generic(net, nsim_fib_net_id);
struct nsim_fib_entry *entry;
switch (res_id) {
@ -62,10 +64,10 @@ u64 nsim_fib_get_val(struct nsim_fib_data *fib_data,
return max ? entry->max : entry->num;
}
int nsim_fib_set_max(struct nsim_fib_data *fib_data,
enum nsim_resource_id res_id, u64 val,
int nsim_fib_set_max(struct net *net, enum nsim_resource_id res_id, u64 val,
struct netlink_ext_ack *extack)
{
struct nsim_fib_data *fib_data = net_generic(net, nsim_fib_net_id);
struct nsim_fib_entry *entry;
int err = 0;
@ -118,9 +120,9 @@ static int nsim_fib_rule_account(struct nsim_fib_entry *entry, bool add,
return err;
}
static int nsim_fib_rule_event(struct nsim_fib_data *data,
struct fib_notifier_info *info, bool add)
static int nsim_fib_rule_event(struct fib_notifier_info *info, bool add)
{
struct nsim_fib_data *data = net_generic(info->net, nsim_fib_net_id);
struct netlink_ext_ack *extack = info->extack;
int err = 0;
@ -155,9 +157,9 @@ static int nsim_fib_account(struct nsim_fib_entry *entry, bool add,
return err;
}
static int nsim_fib_event(struct nsim_fib_data *data,
struct fib_notifier_info *info, bool add)
static int nsim_fib_event(struct fib_notifier_info *info, bool add)
{
struct nsim_fib_data *data = net_generic(info->net, nsim_fib_net_id);
struct netlink_ext_ack *extack = info->extack;
int err = 0;
@ -176,22 +178,18 @@ static int nsim_fib_event(struct nsim_fib_data *data,
static int nsim_fib_event_nb(struct notifier_block *nb, unsigned long event,
void *ptr)
{
struct nsim_fib_data *data = container_of(nb, struct nsim_fib_data,
fib_nb);
struct fib_notifier_info *info = ptr;
int err = 0;
switch (event) {
case FIB_EVENT_RULE_ADD: /* fall through */
case FIB_EVENT_RULE_DEL:
err = nsim_fib_rule_event(data, info,
event == FIB_EVENT_RULE_ADD);
err = nsim_fib_rule_event(info, event == FIB_EVENT_RULE_ADD);
break;
case FIB_EVENT_ENTRY_ADD: /* fall through */
case FIB_EVENT_ENTRY_DEL:
err = nsim_fib_event(data, info,
event == FIB_EVENT_ENTRY_ADD);
err = nsim_fib_event(info, event == FIB_EVENT_ENTRY_ADD);
break;
}
@ -201,23 +199,30 @@ static int nsim_fib_event_nb(struct notifier_block *nb, unsigned long event,
/* inconsistent dump, trying again */
static void nsim_fib_dump_inconsistent(struct notifier_block *nb)
{
struct nsim_fib_data *data = container_of(nb, struct nsim_fib_data,
fib_nb);
struct nsim_fib_data *data;
struct net *net;
data->ipv4.fib.num = 0ULL;
data->ipv4.rules.num = 0ULL;
data->ipv6.fib.num = 0ULL;
data->ipv6.rules.num = 0ULL;
rcu_read_lock();
for_each_net_rcu(net) {
data = net_generic(net, nsim_fib_net_id);
data->ipv4.fib.num = 0ULL;
data->ipv4.rules.num = 0ULL;
data->ipv6.fib.num = 0ULL;
data->ipv6.rules.num = 0ULL;
}
rcu_read_unlock();
}
struct nsim_fib_data *nsim_fib_create(void)
{
struct nsim_fib_data *data;
int err;
static struct notifier_block nsim_fib_nb = {
.notifier_call = nsim_fib_event_nb,
};
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return ERR_PTR(-ENOMEM);
/* Initialize per network namespace state */
static int __net_init nsim_fib_netns_init(struct net *net)
{
struct nsim_fib_data *data = net_generic(net, nsim_fib_net_id);
data->ipv4.fib.max = (u64)-1;
data->ipv4.rules.max = (u64)-1;
@ -225,22 +230,37 @@ struct nsim_fib_data *nsim_fib_create(void)
data->ipv6.fib.max = (u64)-1;
data->ipv6.rules.max = (u64)-1;
data->fib_nb.notifier_call = nsim_fib_event_nb;
err = register_fib_notifier(&data->fib_nb, nsim_fib_dump_inconsistent);
if (err) {
return 0;
}
static struct pernet_operations nsim_fib_net_ops = {
.init = nsim_fib_netns_init,
.id = &nsim_fib_net_id,
.size = sizeof(struct nsim_fib_data),
};
void nsim_fib_exit(void)
{
unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
}
int nsim_fib_init(void)
{
int err;
err = register_pernet_subsys(&nsim_fib_net_ops);
if (err < 0) {
pr_err("Failed to register pernet subsystem\n");
goto err_out;
}
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
pr_err("Failed to register fib notifier\n");
goto err_out;
}
return data;
err_out:
kfree(data);
return ERR_PTR(err);
}
void nsim_fib_destroy(struct nsim_fib_data *data)
{
unregister_fib_notifier(&data->fib_nb);
kfree(data);
return err;
}
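
The netdevsim hunks above move the FIB accounting from a per-device allocation into per-network-namespace storage: register_pernet_subsys() with an .id/.size pair makes the core allocate the state for every namespace, and net_generic() then looks it up from a struct net pointer alone. A minimal sketch of that mechanism under hypothetical names, not the netdevsim code itself:

#include <linux/module.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>

struct example_net_data {
        u64 counter;
};

static unsigned int example_net_id;

/* Runs once per network namespace, including init_net. */
static int __net_init example_netns_init(struct net *net)
{
        struct example_net_data *data = net_generic(net, example_net_id);

        data->counter = 0;
        return 0;
}

static struct pernet_operations example_net_ops = {
        .init = example_netns_init,
        .id   = &example_net_id,
        .size = sizeof(struct example_net_data),
};

static int __init example_init(void)
{
        /* The core allocates .size bytes for each namespace and records
         * the slot index in .id, so later lookups only need the net pointer.
         */
        return register_pernet_subsys(&example_net_ops);
}

static void __exit example_exit(void)
{
        unregister_pernet_subsys(&example_net_ops);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");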


@ -357,12 +357,18 @@ static int __init nsim_module_init(void)
if (err)
goto err_dev_exit;
err = rtnl_link_register(&nsim_link_ops);
err = nsim_fib_init();
if (err)
goto err_bus_exit;
err = rtnl_link_register(&nsim_link_ops);
if (err)
goto err_fib_exit;
return 0;
err_fib_exit:
nsim_fib_exit();
err_bus_exit:
nsim_bus_exit();
err_dev_exit:
@ -373,6 +379,7 @@ err_dev_exit:
static void __exit nsim_module_exit(void)
{
rtnl_link_unregister(&nsim_link_ops);
nsim_fib_exit();
nsim_bus_exit();
nsim_dev_exit();
}


@ -169,12 +169,10 @@ int nsim_dev_port_add(struct nsim_bus_dev *nsim_bus_dev,
int nsim_dev_port_del(struct nsim_bus_dev *nsim_bus_dev,
unsigned int port_index);
struct nsim_fib_data *nsim_fib_create(void);
void nsim_fib_destroy(struct nsim_fib_data *fib_data);
u64 nsim_fib_get_val(struct nsim_fib_data *fib_data,
enum nsim_resource_id res_id, bool max);
int nsim_fib_set_max(struct nsim_fib_data *fib_data,
enum nsim_resource_id res_id, u64 val,
int nsim_fib_init(void);
void nsim_fib_exit(void);
u64 nsim_fib_get_val(struct net *net, enum nsim_resource_id res_id, bool max);
int nsim_fib_set_max(struct net *net, enum nsim_resource_id res_id, u64 val,
struct netlink_ext_ack *extack);
#if IS_ENABLED(CONFIG_XFRM_OFFLOAD)


@ -257,36 +257,20 @@ static int at803x_config_init(struct phy_device *phydev)
* after HW reset: RX delay enabled and TX delay disabled
* after SW reset: RX delay enabled, while TX delay retains the
* value before reset.
*
* So let's first disable the RX and TX delays in PHY and enable
* them based on the mode selected (this also takes care of RGMII
* mode where we expect delays to be disabled)
*/
ret = at803x_disable_rx_delay(phydev);
if (ret < 0)
return ret;
ret = at803x_disable_tx_delay(phydev);
if (ret < 0)
return ret;
if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) {
/* If RGMII_ID or RGMII_RXID are specified enable RX delay,
* otherwise keep it disabled
*/
phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
ret = at803x_enable_rx_delay(phydev);
if (ret < 0)
return ret;
}
else
ret = at803x_disable_rx_delay(phydev);
if (ret < 0)
return ret;
if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) {
/* If RGMII_ID or RGMII_TXID are specified enable TX delay,
* otherwise keep it disabled
*/
phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
ret = at803x_enable_tx_delay(phydev);
}
else
ret = at803x_disable_tx_delay(phydev);
return ret;
}
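
With the +/- markers stripped, the at803x hunk above is hard to follow; the resulting behaviour is a simple decision table keyed on the RGMII variant: RX delay enabled for RGMII_ID/RGMII_RXID, TX delay enabled for RGMII_ID/RGMII_TXID, both disabled for plain RGMII. A hedged restatement of that logic (the at803x_*_delay helpers come from the driver above, the wrapper name is made up):

#include <linux/phy.h>

static int example_config_rgmii_delays(struct phy_device *phydev)
{
        int ret;

        /* RX delay only for the modes that request it. */
        if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
            phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
                ret = at803x_enable_rx_delay(phydev);
        else
                ret = at803x_disable_rx_delay(phydev);
        if (ret < 0)
                return ret;

        /* TX delay likewise. */
        if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
            phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
                ret = at803x_enable_tx_delay(phydev);
        else
                ret = at803x_disable_tx_delay(phydev);

        return ret;
}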


@ -219,6 +219,20 @@ int genphy_c45_read_link(struct phy_device *phydev)
int val, devad;
bool link = true;
if (phydev->c45_ids.devices_in_package & MDIO_DEVS_AN) {
val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1);
if (val < 0)
return val;
/* Autoneg is being started, therefore disregard current
* link status and report link as down.
*/
if (val & MDIO_AN_CTRL1_RESTART) {
phydev->link = 0;
return 0;
}
}
while (mmd_mask && link) {
devad = __ffs(mmd_mask);
mmd_mask &= ~BIT(devad);


@ -1752,7 +1752,17 @@ EXPORT_SYMBOL(genphy_aneg_done);
*/
int genphy_update_link(struct phy_device *phydev)
{
int status;
int status = 0, bmcr;
bmcr = phy_read(phydev, MII_BMCR);
if (bmcr < 0)
return bmcr;
/* Autoneg is being started, therefore disregard BMSR value and
* report link as down.
*/
if (bmcr & BMCR_ANRESTART)
goto done;
/* The link state is latched low so that momentary link
* drops can be detected. Do not double-read the status
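
Both PHY hunks above enforce the same rule: while an autonegotiation restart is still pending, the latched status registers cannot be trusted, so the link is reported as down instead. A condensed clause-22 sketch of that check (simplified; the real genphy_update_link() continues on to read the latched BMSR afterwards):

#include <linux/mii.h>
#include <linux/phy.h>

static int example_update_link(struct phy_device *phydev)
{
        int bmcr = phy_read(phydev, MII_BMCR);

        if (bmcr < 0)
                return bmcr;

        /* Autoneg restart still in progress: report link down for now. */
        if (bmcr & BMCR_ANRESTART) {
                phydev->link = 0;
                return 0;
        }

        /* ... normal latched-BMSR handling would follow here ... */
        return 0;
}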


@ -1004,6 +1004,8 @@ static void __team_compute_features(struct team *team)
team->dev->vlan_features = vlan_features;
team->dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_STAG_TX |
NETIF_F_GSO_UDP_L4;
team->dev->hard_header_len = max_hard_header_len;


@ -163,7 +163,8 @@ static int cx82310_bind(struct usbnet *dev, struct usb_interface *intf)
}
if (!timeout) {
dev_err(&udev->dev, "firmware not ready in time\n");
return -ETIMEDOUT;
ret = -ETIMEDOUT;
goto err;
}
/* enable ethernet mode (?) */


@ -113,16 +113,16 @@ kalmia_init_and_get_ethernet_addr(struct usbnet *dev, u8 *ethernet_addr)
status = kalmia_send_init_packet(dev, usb_buf, ARRAY_SIZE(init_msg_1),
usb_buf, 24);
if (status != 0)
return status;
goto out;
memcpy(usb_buf, init_msg_2, 12);
status = kalmia_send_init_packet(dev, usb_buf, ARRAY_SIZE(init_msg_2),
usb_buf, 28);
if (status != 0)
return status;
goto out;
memcpy(ethernet_addr, usb_buf + 10, ETH_ALEN);
out:
kfree(usb_buf);
return status;
}


@ -3792,7 +3792,7 @@ static int lan78xx_probe(struct usb_interface *intf,
ret = register_netdev(netdev);
if (ret != 0) {
netif_err(dev, probe, netdev, "couldn't register the device\n");
goto out3;
goto out4;
}
usb_set_intfdata(intf, dev);
@ -3807,12 +3807,14 @@ static int lan78xx_probe(struct usb_interface *intf,
ret = lan78xx_phy_init(dev);
if (ret < 0)
goto out4;
goto out5;
return 0;
out4:
out5:
unregister_netdev(netdev);
out4:
usb_free_urb(dev->urb_intr);
out3:
lan78xx_unbind(dev, intf);
out2:


@ -351,13 +351,15 @@ int i2400m_barker_db_init(const char *_options)
}
result = i2400m_barker_db_add(barker);
if (result < 0)
goto error_add;
goto error_parse_add;
}
kfree(options_orig);
}
return 0;
error_parse_add:
error_parse:
kfree(options_orig);
error_add:
kfree(i2400m_barker_db);
return result;
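
The cx82310, kalmia, lan78xx and i2400m hunks above are all the same class of bug: an early return, or a goto to the wrong label, skipping a kfree()/unregister on the error path. The usual cure is a single unwind ladder at the end of the function, where each label undoes exactly what has already succeeded. A generic sketch (the example_* helpers are placeholders, not any of those drivers):

#include <linux/slab.h>

int example_register_a(void *buf);      /* hypothetical step A */
int example_register_b(void *buf);      /* hypothetical step B */
void example_unregister_a(void *buf);

static int example_setup(void)
{
        void *buf;
        int err;

        buf = kmalloc(64, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        err = example_register_a(buf);
        if (err)
                goto err_free;

        err = example_register_b(buf);
        if (err)
                goto err_unregister_a;

        return 0;

err_unregister_a:
        example_unregister_a(buf);
err_free:
        kfree(buf);
        return err;
}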


@ -925,6 +925,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
skb_shinfo(skb)->nr_frags = MAX_SKB_FRAGS;
nskb = xenvif_alloc_skb(0);
if (unlikely(nskb == NULL)) {
skb_shinfo(skb)->nr_frags = 0;
kfree_skb(skb);
xenvif_tx_err(queue, &txreq, extra_count, idx);
if (net_ratelimit())
@ -940,6 +941,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
/* Failure in xenvif_set_skb_gso is fatal. */
skb_shinfo(skb)->nr_frags = 0;
kfree_skb(skb);
kfree_skb(nskb);
break;


@ -629,6 +629,7 @@ struct qeth_seqno {
struct qeth_reply {
struct list_head list;
struct completion received;
spinlock_t lock;
int (*callback)(struct qeth_card *, struct qeth_reply *,
unsigned long);
u32 seqno;


@ -544,6 +544,7 @@ static struct qeth_reply *qeth_alloc_reply(struct qeth_card *card)
if (reply) {
refcount_set(&reply->refcnt, 1);
init_completion(&reply->received);
spin_lock_init(&reply->lock);
}
return reply;
}
@ -799,6 +800,13 @@ static void qeth_issue_next_read_cb(struct qeth_card *card,
if (!reply->callback) {
rc = 0;
goto no_callback;
}
spin_lock_irqsave(&reply->lock, flags);
if (reply->rc) {
/* Bail out when the requestor has already left: */
rc = reply->rc;
} else {
if (cmd) {
reply->offset = (u16)((char *)cmd - (char *)iob->data);
@ -807,7 +815,9 @@ static void qeth_issue_next_read_cb(struct qeth_card *card,
rc = reply->callback(card, reply, (unsigned long)iob);
}
}
spin_unlock_irqrestore(&reply->lock, flags);
no_callback:
if (rc <= 0)
qeth_notify_reply(reply, rc);
qeth_put_reply(reply);
@ -1749,6 +1759,16 @@ static int qeth_send_control_data(struct qeth_card *card,
rc = (timeout == -ERESTARTSYS) ? -EINTR : -ETIME;
qeth_dequeue_reply(card, reply);
if (reply_cb) {
/* Wait until the callback for a late reply has completed: */
spin_lock_irq(&reply->lock);
if (rc)
/* Zap any callback that's still pending: */
reply->rc = rc;
spin_unlock_irq(&reply->lock);
}
if (!rc)
rc = reply->rc;
qeth_put_reply(reply);


@ -446,11 +446,11 @@ enum {
};
enum {
MLX5_OPC_MOD_TLS_TIS_STATIC_PARAMS = 0x20,
MLX5_OPC_MOD_TLS_TIS_STATIC_PARAMS = 0x1,
};
enum {
MLX5_OPC_MOD_TLS_TIS_PROGRESS_PARAMS = 0x20,
MLX5_OPC_MOD_TLS_TIS_PROGRESS_PARAMS = 0x1,
};
enum {


@ -10054,9 +10054,8 @@ struct mlx5_ifc_tls_static_params_bits {
};
struct mlx5_ifc_tls_progress_params_bits {
u8 valid[0x1];
u8 reserved_at_1[0x7];
u8 pd[0x18];
u8 reserved_at_0[0x8];
u8 tisn[0x18];
u8 next_record_tcp_sn[0x20];


@ -1374,6 +1374,14 @@ static inline void skb_copy_hash(struct sk_buff *to, const struct sk_buff *from)
to->l4_hash = from->l4_hash;
};
static inline void skb_copy_decrypted(struct sk_buff *to,
const struct sk_buff *from)
{
#ifdef CONFIG_TLS_DEVICE
to->decrypted = from->decrypted;
#endif
}
#ifdef NET_SKBUFF_DATA_USES_OFFSET
static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
{


@ -292,6 +292,9 @@ struct ucred {
#define MSG_BATCH 0x40000 /* sendmmsg(): more messages coming */
#define MSG_EOF MSG_FIN
#define MSG_NO_SHARED_FRAGS 0x80000 /* sendpage() internal : page frags are not shared */
#define MSG_SENDPAGE_DECRYPTED 0x100000 /* sendpage() internal : page may carry
* plain text and require encryption
*/
#define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */
#define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */


@ -278,6 +278,7 @@ struct hci_dev {
__u16 conn_info_min_age;
__u16 conn_info_max_age;
__u16 auth_payload_timeout;
__u8 min_enc_key_size;
__u8 ssp_debug_mode;
__u8 hw_error_code;
__u32 clock;


@ -171,7 +171,7 @@ int inet_frag_queue_insert(struct inet_frag_queue *q, struct sk_buff *skb,
void *inet_frag_reasm_prepare(struct inet_frag_queue *q, struct sk_buff *skb,
struct sk_buff *parent);
void inet_frag_reasm_finish(struct inet_frag_queue *q, struct sk_buff *head,
void *reasm_data);
void *reasm_data, bool try_coalesce);
struct sk_buff *inet_frag_pull_head(struct inet_frag_queue *q);
#endif


@ -61,7 +61,6 @@ struct net {
spinlock_t rules_mod_lock;
u32 hash_mix;
atomic64_t cookie_gen;
struct list_head list; /* list of network namespaces */
struct list_head exit_list; /* To linked to call pernet exit


@ -421,8 +421,7 @@ struct nft_set {
unsigned char *udata;
/* runtime data below here */
const struct nft_set_ops *ops ____cacheline_aligned;
u16 flags:13,
bound:1,
u16 flags:14,
genmask:2;
u8 klen;
u8 dlen;
@ -1348,12 +1347,15 @@ struct nft_trans_rule {
struct nft_trans_set {
struct nft_set *set;
u32 set_id;
bool bound;
};
#define nft_trans_set(trans) \
(((struct nft_trans_set *)trans->data)->set)
#define nft_trans_set_id(trans) \
(((struct nft_trans_set *)trans->data)->set_id)
#define nft_trans_set_bound(trans) \
(((struct nft_trans_set *)trans->data)->bound)
struct nft_trans_chain {
bool update;
@ -1384,12 +1386,15 @@ struct nft_trans_table {
struct nft_trans_elem {
struct nft_set *set;
struct nft_set_elem elem;
bool bound;
};
#define nft_trans_elem_set(trans) \
(((struct nft_trans_elem *)trans->data)->set)
#define nft_trans_elem(trans) \
(((struct nft_trans_elem *)trans->data)->elem)
#define nft_trans_elem_set_bound(trans) \
(((struct nft_trans_elem *)trans->data)->bound)
struct nft_trans_obj {
struct nft_object *obj;


@ -73,4 +73,6 @@ int nft_flow_rule_offload_commit(struct net *net);
(__reg)->key = __key; \
memset(&(__reg)->mask, 0xff, (__reg)->len);
int nft_chain_offload_priority(struct nft_base_chain *basechain);
#endif


@ -684,9 +684,8 @@ static inline int nlmsg_parse(const struct nlmsghdr *nlh, int hdrlen,
const struct nla_policy *policy,
struct netlink_ext_ack *extack)
{
return __nla_parse(tb, maxtype, nlmsg_attrdata(nlh, hdrlen),
nlmsg_attrlen(nlh, hdrlen), policy,
NL_VALIDATE_STRICT, extack);
return __nlmsg_parse(nlh, hdrlen, tb, maxtype, policy,
NL_VALIDATE_STRICT, extack);
}
/**


@ -646,7 +646,7 @@ tc_cls_common_offload_init(struct flow_cls_common_offload *cls_common,
{
cls_common->chain_index = tp->chain->index;
cls_common->protocol = tp->protocol;
cls_common->prio = tp->prio;
cls_common->prio = tp->prio >> 16;
if (tc_skip_sw(flags) || flags & TCA_CLS_FLAGS_VERBOSE)
cls_common->extack = extack;
}
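
The tc_cls_common_offload_init() hunk above stops feeding the raw 32-bit tcf_proto priority to drivers: the user-visible major number lives in the upper 16 bits, so hardware offload now receives prio >> 16. A later nf_tables hunk maps basechain priorities onto the same 16-bit field and rejects values outside 1..USHRT_MAX. The bit layout, as a trivial sketch:

#include <linux/types.h>

/* Sketch: a classifier priority packs the user-visible major number in
 * the top 16 bits of the 32-bit value; hardware wants just the major.
 */
static inline u16 example_hw_prio(u32 tp_prio)
{
        return tp_prio >> 16;
}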


@ -2482,6 +2482,7 @@ static inline bool sk_fullsock(const struct sock *sk)
/* Checks if this SKB belongs to an HW offloaded socket
* and whether any SW fallbacks are required based on dev.
* Check decrypted mark in case skb_orphan() cleared socket.
*/
static inline struct sk_buff *sk_validate_xmit_skb(struct sk_buff *skb,
struct net_device *dev)
@ -2489,8 +2490,15 @@ static inline struct sk_buff *sk_validate_xmit_skb(struct sk_buff *skb,
#ifdef CONFIG_SOCK_VALIDATE_XMIT
struct sock *sk = skb->sk;
if (sk && sk_fullsock(sk) && sk->sk_validate_xmit_skb)
if (sk && sk_fullsock(sk) && sk->sk_validate_xmit_skb) {
skb = sk->sk_validate_xmit_skb(sk, dev, skb);
#ifdef CONFIG_TLS_DEVICE
} else if (unlikely(skb->decrypted)) {
pr_warn_ratelimited("unencrypted skb with no associated socket - dropping\n");
kfree_skb(skb);
skb = NULL;
#endif
}
#endif
return skb;


@ -498,10 +498,10 @@ rxrpc_tx_points;
#define E_(a, b) { a, b }
TRACE_EVENT(rxrpc_local,
TP_PROTO(struct rxrpc_local *local, enum rxrpc_local_trace op,
TP_PROTO(unsigned int local_debug_id, enum rxrpc_local_trace op,
int usage, const void *where),
TP_ARGS(local, op, usage, where),
TP_ARGS(local_debug_id, op, usage, where),
TP_STRUCT__entry(
__field(unsigned int, local )
@ -511,7 +511,7 @@ TRACE_EVENT(rxrpc_local,
),
TP_fast_assign(
__entry->local = local->debug_id;
__entry->local = local_debug_id;
__entry->op = op;
__entry->usage = usage;
__entry->where = where;


@ -1466,8 +1466,8 @@ union bpf_attr {
* If no cookie has been set yet, generate a new cookie. Once
* generated, the socket cookie remains stable for the life of the
* socket. This helper can be useful for monitoring per socket
* networking traffic statistics as it provides a unique socket
* identifier per namespace.
* networking traffic statistics as it provides a global socket
* identifier that can be assumed unique.
* Return
* A 8-byte long non-decreasing number on success, or 0 if the
* socket field is missing inside *skb*.


@ -2303,7 +2303,7 @@ __batadv_mcast_flags_dump(struct sk_buff *msg, u32 portid,
while (bucket_tmp < hash->size) {
if (batadv_mcast_flags_dump_bucket(msg, portid, cb, hash,
*bucket, &idx_tmp))
bucket_tmp, &idx_tmp))
break;
bucket_tmp++;
@ -2420,8 +2420,10 @@ void batadv_mcast_purge_orig(struct batadv_orig_node *orig)
batadv_mcast_want_unsnoop_update(bat_priv, orig, BATADV_NO_FLAGS);
batadv_mcast_want_ipv4_update(bat_priv, orig, BATADV_NO_FLAGS);
batadv_mcast_want_ipv6_update(bat_priv, orig, BATADV_NO_FLAGS);
batadv_mcast_want_rtr4_update(bat_priv, orig, BATADV_NO_FLAGS);
batadv_mcast_want_rtr6_update(bat_priv, orig, BATADV_NO_FLAGS);
batadv_mcast_want_rtr4_update(bat_priv, orig,
BATADV_MCAST_WANT_NO_RTR4);
batadv_mcast_want_rtr6_update(bat_priv, orig,
BATADV_MCAST_WANT_NO_RTR6);
spin_unlock_bh(&orig->mcast_handler_lock);
}


@ -3202,6 +3202,7 @@ struct hci_dev *hci_alloc_dev(void)
hdev->conn_info_min_age = DEFAULT_CONN_INFO_MIN_AGE;
hdev->conn_info_max_age = DEFAULT_CONN_INFO_MAX_AGE;
hdev->auth_payload_timeout = DEFAULT_AUTH_PAYLOAD_TIMEOUT;
hdev->min_enc_key_size = HCI_MIN_ENC_KEY_SIZE;
mutex_init(&hdev->lock);
mutex_init(&hdev->req_lock);


@ -433,6 +433,35 @@ static int auto_accept_delay_set(void *data, u64 val)
return 0;
}
static int min_encrypt_key_size_set(void *data, u64 val)
{
struct hci_dev *hdev = data;
if (val < 1 || val > 16)
return -EINVAL;
hci_dev_lock(hdev);
hdev->min_enc_key_size = val;
hci_dev_unlock(hdev);
return 0;
}
static int min_encrypt_key_size_get(void *data, u64 *val)
{
struct hci_dev *hdev = data;
hci_dev_lock(hdev);
*val = hdev->min_enc_key_size;
hci_dev_unlock(hdev);
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(min_encrypt_key_size_fops,
min_encrypt_key_size_get,
min_encrypt_key_size_set, "%llu\n");
static int auto_accept_delay_get(void *data, u64 *val)
{
struct hci_dev *hdev = data;
@ -545,6 +574,8 @@ void hci_debugfs_create_bredr(struct hci_dev *hdev)
if (lmp_ssp_capable(hdev)) {
debugfs_create_file("ssp_debug_mode", 0444, hdev->debugfs,
hdev, &ssp_debug_mode_fops);
debugfs_create_file("min_encrypt_key_size", 0644, hdev->debugfs,
hdev, &min_encrypt_key_size_fops);
debugfs_create_file("auto_accept_delay", 0644, hdev->debugfs,
hdev, &auto_accept_delay_fops);
}


@ -101,6 +101,7 @@ static int hidp_send_message(struct hidp_session *session, struct socket *sock,
{
struct sk_buff *skb;
struct sock *sk = sock->sk;
int ret;
BT_DBG("session %p data %p size %d", session, data, size);
@ -114,13 +115,17 @@ static int hidp_send_message(struct hidp_session *session, struct socket *sock,
}
skb_put_u8(skb, hdr);
if (data && size > 0)
if (data && size > 0) {
skb_put_data(skb, data, size);
ret = size;
} else {
ret = 0;
}
skb_queue_tail(transmit, skb);
wake_up_interruptible(sk_sleep(sk));
return 0;
return ret;
}
static int hidp_send_ctrl_message(struct hidp_session *session,


@ -1361,7 +1361,7 @@ static bool l2cap_check_enc_key_size(struct hci_conn *hcon)
* actually encrypted before enforcing a key size.
*/
return (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags) ||
hcon->enc_key_size >= HCI_MIN_ENC_KEY_SIZE);
hcon->enc_key_size >= hcon->hdev->min_enc_key_size);
}
static void l2cap_do_start(struct l2cap_chan *chan)


@ -1992,6 +1992,19 @@ void skb_set_owner_w(struct sk_buff *skb, struct sock *sk)
}
EXPORT_SYMBOL(skb_set_owner_w);
static bool can_skb_orphan_partial(const struct sk_buff *skb)
{
#ifdef CONFIG_TLS_DEVICE
/* Drivers depend on in-order delivery for crypto offload,
* partial orphan breaks out-of-order-OK logic.
*/
if (skb->decrypted)
return false;
#endif
return (skb->destructor == sock_wfree ||
(IS_ENABLED(CONFIG_INET) && skb->destructor == tcp_wfree));
}
/* This helper is used by netem, as it can hold packets in its
* delay queue. We want to allow the owner socket to send more
* packets, as if they were already TX completed by a typical driver.
@ -2003,11 +2016,7 @@ void skb_orphan_partial(struct sk_buff *skb)
if (skb_is_tcp_pure_ack(skb))
return;
if (skb->destructor == sock_wfree
#ifdef CONFIG_INET
|| skb->destructor == tcp_wfree
#endif
) {
if (can_skb_orphan_partial(skb)) {
struct sock *sk = skb->sk;
if (refcount_inc_not_zero(&sk->sk_refcnt)) {


@ -19,6 +19,7 @@ static const struct sock_diag_handler *sock_diag_handlers[AF_MAX];
static int (*inet_rcv_compat)(struct sk_buff *skb, struct nlmsghdr *nlh);
static DEFINE_MUTEX(sock_diag_table_mutex);
static struct workqueue_struct *broadcast_wq;
static atomic64_t cookie_gen;
u64 sock_gen_cookie(struct sock *sk)
{
@ -27,7 +28,7 @@ u64 sock_gen_cookie(struct sock *sk)
if (res)
return res;
res = atomic64_inc_return(&sock_net(sk)->cookie_gen);
res = atomic64_inc_return(&cookie_gen);
atomic64_cmpxchg(&sk->sk_cookie, 0, res);
}
}
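
The sock_diag hunk above replaces the per-namespace cookie_gen counter (removed from struct net in an earlier hunk) with one global atomic counter, so a socket cookie is unique across namespaces rather than only within one; the bpf_get_socket_cookie() documentation hunk reflects the same change. The lazily-assigned cookie pattern in isolation (a sketch, not the exact sock_gen_cookie() body):

#include <linux/atomic.h>
#include <linux/types.h>

static atomic64_t example_cookie_gen;

/* Sketch: hand out a stable, globally unique cookie on first use.
 * cmpxchg makes concurrent first callers all return the same winner.
 */
static u64 example_get_cookie(atomic64_t *cookie)
{
        u64 res = atomic64_read(cookie);

        if (res)
                return res;

        res = atomic64_inc_return(&example_cookie_gen);
        atomic64_cmpxchg(cookie, 0, res);
        return atomic64_read(cookie);
}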


@ -153,6 +153,9 @@ static void dsa_switch_mdb_add_bitmap(struct dsa_switch *ds,
{
int port;
if (!ds->ops->port_mdb_add)
return;
for_each_set_bit(port, bitmap, ds->num_ports)
ds->ops->port_mdb_add(ds, port, mdb);
}


@ -170,7 +170,7 @@ static int lowpan_frag_reasm(struct lowpan_frag_queue *fq, struct sk_buff *skb,
reasm_data = inet_frag_reasm_prepare(&fq->q, skb, prev_tail);
if (!reasm_data)
goto out_oom;
inet_frag_reasm_finish(&fq->q, skb, reasm_data);
inet_frag_reasm_finish(&fq->q, skb, reasm_data, false);
skb->dev = ldev;
skb->tstamp = fq->q.stamp;


@ -475,11 +475,12 @@ void *inet_frag_reasm_prepare(struct inet_frag_queue *q, struct sk_buff *skb,
EXPORT_SYMBOL(inet_frag_reasm_prepare);
void inet_frag_reasm_finish(struct inet_frag_queue *q, struct sk_buff *head,
void *reasm_data)
void *reasm_data, bool try_coalesce)
{
struct sk_buff **nextp = (struct sk_buff **)reasm_data;
struct rb_node *rbn;
struct sk_buff *fp;
int sum_truesize;
skb_push(head, head->data - skb_network_header(head));
@ -487,25 +488,41 @@ void inet_frag_reasm_finish(struct inet_frag_queue *q, struct sk_buff *head,
fp = FRAG_CB(head)->next_frag;
rbn = rb_next(&head->rbnode);
rb_erase(&head->rbnode, &q->rb_fragments);
sum_truesize = head->truesize;
while (rbn || fp) {
/* fp points to the next sk_buff in the current run;
* rbn points to the next run.
*/
/* Go through the current run. */
while (fp) {
*nextp = fp;
nextp = &fp->next;
fp->prev = NULL;
memset(&fp->rbnode, 0, sizeof(fp->rbnode));
fp->sk = NULL;
head->data_len += fp->len;
head->len += fp->len;
struct sk_buff *next_frag = FRAG_CB(fp)->next_frag;
bool stolen;
int delta;
sum_truesize += fp->truesize;
if (head->ip_summed != fp->ip_summed)
head->ip_summed = CHECKSUM_NONE;
else if (head->ip_summed == CHECKSUM_COMPLETE)
head->csum = csum_add(head->csum, fp->csum);
head->truesize += fp->truesize;
fp = FRAG_CB(fp)->next_frag;
if (try_coalesce && skb_try_coalesce(head, fp, &stolen,
&delta)) {
kfree_skb_partial(fp, stolen);
} else {
fp->prev = NULL;
memset(&fp->rbnode, 0, sizeof(fp->rbnode));
fp->sk = NULL;
head->data_len += fp->len;
head->len += fp->len;
head->truesize += fp->truesize;
*nextp = fp;
nextp = &fp->next;
}
fp = next_frag;
}
/* Move to the next run. */
if (rbn) {
@ -516,7 +533,7 @@ void inet_frag_reasm_finish(struct inet_frag_queue *q, struct sk_buff *head,
rbn = rbnext;
}
}
sub_frag_mem_limit(q->fqdir, head->truesize);
sub_frag_mem_limit(q->fqdir, sum_truesize);
*nextp = NULL;
skb_mark_not_on_list(head);


@ -393,6 +393,11 @@ err:
return err;
}
static bool ip_frag_coalesce_ok(const struct ipq *qp)
{
return qp->q.key.v4.user == IP_DEFRAG_LOCAL_DELIVER;
}
/* Build a new IP datagram from all its fragments. */
static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
struct sk_buff *prev_tail, struct net_device *dev)
@ -421,7 +426,8 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
if (len > 65535)
goto out_oversize;
inet_frag_reasm_finish(&qp->q, skb, reasm_data);
inet_frag_reasm_finish(&qp->q, skb, reasm_data,
ip_frag_coalesce_ok(qp));
skb->dev = dev;
IPCB(skb)->frag_max_size = max(qp->max_df_size, qp->q.max_size);
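
The reassembly hunks above add an opt-in try_coalesce path: for IPv4 fragments headed for local delivery and for IPv6 reassembly, the payload is merged into the head skb with skb_try_coalesce() instead of being chained on a long frag_list (the SKB-coalescing performance fix listed as item 3 in the merge description), while the netfilter and 6lowpan callers keep the old behaviour by passing false. A hedged sketch of how the helper pair is used:

#include <linux/skbuff.h>

/* Sketch: fold "from" into "to" where possible, otherwise only do the
 * length/truesize bookkeeping (the actual chaining onto the frag_list
 * is omitted here).
 */
static void example_absorb_fragment(struct sk_buff *to, struct sk_buff *from)
{
        bool stolen;
        int delta;

        if (skb_try_coalesce(to, from, &stolen, &delta)) {
                /* Payload now lives in "to"; release what remains of "from". */
                kfree_skb_partial(from, stolen);
        } else {
                to->data_len += from->len;
                to->len += from->len;
                to->truesize += from->truesize;
        }
}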


@ -984,6 +984,9 @@ new_segment:
if (!skb)
goto wait_for_memory;
#ifdef CONFIG_TLS_DEVICE
skb->decrypted = !!(flags & MSG_SENDPAGE_DECRYPTED);
#endif
skb_entail(sk, skb);
copy = size_goal;
}


@ -398,10 +398,14 @@ more_data:
static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
{
struct sk_msg tmp, *msg_tx = NULL;
int flags = msg->msg_flags | MSG_NO_SHARED_FRAGS;
int copied = 0, err = 0;
struct sk_psock *psock;
long timeo;
int flags;
/* Don't let internal do_tcp_sendpages() flags through */
flags = (msg->msg_flags & ~MSG_SENDPAGE_DECRYPTED);
flags |= MSG_NO_SHARED_FRAGS;
psock = sk_psock_get(sk);
if (unlikely(!psock))


@ -1320,6 +1320,7 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
buff = sk_stream_alloc_skb(sk, nsize, gfp, true);
if (!buff)
return -ENOMEM; /* We'll just try again later. */
skb_copy_decrypted(buff, skb);
sk->sk_wmem_queued += buff->truesize;
sk_mem_charge(sk, buff->truesize);
@ -1874,6 +1875,7 @@ static int tso_fragment(struct sock *sk, struct sk_buff *skb, unsigned int len,
buff = sk_stream_alloc_skb(sk, 0, gfp, true);
if (unlikely(!buff))
return -ENOMEM;
skb_copy_decrypted(buff, skb);
sk->sk_wmem_queued += buff->truesize;
sk_mem_charge(sk, buff->truesize);
@ -2143,6 +2145,7 @@ static int tcp_mtu_probe(struct sock *sk)
sk_mem_charge(sk, nskb->truesize);
skb = tcp_send_head(sk);
skb_copy_decrypted(nskb, skb);
TCP_SKB_CB(nskb)->seq = TCP_SKB_CB(skb)->seq;
TCP_SKB_CB(nskb)->end_seq = TCP_SKB_CB(skb)->seq + probe_size;
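
Several of the hunks above belong to a single kTLS-offload fix: skb->decrypted marks device-offloaded TLS payload, skb_copy_decrypted() keeps that mark alive whenever a TCP write-queue skb is split or probed, do_tcp_sendpages() sets it from the internal MSG_SENDPAGE_DECRYPTED flag, and skb_orphan_partial()/sk_validate_xmit_skb() refuse to let marked-but-unencrypted data reach the wire out of order or without its socket. A hedged sketch of the propagation rule when carving a new skb out of an existing one (not the tcp_fragment() code itself):

#include <net/sock.h>
#include <net/tcp.h>

/* Sketch: a new skb split off a TCP write-queue skb must inherit the
 * decrypted mark, otherwise TLS device-offload validation would treat
 * the tail half as already-encrypted payload.
 */
static struct sk_buff *example_split_alloc(struct sock *sk,
                                           struct sk_buff *orig, gfp_t gfp)
{
        struct sk_buff *buff = sk_stream_alloc_skb(sk, 0, gfp, true);

        if (!buff)
                return NULL;

        skb_copy_decrypted(buff, orig); /* no-op unless CONFIG_TLS_DEVICE */
        return buff;
}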


@ -348,7 +348,7 @@ static int nf_ct_frag6_reasm(struct frag_queue *fq, struct sk_buff *skb,
skb_reset_transport_header(skb);
inet_frag_reasm_finish(&fq->q, skb, reasm_data);
inet_frag_reasm_finish(&fq->q, skb, reasm_data, false);
skb->ignore_df = 1;
skb->dev = dev;


@ -282,7 +282,7 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *skb,
skb_reset_transport_header(skb);
inet_frag_reasm_finish(&fq->q, skb, reasm_data);
inet_frag_reasm_finish(&fq->q, skb, reasm_data, true);
skb->dev = dev;
ipv6_hdr(skb)->payload_len = htons(payload_len);


@ -453,13 +453,12 @@ EXPORT_SYMBOL_GPL(nf_ct_invert_tuple);
* table location, we assume id gets exposed to userspace.
*
* Following nf_conn items do not change throughout lifetime
* of the nf_conn after it has been committed to main hash table:
* of the nf_conn:
*
* 1. nf_conn address
* 2. nf_conn->ext address
* 3. nf_conn->master address (normally NULL)
* 4. tuple
* 5. the associated net namespace
* 2. nf_conn->master address (normally NULL)
* 3. the associated net namespace
* 4. the original direction tuple
*/
u32 nf_ct_get_id(const struct nf_conn *ct)
{
@ -469,9 +468,10 @@ u32 nf_ct_get_id(const struct nf_conn *ct)
net_get_random_once(&ct_id_seed, sizeof(ct_id_seed));
a = (unsigned long)ct;
b = (unsigned long)ct->master ^ net_hash_mix(nf_ct_net(ct));
c = (unsigned long)ct->ext;
d = (unsigned long)siphash(&ct->tuplehash, sizeof(ct->tuplehash),
b = (unsigned long)ct->master;
c = (unsigned long)nf_ct_net(ct);
d = (unsigned long)siphash(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
sizeof(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple),
&ct_id_seed);
#ifdef CONFIG_64BIT
return siphash_4u64((u64)a, (u64)b, (u64)c, (u64)d, &ct_id_seed);
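
The conntrack hunk above recomputes the per-entry id only from values that stay fixed for the lifetime of the entry — the object address, the master pointer, the namespace and the original-direction tuple — instead of the extension-area pointer, which can change when extensions are reallocated. The building blocks are net_get_random_once() for a boot-time secret plus siphash over the invariant inputs; a small generic sketch (the example_* names are placeholders):

#include <linux/cache.h>
#include <linux/net.h>
#include <linux/siphash.h>
#include <net/net_namespace.h>

struct example_entry {
        struct example_entry *master;   /* fixed at creation time */
        u32 key;                        /* fixed at creation time */
};

static u32 example_get_id(const struct example_entry *e, const struct net *net)
{
        static siphash_key_t id_seed __read_mostly;
        u64 a, b, c, d;

        net_get_random_once(&id_seed, sizeof(id_seed));

        a = (unsigned long)e;
        b = (unsigned long)e->master;
        c = (unsigned long)net;
        d = siphash(&e->key, sizeof(e->key), &id_seed);

        return (u32)siphash_4u64(a, b, c, d, &id_seed);
}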


@ -111,15 +111,16 @@ static void flow_offload_fixup_tcp(struct ip_ct_tcp *tcp)
#define NF_FLOWTABLE_TCP_PICKUP_TIMEOUT (120 * HZ)
#define NF_FLOWTABLE_UDP_PICKUP_TIMEOUT (30 * HZ)
static void flow_offload_fixup_ct_state(struct nf_conn *ct)
static inline __s32 nf_flow_timeout_delta(unsigned int timeout)
{
return (__s32)(timeout - (u32)jiffies);
}
static void flow_offload_fixup_ct_timeout(struct nf_conn *ct)
{
const struct nf_conntrack_l4proto *l4proto;
int l4num = nf_ct_protonum(ct);
unsigned int timeout;
int l4num;
l4num = nf_ct_protonum(ct);
if (l4num == IPPROTO_TCP)
flow_offload_fixup_tcp(&ct->proto.tcp);
l4proto = nf_ct_l4proto_find(l4num);
if (!l4proto)
@ -132,7 +133,20 @@ static void flow_offload_fixup_ct_state(struct nf_conn *ct)
else
return;
ct->timeout = nfct_time_stamp + timeout;
if (nf_flow_timeout_delta(ct->timeout) > (__s32)timeout)
ct->timeout = nfct_time_stamp + timeout;
}
static void flow_offload_fixup_ct_state(struct nf_conn *ct)
{
if (nf_ct_protonum(ct) == IPPROTO_TCP)
flow_offload_fixup_tcp(&ct->proto.tcp);
}
static void flow_offload_fixup_ct(struct nf_conn *ct)
{
flow_offload_fixup_ct_state(ct);
flow_offload_fixup_ct_timeout(ct);
}
void flow_offload_free(struct flow_offload *flow)
@ -208,6 +222,11 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)
}
EXPORT_SYMBOL_GPL(flow_offload_add);
static inline bool nf_flow_has_expired(const struct flow_offload *flow)
{
return nf_flow_timeout_delta(flow->timeout) <= 0;
}
static void flow_offload_del(struct nf_flowtable *flow_table,
struct flow_offload *flow)
{
@ -223,6 +242,11 @@ static void flow_offload_del(struct nf_flowtable *flow_table,
e = container_of(flow, struct flow_offload_entry, flow);
clear_bit(IPS_OFFLOAD_BIT, &e->ct->status);
if (nf_flow_has_expired(flow))
flow_offload_fixup_ct(e->ct);
else if (flow->flags & FLOW_OFFLOAD_TEARDOWN)
flow_offload_fixup_ct_timeout(e->ct);
flow_offload_free(flow);
}
@ -298,11 +322,6 @@ nf_flow_table_iterate(struct nf_flowtable *flow_table,
return err;
}
static inline bool nf_flow_has_expired(const struct flow_offload *flow)
{
return (__s32)(flow->timeout - (u32)jiffies) <= 0;
}
static void nf_flow_offload_gc_step(struct flow_offload *flow, void *data)
{
struct nf_flowtable *flow_table = data;
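
The flowtable hunks above pull the wrap-safe jiffies comparison into nf_flow_timeout_delta() and use it both to test flow expiry and to avoid pushing ct->timeout further out than the protocol pickup timeout when a flow expires or is torn down. The signed-difference idiom they rely on, in isolation:

#include <linux/jiffies.h>
#include <linux/types.h>

/* Sketch: wrap-safe "how far in the future is this deadline" check.
 * The unsigned subtraction cast to s32 goes negative once the deadline
 * has passed, even across a jiffies wraparound.
 */
static inline s32 example_timeout_delta(u32 timeout)
{
        return (s32)(timeout - (u32)jiffies);
}

static inline bool example_has_expired(u32 timeout)
{
        return example_timeout_delta(timeout) <= 0;
}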


@ -214,6 +214,25 @@ static bool nf_flow_exceeds_mtu(const struct sk_buff *skb, unsigned int mtu)
return true;
}
static int nf_flow_offload_dst_check(struct dst_entry *dst)
{
if (unlikely(dst_xfrm(dst)))
return dst_check(dst, 0) ? 0 : -1;
return 0;
}
static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
const struct nf_hook_state *state,
struct dst_entry *dst)
{
skb_orphan(skb);
skb_dst_set_noref(skb, dst);
skb->tstamp = 0;
dst_output(state->net, state->sk, skb);
return NF_STOLEN;
}
unsigned int
nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
const struct nf_hook_state *state)
@ -254,6 +273,11 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
if (nf_flow_state_check(flow, ip_hdr(skb)->protocol, skb, thoff))
return NF_ACCEPT;
if (nf_flow_offload_dst_check(&rt->dst)) {
flow_offload_teardown(flow);
return NF_ACCEPT;
}
if (nf_flow_nat_ip(flow, skb, thoff, dir) < 0)
return NF_DROP;
@ -261,6 +285,13 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
iph = ip_hdr(skb);
ip_decrease_ttl(iph);
if (unlikely(dst_xfrm(&rt->dst))) {
memset(skb->cb, 0, sizeof(struct inet_skb_parm));
IPCB(skb)->iif = skb->dev->ifindex;
IPCB(skb)->flags = IPSKB_FORWARDED;
return nf_flow_xmit_xfrm(skb, state, &rt->dst);
}
skb->dev = outdev;
nexthop = rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr);
skb_dst_set_noref(skb, &rt->dst);
@ -467,6 +498,11 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
sizeof(*ip6h)))
return NF_ACCEPT;
if (nf_flow_offload_dst_check(&rt->dst)) {
flow_offload_teardown(flow);
return NF_ACCEPT;
}
if (skb_try_make_writable(skb, sizeof(*ip6h)))
return NF_DROP;
@ -477,6 +513,13 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
ip6h = ipv6_hdr(skb);
ip6h->hop_limit--;
if (unlikely(dst_xfrm(&rt->dst))) {
memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
IP6CB(skb)->iif = skb->dev->ifindex;
IP6CB(skb)->flags = IP6SKB_FORWARDED;
return nf_flow_xmit_xfrm(skb, state, &rt->dst);
}
skb->dev = outdev;
nexthop = rt6_nexthop(rt, &flow->tuplehash[!dir].tuple.src_v6);
skb_dst_set_noref(skb, &rt->dst);


@ -138,9 +138,14 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
return;
list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
if (trans->msg_type == NFT_MSG_NEWSET &&
nft_trans_set(trans) == set) {
set->bound = true;
switch (trans->msg_type) {
case NFT_MSG_NEWSET:
if (nft_trans_set(trans) == set)
nft_trans_set_bound(trans) = true;
break;
case NFT_MSG_NEWSETELEM:
if (nft_trans_elem_set(trans) == set)
nft_trans_elem_set_bound(trans) = true;
break;
}
}
@ -1662,6 +1667,10 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
chain->flags |= NFT_BASE_CHAIN | flags;
basechain->policy = NF_ACCEPT;
if (chain->flags & NFT_CHAIN_HW_OFFLOAD &&
nft_chain_offload_priority(basechain) < 0)
return -EOPNOTSUPP;
flow_block_init(&basechain->flow_block);
} else {
chain = kzalloc(sizeof(*chain), GFP_KERNEL);
@ -6906,7 +6915,7 @@ static int __nf_tables_abort(struct net *net)
break;
case NFT_MSG_NEWSET:
trans->ctx.table->use--;
if (nft_trans_set(trans)->bound) {
if (nft_trans_set_bound(trans)) {
nft_trans_destroy(trans);
break;
}
@ -6918,7 +6927,7 @@ static int __nf_tables_abort(struct net *net)
nft_trans_destroy(trans);
break;
case NFT_MSG_NEWSETELEM:
if (nft_trans_elem_set(trans)->bound) {
if (nft_trans_elem_set_bound(trans)) {
nft_trans_destroy(trans);
break;
}


@ -103,10 +103,11 @@ void nft_offload_update_dependency(struct nft_offload_ctx *ctx,
}
static void nft_flow_offload_common_init(struct flow_cls_common_offload *common,
__be16 proto,
struct netlink_ext_ack *extack)
__be16 proto, int priority,
struct netlink_ext_ack *extack)
{
common->protocol = proto;
common->prio = priority;
common->extack = extack;
}
@ -124,6 +125,15 @@ static int nft_setup_cb_call(struct nft_base_chain *basechain,
return 0;
}
int nft_chain_offload_priority(struct nft_base_chain *basechain)
{
if (basechain->ops.priority <= 0 ||
basechain->ops.priority > USHRT_MAX)
return -1;
return 0;
}
static int nft_flow_offload_rule(struct nft_trans *trans,
enum flow_cls_command command)
{
@ -142,7 +152,8 @@ static int nft_flow_offload_rule(struct nft_trans *trans,
if (flow)
proto = flow->proto;
nft_flow_offload_common_init(&cls_flow.common, proto, &extack);
nft_flow_offload_common_init(&cls_flow.common, proto,
basechain->ops.priority, &extack);
cls_flow.command = command;
cls_flow.cookie = (unsigned long) rule;
if (flow)

Some files were not shown because too many files have changed in this diff.