Commit Graph

71180 Commits (4988410f8d3a6fa04381072e2406a1d3979ffb95)

Author SHA1 Message Date
Jayaprakash Shanmugam 4988410f8d i40e: Retry AQC GetPhyAbilities to overcome I2CRead hangs
- When the I2C is busy, the PHY reads are delayed.  The firmware will
  return EAGAIN in these cases with an expectation that the SW will
  trigger the reads again
- This patch retries the operation for a maximum period of 500ms
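
A minimal sketch of the bounded-retry pattern described above (the
helper name and status handling are illustrative, not the exact driver
code):

    /* Sketch: retry for up to 500 ms while firmware keeps reporting EAGAIN. */
    unsigned long deadline = jiffies + msecs_to_jiffies(500);
    int status;

    do {
            status = get_phy_abilities(hw, &abilities);  /* hypothetical wrapper */
            if (status != -EAGAIN)
                    break;
            usleep_range(1000, 2000);                    /* give the I2C bus time */
    } while (time_before(jiffies, deadline));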

Signed-off-by: Jayaprakash Shanmugam <jayaprakash.shanmugam@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:32:18 -07:00
Lihong Yang b861fb762a i40e: add check for return from find_first_bit call
The find_first_bit function will return the size passed to it if the
first set bit is not found. This patch adds a check for that case, since
the return value would otherwise be used as an index into an array and
cause an out-of-bounds access.
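
A small sketch of the guard (the bitmap, its size, and the table are
hypothetical names):

    /* find_first_bit() returns 'size' when no bit is set; don't index with it. */
    unsigned long i = find_first_bit(bitmap, size);

    if (i >= size)
            return;                 /* nothing set, avoid out-of-bounds access */
    entry = table[i];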

Detected by CoverityScan, CID 1295969 Out-of-bounds access

Signed-off-by: Lihong Yang <lihong.yang@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:30:41 -07:00
Jacob Keller 6f853d4f8e i40e: allow XPS with QoS enabled
Recently, the kernel gained support for enabling XPS and QoS at the
same time. Thus, we no longer need to worry about the number of
traffic classes when enabling XPS.
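
A rough sketch of what the setup can now do; the ring/field names are
approximations rather than the driver's exact code:

    /* Sketch: enable XPS for this ring regardless of how many TCs are enabled. */
    netif_set_xps_queue(ring->netdev, &ring->q_vector->affinity_mask,
                        ring->queue_index);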

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:28:56 -07:00
Jacob Keller 95bc2fb4c6 i40e/i40evf: bundle more descriptors when allocating buffers
Double the number of descriptors we'll bundle into one tail bump when
receiving. Empirical testing has shown that we reduce CPU utilization
and don't appear to reduce throughput or packet rate. 32 seems to be the
sweet spot, as it's half the default polling budget, so we'd essentially
reduce from 4 tail writes when polling down to 2. Increasing this up to
64 appears to have negative impacts as it may become possible that we
don't bump the tail each time we get polled, which could cause a long
delay between returning descriptors to the hardware.
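
A sketch of the bundling described above, assuming a hypothetical
RX_BUFFER_WRITE threshold of 32 and an allocation helper:

    /* Sketch: refill buffers and bump tail only once per bundle of 32 descriptors. */
    #define RX_BUFFER_WRITE 32

    if (cleaned_count >= RX_BUFFER_WRITE) {
            failure = failure || alloc_rx_buffers(rx_ring, cleaned_count);
            cleaned_count = 0;
    }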

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:27:42 -07:00
Jacob Keller 11f29003d6 i40e/i40evf: bump tail only in multiples of 8
Hardware only fetches descriptors on cachelines of 8, essentially
ignoring the lower 3 bits of the tail register. Thus, it is pointless to
bump tail to an unaligned value, as the hardware will ignore some of the
new descriptors we allocated. Ideally, we can ensure that tail
writes are always aligned to 8.

At first, it seems like we'd already do this, since we allocate
descriptors in batches which are a multiple of 8. Since we'd always
increment by a multiple of 8, it seems like the value should always be
aligned.

However, this ignores allocation failures. If we fail to allocate
a buffer, our tail register will become unaligned. Once it has become
unaligned it will essentially be stuck unaligned until a buffer
allocation happens to fail at the exact amount necessary to re-align it.

We can do better by simply rounding down the number of buffers we're
about to allocate (cleaned_count) such that "next_to_clean
+ cleaned_count" is aligned down to a multiple of 8.

We do this by calculating how far off that value is and subtracting it
from the cleaned_count. This essentially defers allocation of buffers if
they're going to be ignored by hardware anyway, and re-aligns our
next_to_use and tail values after a failure to allocate a descriptor.

This calculation ensures that we always align the tail writes in a way
the hardware expects and don't unnecessarily allocate buffers which
won't be fetched immediately.
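
A sketch of the rounding; the ring field names follow the text above,
but treat this as illustrative arithmetic rather than the literal patch:

    /* Defer buffers so that (next_to_clean + cleaned_count) stays a multiple of 8. */
    u16 misalign = (rx_ring->next_to_clean + cleaned_count) & 0x7;

    if (cleaned_count >= misalign)
            cleaned_count -= misalign;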

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:26:29 -07:00
Jacob Keller 7362be9eee i40e: reduce lrxqthresh from 2 to 1
The lrxqthresh value tells hardware to immediately interrupt when there
are fewer than N*64 packets left in the ring.

Counterintuitively, empirical testing has shown that decreasing this
value from 2 to 1, and thus moving the immediate interrupt from
fewer than 128 descriptors down to fewer than 64 descriptors, causes a
small increase in the maximum total packets per second we can receive.
This increase occurs even when we're polling with interrupts masked, as
the hardware must still handle interrupts internally even if we've
disabled them in software.

Also reduce the value for any VFs we allocate.
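
In the RX queue context this amounts to a one-line change; a hedged
sketch (the context field name is an assumption, the 64-descriptor
granularity is from the text above):

    /* Interrupt when fewer than 1 * 64 (previously 2 * 64 = 128) descriptors remain. */
    rx_ctx.lrxqthresh = 1;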

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:24:46 -07:00
Jacob Keller dbadbbe235 i40e/i40evf: always set the CLEARPBA flag when re-enabling interrupts
In the past we changed driver behavior to not clear the PBA when
re-enabling interrupts. This change was motivated by the flawed belief
that clearing the PBA would cause a lost interrupt if a receive
interrupt occurred while interrupts were disabled.

According to empirical testing this isn't the case. Additionally, the
data sheet specifically says that we should set the CLEARPBA bit when
re-enabling interrupts in a polling setup.

This reverts commit 40d72a5098 ("i40e/i40evf: don't lose interrupts")
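
A sketch of re-arming a vector with CLEARPBA set, using the PF register
macros; the vector index mapping is omitted and the exact write differs
between i40e and i40evf:

    /* Sketch: set CLEARPBA alongside INTENA when re-enabling the interrupt. */
    u32 val = I40E_PFINT_DYN_CTLN_INTENA_MASK |
              I40E_PFINT_DYN_CTLN_CLEARPBA_MASK |
              I40E_PFINT_DYN_CTLN_ITR_INDX_MASK;  /* no ITR update */

    wr32(hw, I40E_PFINT_DYN_CTLN(vector_idx), val);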

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:21:26 -07:00
Jacob Keller 4270255929 i40e/i40evf: fix incorrect default ITR values on driver load
The ITR register expects to be programmed in units of 2 microseconds.
Because of this, all of the driver's I40E_ITR_* constants are in terms of
this 2 microsecond register.

Unfortunately, the rx_itr_default value is expected to be programmed in
microseconds.

Effectively the driver defaults to an ITR value of half the expected
value (in terms of minimum microseconds between interrupts).

Fix this by changing the default values to be calculated using the
ITR_REG_TO_USEC macro, which indicates that we're converting from
register units into microseconds.
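
Since the register unit is 2 microseconds, the conversion is a shift by
one; a minimal sketch of what such macros look like (ITR_REG_TO_USEC is
named above, the inverse macro name is assumed):

    /* Register value is in units of 2 us; e.g. a register value of 25 means 50 us. */
    #define ITR_REG_TO_USEC(itr_reg)  ((itr_reg) << 1)
    #define ITR_USEC_TO_REG(usecs)    ((usecs) >> 1)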

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:19:46 -07:00
Alan Brady c766b9af9a i40evf: fix mac filter removal timing issue
Because mac filters are added and deleted asynchronously, there is a bug
in which a filter is erroneously removed if it is removed and then added
again quickly.

The events are as such:
    - filter marked for removal
    - same filter is re-added before the watchdog that cleans up filters
    - we skip re-adding the filter because we already have it in the list
    - watchdog filter cleanup kicks off and the filter is removed

So when we were re-adding the same filter, it didn't actually get added
because it already existed in the list, but was marked for removal and
had yet to actually be removed.

This patch fixes the issue by ensuring that when we add a filter and
find it already in our list, it is no longer marked for removal.
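
A sketch of the add path with the fix, using hypothetical helper and
field names:

    /* Sketch: re-adding a filter that is pending removal just un-marks it. */
    f = find_filter(adapter, macaddr);
    if (f) {
            f->remove = false;              /* keep the existing entry */
    } else {
            f = add_filter(adapter, macaddr);
    }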

Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:17:03 -07:00
Lihong Yang 784548c40d i40e: use the safe hash table iterator when deleting mac filters
This patch replaces the hash_for_each function with hash_for_each_safe
when calling __i40e_del_filter. The hash_for_each_safe function is
the right one to use when iterating over a hash table to safely remove
a hash entry. Otherwise, incorrect values may be read from freed memory.
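
A sketch of the safe iteration (field names follow the i40e mac filter
hash referenced above, but treat them as illustrative):

    /* The _safe variant keeps a temporary node so the entry may be freed mid-walk. */
    struct i40e_mac_filter *f;
    struct hlist_node *h;
    int bkt;

    hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist)
            __i40e_del_filter(vsi, f);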

Detected by CoverityScan, CID 1402048 Read from pointer after free

Signed-off-by: Lihong Yang <lihong.yang@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 14:12:54 -07:00
Jacob Keller b48be9978e i40e: fix flags declaration
Since we don't yet have more than 32 flags, we'll use a u32 for both the
hw_features and flag field. Should we gain more flags in the future, we
may need to convert to a u64 or separate flags out into two fields.

This was overlooked in the previous commit 2781de2134c4 ("i40e/i40evf:
organize and re-number feature flags"), where the feature flag was not
converted from u64 to u32.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-10-09 12:37:28 -07:00
Tariq Toukan 7ba5e7bd64 net/mlx4_en: Use __force to fix a sparse warning in TX datapath
In the TX datapath, we intentionally do not byte-swap, as documented
in code and in the cited commit log.
This fixes sparse warning:
en_tx.c:720:23: warning: incorrect type in argument 1 (different base types)
en_tx.c:720:23:    expected unsigned int [unsigned] [usertype] <noident>
en_tx.c:720:23:    got restricted __be32 [usertype] doorbell_qpn
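
A hedged sketch of the fix: the stored value is deliberately kept
big-endian, so a __force cast tells sparse the missing swap is
intentional (the doorbell address expression is a placeholder):

    /* No byte swap on purpose; __force suppresses the sparse base-type warning. */
    iowrite32((__force u32)ring->doorbell_qpn, tx_doorbell_addr);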

Fixes: 492f5add4b ("net/mlx4_en: Doorbell is byteswapped in Little Endian archs")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:33:05 -07:00
Tariq Toukan b71322d9db net/mlx4_core: Fix cast warning in fw.c
Fix the following SPARSE warning in the MLX4_GET() macro:
drivers/net/ethernet/mellanox/mlx4/fw.c:233:9: warning: cast to restricted __be64

Fixes: 17d5ceb6e4 ("net/mlx4_core: Fix unaligned accesses")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:33:05 -07:00
Tariq Toukan bb428a5c4d net/mlx4: Fix endianness issue in qp context params
Take care of the endianness before assigning to the params2 field.

Fixes: 53f33ae295 ("net/mlx4_core: Port aggregation upper layer interface")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:33:05 -07:00
Michal Kalderon 1e28eaad07 qed: Add iWARP support for fpdu spanned over more than two tcp packets
We continue to maintain a maximum of three buffers per fpdu, to ensure
that there are enough buffers for additional unaligned mpa packets.
To support this, if an fpdu is split over more than two tcp packets, we
use an intermediate buffer to copy the data to the previous buffer; then
we can release the data. We need an intermediate buffer because the
partial packet in the initial buffer could be located at the end of the
buffer, leaving no room for additional data. This is a corner case and
will usually not occur.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:27 -07:00
Michal Kalderon c7d1d83999 qed: Add support for MPA header being split over two tcp packets
There is a special case where an MPA header is split over two tcp
packets. In this case we need to wait for the next packet to
get the fpdu length. We use the incomplete_bytes to mark this
fpdu as a "special" one which requires updating the length with
the next packet.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:27 -07:00
Michal Kalderon d531038eeb qed: Add support for freeing two ll2 buffers for corner cases
When posting a packet on the ll2 tx, we can provide a cookie that
will be returned upon tx completion. This cookie is the ll2 iwarp buffer
which is then reposted to the rx ring. Part of the unaligned mpa flow
is determining when a buffer can be reposted. Each buffer needs to be
sent only once as a cookie on the tx ring. In the packed fpdu case, only
the last packet will be sent with the buffer, meaning we need to handle
the case that a cookie can be NULL on tx complete. In addition, when an
fpdu splits over two buffers, but there are no more fpdus in the second
buffer, two buffers need to be provided as a cookie. To avoid changing
the ll2 interface to provide two cookies, we introduce a piggy buf
pointer, relevant for iWARP only, that holds a pointer to a second
buffer that needs to be released during tx completion.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:27 -07:00
Michal Kalderon 469981b17a qed: Add unaligned and packed packet processing
The fpdu data structure is preallocated per connection.
Each connection stores the current status of the connection:
either nothing pending, or there is a partial fpdu that is waiting for
the rest of the fpdu (incomplete bytes != 0).
The same structure is also used for splitting a packet when there are
packed fpdus. The structure is initialized with all data required
for sending the fpdu back to the FW. An fpdu will always be spanned
across a maximum of 3 tx bds: one for the header, one for the partial
fpdu received and one for the remainder (unaligned) packet.
In case of packed fpdus, two fragments are used: one for the header
and one for the data.
Corner cases are not handled in this patch for clarity, and will be
added in a separate patch.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon fcb39f6c10 qed: Add mpa buffer descriptors for storing and processing mpa fpdus
The mpa buff is a descriptor for iwarp ll2 buffers that contains
additional information required for aligning fpdus.
In some cases, an additional packet will arrive which will complete
the alignment of an fpdu, but we won't be able to post the fpdu due to
insufficient room on the tx ring. In this case we can't lose the data
and are required to store it for later. Processing is therefore done
in two places: during rx completion, where we initialize an mpa buffer
descriptor and add it to the pending list, and during tx completion,
since freeing up an entry in the tx chain lets us process any pending
mpa packets.
The mpa buff descriptors are pre-allocated since we have to ensure that
we won't reach a state where we can't store an incoming unaligned packet.
All packets received on the ll2 MUST be processed by the driver at some
stage. Since they are preallocated, we hold a free list.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon ae3488ff37 qed: Add ll2 connection for processing unaligned MPA packets
This patch adds only the establishment and termination of the
ll2 connection that handles unaligned MPA packets.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon 6f34a284f3 qed: Add LL2 slowpath handling
For iWARP unaligned MPA flow, a slowpath event of flushing an
MPA connection that entered an unaligned state is required.
The flush ramrod is received on the ll2 queue, and a pre-registered
callback function is called to handle the flush event.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon 89d6511309 qed: Add the source of a packet sent on an iWARP ll2 connection
When a packet is sent back to the iWARP FW via the tx ll2 connection,
the FW needs to know the source of the packet: whether it is
OOO or unaligned MPA related. Since OOO is implemented entirely
inside the ll2 code (and shared with iSCSI), packets are marked
as IN_ORDER inside the ll2 code. For unaligned mpa, the value
will be determined in the iWARP code and sent in the pkt->vlan
field.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon 6df60fe703 qed: Fix initialization of ll2 offload feature
enable_ip_cksum, enable_l4_cksum, and calc_ip_len were added in the
commit stated below but not passed through to FW. This was OK
until now as they weren't used, but they are required for the iWARP
unaligned flow.

Fixes: 7c7973b2ae27 ("qed: LL2 to use packed information for tx")

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon 77caa792f5 qed: Add ll2 option for dropping a tx packet
The option of sending a packet on the ll2 and dropping it exists in
hardware but was not used until now, and thus was not exposed.
The iWARP unaligned MPA flow requires this functionality for
flushing the tx queue.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon ed468ebee0 qed: Add ll2 ability of opening a secondary queue
When more than one ll2 queue is opened (that is not an OOO queue),
the ll2 code does not have enough information to determine whether
the queue is the main one or not, so a new field is added to the
acquire input data to expose control over whether the queue is the
main queue or a secondary queue.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Michal Kalderon f5823fe689 qed: Add ll2 option to limit the number of bds per packet
iWARP uses 3 ll2 connections; the maximum number of bds is known
during connection setup. This patch modifies the static array in
the ll2_tx_packet descriptor to be a flexible array and
significantly reduces memory size.

In addition, some redundant fields in the ll2_tx_packet were
removed, which also contributed to decreasing the descriptor size.

Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:21:26 -07:00
Yotam Gigi 593bc28ae2 mlxsw: spectrum_switchdev: Support bridge mrouter notifications
Support the SWITCHDEV_ATTR_ID_BRIDGE_MROUTER port attribute switchdev
notification.

To do that, add the mrouter flag to struct mlxsw_sp_bridge_device, which
indicates whether the bridge device was set to be an mrouter port. This
field is set when:
 - A new bridge is created, where the value is taken from the kernel
   bridge value.
 - A switchdev SWITCHDEV_ATTR_ID_BRIDGE_MROUTER notification is sent.

In addition, change the bridge MID entries to include the router port when
the bridge device is configured to be an mrouter port. The MID entries are
updated in the following cases:
 - When a new MID entry is created, update the router port according to the
   bridge mrouter state.
 - When a SWITCHDEV_ATTR_ID_BRIDGE_MROUTER notification is sent, update all
   the bridge's MID entries.

This is aligned with the case where a bridge slave is configured to be
an mrouter port.

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:18:11 -07:00
Yotam Gigi c4db953f00 mlxsw: spectrum_switchdev: Add support for router port in SMID entries
In Spectrum, MDB entries point to MID entries, which indicate the ports a
packet should be forwarded to. Add support for creating MID entries that
forward the packet to the Spectrum router port.

This will be later used to handle the bridge mrouter port switchdev
notifications.

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:18:11 -07:00
Yotam Gigi b35750f191 mlxsw: spectrum: router: Export the mlxsw_sp_router_port function
In Spectrum hardware, the router port is a virtual port that is the gateway
to the routing mechanism. Hence, in order for a packet to be L3 forwarded,
it must first be L2 forwarded to the router port inside the hardware.

Further patches in this patchset are going to introduce support for a
bridge device used as an mrouter port. In this case, the router port
index will be needed in order to update the MDB entries to include the
router port. Thus, export the mlxsw_sp_router_port function, which
returns the index of the Spectrum router port.

Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Nogah Frankel <nogahf@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 10:18:11 -07:00
Jakub Kicinski 2de1be1db2 nfp: bpf: pass dst register to ld_field instruction
The ld_field instruction is a bit special because the encoding uses
two source registers and one of them becomes the output.  We do
need to pass the dst register to our encoding helpers though,
otherwise the "write both banks" flag will not be observed.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:03 -07:00
Jakub Kicinski 2e85d3884f nfp: bpf: byte swap the instructions
The device expects the instructions in little endian.  Make sure we
byte swap on big endian hosts.
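
A sketch of the swap, assuming the generated program is an array of
host-endian u64 words copied into a little-endian image buffer (names
assumed):

    u64 *prog = generated_instructions;  /* host-endian instruction words */
    __le64 *image = device_image;        /* buffer handed to the device */
    int i;

    /* Emit every 64-bit instruction word in little endian for the device. */
    for (i = 0; i < prog_len; i++)
            image[i] = cpu_to_le64(prog[i]);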

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:03 -07:00
Jakub Kicinski 1c03e03f9b nfp: bpf: pad code with valid nops
We need to append up to 8 nops after the last instruction to make
sure the CPU will not fetch garbage instructions with invalid
ECC if the code store was not initialized.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:03 -07:00
Jakub Kicinski fd068ddc88 nfp: bpf: calculate code store ECC
In the initial PoC firmware I simply disabled ECC on the instruction
store.  Do the ECC calculation for generated instructions in the driver.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:03 -07:00
Jakub Kicinski 18e53b6cb9 nfp: bpf: move to datapath ABI version 2
Datapath ABI version 2 stores the packet information in LMEM
instead of NNRs.  We also have strict restrictions on which
GPRs we can use.  Only GPRs 0-23 are reserved for BPF.

Adjust the static register locations and "ABI" registers.
Note that the packet length is packed with other info, so we have
to extract it into one of the scratch registers; OTOH, since
LMEM can be used in restricted operands, we don't have to
extract the packet pointer.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:03 -07:00
Jakub Kicinski 995e101ffa nfp: bpf: encode extended LM pointer operands
Most instructions have special fields which allow switching
between base and extended Local Memory pointers.  Introduce
those to the register encoding; we will use the extra LM pointers
to access high addresses of the stack.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:03 -07:00
Jakub Kicinski 9f15d0f438 nfp: bpf: encode LMEM accesses
NFP LMEM is a large, indirectly accessed register file.  There
are two basic indirect access registers.  Each access operation
may either use an offset (up to 8 or 16 words) or perform post
decrement/increment.

Add encodings of LMEM indexes as instruction operands.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:03 -07:00
Jakub Kicinski 8afd9c961e nfp: add more white space to the instruction defines
We need to add longer OP_* defines, so move the values away.
Purely a whitespace commit.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Jakub Kicinski 509144e250 nfp: bpf: remove packet marking support
Temporarily drop support for skb->mark.  We are primarily focusing
on XDP offload, and implementing skb->mark on the new datapath has
lower priority.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Jakub Kicinski 226e0e94ce nfp: bpf: remove register rename
Remove the register renumbering optimization.  To implement calling
map and other helpers we need a stricter register layout.  We can't
freely reassign register numbers.

This will have the effect of running in 4 context/thread mode, which
should be OK since we are moving towards integrating the BPF closer
with FW app datapath anyway, and the target datapath itself runs in
4 context mode.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Jakub Kicinski 3cae131933 nfp: bpf: encode all 64bit shifts
Add encodings of all 64bit shift operations.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Jakub Kicinski 2a15bb1aba nfp: bpf: move software reg helpers and cmd table out of translator
Move the software reg helpers and some static data to nfp_asm.c.
They are related to the previous patch, but the move is done in a
separate commit for ease of review.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Jakub Kicinski b3f868df3c nfp: bpf: use the power of sparse to check we encode registers right
Define a new __bitwise type for software representation of registers.
This will allow us to catch incorrect parameter types using sparse.

Accessors we define also allow us to return correct enum type and
therefore ensure all switches handle all register types.
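
A minimal sketch of the __bitwise pattern (type and helper names are
illustrative); sparse then warns whenever a plain u32 and the register
cookie are mixed up:

    /* A __bitwise cookie type for software registers, checked by sparse. */
    typedef __u32 __bitwise swreg;

    static inline swreg reg_gpr(u16 num)
    {
            return (__force swreg)num;      /* construct a cookie from a raw number */
    }

    static inline u32 swreg_raw(swreg reg)
    {
            return (__force u32)reg;        /* unwrap only at the encoding boundary */
    }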

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Jakub Kicinski a52b35c39e nfp: bpf: lift the single-port limitation
Limiting the eBPF offload to a single port was a workaround
required for the PoC application FW which has not been
released externally.  It's not necessary any more.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Jakub Kicinski 3a4b0129bf nfp: output control messages to trace_devlink_hwmsg()
Use standard devlink trace point to allow tracing of control
messages.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:51:02 -07:00
Yunsheng Lin 1db9b1bf82 net: hns3: Cleanup for non-static function in hns3 driver
This patch fixes the following warning from sparse:
warning: symbol 'hns3_set_multicast_list' was not declared.
Should it be static?

hns3_set_multicast_list turns out to be unused, so delete it.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:46:54 -07:00
Yunsheng Lin a90bb9a5ea net: hns3: Cleanup for endian issue in hns3 driver
This patch fixes a lot of endian issues detected by sparse.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:46:54 -07:00
Yunsheng Lin d44f9b631f net: hns3: Cleanup for struct that used to send cmd to firmware
The hclge_tm module already appends _cmd to the names of the structs
used to send commands to the firmware. This helps us find
endian issues.
This patch appends _cmd to the structs used to send commands to the
firmware in the hclge_main module.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:46:54 -07:00
Yunsheng Lin 5392902d33 net: hns3: Consistently using GENMASK in hns3 driver
This patch uses GENMASK to generate bit masks whenever
possible in the hns3 driver.
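
For example, an open-coded mask such as 0xF0 for bits 7..4 becomes
(field name hypothetical):

    #define HNS3_EXAMPLE_FIELD_M  GENMASK(7, 4)   /* bits 7..4 */

    val = (word & HNS3_EXAMPLE_FIELD_M) >> 4;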

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:46:54 -07:00
Yunsheng Lin 56cf68c730 net: hns3: Cleanup indentation for Kconfig in the hisilicon folder
This patch fixes a few indentation issues in the Kconfig file in the
hisilicon folder.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:46:54 -07:00
Yunsheng Lin 9780cb97af net: hns3: Add hns3_get_handle macro in hns3 driver
There are many places that need to get the handle from a netdev,
so add a macro to do that.
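
A sketch of such a macro, assuming the handle lives in the hns3 netdev
private struct:

    #define hns3_get_handle(ndev) \
            (((struct hns3_nic_priv *)netdev_priv(ndev))->ae_handle)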

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-10-09 09:46:54 -07:00