Commit graph

51 commits

Author SHA1 Message Date
Alexander Duyck e4b317e909 ixgbe: Add feature offset value to ring features
The mask value for ring features was overloaded for FCoE, which could lead to
some confusion.  To avoid that confusion I am splitting the mask value and
adding an offset value.  The offset can be used for the start of the FCoE
rings, and in the future I hope to use it to store the start of the
registers for SR-IOV.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-11 02:01:14 -07:00
Alexander Duyck c087663ec8 ixgbe: Add upper limit to ring features
We are currently using indices to indicate the upper limit on a ring
feature.  However, since we can switch back and forth between features such
as DCB, and that has effects on other features such as RSS, it is preferable
to store the upper limit separately from the current value for the number of
rings related to the feature.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-11 01:53:21 -07:00
Alexander Duyck 49c7ffbe7b ixgbe: count q_vectors instead of MSI-X vectors
It makes much more sense for us to count q_vectors instead of MSI-X
vectors.  We were using num_msix_vectors to find the number of q_vectors in
multiple places.  This was wasteful since only one place actually needs the
number of MSI-X vectors, and that is the slow path.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-07-11 01:50:59 -07:00
David S. Miller b26d344c6b Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflicts:
	drivers/net/caif/caif_hsi.c
	drivers/net/usb/qmi_wwan.c

The qmi_wwan merge was trivial.

The caif_hsi.c conflict, on the other hand, was not.  It's a conflict between
commit 1c385f1fdf ("caif-hsi: Replace platform
device with ops structure.") in the net-next tree and commit
39abbaef19 ("caif-hsi: Postpone init of
HSI until open()") in the net tree.

I did my best with that one and will ask Sjur to check it out.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-28 17:37:00 -07:00
Alexander Duyck 57efd44c8c ixgbe: Do not pad FCoE frames as this can cause issues with FCoE DDP
FCoE target mode was experiencing issues because we were sending up data
frames that had been padded to 60 bytes after the DDP logic had already
stripped the frame down to 52 or 56 bytes, depending on the use of VLANs.
This was resulting in the FCoE DDP logic having issues since it thought the
frame still had data in it due to the padding.

To resolve this, add code so that we do not pad FCoE frames prior to
handing them up to the stack.

CC: <stable@vger.kernel.org>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-26 16:44:34 -07:00
Jacob Keller 1d1a79b5b9 ixgbe: Check PTP Rx timestamps via BPF filter
This patch fixes a potential Rx timestamp deadlock that causes the Rx
timestamping to stall indefinitely. The issue could occur when a PTP packet is
timestamped by hardware but never reaches the Rx queue. In order to prevent a
permanent loss of timestamping, the RXSTMP(L/H) registers have to be read to
unlock them. (This used to happen only when a timestamped packet actually
reached the software.) However, the registers can't be read too early,
otherwise there is no way to correlate the timestamp to the packet.

This patch introduces a filter function which can be used to determine if a
packet should have been timestamped. Supplied with the filter setup by the
hwtstamp ioctl, check to make sure the PTP protocol and message type match the
expected values. If so, then read the timestamp registers (to free them). At
this point check the descriptor bit; if the bit is set, then we know this
packet correlates to the timestamp stored in the RXSTMP registers.
Otherwise, assume that the packet was dropped by the hardware and ignore this
timestamp value. However, we have at least unlocked the RXSTMP registers for
future timestamping.

Due to the way the driver handles skb data, it cannot be accessed directly. To
work around this, a copy of the skb data is made into a linear buffer, and
from that buffer the data can be read correctly.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Richard Cochran <richardcochran@gmail.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-06-14 03:13:48 -07:00
Don Skidmore 1210982bb6 ixgbe: cleanup the hwmon function calls
When the hwmon code was initially added, it was with the assumption that a
sysfs patch would also be coming soon.  Since that isn't the case, some
cleanup needs to be done.  This patch does that.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-05-09 23:14:43 -07:00
Jacob E Keller 681ae1adc4 ixgbe: Enable timesync clock-out feature for PPS support on X540
This patch enables the PPS system in the PHC framework by enabling
the clock-out feature on the X540 device. It causes SDP0 to be configured
as a 1 Hz clock output. It also configures the timesync interrupt cause so
that each pulse is reported to the PPS subsystem via the PHC framework,
which can be used for general system clock synchronization. (This provides
a stable method for tuning the general system time via the on-board SYSTIM
register based clock.)

Signed-off-by: Jacob E Keller <jacob.e.keller@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-05-09 22:55:39 -07:00
Jacob Keller 3a6a4edaa5 ixgbe: Hardware Timestamping + PTP Hardware Clock (PHC)
This patch enables hardware timestamping for use with PTP software by
extracting a ns counter from an arbitrary fixed-point cycle counter.
The hardware updates the SYSTIME registers using the DMA tick, which
changes based on the current link speed. These SYSTIME registers are
converted to ns using the cyclecounter and timecounter structures
provided by the kernel. Using the SO_TIMESTAMPING API, software can
enable and access timestamps for PTP packets.

The SO_TIMESTAMPING API has space for 3 different kinds of timestamps:
SYS, RAW, and SOF. SYS hardware timestamps are hardware ns values that
are then scaled to the software clock. RAW hardware timestamps are the
direct raw value of the ns counter. SOF software timestamps are
calculated as close as possible to the software transmit, but are not
offloaded to the hardware. This patch only supports the RAW hardware
timestamps due to the inefficiency of the SYS design.

This patch also enables the PHC subsystem features for atomically
adjusting the cycle register, and adjusting the clock frequency in
parts per billion. This frequency adjustment works by slightly
adjusting the value added to the cycle registers each DMA tick. This
causes the hardware registers to overflow rapidly (approximately once
every 34 seconds at 10G link speed). To solve this, the timecounter
structure is used, along with a timer that fires every 25 seconds. This
allows for detecting register overflow and converting the cycle
counter registers into the ns values needed for providing useful
timestamps to the network stack.

Only the basic required clock functions are supported at this time,
although the hardware supports some ancillary features and these could
easily be enabled in the future.

Note that use of this hardware timestamping requires modifying daemon
software to use the SO_TIMESTAMPING API for timestamps, and the
ptp_clock PHC framework for accessing the clock. The timestamps have
no relation to the system time at all, so software must use the posix
clock generated by the PHC framework instead.
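
A minimal sketch of the cyclecounter/timecounter plumbing described above,
assuming a placeholder register read and placeholder mult/shift scaling
(none of the names or values below are the driver's; cycle_t is the type
the kernel's cyclecounter used at the time):

#include <linux/clocksource.h>
#include <linux/ktime.h>

static cycle_t sketch_read_systim(const struct cyclecounter *cc)
{
        /* read the 64-bit SYSTIM value from the hardware here */
        return 0;
}

static struct cyclecounter sketch_cc = {
        .read  = sketch_read_systim,
        .mask  = CLOCKSOURCE_MASK(64),
        .mult  = 1,     /* placeholder; derived from the DMA tick rate */
        .shift = 0,     /* placeholder */
};

static struct timecounter sketch_tc;

static void sketch_ptp_start(void)
{
        timecounter_init(&sketch_tc, &sketch_cc,
                         ktime_to_ns(ktime_get_real()));
}

/* run from a timer roughly every 25 seconds so that the ~34 second
 * register wrap at 10G link speed is never missed
 */
static void sketch_overflow_check(void)
{
        timecounter_read(&sketch_tc);
}

/* used by the Rx/Tx timestamp paths to turn a raw cycle value into ns */
static u64 sketch_cycles_to_ns(cycle_t cycles)
{
        return timecounter_cyc2time(&sketch_tc, cycles);
}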

Signed-off-by: Jacob E Keller <jacob.e.keller@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-05-09 22:48:51 -07:00
Alexander Duyck 3ebe8fdeb0 ixgbe: Set Drop_EN bit when multiple Rx queues are present w/o flow control
The drop enable bit can be used to improve the performance of the adapter
when multiple queues are present.  This performance gain comes from the
fact that a slower CPU can cause the FIFO to back up, preventing faster
CPUs from receiving additional work.  By setting the drop enable bit we
prevent this and instead simply drop the packets that would have been
bound for the slower CPU.
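
For reference, setting the per-queue drop enable bit amounts to a
read-modify-write of that queue's SRRCTL register, roughly as below (a
sketch assuming the driver's register accessors and defines from
ixgbe_type.h):

static void sketch_enable_rx_drop(struct ixgbe_hw *hw, u8 reg_idx)
{
        u32 srrctl = IXGBE_READ_REG(hw, IXGBE_SRRCTL(reg_idx));

        /* drop on overflow instead of backpressuring the shared FIFO */
        srrctl |= IXGBE_SRRCTL_DROP_EN;
        IXGBE_WRITE_REG(hw, IXGBE_SRRCTL(reg_idx), srrctl);
}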

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-05-09 22:31:44 -07:00
David S. Miller 0d6c4a2e46 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflicts:
	drivers/net/ethernet/intel/e1000e/param.c
	drivers/net/wireless/iwlwifi/iwl-agn-rx.c
	drivers/net/wireless/iwlwifi/iwl-trans-pcie-rx.c
	drivers/net/wireless/iwlwifi/iwl-trans.h

Resolved the iwlwifi conflict with mainline using 3-way diff posted
by John Linville and Stephen Rothwell.  In 'net' we added a bug
fix to make iwlwifi report a more accurate skb->truesize but this
conflicted with RX path changes that happened meanwhile in net-next.

In e1000e a conflict arose in the validation code for settings of
adapter->itr.  'net-next' had more sophisticated logic so that
logic was used.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-07 23:35:40 -04:00
John Fastabend f525c6d295 ixgbe: dcb: BIT_APP_UPCHG not set by ixgbe_copy_dcb_cfg()
After this commit:

commit aacc1bea19
Author: Multanen, Eric W <eric.w.multanen@intel.com>
Date:   Wed Mar 28 07:49:09 2012 +0000

    ixgbe: driver fix for link flap

The BIT_APP_UPCHG bit is no longer set when ixgbe_dcbnl_set_all() is
called. This results in the FCoE app user priority never getting set
and the driver will not configure the tx_rings correctly for FCoE
packets which use the SAN MTU and FCoE offloads.

We resolve this regression by fixing ixgbe_copy_dcb_cfg() to also
check for FCoE application changes. Additionally, we can drop the
IEEE variants of get_dcb_app() because this path is never called
with the IEEE mode enabled.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-05-03 03:04:13 -07:00
Don Skidmore 3ca8bc6de2 ixgbe: add hwmon interface to export thermal data
Some of our adapters have thermal data available.  This patch exports
that data via the hwmon sysfs interface.
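
Once the driver is loaded on such an adapter, the readings appear in the
usual hwmon location (e.g. /sys/class/hwmon/hwmonN/) using the standard
tempX_* style attributes; the exact attribute set depends on the sensors
the adapter exposes.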

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-05-02 02:12:23 -07:00
Jacob Keller 8e2813f59e ixgbe: check for WoL support in single function
This patch consolidates the case logic for checking whether a device supports
WoL into a single place. Previously ethtool and probe used similar logic that
was copied and maintained separately. This patch encapsulates the core logic
into a function so that a user only has to update one place.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-04-27 02:31:26 -07:00
Don Skidmore 70e5576cb0 ixgbe: fix typo in enumeration name
This was pointed out to me by Xiaojun Zhang on SourceForge.

CC: Xiaojun Zhang <zhangxiaojun@sourceforge.net>
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-27 23:34:54 -07:00
Jeff Kirsher 8af3c33f4d ixgbe: fix namespace issues when FCoE/DCB is not enabled
Resolve namespace issues when FCoE or DCB is not enabled.
The issue is that with certain configurations we end up with namespace
problems.  A simple example:

ixgbe_main.c
 - defines func A()
 - uses func A()

ixgbe_fcoe.c
 - uses func A()

ixgbe.h
 - has prototype for func A()

For the default build (FCoE included) all is good.  But when it isn't
included, the namespace checker complains that func A() could be static.

To resolve this, create an ixgbe_lib file to contain the functions used
by DCB/FCoE and their helper functions so that they are always in the
namespace whether or not DCB/FCoE is enabled.

Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
2012-03-19 13:59:11 -07:00
Alexander Duyck 567d2de291 ixgbe: Correct flag values set by ixgbe_fix_features
This patch replaces the variable name data with the variable name features
for ixgbe_fix_features and ixgbe_set_features.  This helps to make some
issues more obvious such as the fact that we were disabling Rx VLAN tag
stripping when we should have been forcing it to be enabled when DCB is
enabled.

In addition there was deprecated code present that was disabling the LRO
flag if we had the itr value set too low.  I have updated this logic so
that we will now allow the LRO flag to be set, but will not enable RSC
until the rx-usecs value is high enough to allow enough time for Rx packet
coalescing.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-19 13:43:34 -07:00
Alexander Duyck ef6afc0cac ixgbe: Add support for enabling UDP RSS via the ethtool rx-flow-hash command
This patch adds support for enabling or disabling UDP RSS via the
ethtool -N rx-flow-hash command.
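
As a usage example (interface name hypothetical), "ethtool -N eth0
rx-flow-hash udp4 sdfn" requests hashing on the source/destination
addresses and ports for UDP over IPv4, while "ethtool -N eth0 rx-flow-hash
udp4 sd" restricts the hash to the addresses only; udp6 takes the same
flags for IPv6.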

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-19 02:03:15 -07:00
Alexander Duyck d3ee429443 ixgbe: Update layout of ixgbe_ring structure to improve cache performance
This change makes it so that only the 2nd cache line in the ring structure
should see frequent updates.  The advantage to this is that it should
reduce the amount of cross CPU cache bouncing since only the 2nd cache line
will be changing between most network transactions.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-19 01:54:36 -07:00
Alexander Duyck 244e27ad4d ixgbe: Store Tx flags and protocol information to tx_buffer sooner
This change makes it so that we store the tx_flags and protocol information
in the tx_buffer_info structure sooner. This allows us to avoid unnecessary
read/write transactions since we are placing the data in the final location
earlier.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-19 01:38:42 -07:00
Alexander Duyck 729739b754 ixgbe: always write DMA for single_mapped value with skb
This change makes it so that we always write the DMA address for the skb
itself to the same tx_buffer struct that the skb is written to.  This way
we don't need the MAPPED_AS_PAGE flag and we always know it will be the
first DMA value that we will have to unmap.

In addition I have found an issue in which we were leaking a DMA mapping if
the address happened to be 0, which is possible on some platforms.  In order
to resolve that I have updated the transmit path to use the length instead
of the DMA address to determine whether a mapping is actually present.

One other tweak in this patch is that it only writes the olinfo information
on the first descriptor.  As it turns out it isn't necessary to write it
for anything but the first descriptor so there is no need to carry it
forward.
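
A sketch of the "check the length, not the address" idea using the generic
DMA-unmap helpers (the struct and field names are placeholders, not the
driver's exact tx_buffer layout):

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

struct tx_buffer_sketch {
        struct sk_buff *skb;
        DEFINE_DMA_UNMAP_ADDR(dma);
        DEFINE_DMA_UNMAP_LEN(len);
};

static void sketch_unmap_tx_buffer(struct device *dev,
                                   struct tx_buffer_sketch *tx_buffer)
{
        /* a DMA address of 0 can be valid on some platforms, so a zero
         * length is what says "nothing is mapped here"
         */
        if (dma_unmap_len(tx_buffer, len)) {
                dma_unmap_single(dev,
                                 dma_unmap_addr(tx_buffer, dma),
                                 dma_unmap_len(tx_buffer, len),
                                 DMA_TO_DEVICE);
                dma_unmap_len_set(tx_buffer, len, 0);
        }
}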

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-17 01:41:49 -07:00
Alexander Duyck fd0db0ed02 ixgbe: Place skb on first buffer_info structure to avoid using stack space
Instead of keeping a local copy of the skb on the stack for as long as we
do, it makes sense to instead just place it on the first tx_buffer
structure so that we can save space on the stack and avoid unnecessary
read/write operations copying the pointer off of the stack and onto the
ring later.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-17 01:08:23 -07:00
Alexander Duyck 7d7ce682f8 ixgbe: Use packets to track Tx completions instead of a separate value
A separate value was added to track Tx completions in order to determine if
the Tx unit was hung.  However we can do the same thing using the number of
packets completed without having to add another stat to the Tx ring.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-17 01:06:59 -07:00
Alexander Duyck f800326dca ixgbe: Replace standard receive path with a page based receive
This patch replaces the existing Rx hot-path in the ixgbe driver with a new
implementation that is based on performing a double buffered receive.  The
ixgbe driver already had something similar in place for its packet split
path, however in that case we were still receiving the header for the
packet into the sk_buff.  The big change here is that the entire receive
path will receive into pages only, and then pull the header out of the page
and copy it into the sk_buff data.  There are several motivations behind
this approach.

First, this allows us to avoid several cache misses as we were taking a
set of cache misses for allocating the sk_buff and then another set for
receiving data into the sk_buff.  We are able to avoid these misses on
receive now as we allocate the sk_buff when data is available.

Second, we are able to see a considerable performance gain when an IOMMU is
enabled because we are no longer unmapping every buffer on receive.
Instead we can delay the unmap until we are unable to reuse the page, and
simply call sync_single_range on the half of the page that contains new
data.

Finally we are able to drop a considerable amount of code from the driver
as we no longer have to support 2 different receive modes, packet split and
one buffer.  This allows us to optimize the Rx path further since less
branching is required.
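
A rough sketch of the page-based receive idea (header sizes, names, and
error handling are simplified and are not the driver's exact code):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/string.h>

static struct sk_buff *sketch_rx_page_to_skb(struct net_device *netdev,
                                             struct page *page,
                                             unsigned int offset,
                                             unsigned int size)
{
        unsigned int hlen = min_t(unsigned int, 256, size);
        struct sk_buff *skb;

        skb = netdev_alloc_skb_ip_align(netdev, hlen);
        if (!skb)
                return NULL;

        /* pull just the header out of the page into the linear area ... */
        memcpy(__skb_put(skb, hlen), page_address(page) + offset, hlen);

        /* ... and attach the rest of the half page as a paged fragment */
        if (size > hlen)
                skb_add_rx_frag(skb, 0, page, offset + hlen,
                                size - hlen, PAGE_SIZE / 2);

        return skb;
}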

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-17 01:04:27 -07:00
Alexander Duyck 621bd70eda ixgbe: Replace eitr_low and eitr_high with static values in ixgbe_update_itr
There isn't much point in using variables to store the values of eitr_low
and eitr_high since they are not user changeable.  As such I am replacing
them with the constants 10 and 20 in order to avoid any confusion on what
the values actually are.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-14 00:33:38 -07:00
Alexander Duyck a557928e26 ixgbe: Add iterator for cycling through rings on a q_vector
Since there are multiple spots where we have to cycle through all of the
rings on a q_vector it makes sense to just add a function for iterating
through all of them.
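
The helper boils down to a walk of the q_vector's per-direction ring list;
a sketch along these lines (the in-tree version may differ in detail):

#define ixgbe_for_each_ring(pos, head) \
        for (pos = (head).ring; pos != NULL; pos = pos->next)

so that, for example, a cleanup path can be written as
ixgbe_for_each_ring(ring, q_vector->tx) { ... } instead of open-coding the
list walk in every spot.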

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-12 20:55:24 -07:00
Alexander Duyck de88eeeb16 ixgbe: Allocate rings as part of the q_vector
This patch makes the rings a part of the q_vector directly instead of
indirectly.  Specifically on x86 systems this helps to avoid any cache
set conflicts between the q_vector, the tx_rings, and the rx_rings as the
critical stride is 4K and in order to cross that boundary you would need to
have over 15 rings on a single q_vector.

In addition this allows for smarter allocations when Flow Director is
enabled.  Previously Flow Director would set the irq_affinity hints based
on the CPU and was still using a node interleaving approach which on some
systems would end up with the two values mismatched.  With the new approach
we can set the affinity for the irq_vector and use the CPU for that
affinity to determine the node value used for the q_vector and the rings.
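
A rough sketch of the allocation scheme this describes (the real
ixgbe_q_vector has many more fields; this assumes the driver's ixgbe.h for
the adapter and ring types):

#include <linux/netdevice.h>
#include <linux/slab.h>

struct q_vector_sketch {
        struct ixgbe_adapter *adapter;
        struct napi_struct napi;
        /* ... interrupt and ring-container bookkeeping ... */
        struct ixgbe_ring ring[0];      /* rings allocated in-line */
};

static struct q_vector_sketch *sketch_alloc_q_vector(int node, int ring_count)
{
        return kzalloc_node(sizeof(struct q_vector_sketch) +
                            ring_count * sizeof(struct ixgbe_ring),
                            GFP_KERNEL, node);
}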

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-12 20:52:48 -07:00
Alexander Duyck 8f15486dd0 ixgbe: Default to queue pairs when number of queues is less than CPUs
The old code had several errors in how it was determining the vector
budget.  In order to simplify things this patch updates the code so that it
will attempt to always allocate paired Rx/Tx vectors instead of attempting
to allocate individual vectors when the number of queues is less than the
number of CPUs.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-12 20:32:27 -07:00
Alexander Duyck 46646e61ea ixgbe: Reorder adapter contents for better cache utilization
This change moves several frequently accessed items together into one cache
line in order to reduce cache misses in the hot-path.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-03-12 20:27:24 -07:00
Alexander Duyck b2d96e0ac0 ixgbe: add support for byte queue limits
This adds support for byte queue limits (BQL).

Based on a patch from Eric Dumazet for igb.
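
The hooks involved are the generic BQL netdev calls; a minimal sketch of
where they go (the txq, total_packets, and total_bytes parameters stand in
for the driver's own bookkeeping):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void sketch_bql_xmit(struct netdev_queue *txq, struct sk_buff *skb)
{
        /* after the descriptors for this skb are posted to the ring */
        netdev_tx_sent_queue(txq, skb->len);
}

static void sketch_bql_clean(struct netdev_queue *txq,
                             unsigned int total_packets,
                             unsigned int total_bytes)
{
        /* after a batch of completed descriptors is reclaimed */
        netdev_tx_completed_queue(txq, total_packets, total_bytes);
}

static void sketch_bql_ring_reset(struct netdev_queue *txq)
{
        /* when the ring is torn down or reinitialized */
        netdev_tx_reset_queue(txq);
}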

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
2012-03-12 20:14:53 -07:00
Alexander Duyck 8a0da21be8 ixgbe: Combine post-DMA processing of sk_buff fields into single function
This change combines a number of post-DMA Rx packet processing functions
into a single function.  The advantage of this is that it combines most of
the Rx descriptor processing into one spot so it should all be warm in the
cache.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-02-10 15:56:31 -08:00
Alexander Duyck e4f740287f ixgbe: Drop the _ADV of descriptor macros since all ixgbe descriptors are ADV
It doesn't make much sense to differentiate between advanced and legacy
descriptors when the only descriptors that ixgbe uses are advanced
descriptors.  As such we can drop the _ADV suffix since all ixgbe
descriptors are automatically advanced.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-02-10 15:55:58 -08:00
Alexander Duyck f56e0cb1fe ixgbe: Add function for testing status bits in Rx descriptor
This change adds a small function for testing Rx status bits in the
descriptor.  The advantage to this is that we can avoid unnecessary
byte swaps on big endian systems.
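
The trick is to swap the constant at compile time rather than the
descriptor field at run time; a sketch of the helper's shape (assuming the
driver's descriptor types from ixgbe_type.h):

static inline bool sketch_test_staterr(union ixgbe_adv_rx_desc *rx_desc,
                                       const u32 stat_err_bits)
{
        /* cpu_to_le32() on a constant folds at compile time, so no byte
         * swap is performed on the descriptor itself
         */
        return !!(rx_desc->wb.upper.status_error &
                  cpu_to_le32(stat_err_bits));
}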

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-02-10 15:51:33 -08:00
Alexander Duyck 4c1975d77b ixgbe: Minor refactor of RSC
This change addresses several issues.

First, I had left the use of the next and prev skb pointers floating around
in the code.  They were overdue to be pulled since I had rewritten the RSC
code in the out-of-tree driver some time ago to address issues brought up
by David Miller with regard to this.

I am also now defaulting to always leaving the first buffer unmapped on any
packet and then unmapping it after we read the EOP descriptor.  This allows
a simplification of the path with less branching.

Instead of counting packets received the code was changed some time ago to
track the number of buffers received.  This leads to inaccurate counting
when you compare numbers of packets received by the hardware versus what is
tracked by the software.  To correct this I am revising things so that the
append_cnt value for RSC accurately tracks the number of frames received.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-02-10 15:42:09 -08:00
Don Skidmore 9497182051 ixgbe: update copyright to 2012
New year so bump the copyright date.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2012-02-03 03:05:30 -08:00
Neerav Parikh ea81875ae0 ixgbe: FCoE: Add support for ndo_get_fcoe_hbainfo() call
This patch implements support for the ndo_get_fcoe_hbainfo()
call in the ixgbe driver.

This function will be called by the FCoE protocol stack to
obtain device-specific information from the underlying
device configured to do FCoE.

Signed-off-by: Neerav Parikh <Neerav.Parikh@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-01-05 13:12:04 -05:00
Emil Tantilov 15e5209f1c ixgbe: change the eeprom version reported by ethtool
Use the 32-bit value starting at offset 0x2d for displaying the firmware
version in ethtool. This should work for all current ixgbe HW.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-10-17 17:04:30 -07:00
Greg Rose de4c7f653b ixgbe: Add new netdev op to turn spoof checking on or off per VF
Implements the new netdev op to allow user configuration of spoof
checking on a per-VF basis.

V2 - Change netdev spoof check op setting to bool
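
A minimal sketch of how the op is wired up (the handler name and ops table
below are illustrative, not necessarily the exact ixgbe code):

#include <linux/netdevice.h>

static int sketch_ndo_set_vf_spoofchk(struct net_device *netdev, int vf,
                                      bool setting)
{
        /* program the PF's per-VF anti-spoof checking here */
        return 0;
}

static const struct net_device_ops sketch_netdev_ops = {
        /* ... the existing ops ... */
        .ndo_set_vf_spoofchk = sketch_ndo_set_vf_spoofchk,
};

From userspace the setting can then be toggled with something like
"ip link set <dev> vf <n> spoofchk on|off" (command shown for
illustration).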

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-10-16 13:15:48 -07:00
Greg Rose 83c61fa97a ixgbe: Add protection from VF invalid target DMA
It is possible for a VF to set an invalid target DMA address in its
Tx/Rx descriptor buffer pointers.  The workarounds in this patch
will guard against such an event and issue a VFLR to the VF in response.
The VFLR will shut down the VF until an administrator can take action
to investigate the event and correct the problem.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-10-12 22:45:24 -07:00
Emil Tantilov d5bf4f67a6 ixgbe: Cleanup q_vector interrupt throttle rate logic
This patch is meant to help clean up the interrupt throttle rate logic by
storing the interrupt throttle rate as a value in microseconds instead of
interrupts per second.  The advantage to this approach is that the value
can now be stored in a 16-bit field and doesn't require as much math to
flip the value back and forth, since the hardware already uses microseconds
when setting the rate.
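
As a quick worked example of why microseconds fit comfortably in a 16-bit
field (illustrative helper, not the driver's code):

#include <linux/types.h>

static u16 sketch_itr_ints_to_usecs(u32 ints_per_sec)
{
        /* e.g. 8000 interrupts/s -> 125 us between interrupts */
        return 1000000U / ints_per_sec;
}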

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-09-28 23:08:23 -07:00
Emil Tantilov c23f5b6bbb ixgbe: add WOL support for X540
Add support for WOL as determined by the EEPROM.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-09-23 09:05:51 -07:00
Greg Rose c6bda30a06 ixgbe: Reconfigure SR-IOV Init
Use the PCI device flag indicating whether a VF is assigned to a guest VM
to guard against destroying VFs upon driver removal.  Implement an
additional feature to detect whether VFs already exist when the driver
is loaded and, if so, configure them and set the driver state to
SR-IOV enabled.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-09-23 09:05:49 -07:00
Alexander Duyck 919e78a6b8 ixgbe: Make better use of memory allocations in one-buffer mode w/ RSC
This patch improves the memory utilization with RSC when in one-buffer
mode.  This is accomplished by making the default buffer sizes match up
with the standard memory allocation sizes minus 1K for shared info and
padding overhead.  By doing this, CPU utilization during large receives
can be reduced by as much as 8%.
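
As a rough worked example of the sizing described above: out of a 4K
allocation, reserving roughly 1K for the shared info structure and padding
leaves about a 3K receive buffer, and the same pattern yields roughly 7K
and 15K buffers out of 8K and 16K allocations (numbers illustrate the
scheme rather than exact defines).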

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-09-16 19:00:11 -07:00
Alexander Duyck c7ccde0f83 ixgbe: make ixgbe_up and ixgbe_up_complete void functions
ixgbe_up and ixgbe_up_complete will always return 0.  Since this doesn't
provide any useful information we might as well just make them both void
and save ourselves from having to return an unused value.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-09-16 18:39:56 -07:00
Alexander Duyck 2c4af694fe ixgbe: Correctly name and handle MSI-X other interrupt
It was possible to inadvertently add additional interrupt causes to the
MSI-X other interrupt.  This occurred when things such as RX buffer overrun
events were being triggered at the same time as an event such as a Flow
Director table reinit request.  In order to avoid this we should be
explicitly programming only the interrupts that we want enabled.  In
addition I am renaming the ixgbe_msix_lsc function and interrupt to drop
any implied meaning of this being a link status only interrupt.

Unfortunately the patch is a bit ugly due to the fact that ixgbe_irq_enable
needed to be moved up before ixgbe_msix_other in order to have things
defined in the correct order.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-09-15 21:32:04 -07:00
Alexander Duyck 592245559e ixgbe: Change default Tx work limit size to 256 buffers
This change makes it so that the default Tx work limit is 256 buffers or
1/2 of an entire ring instead of a full ring size so that it is much more
likely that we will be able to actually reach the work limit value.
Previously with the value set to an entire ring it would not have been
possible for us to trigger an event due to the fact that the Tx work is
stopped at the point where we cannot place one more buffer on the ring and
it is not restarted until cleanup is complete.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-09-15 20:29:14 -07:00
Alexander Duyck 7f9643fd77 ixgbe: Add support for setting CC bit when SR-IOV is enabled
This change makes it so that the CC bit in the descriptor is set when
SR-IOV is enabled.  This is needed in order to support offloading
functionality when passing traffic over the internal TX switch.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-08-29 01:04:12 -07:00
Alexander Duyck efe3d3c8ee ixgbe: convert rings from q_vector bit indexed array to linked list
This change converts the current bit array into a linked list so that the
q_vectors can simply go through ring by ring and locate each ring needing
to be cleaned.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-08-27 00:00:10 -07:00
Alexander Duyck 66f32a8b97 ixgbe: Cleanup FCOE and VLAN handling in xmit_frame_ring
This change is meant to further clean up the transmit path by streamlining
some of the VLAN and FCOE/DCB tasks in the transmit path.  In addition it
adds code to support software VLANs in the event that they are used in
conjunction with DCB and/or FCOE.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-08-19 06:02:40 -07:00
Alexander Duyck d3d0023979 ixgbe: Refactor transmit map and cleanup routines
This patch implements a partial refactor of the TX map/queue and cleanup
routines.  It merges the map and queue functionality and as a result
improves the transmit performance by avoiding unnecessary reads from memory.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2011-08-19 05:57:43 -07:00