
net: ena: Add a driver for Amazon Elastic Network Adapters (ENA)

This is a driver for the ENA family of networking devices.

Signed-off-by: Netanel Belgazal <netanel@annapurnalabs.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Netanel Belgazal 2016-08-10 14:03:22 +03:00 committed by David S. Miller
parent 4330ea798f
commit 1738cd3ed3
20 changed files with 10858 additions and 0 deletions


@@ -74,6 +74,8 @@ dns_resolver.txt
- The DNS resolver module allows kernel services to make DNS queries.
driver.txt
- Softnet driver issues.
ena.txt
- info on Amazon's Elastic Network Adapter (ENA)
e100.txt
- info on Intel's EtherExpress PRO/100 line of 10/100 boards
e1000.txt


@@ -0,0 +1,305 @@
Linux kernel driver for Elastic Network Adapter (ENA) family:
=============================================================
Overview:
=========
ENA is a networking interface designed to make good use of modern CPU
features and system architectures.
The ENA device exposes a lightweight management interface with a
minimal set of memory mapped registers and extendable command set
through an Admin Queue.
The driver supports a range of ENA devices, is link-speed independent
(i.e., the same driver is used for 10GbE, 25GbE, 40GbE, etc.), and has
a negotiated and extendable feature set.
Some ENA devices support SR-IOV. This driver is used for both the
SR-IOV Physical Function (PF) and Virtual Function (VF) devices.
ENA devices enable high speed and low overhead network traffic
processing by providing multiple Tx/Rx queue pairs (the maximum number
is advertised by the device via the Admin Queue), a dedicated MSI-X
interrupt vector per Tx/Rx queue pair, adaptive interrupt moderation,
and CPU cacheline optimized data placement.
The ENA driver supports industry standard TCP/IP offload features such
as checksum offload and TCP transmit segmentation offload (TSO).
Receive-side scaling (RSS) is supported for multi-core scaling.
The ENA driver and its corresponding devices implement health
monitoring mechanisms, such as a watchdog and debug logs, that enable
the device and driver to recover in a manner transparent to the
application.
Some of the ENA devices support a working mode called Low-latency
Queue (LLQ), which saves several more microseconds.
Supported PCI vendor ID/device IDs:
===================================
1d0f:0ec2 - ENA PF
1d0f:1ec2 - ENA PF with LLQ support
1d0f:ec20 - ENA VF
1d0f:ec21 - ENA VF with LLQ support
ENA Source Code Directory Structure:
====================================
ena_com.[ch] - Management communication layer. This layer is
responsible for handling all the management
(admin) communication between the device and the
driver.
ena_eth_com.[ch] - Tx/Rx data path.
ena_admin_defs.h - Definition of ENA management interface.
ena_eth_io_defs.h - Definition of ENA data path interface.
ena_common_defs.h - Common definitions for ena_com layer.
ena_regs_defs.h - Definition of ENA PCI memory-mapped (MMIO) registers.
ena_netdev.[ch] - Main Linux kernel driver.
ena_sysfs.[ch] - Sysfs files.
ena_ethtool.c - ethtool callbacks.
ena_pci_id_tbl.h - Supported device IDs.
Management Interface:
=====================
ENA management interface is exposed by means of:
- PCIe Configuration Space
- Device Registers
- Admin Queue (AQ) and Admin Completion Queue (ACQ)
- Asynchronous Event Notification Queue (AENQ)
ENA device MMIO Registers are accessed only during driver
initialization and are not involved in further normal device
operation.
AQ is used for submitting management commands, and the
results/responses are reported asynchronously through ACQ.
ENA introduces a very small set of management commands with room for
vendor-specific extensions. Most of the management operations are
framed in a generic Get/Set feature command.
The following admin queue commands are supported:
- Create I/O submission queue
- Create I/O completion queue
- Destroy I/O submission queue
- Destroy I/O completion queue
- Get feature
- Set feature
- Configure AENQ
- Get statistics
Refer to ena_admin_defs.h for the list of supported Get/Set Feature
properties.
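For instance, a Get Feature request for the MTU property could be
framed as sketched below, using the structures defined in
ena_admin_defs.h. This is illustrative only; the real driver submits
commands through its admin queue logic in ena_com.c:

	struct ena_admin_get_feat_cmd cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.aq_common_descriptor.opcode = ENA_ADMIN_GET_FEATURE;
	cmd.feat_common.feature_id = ENA_ADMIN_MTU;
	/* post the command on the AQ; the result arrives asynchronously
	 * on the ACQ as a struct ena_admin_get_feat_resp
	 */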
The Asynchronous Event Notification Queue (AENQ) is a uni-directional
queue used by the ENA device to send events to the driver that cannot
be reported using the ACQ. AENQ events are subdivided into groups. Each
group may have multiple syndromes, as shown below.
The events are:
	Group			Syndrome
	Link state change	- X -
	Fatal error		- X -
	Notification		Suspend traffic
	Notification		Resume traffic
	Keep-Alive		- X -
ACQ and AENQ share the same MSI-X vector.
Keep-Alive is a special mechanism that allows monitoring of the
device's health. The driver maintains a watchdog (WD) handler which,
if fired, logs the current state and statistics then resets and
restarts the ENA device and driver. A Keep-Alive event is delivered by
the device every second. The driver re-arms the WD upon reception of a
Keep-Alive event. A missed Keep-Alive event causes the WD handler to
fire.
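A minimal sketch of this pattern follows; the names and the timeout
value below are illustrative, not the driver's actual ones:

	static unsigned long keep_alive_deadline;

	/* AENQ Keep-Alive handler: the device sends this every second */
	static void keep_alive_event(void)
	{
		keep_alive_deadline = jiffies + 3 * HZ;	/* assumed timeout */
	}

	/* periodic watchdog: fires only if Keep-Alive events stop */
	static void watchdog_check(void)
	{
		if (time_after(jiffies, keep_alive_deadline)) {
			/* log state and statistics, then reset and
			 * restart the ENA device and driver
			 */
		}
	}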
Data Path Interface:
====================
I/O operations are based on Tx and Rx Submission Queues (Tx SQ and Rx
SQ, respectively). Each SQ has a completion queue (CQ) associated with
it.
The SQs and CQs are implemented as descriptor rings in contiguous
physical memory.
The ENA driver supports two Queue Operation modes for Tx SQs:
- Regular mode
* In this mode the Tx SQs reside in the host's memory. The ENA
device fetches the ENA Tx descriptors and packet data from host
memory.
- Low Latency Queue (LLQ) mode or "push-mode".
* In this mode the driver pushes the transmit descriptors and the
first 128 bytes of the packet directly to the ENA device memory
space. The rest of the packet payload is fetched by the
device. For this operation mode, the driver uses a dedicated PCI
device memory BAR, which is mapped with write-combine capability.
The Rx SQs support only the regular mode.
Note: Not all ENA devices support LLQ, and this feature is negotiated
with the device upon initialization. If the ENA device does not
support LLQ mode, the driver falls back to the regular mode.
The driver supports multi-queue for both Tx and Rx. This has various
benefits:
- Reduced CPU/thread/process contention on a given Ethernet interface.
- Cache miss rate on completion is reduced, particularly for data
cache lines that hold the sk_buff structures.
- Increased process-level parallelism when handling received packets.
- Increased data cache hit rate, by steering kernel processing of
packets to the CPU, where the application thread consuming the
packet is running.
- In-hardware interrupt re-direction.
Interrupt Modes:
================
The driver assigns a single MSI-X vector per queue pair (for both Tx
and Rx directions). The driver assigns an additional dedicated MSI-X vector
for management (for ACQ and AENQ).
Management interrupt registration is performed when the Linux kernel
probes the adapter, and it is de-registered when the adapter is
removed. I/O queue interrupt registration is performed when the Linux
interface of the adapter is opened, and it is de-registered when the
interface is closed.
The management interrupt is named:
ena-mgmnt@pci:<PCI domain:bus:slot.function>
and for each queue pair, an interrupt is named:
<interface name>-Tx-Rx-<queue index>
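For example, for a device at PCI address 0000:00:05.0 whose interface
is named eth0 (both assumed here), the interrupts would be named:

	ena-mgmnt@pci:0000:00:05.0
	eth0-Tx-Rx-0
	eth0-Tx-Rx-1
	...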
The ENA device operates in auto-mask and auto-clear interrupt
modes. That is, once an MSI-X interrupt is delivered to the host, its
Cause bit is automatically cleared and the interrupt is masked. The
interrupt is unmasked by the driver after NAPI processing is complete.
Interrupt Moderation:
=====================
The ENA driver and device can operate in conventional or adaptive
interrupt moderation mode.
In conventional mode, the driver instructs the device to postpone
interrupt posting according to a static interrupt delay value. The
interrupt delay value can be configured through ethtool(8). The
following ethtool parameters are supported by the driver: tx-usecs,
rx-usecs.
In adaptive interrupt moderation mode, the interrupt delay value is
updated by the driver dynamically and adjusted every NAPI cycle
according to the traffic nature.
By default, the ENA driver applies adaptive coalescing to Rx traffic
and conventional coalescing to Tx traffic.
Adaptive coalescing can be switched on/off through the ethtool(8)
adaptive_rx on|off parameter.
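For example (assuming the interface is named eth0):

	ethtool -C eth0 rx-usecs 64	# static Rx interrupt delay
	ethtool -C eth0 adaptive-rx on	# enable adaptive Rx moderation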
The driver chooses the interrupt delay value according to the number of
bytes and packets received between interrupt unmasking and interrupt
posting. The driver uses an interrupt delay table that subdivides the
range of received bytes/packets into 5 levels and assigns an interrupt
delay value to each level.
The user can enable/disable adaptive moderation, modify the interrupt
delay table and restore its default values through sysfs.
The rx_copybreak is initialized by default to ENA_DEFAULT_RX_COPYBREAK
and can be configured by the ETHTOOL_STUNABLE command of the
SIOCETHTOOL ioctl.
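For example, assuming the interface is named eth0, the tunable can be
set from user space with ethtool(8):

	ethtool --set-tunable eth0 rx-copybreak 256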
SKB:
The driver allocates an SKB for each frame received in the Rx handler,
in NAPI context. The allocation method depends on the size of the
packet. If the frame length is larger than rx_copybreak,
napi_get_frags() is used; otherwise netdev_alloc_skb_ip_align() is
used, the buffer content is copied (by the CPU) to the SKB, and the
buffer is recycled.
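A simplified sketch of that decision (illustrative, not the driver's
exact code; len, rx_copybreak, napi and netdev are assumed to be in
scope):

	struct sk_buff *skb;

	if (len > rx_copybreak) {
		skb = napi_get_frags(napi);
		/* unmap the Rx buffer and attach it to the SKB frags */
	} else {
		skb = netdev_alloc_skb_ip_align(netdev, len);
		/* copy the payload into the SKB and recycle the buffer */
	}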
Statistics:
===========
The user can obtain ENA device and driver statistics using ethtool.
The driver can collect regular or extended statistics (including
per-queue stats) from the device.
In addition, the driver logs the stats to syslog upon device reset.
MTU:
====
The driver supports an arbitrarily large MTU with a maximum that is
negotiated with the device. The driver configures MTU using the
SetFeature command (ENA_ADMIN_MTU property). The user can change MTU
via ip(8) and similar legacy tools.
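For example (assuming the interface is named eth0 and the device
advertises a sufficiently large maximum MTU):

	ip link set dev eth0 mtu 9000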
Stateless Offloads:
===================
The ENA driver supports:
- TSO over IPv4/IPv6
- TSO with ECN
- IPv4 header checksum offload
- TCP/UDP over IPv4/IPv6 checksum offloads
RSS:
====
- The ENA device supports RSS that allows flexible Rx traffic
steering.
- Toeplitz and CRC32 hash functions are supported.
- Different combinations of L2/L3/L4 fields can be configured as
inputs for hash functions.
- The driver configures RSS settings using the AQ SetFeature command
(ENA_ADMIN_RSS_HASH_FUNCTION, ENA_ADMIN_RSS_HASH_INPUT and
ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG properties).
- If the NETIF_F_RXHASH flag is set, the 32-bit result of the hash
function delivered in the Rx CQ descriptor is set in the received
SKB.
- The user can provide a hash key, hash function, and configure the
indirection table through ethtool(8).
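For example (assuming the interface is named eth0):

	ethtool -x eth0		# show the current key and indirection table
	ethtool -X eth0 equal 8	# spread flows over the first 8 queues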
DATA PATH:
==========
Tx:
---
ena_start_xmit() is called by the stack. This function does the following:
- Maps data buffers (skb->data and frags).
- Populates ena_buf for the push buffer (if the driver and device are
in push mode.)
- Prepares ENA bufs for the remaining frags.
- Allocates a new request ID from the empty req_id ring. The request
ID is the index of the packet in the Tx info. This is used for
out-of-order TX completions.
- Adds the packet to the proper place in the Tx ring.
- Calls ena_com_prepare_tx(), an ENA communication layer function that converts
the ena_bufs to ENA descriptors (and adds meta ENA descriptors as
needed.)
* This function also copies the ENA descriptors and the push buffer
to the Device memory space (if in push mode.)
- Writes doorbell to the ENA device.
- When the ENA device finishes sending the packet, a completion
interrupt is raised.
- The interrupt handler schedules NAPI.
- The ena_clean_tx_irq() function is called. This function handles the
completion descriptors generated by the ENA, with a single
completion descriptor per completed packet.
* req_id is retrieved from the completion descriptor. The tx_info of
the packet is retrieved via the req_id. The data buffers are
unmapped and req_id is returned to the empty req_id ring.
* The function stops when the completion descriptors are completed or
the budget is reached.
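A minimal sketch of the request-ID recycling described in the Tx steps
above; the struct and names here are simplified stand-ins, not the
driver's actual data structures:

	struct tx_ring {
		struct { struct sk_buff *skb; } *tx_info;
		u16 *free_ids;			/* ring of unused req_ids */
		u16 next_alloc, next_free, mask;	/* mask = size - 1 */
	};

	static u16 tx_alloc_req_id(struct tx_ring *r, struct sk_buff *skb)
	{
		u16 req_id = r->free_ids[r->next_alloc++ & r->mask];

		r->tx_info[req_id].skb = skb;	/* state indexed by req_id */
		return req_id;
	}

	static void tx_complete(struct tx_ring *r, u16 req_id)
	{
		/* completions may arrive out of order; req_id finds the skb */
		dev_kfree_skb(r->tx_info[req_id].skb);
		r->free_ids[r->next_free++ & r->mask] = req_id;	/* recycle */
	}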
Rx:
---
- When a packet is received from the ENA device, a completion interrupt
  is raised.
- The interrupt handler schedules NAPI.
- The ena_clean_rx_irq() function is called. This function calls
ena_rx_pkt(), an ENA communication layer function, which returns the
number of descriptors used for a new unhandled packet, and zero if
no new packet is found.
- Then it calls the ena_rx_skb() function.
- ena_rx_skb() checks packet length:
* If the packet is small (len < rx_copybreak), the driver allocates
a SKB for the new packet, and copies the packet payload into the
SKB data buffer.
- In this way the original data buffer is not passed to the stack
and is reused for future Rx packets.
* Otherwise the function unmaps the Rx buffer, then allocates the
new SKB structure and hooks the Rx buffer to the SKB frags.
- The new SKB is updated with the necessary information (protocol,
checksum hw verify result, etc.), and then passed to the network
stack, using the NAPI interface function napi_gro_receive().


@@ -636,6 +636,15 @@ F: drivers/tty/serial/altera_jtaguart.c
F: include/linux/altera_uart.h
F: include/linux/altera_jtaguart.h
AMAZON ETHERNET DRIVERS
M: Netanel Belgazal <netanel@annapurnalabs.com>
R: Saeed Bishara <saeed@annapurnalabs.com>
R: Zorik Machulsky <zorik@annapurnalabs.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/ena.txt
F: drivers/net/ethernet/amazon/
AMD CRYPTOGRAPHIC COPROCESSOR (CCP) DRIVER
M: Tom Lendacky <thomas.lendacky@amd.com>
M: Gary Hook <gary.hook@amd.com>


@@ -24,6 +24,7 @@ source "drivers/net/ethernet/agere/Kconfig"
source "drivers/net/ethernet/allwinner/Kconfig"
source "drivers/net/ethernet/alteon/Kconfig"
source "drivers/net/ethernet/altera/Kconfig"
source "drivers/net/ethernet/amazon/Kconfig"
source "drivers/net/ethernet/amd/Kconfig"
source "drivers/net/ethernet/apm/Kconfig"
source "drivers/net/ethernet/apple/Kconfig"


@@ -10,6 +10,7 @@ obj-$(CONFIG_NET_VENDOR_AGERE) += agere/
obj-$(CONFIG_NET_VENDOR_ALLWINNER) += allwinner/
obj-$(CONFIG_NET_VENDOR_ALTEON) += alteon/
obj-$(CONFIG_ALTERA_TSE) += altera/
obj-$(CONFIG_NET_VENDOR_AMAZON) += amazon/
obj-$(CONFIG_NET_VENDOR_AMD) += amd/
obj-$(CONFIG_NET_XGENE) += apm/
obj-$(CONFIG_NET_VENDOR_APPLE) += apple/


@@ -0,0 +1,27 @@
#
# Amazon network device configuration
#
config NET_VENDOR_AMAZON
bool "Amazon Devices"
default y
---help---
If you have a network (Ethernet) device belonging to this class, say Y.
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about Amazon devices. If you say Y, you will be asked
for your specific device in the following questions.
if NET_VENDOR_AMAZON
config ENA_ETHERNET
tristate "Elastic Network Adapter (ENA) support"
depends on (PCI_MSI && X86)
---help---
This driver supports the Elastic Network Adapter (ENA).
To compile this driver as a module, choose M here.
The module will be called ena.
endif #NET_VENDOR_AMAZON


@@ -0,0 +1,5 @@
#
# Makefile for the Amazon network device drivers.
#
obj-$(CONFIG_ENA_ETHERNET) += ena/


@@ -0,0 +1,7 @@
#
# Makefile for the Elastic Network Adapter (ENA) device drivers.
#
obj-$(CONFIG_ENA_ETHERNET) += ena.o
ena-y := ena_netdev.o ena_com.o ena_eth_com.o ena_ethtool.o


@@ -0,0 +1,973 @@
/*
* Copyright 2015 - 2016 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _ENA_ADMIN_H_
#define _ENA_ADMIN_H_
enum ena_admin_aq_opcode {
ENA_ADMIN_CREATE_SQ = 1,
ENA_ADMIN_DESTROY_SQ = 2,
ENA_ADMIN_CREATE_CQ = 3,
ENA_ADMIN_DESTROY_CQ = 4,
ENA_ADMIN_GET_FEATURE = 8,
ENA_ADMIN_SET_FEATURE = 9,
ENA_ADMIN_GET_STATS = 11,
};
enum ena_admin_aq_completion_status {
ENA_ADMIN_SUCCESS = 0,
ENA_ADMIN_RESOURCE_ALLOCATION_FAILURE = 1,
ENA_ADMIN_BAD_OPCODE = 2,
ENA_ADMIN_UNSUPPORTED_OPCODE = 3,
ENA_ADMIN_MALFORMED_REQUEST = 4,
/* Additional status is provided in ACQ entry extended_status */
ENA_ADMIN_ILLEGAL_PARAMETER = 5,
ENA_ADMIN_UNKNOWN_ERROR = 6,
};
enum ena_admin_aq_feature_id {
ENA_ADMIN_DEVICE_ATTRIBUTES = 1,
ENA_ADMIN_MAX_QUEUES_NUM = 2,
ENA_ADMIN_RSS_HASH_FUNCTION = 10,
ENA_ADMIN_STATELESS_OFFLOAD_CONFIG = 11,
ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG = 12,
ENA_ADMIN_MTU = 14,
ENA_ADMIN_RSS_HASH_INPUT = 18,
ENA_ADMIN_INTERRUPT_MODERATION = 20,
ENA_ADMIN_AENQ_CONFIG = 26,
ENA_ADMIN_LINK_CONFIG = 27,
ENA_ADMIN_HOST_ATTR_CONFIG = 28,
ENA_ADMIN_FEATURES_OPCODE_NUM = 32,
};
enum ena_admin_placement_policy_type {
/* descriptors and headers are in host memory */
ENA_ADMIN_PLACEMENT_POLICY_HOST = 1,
/* descriptors and headers are in device memory (a.k.a Low Latency
* Queue)
*/
ENA_ADMIN_PLACEMENT_POLICY_DEV = 3,
};
enum ena_admin_link_types {
ENA_ADMIN_LINK_SPEED_1G = 0x1,
ENA_ADMIN_LINK_SPEED_2_HALF_G = 0x2,
ENA_ADMIN_LINK_SPEED_5G = 0x4,
ENA_ADMIN_LINK_SPEED_10G = 0x8,
ENA_ADMIN_LINK_SPEED_25G = 0x10,
ENA_ADMIN_LINK_SPEED_40G = 0x20,
ENA_ADMIN_LINK_SPEED_50G = 0x40,
ENA_ADMIN_LINK_SPEED_100G = 0x80,
ENA_ADMIN_LINK_SPEED_200G = 0x100,
ENA_ADMIN_LINK_SPEED_400G = 0x200,
};
enum ena_admin_completion_policy_type {
/* completion queue entry for each sq descriptor */
ENA_ADMIN_COMPLETION_POLICY_DESC = 0,
/* completion queue entry upon request in sq descriptor */
ENA_ADMIN_COMPLETION_POLICY_DESC_ON_DEMAND = 1,
/* current queue head pointer is updated in OS memory upon sq
* descriptor request
*/
ENA_ADMIN_COMPLETION_POLICY_HEAD_ON_DEMAND = 2,
/* current queue head pointer is updated in OS memory for each sq
* descriptor
*/
ENA_ADMIN_COMPLETION_POLICY_HEAD = 3,
};
/* basic stats return ena_admin_basic_stats while extended stats return a
 * buffer (string format) with additional statistics per queue and per
 * device id
 */
enum ena_admin_get_stats_type {
ENA_ADMIN_GET_STATS_TYPE_BASIC = 0,
ENA_ADMIN_GET_STATS_TYPE_EXTENDED = 1,
};
enum ena_admin_get_stats_scope {
ENA_ADMIN_SPECIFIC_QUEUE = 0,
ENA_ADMIN_ETH_TRAFFIC = 1,
};
struct ena_admin_aq_common_desc {
/* 11:0 : command_id
* 15:12 : reserved12
*/
u16 command_id;
/* as appears in ena_admin_aq_opcode */
u8 opcode;
/* 0 : phase
* 1 : ctrl_data - control buffer address valid
* 2 : ctrl_data_indirect - control buffer address
* points to list of pages with addresses of control
* buffers
* 7:3 : reserved3
*/
u8 flags;
};
/* used in ena_admin_aq_entry. Can point directly to control data, or to a
* page list chunk. Used also at the end of indirect mode page list chunks,
* for chaining.
*/
struct ena_admin_ctrl_buff_info {
u32 length;
struct ena_common_mem_addr address;
};
struct ena_admin_sq {
u16 sq_idx;
/* 4:0 : reserved
* 7:5 : sq_direction - 0x1 - Tx; 0x2 - Rx
*/
u8 sq_identity;
u8 reserved1;
};
struct ena_admin_aq_entry {
struct ena_admin_aq_common_desc aq_common_descriptor;
union {
u32 inline_data_w1[3];
struct ena_admin_ctrl_buff_info control_buffer;
} u;
u32 inline_data_w4[12];
};
struct ena_admin_acq_common_desc {
/* command identifier to associate it with the aq descriptor
* 11:0 : command_id
* 15:12 : reserved12
*/
u16 command;
u8 status;
/* 0 : phase
* 7:1 : reserved1
*/
u8 flags;
u16 extended_status;
/* serves as a hint for which AQ entries can be revoked */
u16 sq_head_indx;
};
struct ena_admin_acq_entry {
struct ena_admin_acq_common_desc acq_common_descriptor;
u32 response_specific_data[14];
};
struct ena_admin_aq_create_sq_cmd {
struct ena_admin_aq_common_desc aq_common_descriptor;
/* 4:0 : reserved0_w1
* 7:5 : sq_direction - 0x1 - Tx, 0x2 - Rx
*/
u8 sq_identity;
u8 reserved8_w1;
/* 3:0 : placement_policy - Describes where the SQ
 * descriptor ring and the SQ packet headers reside:
 * 0x1 - descriptors and headers are in OS memory,
 * 0x3 - descriptors and headers are in device memory
 * (a.k.a. Low Latency Queue)
 * 6:4 : completion_policy - Describes which policy
 * to use for generating a completion entry (cqe) in
 * the CQ associated with this SQ: 0x0 - cqe for each
 * sq descriptor, 0x1 - cqe upon request in sq
 * descriptor, 0x2 - current queue head pointer is
 * updated in OS memory upon sq descriptor request,
 * 0x3 - current queue head pointer is updated in OS
 * memory for each sq descriptor
 * 7 : reserved15_w1
 */
u8 sq_caps_2;
/* 0 : is_physically_contiguous - Describes whether
 * the queue ring memory is allocated in physically
 * contiguous pages or split.
* 7:1 : reserved17_w1
*/
u8 sq_caps_3;
/* associated completion queue id. This CQ must be created prior to
* SQ creation
*/
u16 cq_idx;
/* submission queue depth in entries */
u16 sq_depth;
/* SQ physical base address in OS memory. This field should not be
* used for Low Latency queues. Has to be page aligned.
*/
struct ena_common_mem_addr sq_ba;
/* specifies queue head writeback location in OS memory. Valid if
* completion_policy is set to completion_policy_head_on_demand or
* completion_policy_head. Has to be cache aligned
*/
struct ena_common_mem_addr sq_head_writeback;
u32 reserved0_w7;
u32 reserved0_w8;
};
enum ena_admin_sq_direction {
ENA_ADMIN_SQ_DIRECTION_TX = 1,
ENA_ADMIN_SQ_DIRECTION_RX = 2,
};
struct ena_admin_acq_create_sq_resp_desc {
struct ena_admin_acq_common_desc acq_common_desc;
u16 sq_idx;
u16 reserved;
/* queue doorbell address as an offset to PCIe MMIO REG BAR */
u32 sq_doorbell_offset;
/* low latency queue ring base address as an offset to PCIe MMIO
* LLQ_MEM BAR
*/
u32 llq_descriptors_offset;
/* low latency queue headers' memory as an offset to PCIe MMIO
* LLQ_MEM BAR
*/
u32 llq_headers_offset;
};
struct ena_admin_aq_destroy_sq_cmd {
struct ena_admin_aq_common_desc aq_common_descriptor;
struct ena_admin_sq sq;
};
struct ena_admin_acq_destroy_sq_resp_desc {
struct ena_admin_acq_common_desc acq_common_desc;
};
struct ena_admin_aq_create_cq_cmd {
struct ena_admin_aq_common_desc aq_common_descriptor;
/* 4:0 : reserved5
* 5 : interrupt_mode_enabled - if set, cq operates
* in interrupt mode, otherwise - polling
* 7:6 : reserved6
*/
u8 cq_caps_1;
/* 4:0 : cq_entry_size_words - size of CQ entry in
* 32-bit words, valid values: 4, 8.
* 7:5 : reserved7
*/
u8 cq_caps_2;
/* completion queue depth in # of entries. must be power of 2 */
u16 cq_depth;
/* msix vector assigned to this cq */
u32 msix_vector;
/* cq physical base address in OS memory. CQ must be physically
* contiguous
*/
struct ena_common_mem_addr cq_ba;
};
struct ena_admin_acq_create_cq_resp_desc {
struct ena_admin_acq_common_desc acq_common_desc;
u16 cq_idx;
/* actual cq depth in number of entries */
u16 cq_actual_depth;
u32 numa_node_register_offset;
u32 cq_head_db_register_offset;
u32 cq_interrupt_unmask_register_offset;
};
struct ena_admin_aq_destroy_cq_cmd {
struct ena_admin_aq_common_desc aq_common_descriptor;
u16 cq_idx;
u16 reserved1;
};
struct ena_admin_acq_destroy_cq_resp_desc {
struct ena_admin_acq_common_desc acq_common_desc;
};
/* ENA AQ Get Statistics command. Extended statistics are placed in control
* buffer pointed by AQ entry
*/
struct ena_admin_aq_get_stats_cmd {
struct ena_admin_aq_common_desc aq_common_descriptor;
union {
/* command specific inline data */
u32 inline_data_w1[3];
struct ena_admin_ctrl_buff_info control_buffer;
} u;
/* stats type as defined in enum ena_admin_get_stats_type */
u8 type;
/* stats scope defined in enum ena_admin_get_stats_scope */
u8 scope;
u16 reserved3;
/* queue id. used when scope is specific_queue */
u16 queue_idx;
/* device id, value 0xFFFF means the device itself. Only a privileged
 * device can get the stats of another device
 */
u16 device_id;
};
/* Basic Statistics Command. */
struct ena_admin_basic_stats {
u32 tx_bytes_low;
u32 tx_bytes_high;
u32 tx_pkts_low;
u32 tx_pkts_high;
u32 rx_bytes_low;
u32 rx_bytes_high;
u32 rx_pkts_low;
u32 rx_pkts_high;
u32 rx_drops_low;
u32 rx_drops_high;
};
struct ena_admin_acq_get_stats_resp {
struct ena_admin_acq_common_desc acq_common_desc;
struct ena_admin_basic_stats basic_stats;
};
struct ena_admin_get_set_feature_common_desc {
/* 1:0 : select - 0x1 - current value; 0x3 - default
* value
* 7:3 : reserved3
*/
u8 flags;
/* as appears in ena_admin_aq_feature_id */
u8 feature_id;
u16 reserved16;
};
struct ena_admin_device_attr_feature_desc {
u32 impl_id;
u32 device_version;
/* bitmap of ena_admin_aq_feature_id */
u32 supported_features;
u32 reserved3;
/* Indicates how many bits are used for physical address access. */
u32 phys_addr_width;
/* Indicates how many bits are used for virtual address access. */
u32 virt_addr_width;
/* unicast MAC address (in Network byte order) */
u8 mac_addr[6];
u8 reserved7[2];
u32 max_mtu;
};
struct ena_admin_queue_feature_desc {
/* including LLQs */
u32 max_sq_num;
u32 max_sq_depth;
u32 max_cq_num;
u32 max_cq_depth;
u32 max_llq_num;
u32 max_llq_depth;
u32 max_header_size;
/* Maximum Descriptors number, including meta descriptor, allowed for
* a single Tx packet
*/
u16 max_packet_tx_descs;
/* Maximum Descriptors number allowed for a single Rx packet */
u16 max_packet_rx_descs;
};
struct ena_admin_set_feature_mtu_desc {
/* exclude L2 */
u32 mtu;
};
struct ena_admin_set_feature_host_attr_desc {
/* host OS info base address in OS memory. host info is 4KB of
 * physically contiguous memory
 */
struct ena_common_mem_addr os_info_ba;
/* host debug area base address in OS memory. debug area must be
* physically contiguous
*/
struct ena_common_mem_addr debug_ba;
/* debug area size */
u32 debug_area_size;
};
struct ena_admin_feature_intr_moder_desc {
/* interrupt delay granularity in usec */
u16 intr_delay_resolution;
u16 reserved;
};
struct ena_admin_get_feature_link_desc {
/* Link speed in Mb */
u32 speed;
/* bit field of enum ena_admin_link types */
u32 supported;
/* 0 : autoneg
* 1 : duplex - Full Duplex
* 31:2 : reserved2
*/
u32 flags;
};
struct ena_admin_feature_aenq_desc {
/* bitmask for AENQ groups the device can report */
u32 supported_groups;
/* bitmask for AENQ groups to report */
u32 enabled_groups;
};
struct ena_admin_feature_offload_desc {
/* 0 : TX_L3_csum_ipv4
* 1 : TX_L4_ipv4_csum_part - The checksum field
* should be initialized with pseudo header checksum
* 2 : TX_L4_ipv4_csum_full
* 3 : TX_L4_ipv6_csum_part - The checksum field
* should be initialized with pseudo header checksum
* 4 : TX_L4_ipv6_csum_full
* 5 : tso_ipv4
* 6 : tso_ipv6
* 7 : tso_ecn
*/
u32 tx;
/* Receive side supported stateless offload
* 0 : RX_L3_csum_ipv4 - IPv4 checksum
* 1 : RX_L4_ipv4_csum - TCP/UDP/IPv4 checksum
* 2 : RX_L4_ipv6_csum - TCP/UDP/IPv6 checksum
* 3 : RX_hash - Hash calculation
*/
u32 rx_supported;
u32 rx_enabled;
};
enum ena_admin_hash_functions {
ENA_ADMIN_TOEPLITZ = 1,
ENA_ADMIN_CRC32 = 2,
};
struct ena_admin_feature_rss_flow_hash_control {
u32 keys_num;
u32 reserved;
u32 key[10];
};
struct ena_admin_feature_rss_flow_hash_function {
/* 7:0 : funcs - bitmask of ena_admin_hash_functions */
u32 supported_func;
/* 7:0 : selected_func - bitmask of
* ena_admin_hash_functions
*/
u32 selected_func;
/* initial value */
u32 init_val;
};
/* RSS flow hash protocols */
enum ena_admin_flow_hash_proto {
ENA_ADMIN_RSS_TCP4 = 0,
ENA_ADMIN_RSS_UDP4 = 1,
ENA_ADMIN_RSS_TCP6 = 2,
ENA_ADMIN_RSS_UDP6 = 3,
ENA_ADMIN_RSS_IP4 = 4,
ENA_ADMIN_RSS_IP6 = 5,
ENA_ADMIN_RSS_IP4_FRAG = 6,
ENA_ADMIN_RSS_NOT_IP = 7,
ENA_ADMIN_RSS_PROTO_NUM = 16,
};
/* RSS flow hash fields */
enum ena_admin_flow_hash_fields {
/* Ethernet Dest Addr */
ENA_ADMIN_RSS_L2_DA = 0,
/* Ethernet Src Addr */
ENA_ADMIN_RSS_L2_SA = 1,
/* ipv4/6 Dest Addr */
ENA_ADMIN_RSS_L3_DA = 2,
/* ipv4/6 Src Addr */
ENA_ADMIN_RSS_L3_SA = 5,
/* tcp/udp Dest Port */
ENA_ADMIN_RSS_L4_DP = 6,
/* tcp/udp Src Port */
ENA_ADMIN_RSS_L4_SP = 7,
};
struct ena_admin_proto_input {
/* flow hash fields (bitwise according to ena_admin_flow_hash_fields) */
u16 fields;
u16 reserved2;
};
struct ena_admin_feature_rss_hash_control {
struct ena_admin_proto_input supported_fields[ENA_ADMIN_RSS_PROTO_NUM];
struct ena_admin_proto_input selected_fields[ENA_ADMIN_RSS_PROTO_NUM];
struct ena_admin_proto_input reserved2[ENA_ADMIN_RSS_PROTO_NUM];
struct ena_admin_proto_input reserved3[ENA_ADMIN_RSS_PROTO_NUM];
};
struct ena_admin_feature_rss_flow_hash_input {
/* supported hash input sorting
 * 1 : L3_sort - support swapping L3 addresses if DA
 * is smaller than SA
 * 2 : L4_sort - support swapping L4 ports if DP is
 * smaller than SP
*/
u16 supported_input_sort;
/* enabled hash input sorting
 * 1 : enable_L3_sort - enable swapping L3 addresses
 * if DA is smaller than SA
 * 2 : enable_L4_sort - enable swapping L4 ports if
 * DP is smaller than SP
*/
u16 enabled_input_sort;
};
enum ena_admin_os_type {
ENA_ADMIN_OS_LINUX = 1,
ENA_ADMIN_OS_WIN = 2,
ENA_ADMIN_OS_DPDK = 3,
ENA_ADMIN_OS_FREEBSD = 4,
ENA_ADMIN_OS_IPXE = 5,
};
struct ena_admin_host_info {
/* defined in enum ena_admin_os_type */
u32 os_type;
/* os distribution string format */
u8 os_dist_str[128];
/* OS distribution numeric format */
u32 os_dist;
/* kernel version string format */
u8 kernel_ver_str[32];
/* Kernel version numeric format */
u32 kernel_ver;
/* 7:0 : major
* 15:8 : minor
* 23:16 : sub_minor
*/
u32 driver_version;
/* features bitmap */
u32 supported_network_features[4];
};
struct ena_admin_rss_ind_table_entry {
u16 cq_idx;
u16 reserved;
};
struct ena_admin_feature_rss_ind_table {
/* min supported table size (2^min_size) */
u16 min_size;
/* max supported table size (2^max_size) */
u16 max_size;
/* table size (2^size) */
u16 size;
u16 reserved;
/* index of the inline entry. 0xFFFFFFFF means invalid */
u32 inline_index;
/* used for updating single entry, ignored when setting the entire
* table through the control buffer.
*/
struct ena_admin_rss_ind_table_entry inline_entry;
};
struct ena_admin_get_feat_cmd {
struct ena_admin_aq_common_desc aq_common_descriptor;
struct ena_admin_ctrl_buff_info control_buffer;
struct ena_admin_get_set_feature_common_desc feat_common;
u32 raw[11];
};
struct ena_admin_get_feat_resp {
struct ena_admin_acq_common_desc acq_common_desc;
union {
u32 raw[14];
struct ena_admin_device_attr_feature_desc dev_attr;
struct ena_admin_queue_feature_desc max_queue;
struct ena_admin_feature_aenq_desc aenq;
struct ena_admin_get_feature_link_desc link;
struct ena_admin_feature_offload_desc offload;
struct ena_admin_feature_rss_flow_hash_function flow_hash_func;
struct ena_admin_feature_rss_flow_hash_input flow_hash_input;
struct ena_admin_feature_rss_ind_table ind_table;
struct ena_admin_feature_intr_moder_desc intr_moderation;
} u;
};
struct ena_admin_set_feat_cmd {
struct ena_admin_aq_common_desc aq_common_descriptor;
struct ena_admin_ctrl_buff_info control_buffer;
struct ena_admin_get_set_feature_common_desc feat_common;
union {
u32 raw[11];
/* mtu size */
struct ena_admin_set_feature_mtu_desc mtu;
/* host attributes */
struct ena_admin_set_feature_host_attr_desc host_attr;
/* AENQ configuration */
struct ena_admin_feature_aenq_desc aenq;
/* rss flow hash function */
struct ena_admin_feature_rss_flow_hash_function flow_hash_func;
/* rss flow hash input */
struct ena_admin_feature_rss_flow_hash_input flow_hash_input;
/* rss indirection table */
struct ena_admin_feature_rss_ind_table ind_table;
} u;
};
struct ena_admin_set_feat_resp {
struct ena_admin_acq_common_desc acq_common_desc;
union {
u32 raw[14];
} u;
};
struct ena_admin_aenq_common_desc {
u16 group;
u16 syndrom;
/* 0 : phase */
u8 flags;
u8 reserved1[3];
u32 timestamp_low;
u32 timestamp_high;
};
/* asynchronous event notification groups */
enum ena_admin_aenq_group {
ENA_ADMIN_LINK_CHANGE = 0,
ENA_ADMIN_FATAL_ERROR = 1,
ENA_ADMIN_WARNING = 2,
ENA_ADMIN_NOTIFICATION = 3,
ENA_ADMIN_KEEP_ALIVE = 4,
ENA_ADMIN_AENQ_GROUPS_NUM = 5,
};
enum ena_admin_aenq_notification_syndrom {
ENA_ADMIN_SUSPEND = 0,
ENA_ADMIN_RESUME = 1,
};
struct ena_admin_aenq_entry {
struct ena_admin_aenq_common_desc aenq_common_desc;
/* command specific inline data */
u32 inline_data_w4[12];
};
struct ena_admin_aenq_link_change_desc {
struct ena_admin_aenq_common_desc aenq_common_desc;
/* 0 : link_status */
u32 flags;
};
struct ena_admin_ena_mmio_req_read_less_resp {
u16 req_id;
u16 reg_off;
/* value is valid when poll is cleared */
u32 reg_val;
};
/* aq_common_desc */
#define ENA_ADMIN_AQ_COMMON_DESC_COMMAND_ID_MASK GENMASK(11, 0)
#define ENA_ADMIN_AQ_COMMON_DESC_PHASE_MASK BIT(0)
#define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_SHIFT 1
#define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK BIT(1)
#define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_SHIFT 2
#define ENA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK BIT(2)
/* sq */
#define ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT 5
#define ENA_ADMIN_SQ_SQ_DIRECTION_MASK GENMASK(7, 5)
/* acq_common_desc */
#define ENA_ADMIN_ACQ_COMMON_DESC_COMMAND_ID_MASK GENMASK(11, 0)
#define ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK BIT(0)
/* aq_create_sq_cmd */
#define ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_SHIFT 5
#define ENA_ADMIN_AQ_CREATE_SQ_CMD_SQ_DIRECTION_MASK GENMASK(7, 5)
#define ENA_ADMIN_AQ_CREATE_SQ_CMD_PLACEMENT_POLICY_MASK GENMASK(3, 0)
#define ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_SHIFT 4
#define ENA_ADMIN_AQ_CREATE_SQ_CMD_COMPLETION_POLICY_MASK GENMASK(6, 4)
#define ENA_ADMIN_AQ_CREATE_SQ_CMD_IS_PHYSICALLY_CONTIGUOUS_MASK BIT(0)
/* aq_create_cq_cmd */
#define ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT 5
#define ENA_ADMIN_AQ_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_MASK BIT(5)
#define ENA_ADMIN_AQ_CREATE_CQ_CMD_CQ_ENTRY_SIZE_WORDS_MASK GENMASK(4, 0)
/* get_set_feature_common_desc */
#define ENA_ADMIN_GET_SET_FEATURE_COMMON_DESC_SELECT_MASK GENMASK(1, 0)
/* get_feature_link_desc */
#define ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK BIT(0)
#define ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_SHIFT 1
#define ENA_ADMIN_GET_FEATURE_LINK_DESC_DUPLEX_MASK BIT(1)
/* feature_offload_desc */
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK BIT(0)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_SHIFT 1
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK BIT(1)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_SHIFT 2
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK BIT(2)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_SHIFT 3
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK BIT(3)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_SHIFT 4
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK BIT(4)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_SHIFT 5
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK BIT(5)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_SHIFT 6
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV6_MASK BIT(6)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_SHIFT 7
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_ECN_MASK BIT(7)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK BIT(0)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_SHIFT 1
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK BIT(1)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_SHIFT 2
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK BIT(2)
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_SHIFT 3
#define ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK BIT(3)
/* feature_rss_flow_hash_function */
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_FUNCS_MASK GENMASK(7, 0)
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_FUNCTION_SELECTED_FUNC_MASK GENMASK(7, 0)
/* feature_rss_flow_hash_input */
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_SHIFT 1
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L3_SORT_MASK BIT(1)
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_SHIFT 2
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_L4_SORT_MASK BIT(2)
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_SHIFT 1
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L3_SORT_MASK BIT(1)
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_SHIFT 2
#define ENA_ADMIN_FEATURE_RSS_FLOW_HASH_INPUT_ENABLE_L4_SORT_MASK BIT(2)
/* host_info */
#define ENA_ADMIN_HOST_INFO_MAJOR_MASK GENMASK(7, 0)
#define ENA_ADMIN_HOST_INFO_MINOR_SHIFT 8
#define ENA_ADMIN_HOST_INFO_MINOR_MASK GENMASK(15, 8)
#define ENA_ADMIN_HOST_INFO_SUB_MINOR_SHIFT 16
#define ENA_ADMIN_HOST_INFO_SUB_MINOR_MASK GENMASK(23, 16)
/* aenq_common_desc */
#define ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK BIT(0)
/* aenq_link_change_desc */
#define ENA_ADMIN_AENQ_LINK_CHANGE_DESC_LINK_STATUS_MASK BIT(0)
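/* Usage note (illustrative, not part of the original header): packed
 * fields are read with (word & FIELD_MASK) >> FIELD_SHIFT, e.g. the SQ
 * direction out of struct ena_admin_sq:
 *
 *	dir = (sq->sq_identity & ENA_ADMIN_SQ_SQ_DIRECTION_MASK) >>
 *	      ENA_ADMIN_SQ_SQ_DIRECTION_SHIFT;
 */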
#endif /*_ENA_ADMIN_H_ */

[File diff suppressed because it is too large]

[File diff suppressed because it is too large]


@@ -0,0 +1,48 @@
/*
* Copyright 2015 - 2016 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _ENA_COMMON_H_
#define _ENA_COMMON_H_
#define ENA_COMMON_SPEC_VERSION_MAJOR 0 /* */
#define ENA_COMMON_SPEC_VERSION_MINOR 10 /* */
/* ENA operates with 48-bit memory addresses. ena_mem_addr_t */
struct ena_common_mem_addr {
u32 mem_addr_low;
u16 mem_addr_high;
/* MBZ */
u16 reserved16;
};
#endif /*_ENA_COMMON_H_ */


@@ -0,0 +1,501 @@
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "ena_eth_com.h"
static inline struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc(
struct ena_com_io_cq *io_cq)
{
struct ena_eth_io_rx_cdesc_base *cdesc;
u16 expected_phase, head_masked;
u16 desc_phase;
head_masked = io_cq->head & (io_cq->q_depth - 1);
expected_phase = io_cq->phase;
cdesc = (struct ena_eth_io_rx_cdesc_base *)(io_cq->cdesc_addr.virt_addr
+ (head_masked * io_cq->cdesc_entry_size_in_bytes));
desc_phase = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT;
if (desc_phase != expected_phase)
return NULL;
return cdesc;
}
static inline void ena_com_cq_inc_head(struct ena_com_io_cq *io_cq)
{
io_cq->head++;
/* Switch phase bit in case of wrap around */
if (unlikely((io_cq->head & (io_cq->q_depth - 1)) == 0))
io_cq->phase ^= 1;
}
static inline void *get_sq_desc(struct ena_com_io_sq *io_sq)
{
u16 tail_masked;
u32 offset;
tail_masked = io_sq->tail & (io_sq->q_depth - 1);
offset = tail_masked * io_sq->desc_entry_size;
return (void *)((uintptr_t)io_sq->desc_addr.virt_addr + offset);
}
static inline void ena_com_copy_curr_sq_desc_to_dev(struct ena_com_io_sq *io_sq)
{
u16 tail_masked = io_sq->tail & (io_sq->q_depth - 1);
u32 offset = tail_masked * io_sq->desc_entry_size;
/* In case this queue isn't an LLQ */
if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
return;
memcpy_toio(io_sq->desc_addr.pbuf_dev_addr + offset,
io_sq->desc_addr.virt_addr + offset,
io_sq->desc_entry_size);
}
static inline void ena_com_sq_update_tail(struct ena_com_io_sq *io_sq)
{
io_sq->tail++;
/* Switch phase bit in case of wrap around */
if (unlikely((io_sq->tail & (io_sq->q_depth - 1)) == 0))
io_sq->phase ^= 1;
}
static inline int ena_com_write_header(struct ena_com_io_sq *io_sq,
u8 *head_src, u16 header_len)
{
u16 tail_masked = io_sq->tail & (io_sq->q_depth - 1);
u8 __iomem *dev_head_addr =
io_sq->header_addr + (tail_masked * io_sq->tx_max_header_size);
if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
return 0;
if (unlikely(!io_sq->header_addr)) {
pr_err("Push buffer header ptr is NULL\n");
return -EINVAL;
}
memcpy_toio(dev_head_addr, head_src, header_len);
return 0;
}
static inline struct ena_eth_io_rx_cdesc_base *
ena_com_rx_cdesc_idx_to_ptr(struct ena_com_io_cq *io_cq, u16 idx)
{
idx &= (io_cq->q_depth - 1);
return (struct ena_eth_io_rx_cdesc_base *)
((uintptr_t)io_cq->cdesc_addr.virt_addr +
idx * io_cq->cdesc_entry_size_in_bytes);
}
static inline u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
u16 *first_cdesc_idx)
{
struct ena_eth_io_rx_cdesc_base *cdesc;
u16 count = 0, head_masked;
u32 last = 0;
do {
cdesc = ena_com_get_next_rx_cdesc(io_cq);
if (!cdesc)
break;
ena_com_cq_inc_head(io_cq);
count++;
last = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT;
} while (!last);
if (last) {
*first_cdesc_idx = io_cq->cur_rx_pkt_cdesc_start_idx;
count += io_cq->cur_rx_pkt_cdesc_count;
head_masked = io_cq->head & (io_cq->q_depth - 1);
io_cq->cur_rx_pkt_cdesc_count = 0;
io_cq->cur_rx_pkt_cdesc_start_idx = head_masked;
pr_debug("ena q_id: %d packets were completed. first desc idx %u descs# %d\n",
io_cq->qid, *first_cdesc_idx, count);
} else {
io_cq->cur_rx_pkt_cdesc_count += count;
count = 0;
}
return count;
}
static inline bool ena_com_meta_desc_changed(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx)
{
int rc;
if (ena_tx_ctx->meta_valid) {
rc = memcmp(&io_sq->cached_tx_meta,
&ena_tx_ctx->ena_meta,
sizeof(struct ena_com_tx_meta));
if (unlikely(rc != 0))
return true;
}
return false;
}
static inline void ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx)
{
struct ena_eth_io_tx_meta_desc *meta_desc = NULL;
struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta;
meta_desc = get_sq_desc(io_sq);
memset(meta_desc, 0x0, sizeof(struct ena_eth_io_tx_meta_desc));
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_DESC_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK;
/* bits 0-9 of the mss */
meta_desc->word2 |= (ena_meta->mss <<
ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT) &
ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK;
/* bits 10-13 of the mss */
meta_desc->len_ctrl |= ((ena_meta->mss >> 10) <<
ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT) &
ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK;
/* Extended meta desc */
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK;
meta_desc->len_ctrl |= (io_sq->phase <<
ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) &
ENA_ETH_IO_TX_META_DESC_PHASE_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_FIRST_MASK;
meta_desc->word2 |= ena_meta->l3_hdr_len &
ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK;
meta_desc->word2 |= (ena_meta->l3_hdr_offset <<
ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT) &
ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK;
meta_desc->word2 |= (ena_meta->l4_hdr_len <<
ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT) &
ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK;
/* Cache the meta desc */
memcpy(&io_sq->cached_tx_meta, ena_meta,
sizeof(struct ena_com_tx_meta));
ena_com_copy_curr_sq_desc_to_dev(io_sq);
ena_com_sq_update_tail(io_sq);
}
static inline void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx,
struct ena_eth_io_rx_cdesc_base *cdesc)
{
ena_rx_ctx->l3_proto = cdesc->status &
ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK;
ena_rx_ctx->l4_proto =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT;
ena_rx_ctx->l3_csum_err =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT;
ena_rx_ctx->l4_csum_err =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT;
ena_rx_ctx->hash = cdesc->hash;
ena_rx_ctx->frag =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT;
pr_debug("ena_rx_ctx->l3_proto %d ena_rx_ctx->l4_proto %d\nena_rx_ctx->l3_csum_err %d ena_rx_ctx->l4_csum_err %d\nhash frag %d frag: %d cdesc_status: %x\n",
ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto,
ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err,
ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status);
}
/*****************************************************************************/
/***************************** API **********************************/
/*****************************************************************************/
int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx,
int *nb_hw_desc)
{
struct ena_eth_io_tx_desc *desc = NULL;
struct ena_com_buf *ena_bufs = ena_tx_ctx->ena_bufs;
void *push_header = ena_tx_ctx->push_header;
u16 header_len = ena_tx_ctx->header_len;
u16 num_bufs = ena_tx_ctx->num_bufs;
int total_desc, i, rc;
bool have_meta;
u64 addr_hi;
WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_TX, "wrong Q type");
/* num_bufs +1 for potential meta desc */
if (ena_com_sq_empty_space(io_sq) < (num_bufs + 1)) {
pr_err("Not enough space in the tx queue\n");
return -ENOMEM;
}
if (unlikely(header_len > io_sq->tx_max_header_size)) {
pr_err("header size is too large %d max header: %d\n",
header_len, io_sq->tx_max_header_size);
return -EINVAL;
}
/* start with pushing the header (if needed) */
rc = ena_com_write_header(io_sq, push_header, header_len);
if (unlikely(rc))
return rc;
have_meta = ena_tx_ctx->meta_valid && ena_com_meta_desc_changed(io_sq,
ena_tx_ctx);
if (have_meta)
ena_com_create_and_store_tx_meta_desc(io_sq, ena_tx_ctx);
/* If the caller doesn't want to send packets */
if (unlikely(!num_bufs && !header_len)) {
*nb_hw_desc = have_meta ? 0 : 1;
return 0;
}
desc = get_sq_desc(io_sq);
memset(desc, 0x0, sizeof(struct ena_eth_io_tx_desc));
/* Set first desc when we don't have meta descriptor */
if (!have_meta)
desc->len_ctrl |= ENA_ETH_IO_TX_DESC_FIRST_MASK;
desc->buff_addr_hi_hdr_sz |= (header_len <<
ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT) &
ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK;
desc->len_ctrl |= (io_sq->phase << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) &
ENA_ETH_IO_TX_DESC_PHASE_MASK;
desc->len_ctrl |= ENA_ETH_IO_TX_DESC_COMP_REQ_MASK;
/* Bits 0-9 */
desc->meta_ctrl |= (ena_tx_ctx->req_id <<
ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT) &
ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK;
desc->meta_ctrl |= (ena_tx_ctx->df <<
ENA_ETH_IO_TX_DESC_DF_SHIFT) &
ENA_ETH_IO_TX_DESC_DF_MASK;
/* Bits 10-15 */
desc->len_ctrl |= ((ena_tx_ctx->req_id >> 10) <<
ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT) &
ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK;
if (ena_tx_ctx->meta_valid) {
desc->meta_ctrl |= (ena_tx_ctx->tso_enable <<
ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT) &
ENA_ETH_IO_TX_DESC_TSO_EN_MASK;
desc->meta_ctrl |= ena_tx_ctx->l3_proto &
ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l4_proto <<
ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT) &
ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l3_csum_enable <<
ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT) &
ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l4_csum_enable <<
ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT) &
ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l4_csum_partial <<
ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT) &
ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK;
}
for (i = 0; i < num_bufs; i++) {
/* The first desc shares the same desc as the header */
if (likely(i != 0)) {
ena_com_copy_curr_sq_desc_to_dev(io_sq);
ena_com_sq_update_tail(io_sq);
desc = get_sq_desc(io_sq);
memset(desc, 0x0, sizeof(struct ena_eth_io_tx_desc));
desc->len_ctrl |= (io_sq->phase <<
ENA_ETH_IO_TX_DESC_PHASE_SHIFT) &
ENA_ETH_IO_TX_DESC_PHASE_MASK;
}
desc->len_ctrl |= ena_bufs->len &
ENA_ETH_IO_TX_DESC_LENGTH_MASK;
addr_hi = ((ena_bufs->paddr &
GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32);
desc->buff_addr_lo = (u32)ena_bufs->paddr;
desc->buff_addr_hi_hdr_sz |= addr_hi &
ENA_ETH_IO_TX_DESC_ADDR_HI_MASK;
ena_bufs++;
}
/* set the last desc indicator */
desc->len_ctrl |= ENA_ETH_IO_TX_DESC_LAST_MASK;
ena_com_copy_curr_sq_desc_to_dev(io_sq);
ena_com_sq_update_tail(io_sq);
total_desc = max_t(u16, num_bufs, 1);
total_desc += have_meta ? 1 : 0;
*nb_hw_desc = total_desc;
return 0;
}
int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
struct ena_com_io_sq *io_sq,
struct ena_com_rx_ctx *ena_rx_ctx)
{
struct ena_com_rx_buf_info *ena_buf = &ena_rx_ctx->ena_bufs[0];
struct ena_eth_io_rx_cdesc_base *cdesc = NULL;
u16 cdesc_idx = 0;
u16 nb_hw_desc;
u16 i;
WARN(io_cq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX, "wrong Q type");
nb_hw_desc = ena_com_cdesc_rx_pkt_get(io_cq, &cdesc_idx);
if (nb_hw_desc == 0) {
ena_rx_ctx->descs = nb_hw_desc;
return 0;
}
pr_debug("fetch rx packet: queue %d completed desc: %d\n", io_cq->qid,
nb_hw_desc);
if (unlikely(nb_hw_desc > ena_rx_ctx->max_bufs)) {
pr_err("Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc,
ena_rx_ctx->max_bufs);
return -ENOSPC;
}
for (i = 0; i < nb_hw_desc; i++) {
cdesc = ena_com_rx_cdesc_idx_to_ptr(io_cq, cdesc_idx + i);
ena_buf->len = cdesc->length;
ena_buf->req_id = cdesc->req_id;
ena_buf++;
}
/* Update SQ head ptr */
io_sq->next_to_comp += nb_hw_desc;
pr_debug("[%s][QID#%d] Updating SQ head to: %d\n", __func__, io_sq->qid,
io_sq->next_to_comp);
/* Get rx flags from the last pkt */
ena_com_rx_set_flags(ena_rx_ctx, cdesc);
ena_rx_ctx->descs = nb_hw_desc;
return 0;
}
int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
struct ena_com_buf *ena_buf,
u16 req_id)
{
struct ena_eth_io_rx_desc *desc;
WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX, "wrong Q type");
if (unlikely(ena_com_sq_empty_space(io_sq) == 0))
return -ENOSPC;
desc = get_sq_desc(io_sq);
memset(desc, 0x0, sizeof(struct ena_eth_io_rx_desc));
desc->length = ena_buf->len;
desc->ctrl |= ENA_ETH_IO_RX_DESC_FIRST_MASK;
desc->ctrl |= ENA_ETH_IO_RX_DESC_LAST_MASK;
desc->ctrl |= io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK;
desc->ctrl |= ENA_ETH_IO_RX_DESC_COMP_REQ_MASK;
desc->req_id = req_id;
desc->buff_addr_lo = (u32)ena_buf->paddr;
desc->buff_addr_hi =
((ena_buf->paddr & GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32);
ena_com_sq_update_tail(io_sq);
return 0;
}
int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, u16 *req_id)
{
u8 expected_phase, cdesc_phase;
struct ena_eth_io_tx_cdesc *cdesc;
u16 masked_head;
masked_head = io_cq->head & (io_cq->q_depth - 1);
expected_phase = io_cq->phase;
cdesc = (struct ena_eth_io_tx_cdesc *)
((uintptr_t)io_cq->cdesc_addr.virt_addr +
(masked_head * io_cq->cdesc_entry_size_in_bytes));
/* When the current completion descriptor phase isn't the same as the
 * expected phase, it means that the device hasn't updated
 * this completion yet.
 */
cdesc_phase = cdesc->flags & ENA_ETH_IO_TX_CDESC_PHASE_MASK;
if (cdesc_phase != expected_phase)
return -EAGAIN;
ena_com_cq_inc_head(io_cq);
*req_id = cdesc->req_id;
return 0;
}


@@ -0,0 +1,160 @@
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef ENA_ETH_COM_H_
#define ENA_ETH_COM_H_
#include "ena_com.h"
/* head update threshold in units of (queue size / ENA_COMP_HEAD_THRESH) */
#define ENA_COMP_HEAD_THRESH 4
struct ena_com_tx_ctx {
struct ena_com_tx_meta ena_meta;
struct ena_com_buf *ena_bufs;
/* For LLQ, header buffer - pushed to the device mem space */
void *push_header;
enum ena_eth_io_l3_proto_index l3_proto;
enum ena_eth_io_l4_proto_index l4_proto;
u16 num_bufs;
u16 req_id;
/* For regular queue, indicate the size of the header
* For LLQ, indicate the size of the pushed buffer
*/
u16 header_len;
u8 meta_valid;
u8 tso_enable;
u8 l3_csum_enable;
u8 l4_csum_enable;
u8 l4_csum_partial;
u8 df; /* Don't fragment */
};
struct ena_com_rx_ctx {
struct ena_com_rx_buf_info *ena_bufs;
enum ena_eth_io_l3_proto_index l3_proto;
enum ena_eth_io_l4_proto_index l4_proto;
bool l3_csum_err;
bool l4_csum_err;
/* fragmented packet */
bool frag;
u32 hash;
u16 descs;
int max_bufs;
};
int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx,
int *nb_hw_desc);
int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
struct ena_com_io_sq *io_sq,
struct ena_com_rx_ctx *ena_rx_ctx);
int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
struct ena_com_buf *ena_buf,
u16 req_id);
int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, u16 *req_id);
static inline void ena_com_unmask_intr(struct ena_com_io_cq *io_cq,
struct ena_eth_io_intr_reg *intr_reg)
{
writel(intr_reg->intr_control, io_cq->unmask_reg);
}
static inline int ena_com_sq_empty_space(struct ena_com_io_sq *io_sq)
{
u16 tail, next_to_comp, cnt;
next_to_comp = io_sq->next_to_comp;
tail = io_sq->tail;
cnt = tail - next_to_comp;
return io_sq->q_depth - 1 - cnt;
}
static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
{
u16 tail;
tail = io_sq->tail;
pr_debug("write submission queue doorbell for queue: %d tail: %d\n",
io_sq->qid, tail);
writel(tail, io_sq->db_addr);
return 0;
}
static inline int ena_com_update_dev_comp_head(struct ena_com_io_cq *io_cq)
{
u16 unreported_comp, head;
bool need_update;
head = io_cq->head;
unreported_comp = head - io_cq->last_head_update;
need_update = unreported_comp > (io_cq->q_depth / ENA_COMP_HEAD_THRESH);
if (io_cq->cq_head_db_reg && need_update) {
pr_debug("Write completion queue doorbell for queue %d: head: %d\n",
io_cq->qid, head);
writel(head, io_cq->cq_head_db_reg);
io_cq->last_head_update = head;
}
return 0;
}
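/* Example: with q_depth = 1024 and ENA_COMP_HEAD_THRESH = 4, the head
 * pointer is reported back to the device only once more than 256
 * completions have accumulated since the last update, batching the
 * relatively expensive MMIO doorbell writes.
 */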
static inline void ena_com_update_numa_node(struct ena_com_io_cq *io_cq,
u8 numa_node)
{
struct ena_eth_io_numa_node_cfg_reg numa_cfg;
if (!io_cq->numa_node_cfg_reg)
return;
numa_cfg.numa_cfg = (numa_node & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK)
| ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK;
writel(numa_cfg.numa_cfg, io_cq->numa_node_cfg_reg);
}
static inline void ena_com_comp_ack(struct ena_com_io_sq *io_sq, u16 elem)
{
io_sq->next_to_comp += elem;
}
#endif /* ENA_ETH_COM_H_ */

View File

@@ -0,0 +1,416 @@
/*
* Copyright 2015 - 2016 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _ENA_ETH_IO_H_
#define _ENA_ETH_IO_H_
enum ena_eth_io_l3_proto_index {
ENA_ETH_IO_L3_PROTO_UNKNOWN = 0,
ENA_ETH_IO_L3_PROTO_IPV4 = 8,
ENA_ETH_IO_L3_PROTO_IPV6 = 11,
ENA_ETH_IO_L3_PROTO_FCOE = 21,
ENA_ETH_IO_L3_PROTO_ROCE = 22,
};
enum ena_eth_io_l4_proto_index {
ENA_ETH_IO_L4_PROTO_UNKNOWN = 0,
ENA_ETH_IO_L4_PROTO_TCP = 12,
ENA_ETH_IO_L4_PROTO_UDP = 13,
ENA_ETH_IO_L4_PROTO_ROUTEABLE_ROCE = 23,
};
struct ena_eth_io_tx_desc {
/* 15:0 : length - Buffer length in bytes; must
 * include any packet trailers that the ENA is
 * supposed to update, such as the End-to-End CRC,
 * Authentication GMAC, etc. This length must not
 * include the 'Push_Buffer' length, nor the 4 bytes
 * appended at the end for the 802.3 Ethernet FCS
* 21:16 : req_id_hi - Request ID[15:10]
* 22 : reserved22 - MBZ
* 23 : meta_desc - MBZ
* 24 : phase
* 25 : reserved1 - MBZ
* 26 : first - Indicates first descriptor in
* transaction
* 27 : last - Indicates last descriptor in
* transaction
* 28 : comp_req - Indicates whether completion
* should be posted, after packet is transmitted.
* Valid only for first descriptor
* 30:29 : reserved29 - MBZ
* 31 : reserved31 - MBZ
*/
u32 len_ctrl;
/* 3:0 : l3_proto_idx - L3 protocol. This field is
 * required when l3_csum_en, l3_csum or tso_en are set.
 * 4 : DF - IPv4 DF; must be 0 if the packet is IPv4
 * and the DF flag of the IPv4 header is 0. Otherwise
 * must be set to 1
 * 6:5 : reserved5
 * 7 : tso_en - Enable TSO; for TCP only.
 * 12:8 : l4_proto_idx - L4 protocol. This field needs
 * to be set when l4_csum_en or tso_en is set.
* 13 : l3_csum_en - enable IPv4 header checksum.
* 14 : l4_csum_en - enable TCP/UDP checksum.
* 15 : ethernet_fcs_dis - when set, the controller
* will not append the 802.3 Ethernet Frame Check
* Sequence to the packet
* 16 : reserved16
* 17 : l4_csum_partial - L4 partial checksum. When
 * set to 0, the ENA calculates the L4 checksum, with
 * the Destination Address required for the TCP/UDP
 * pseudo-header taken from the actual packet L3
 * header. When set to 1, the ENA doesn't calculate
 * the pseudo-header sum; the checksum field of the
 * L4 header is used instead. When TSO is enabled,
 * the checksum of the pseudo-header must not include
 * the TCP length field. L4 partial checksum should
 * be used for IPv6 packets that contain Routing
 * Headers.
* 20:18 : reserved18 - MBZ
* 21 : reserved21 - MBZ
* 31:22 : req_id_lo - Request ID[9:0]
*/
u32 meta_ctrl;
u32 buff_addr_lo;
/* address high and header size
* 15:0 : addr_hi - Buffer Pointer[47:32]
* 23:16 : reserved16_w2
* 31:24 : header_length - Header length. For Low
 * Latency Queues, this field indicates the number
 * of bytes written to the headers' memory. For
 * normal queues, if the packet is TCP or UDP, and
 * longer than max_header_size, then this field should
 * be set to the sum of the L4 header offset and L4
 * header size (without options); otherwise, this
 * field should be set to 0. For both modes, this
 * field must not exceed max_header_size.
* max_header_size value is reported by the Max
* Queues Feature descriptor
*/
u32 buff_addr_hi_hdr_sz;
};
struct ena_eth_io_tx_meta_desc {
/* 9:0 : req_id_lo - Request ID[9:0]
* 11:10 : reserved10 - MBZ
* 12 : reserved12 - MBZ
* 13 : reserved13 - MBZ
* 14 : ext_valid - if set, the offset fields in Word2
 * are valid, as well as MSS High in Word 0 and bits
 * [31:24] in Word 3
* 15 : reserved15
* 19:16 : mss_hi
* 20 : eth_meta_type - 0: Tx Metadata Descriptor, 1:
* Extended Metadata Descriptor
* 21 : meta_store - Store extended metadata in queue
* cache
* 22 : reserved22 - MBZ
* 23 : meta_desc - MBO
* 24 : phase
* 25 : reserved25 - MBZ
* 26 : first - Indicates first descriptor in
* transaction
* 27 : last - Indicates last descriptor in
* transaction
* 28 : comp_req - Indicates whether completion
* should be posted, after packet is transmitted.
* Valid only for first descriptor
* 30:29 : reserved29 - MBZ
* 31 : reserved31 - MBZ
*/
u32 len_ctrl;
/* 5:0 : req_id_hi
* 31:6 : reserved6 - MBZ
*/
u32 word1;
/* 7:0 : l3_hdr_len
* 15:8 : l3_hdr_off
* 21:16 : l4_hdr_len_in_words - counts the L4 header
* length in words. there is an explicit assumption
* that L4 header appears right after L3 header and
* L4 offset is based on l3_hdr_off+l3_hdr_len
* 31:22 : mss_lo
*/
u32 word2;
u32 reserved;
};
struct ena_eth_io_tx_cdesc {
/* Request ID[15:0] */
u16 req_id;
u8 status;
/* flags
* 0 : phase
* 7:1 : reserved1
*/
u8 flags;
u16 sub_qid;
u16 sq_head_idx;
};
struct ena_eth_io_rx_desc {
/* In bytes. 0 means 64KB */
u16 length;
/* MBZ */
u8 reserved2;
/* 0 : phase
* 1 : reserved1 - MBZ
* 2 : first - Indicates first descriptor in
* transaction
* 3 : last - Indicates last descriptor in transaction
* 4 : comp_req
* 5 : reserved5 - MBO
* 7:6 : reserved6 - MBZ
*/
u8 ctrl;
u16 req_id;
/* MBZ */
u16 reserved6;
u32 buff_addr_lo;
u16 buff_addr_hi;
/* MBZ */
u16 reserved16_w3;
};
/* 4-word format. Note: all Ethernet parsing information is valid only
 * when last=1
 */
struct ena_eth_io_rx_cdesc_base {
/* 4:0 : l3_proto_idx
* 6:5 : src_vlan_cnt
* 7 : reserved7 - MBZ
* 12:8 : l4_proto_idx
* 13 : l3_csum_err - when set, either an L3 checksum
 * error was detected, or the controller didn't
 * validate the checksum. This bit is valid only when
 * l3_proto_idx indicates an IPv4 packet
 * 14 : l4_csum_err - when set, either an L4 checksum
 * error was detected, or the controller didn't
 * validate the checksum. This bit is valid only when
 * l4_proto_idx indicates a TCP/UDP packet, and
 * ipv4_frag is not set
* 15 : ipv4_frag - Indicates IPv4 fragmented packet
* 23:16 : reserved16
* 24 : phase
* 25 : l3_csum2 - second checksum engine result
* 26 : first - Indicates first descriptor in
* transaction
* 27 : last - Indicates last descriptor in
* transaction
* 29:28 : reserved28
* 30 : buffer - 0: Metadata descriptor. 1: Buffer
* Descriptor was used
* 31 : reserved31
*/
u32 status;
u16 length;
u16 req_id;
/* 32-bit hash result */
u32 hash;
u16 sub_qid;
u16 reserved;
};
/* 8-word format */
struct ena_eth_io_rx_cdesc_ext {
struct ena_eth_io_rx_cdesc_base base;
u32 buff_addr_lo;
u16 buff_addr_hi;
u16 reserved16;
u32 reserved_w6;
u32 reserved_w7;
};
struct ena_eth_io_intr_reg {
/* 14:0 : rx_intr_delay
* 29:15 : tx_intr_delay
* 30 : intr_unmask
* 31 : reserved
*/
u32 intr_control;
};
struct ena_eth_io_numa_node_cfg_reg {
/* 7:0 : numa
* 30:8 : reserved
* 31 : enabled
*/
u32 numa_cfg;
};
/* tx_desc */
#define ENA_ETH_IO_TX_DESC_LENGTH_MASK GENMASK(15, 0)
#define ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT 16
#define ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK GENMASK(21, 16)
#define ENA_ETH_IO_TX_DESC_META_DESC_SHIFT 23
#define ENA_ETH_IO_TX_DESC_META_DESC_MASK BIT(23)
#define ENA_ETH_IO_TX_DESC_PHASE_SHIFT 24
#define ENA_ETH_IO_TX_DESC_PHASE_MASK BIT(24)
#define ENA_ETH_IO_TX_DESC_FIRST_SHIFT 26
#define ENA_ETH_IO_TX_DESC_FIRST_MASK BIT(26)
#define ENA_ETH_IO_TX_DESC_LAST_SHIFT 27
#define ENA_ETH_IO_TX_DESC_LAST_MASK BIT(27)
#define ENA_ETH_IO_TX_DESC_COMP_REQ_SHIFT 28
#define ENA_ETH_IO_TX_DESC_COMP_REQ_MASK BIT(28)
#define ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK GENMASK(3, 0)
#define ENA_ETH_IO_TX_DESC_DF_SHIFT 4
#define ENA_ETH_IO_TX_DESC_DF_MASK BIT(4)
#define ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT 7
#define ENA_ETH_IO_TX_DESC_TSO_EN_MASK BIT(7)
#define ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT 8
#define ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK GENMASK(12, 8)
#define ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT 13
#define ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK BIT(13)
#define ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT 14
#define ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK BIT(14)
#define ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_SHIFT 15
#define ENA_ETH_IO_TX_DESC_ETHERNET_FCS_DIS_MASK BIT(15)
#define ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT 17
#define ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK BIT(17)
#define ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT 22
#define ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK GENMASK(31, 22)
#define ENA_ETH_IO_TX_DESC_ADDR_HI_MASK GENMASK(15, 0)
#define ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT 24
#define ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK GENMASK(31, 24)
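/* Illustrative packing of len_ctrl with the shift/mask pairs above (a
 * sketch, not driver code): a 1500-byte single-descriptor packet with
 * req_id 0x123 and phase 1 would be encoded as
 *
 * len_ctrl = (1500 & ENA_ETH_IO_TX_DESC_LENGTH_MASK) |
 *            (((0x123 >> 10) << ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT) &
 *             ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK) |
 *            (1u << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) |
 *            ENA_ETH_IO_TX_DESC_FIRST_MASK |
 *            ENA_ETH_IO_TX_DESC_LAST_MASK |
 *            ENA_ETH_IO_TX_DESC_COMP_REQ_MASK;
 *
 * with Request ID[9:0] (0x123 & 0x3ff) going into meta_ctrl via
 * ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT/_MASK.
 */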
/* tx_meta_desc */
#define ENA_ETH_IO_TX_META_DESC_REQ_ID_LO_MASK GENMASK(9, 0)
#define ENA_ETH_IO_TX_META_DESC_EXT_VALID_SHIFT 14
#define ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK BIT(14)
#define ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT 16
#define ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK GENMASK(19, 16)
#define ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_SHIFT 20
#define ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK BIT(20)
#define ENA_ETH_IO_TX_META_DESC_META_STORE_SHIFT 21
#define ENA_ETH_IO_TX_META_DESC_META_STORE_MASK BIT(21)
#define ENA_ETH_IO_TX_META_DESC_META_DESC_SHIFT 23
#define ENA_ETH_IO_TX_META_DESC_META_DESC_MASK BIT(23)
#define ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT 24
#define ENA_ETH_IO_TX_META_DESC_PHASE_MASK BIT(24)
#define ENA_ETH_IO_TX_META_DESC_FIRST_SHIFT 26
#define ENA_ETH_IO_TX_META_DESC_FIRST_MASK BIT(26)
#define ENA_ETH_IO_TX_META_DESC_LAST_SHIFT 27
#define ENA_ETH_IO_TX_META_DESC_LAST_MASK BIT(27)
#define ENA_ETH_IO_TX_META_DESC_COMP_REQ_SHIFT 28
#define ENA_ETH_IO_TX_META_DESC_COMP_REQ_MASK BIT(28)
#define ENA_ETH_IO_TX_META_DESC_REQ_ID_HI_MASK GENMASK(5, 0)
#define ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK GENMASK(7, 0)
#define ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT 8
#define ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK GENMASK(15, 8)
#define ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT 16
#define ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK GENMASK(21, 16)
#define ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT 22
#define ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK GENMASK(31, 22)
/* tx_cdesc */
#define ENA_ETH_IO_TX_CDESC_PHASE_MASK BIT(0)
/* rx_desc */
#define ENA_ETH_IO_RX_DESC_PHASE_MASK BIT(0)
#define ENA_ETH_IO_RX_DESC_FIRST_SHIFT 2
#define ENA_ETH_IO_RX_DESC_FIRST_MASK BIT(2)
#define ENA_ETH_IO_RX_DESC_LAST_SHIFT 3
#define ENA_ETH_IO_RX_DESC_LAST_MASK BIT(3)
#define ENA_ETH_IO_RX_DESC_COMP_REQ_SHIFT 4
#define ENA_ETH_IO_RX_DESC_COMP_REQ_MASK BIT(4)
/* rx_cdesc_base */
#define ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK GENMASK(4, 0)
#define ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_SHIFT 5
#define ENA_ETH_IO_RX_CDESC_BASE_SRC_VLAN_CNT_MASK GENMASK(6, 5)
#define ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT 8
#define ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK GENMASK(12, 8)
#define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT 13
#define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK BIT(13)
#define ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT 14
#define ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK BIT(14)
#define ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT 15
#define ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK BIT(15)
#define ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT 24
#define ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK BIT(24)
#define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_SHIFT 25
#define ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM2_MASK BIT(25)
#define ENA_ETH_IO_RX_CDESC_BASE_FIRST_SHIFT 26
#define ENA_ETH_IO_RX_CDESC_BASE_FIRST_MASK BIT(26)
#define ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT 27
#define ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK BIT(27)
#define ENA_ETH_IO_RX_CDESC_BASE_BUFFER_SHIFT 30
#define ENA_ETH_IO_RX_CDESC_BASE_BUFFER_MASK BIT(30)
/* intr_reg */
#define ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK GENMASK(14, 0)
#define ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT 15
#define ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK GENMASK(29, 15)
#define ENA_ETH_IO_INTR_REG_INTR_UNMASK_SHIFT 30
#define ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK BIT(30)
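/* Illustrative encoding (sketch only): the delay fields are expressed in
 * units of the device's intr_delay_resolution, so unmasking with RX/TX
 * delays of 4 and 8 units would look like
 *
 * intr_reg.intr_control =
 *         (4 & ENA_ETH_IO_INTR_REG_RX_INTR_DELAY_MASK) |
 *         ((8 << ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_SHIFT) &
 *          ENA_ETH_IO_INTR_REG_TX_INTR_DELAY_MASK) |
 *         ENA_ETH_IO_INTR_REG_INTR_UNMASK_MASK;
 */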
/* numa_node_cfg_reg */
#define ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK GENMASK(7, 0)
#define ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_SHIFT 31
#define ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK BIT(31)
#endif /* _ENA_ETH_IO_H_ */

View File

@@ -0,0 +1,895 @@
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/pci.h>
#include "ena_netdev.h"
struct ena_stats {
char name[ETH_GSTRING_LEN];
int stat_offset;
};
#define ENA_STAT_ENA_COM_ENTRY(stat) { \
.name = #stat, \
.stat_offset = offsetof(struct ena_com_stats_admin, stat) \
}
#define ENA_STAT_ENTRY(stat, stat_type) { \
.name = #stat, \
.stat_offset = offsetof(struct ena_stats_##stat_type, stat) \
}
#define ENA_STAT_RX_ENTRY(stat) \
ENA_STAT_ENTRY(stat, rx)
#define ENA_STAT_TX_ENTRY(stat) \
ENA_STAT_ENTRY(stat, tx)
#define ENA_STAT_GLOBAL_ENTRY(stat) \
ENA_STAT_ENTRY(stat, dev)
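/* For example, ENA_STAT_TX_ENTRY(cnt) expands to
 * { .name = "cnt", .stat_offset = offsetof(struct ena_stats_tx, cnt) },
 * letting the stats/strings loops below walk the counters generically
 * by offset instead of naming each field.
 */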
static const struct ena_stats ena_stats_global_strings[] = {
ENA_STAT_GLOBAL_ENTRY(tx_timeout),
ENA_STAT_GLOBAL_ENTRY(io_suspend),
ENA_STAT_GLOBAL_ENTRY(io_resume),
ENA_STAT_GLOBAL_ENTRY(wd_expired),
ENA_STAT_GLOBAL_ENTRY(interface_up),
ENA_STAT_GLOBAL_ENTRY(interface_down),
ENA_STAT_GLOBAL_ENTRY(admin_q_pause),
};
static const struct ena_stats ena_stats_tx_strings[] = {
ENA_STAT_TX_ENTRY(cnt),
ENA_STAT_TX_ENTRY(bytes),
ENA_STAT_TX_ENTRY(queue_stop),
ENA_STAT_TX_ENTRY(queue_wakeup),
ENA_STAT_TX_ENTRY(dma_mapping_err),
ENA_STAT_TX_ENTRY(linearize),
ENA_STAT_TX_ENTRY(linearize_failed),
ENA_STAT_TX_ENTRY(napi_comp),
ENA_STAT_TX_ENTRY(tx_poll),
ENA_STAT_TX_ENTRY(doorbells),
ENA_STAT_TX_ENTRY(prepare_ctx_err),
ENA_STAT_TX_ENTRY(missing_tx_comp),
ENA_STAT_TX_ENTRY(bad_req_id),
};
static const struct ena_stats ena_stats_rx_strings[] = {
ENA_STAT_RX_ENTRY(cnt),
ENA_STAT_RX_ENTRY(bytes),
ENA_STAT_RX_ENTRY(refil_partial),
ENA_STAT_RX_ENTRY(bad_csum),
ENA_STAT_RX_ENTRY(page_alloc_fail),
ENA_STAT_RX_ENTRY(skb_alloc_fail),
ENA_STAT_RX_ENTRY(dma_mapping_err),
ENA_STAT_RX_ENTRY(bad_desc_num),
ENA_STAT_RX_ENTRY(rx_copybreak_pkt),
};
static const struct ena_stats ena_stats_ena_com_strings[] = {
ENA_STAT_ENA_COM_ENTRY(aborted_cmd),
ENA_STAT_ENA_COM_ENTRY(submitted_cmd),
ENA_STAT_ENA_COM_ENTRY(completed_cmd),
ENA_STAT_ENA_COM_ENTRY(out_of_space),
ENA_STAT_ENA_COM_ENTRY(no_completion),
};
#define ENA_STATS_ARRAY_GLOBAL ARRAY_SIZE(ena_stats_global_strings)
#define ENA_STATS_ARRAY_TX ARRAY_SIZE(ena_stats_tx_strings)
#define ENA_STATS_ARRAY_RX ARRAY_SIZE(ena_stats_rx_strings)
#define ENA_STATS_ARRAY_ENA_COM ARRAY_SIZE(ena_stats_ena_com_strings)
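/* Copy a single u64 counter under u64_stats_sync protection: the
 * fetch_begin/fetch_retry pair retries the read if a writer updated the
 * counter meanwhile, making the 64-bit read tear-free on 32-bit machines
 * while compiling away to almost nothing on 64-bit ones.
 */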
static void ena_safe_update_stat(u64 *src, u64 *dst,
struct u64_stats_sync *syncp)
{
unsigned int start;
do {
start = u64_stats_fetch_begin_irq(syncp);
*(dst) = *src;
} while (u64_stats_fetch_retry_irq(syncp, start));
}
static void ena_queue_stats(struct ena_adapter *adapter, u64 **data)
{
const struct ena_stats *ena_stats;
struct ena_ring *ring;
u64 *ptr;
int i, j;
for (i = 0; i < adapter->num_queues; i++) {
/* Tx stats */
ring = &adapter->tx_ring[i];
for (j = 0; j < ENA_STATS_ARRAY_TX; j++) {
ena_stats = &ena_stats_tx_strings[j];
ptr = (u64 *)((uintptr_t)&ring->tx_stats +
(uintptr_t)ena_stats->stat_offset);
ena_safe_update_stat(ptr, (*data)++, &ring->syncp);
}
/* Rx stats */
ring = &adapter->rx_ring[i];
for (j = 0; j < ENA_STATS_ARRAY_RX; j++) {
ena_stats = &ena_stats_rx_strings[j];
ptr = (u64 *)((uintptr_t)&ring->rx_stats +
(uintptr_t)ena_stats->stat_offset);
ena_safe_update_stat(ptr, (*data)++, &ring->syncp);
}
}
}
static void ena_dev_admin_queue_stats(struct ena_adapter *adapter, u64 **data)
{
const struct ena_stats *ena_stats;
u32 *ptr;
int i;
for (i = 0; i < ENA_STATS_ARRAY_ENA_COM; i++) {
ena_stats = &ena_stats_ena_com_strings[i];
ptr = (u32 *)((uintptr_t)&adapter->ena_dev->admin_queue.stats +
(uintptr_t)ena_stats->stat_offset);
*(*data)++ = *ptr;
}
}
static void ena_get_ethtool_stats(struct net_device *netdev,
struct ethtool_stats *stats,
u64 *data)
{
struct ena_adapter *adapter = netdev_priv(netdev);
const struct ena_stats *ena_stats;
u64 *ptr;
int i;
for (i = 0; i < ENA_STATS_ARRAY_GLOBAL; i++) {
ena_stats = &ena_stats_global_strings[i];
ptr = (u64 *)((uintptr_t)&adapter->dev_stats +
(uintptr_t)ena_stats->stat_offset);
ena_safe_update_stat(ptr, data++, &adapter->syncp);
}
ena_queue_stats(adapter, &data);
ena_dev_admin_queue_stats(adapter, &data);
}
int ena_get_sset_count(struct net_device *netdev, int sset)
{
struct ena_adapter *adapter = netdev_priv(netdev);
if (sset != ETH_SS_STATS)
return -EOPNOTSUPP;
return adapter->num_queues * (ENA_STATS_ARRAY_TX + ENA_STATS_ARRAY_RX)
+ ENA_STATS_ARRAY_GLOBAL + ENA_STATS_ARRAY_ENA_COM;
}
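/* Worked example: with the string tables above (13 TX + 9 RX entries)
 * and 8 IO queue pairs, this reports 8 * (13 + 9) + 7 + 5 = 188 stats.
 */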
static void ena_queue_strings(struct ena_adapter *adapter, u8 **data)
{
const struct ena_stats *ena_stats;
int i, j;
for (i = 0; i < adapter->num_queues; i++) {
/* Tx stats */
for (j = 0; j < ENA_STATS_ARRAY_TX; j++) {
ena_stats = &ena_stats_tx_strings[j];
snprintf(*data, ETH_GSTRING_LEN,
"queue_%u_tx_%s", i, ena_stats->name);
(*data) += ETH_GSTRING_LEN;
}
/* Rx stats */
for (j = 0; j < ENA_STATS_ARRAY_RX; j++) {
ena_stats = &ena_stats_rx_strings[j];
snprintf(*data, ETH_GSTRING_LEN,
"queue_%u_rx_%s", i, ena_stats->name);
(*data) += ETH_GSTRING_LEN;
}
}
}
static void ena_com_dev_strings(u8 **data)
{
const struct ena_stats *ena_stats;
int i;
for (i = 0; i < ENA_STATS_ARRAY_ENA_COM; i++) {
ena_stats = &ena_stats_ena_com_strings[i];
snprintf(*data, ETH_GSTRING_LEN,
"ena_admin_q_%s", ena_stats->name);
(*data) += ETH_GSTRING_LEN;
}
}
static void ena_get_strings(struct net_device *netdev, u32 sset, u8 *data)
{
struct ena_adapter *adapter = netdev_priv(netdev);
const struct ena_stats *ena_stats;
int i;
if (sset != ETH_SS_STATS)
return;
for (i = 0; i < ENA_STATS_ARRAY_GLOBAL; i++) {
ena_stats = &ena_stats_global_strings[i];
memcpy(data, ena_stats->name, ETH_GSTRING_LEN);
data += ETH_GSTRING_LEN;
}
ena_queue_strings(adapter, &data);
ena_com_dev_strings(&data);
}
static int ena_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *link_ksettings)
{
struct ena_adapter *adapter = netdev_priv(netdev);
struct ena_com_dev *ena_dev = adapter->ena_dev;
struct ena_admin_get_feature_link_desc *link;
struct ena_admin_get_feat_resp feat_resp;
int rc;
rc = ena_com_get_link_params(ena_dev, &feat_resp);
if (rc)
return rc;
link = &feat_resp.u.link;
link_ksettings->base.speed = link->speed;
if (link->flags & ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK) {
ethtool_link_ksettings_add_link_mode(link_ksettings,
supported, Autoneg);
ethtool_link_ksettings_add_link_mode(link_ksettings,
advertising, Autoneg);
}
link_ksettings->base.autoneg =
(link->flags & ENA_ADMIN_GET_FEATURE_LINK_DESC_AUTONEG_MASK) ?
AUTONEG_ENABLE : AUTONEG_DISABLE;
link_ksettings->base.duplex = DUPLEX_FULL;
return 0;
}
static int ena_get_coalesce(struct net_device *net_dev,
struct ethtool_coalesce *coalesce)
{
struct ena_adapter *adapter = netdev_priv(net_dev);
struct ena_com_dev *ena_dev = adapter->ena_dev;
struct ena_intr_moder_entry intr_moder_entry;
if (!ena_com_interrupt_moderation_supported(ena_dev)) {
/* the device doesn't support interrupt moderation */
return -EOPNOTSUPP;
}
coalesce->tx_coalesce_usecs =
ena_com_get_nonadaptive_moderation_interval_tx(ena_dev) /
ena_dev->intr_delay_resolution;
if (!ena_com_get_adaptive_moderation_enabled(ena_dev)) {
coalesce->rx_coalesce_usecs =
ena_com_get_nonadaptive_moderation_interval_rx(ena_dev)
/ ena_dev->intr_delay_resolution;
} else {
ena_com_get_intr_moderation_entry(adapter->ena_dev, ENA_INTR_MODER_LOWEST, &intr_moder_entry);
coalesce->rx_coalesce_usecs_low = intr_moder_entry.intr_moder_interval;
coalesce->rx_max_coalesced_frames_low = intr_moder_entry.pkts_per_interval;
ena_com_get_intr_moderation_entry(adapter->ena_dev, ENA_INTR_MODER_MID, &intr_moder_entry);
coalesce->rx_coalesce_usecs = intr_moder_entry.intr_moder_interval;
coalesce->rx_max_coalesced_frames = intr_moder_entry.pkts_per_interval;
ena_com_get_intr_moderation_entry(adapter->ena_dev, ENA_INTR_MODER_HIGHEST, &intr_moder_entry);
coalesce->rx_coalesce_usecs_high = intr_moder_entry.intr_moder_interval;
coalesce->rx_max_coalesced_frames_high = intr_moder_entry.pkts_per_interval;
}
coalesce->use_adaptive_rx_coalesce =
ena_com_get_adaptive_moderation_enabled(ena_dev);
return 0;
}
static void ena_update_tx_rings_intr_moderation(struct ena_adapter *adapter)
{
unsigned int val;
int i;
val = ena_com_get_nonadaptive_moderation_interval_tx(adapter->ena_dev);
for (i = 0; i < adapter->num_queues; i++)
adapter->tx_ring[i].smoothed_interval = val;
}
static int ena_set_coalesce(struct net_device *net_dev,
struct ethtool_coalesce *coalesce)
{
struct ena_adapter *adapter = netdev_priv(net_dev);
struct ena_com_dev *ena_dev = adapter->ena_dev;
struct ena_intr_moder_entry intr_moder_entry;
int rc;
if (!ena_com_interrupt_moderation_supported(ena_dev)) {
/* the device doesn't support interrupt moderation */
return -EOPNOTSUPP;
}
if (coalesce->rx_coalesce_usecs_irq ||
coalesce->rx_max_coalesced_frames_irq ||
coalesce->tx_coalesce_usecs_irq ||
coalesce->tx_max_coalesced_frames ||
coalesce->tx_max_coalesced_frames_irq ||
coalesce->stats_block_coalesce_usecs ||
coalesce->use_adaptive_tx_coalesce ||
coalesce->pkt_rate_low ||
coalesce->tx_coalesce_usecs_low ||
coalesce->tx_max_coalesced_frames_low ||
coalesce->pkt_rate_high ||
coalesce->tx_coalesce_usecs_high ||
coalesce->tx_max_coalesced_frames_high ||
coalesce->rate_sample_interval)
return -EINVAL;
rc = ena_com_update_nonadaptive_moderation_interval_tx(ena_dev,
coalesce->tx_coalesce_usecs);
if (rc)
return rc;
ena_update_tx_rings_intr_moderation(adapter);
if (ena_com_get_adaptive_moderation_enabled(ena_dev)) {
if (!coalesce->use_adaptive_rx_coalesce) {
ena_com_disable_adaptive_moderation(ena_dev);
rc = ena_com_update_nonadaptive_moderation_interval_rx(ena_dev,
coalesce->rx_coalesce_usecs);
return rc;
}
} else { /* was in non-adaptive mode */
if (coalesce->use_adaptive_rx_coalesce) {
ena_com_enable_adaptive_moderation(ena_dev);
} else {
rc = ena_com_update_nonadaptive_moderation_interval_rx(ena_dev,
coalesce->rx_coalesce_usecs);
return rc;
}
}
intr_moder_entry.intr_moder_interval = coalesce->rx_coalesce_usecs_low;
intr_moder_entry.pkts_per_interval = coalesce->rx_max_coalesced_frames_low;
intr_moder_entry.bytes_per_interval = ENA_INTR_BYTE_COUNT_NOT_SUPPORTED;
ena_com_init_intr_moderation_entry(adapter->ena_dev, ENA_INTR_MODER_LOWEST, &intr_moder_entry);
intr_moder_entry.intr_moder_interval = coalesce->rx_coalesce_usecs;
intr_moder_entry.pkts_per_interval = coalesce->rx_max_coalesced_frames;
intr_moder_entry.bytes_per_interval = ENA_INTR_BYTE_COUNT_NOT_SUPPORTED;
ena_com_init_intr_moderation_entry(adapter->ena_dev, ENA_INTR_MODER_MID, &intr_moder_entry);
intr_moder_entry.intr_moder_interval = coalesce->rx_coalesce_usecs_high;
intr_moder_entry.pkts_per_interval = coalesce->rx_max_coalesced_frames_high;
intr_moder_entry.bytes_per_interval = ENA_INTR_BYTE_COUNT_NOT_SUPPORTED;
ena_com_init_intr_moderation_entry(adapter->ena_dev, ENA_INTR_MODER_HIGHEST, &intr_moder_entry);
return 0;
}
static u32 ena_get_msglevel(struct net_device *netdev)
{
struct ena_adapter *adapter = netdev_priv(netdev);
return adapter->msg_enable;
}
static void ena_set_msglevel(struct net_device *netdev, u32 value)
{
struct ena_adapter *adapter = netdev_priv(netdev);
adapter->msg_enable = value;
}
static void ena_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
struct ena_adapter *adapter = netdev_priv(dev);
strlcpy(info->driver, DRV_MODULE_NAME, sizeof(info->driver));
strlcpy(info->version, DRV_MODULE_VERSION, sizeof(info->version));
strlcpy(info->bus_info, pci_name(adapter->pdev),
sizeof(info->bus_info));
}
static void ena_get_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring)
{
struct ena_adapter *adapter = netdev_priv(netdev);
struct ena_ring *tx_ring = &adapter->tx_ring[0];
struct ena_ring *rx_ring = &adapter->rx_ring[0];
ring->rx_max_pending = rx_ring->ring_size;
ring->tx_max_pending = tx_ring->ring_size;
ring->rx_pending = rx_ring->ring_size;
ring->tx_pending = tx_ring->ring_size;
}
static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
{
u32 data = 0;
if (hash_fields & ENA_ADMIN_RSS_L2_DA)
data |= RXH_L2DA;
if (hash_fields & ENA_ADMIN_RSS_L3_DA)
data |= RXH_IP_DST;
if (hash_fields & ENA_ADMIN_RSS_L3_SA)
data |= RXH_IP_SRC;
if (hash_fields & ENA_ADMIN_RSS_L4_DP)
data |= RXH_L4_B_2_3;
if (hash_fields & ENA_ADMIN_RSS_L4_SP)
data |= RXH_L4_B_0_1;
return data;
}
static u16 ena_flow_data_to_flow_hash(u32 hash_fields)
{
u16 data = 0;
if (hash_fields & RXH_L2DA)
data |= ENA_ADMIN_RSS_L2_DA;
if (hash_fields & RXH_IP_DST)
data |= ENA_ADMIN_RSS_L3_DA;
if (hash_fields & RXH_IP_SRC)
data |= ENA_ADMIN_RSS_L3_SA;
if (hash_fields & RXH_L4_B_2_3)
data |= ENA_ADMIN_RSS_L4_DP;
if (hash_fields & RXH_L4_B_0_1)
data |= ENA_ADMIN_RSS_L4_SP;
return data;
}
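/* Example mapping: a standard 4-tuple hash request of
 * RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3 translates to
 * ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA |
 * ENA_ADMIN_RSS_L4_SP | ENA_ADMIN_RSS_L4_DP, and
 * ena_flow_hash_to_flow_type() inverts it on the get path.
 */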
static int ena_get_rss_hash(struct ena_com_dev *ena_dev,
struct ethtool_rxnfc *cmd)
{
enum ena_admin_flow_hash_proto proto;
u16 hash_fields;
int rc;
cmd->data = 0;
switch (cmd->flow_type) {
case TCP_V4_FLOW:
proto = ENA_ADMIN_RSS_TCP4;
break;
case UDP_V4_FLOW:
proto = ENA_ADMIN_RSS_UDP4;
break;
case TCP_V6_FLOW:
proto = ENA_ADMIN_RSS_TCP6;
break;
case UDP_V6_FLOW:
proto = ENA_ADMIN_RSS_UDP6;
break;
case IPV4_FLOW:
proto = ENA_ADMIN_RSS_IP4;
break;
case IPV6_FLOW:
proto = ENA_ADMIN_RSS_IP6;
break;
case ETHER_FLOW:
proto = ENA_ADMIN_RSS_NOT_IP;
break;
case AH_V4_FLOW:
case ESP_V4_FLOW:
case AH_V6_FLOW:
case ESP_V6_FLOW:
case SCTP_V4_FLOW:
case AH_ESP_V4_FLOW:
return -EOPNOTSUPP;
default:
return -EINVAL;
}
rc = ena_com_get_hash_ctrl(ena_dev, proto, &hash_fields);
if (rc) {
/* If the device doesn't have permission, return unsupported */
if (rc == -EPERM)
rc = -EOPNOTSUPP;
return rc;
}
cmd->data = ena_flow_hash_to_flow_type(hash_fields);
return 0;
}
static int ena_set_rss_hash(struct ena_com_dev *ena_dev,
struct ethtool_rxnfc *cmd)
{
enum ena_admin_flow_hash_proto proto;
u16 hash_fields;
switch (cmd->flow_type) {
case TCP_V4_FLOW:
proto = ENA_ADMIN_RSS_TCP4;
break;
case UDP_V4_FLOW:
proto = ENA_ADMIN_RSS_UDP4;
break;
case TCP_V6_FLOW:
proto = ENA_ADMIN_RSS_TCP6;
break;
case UDP_V6_FLOW:
proto = ENA_ADMIN_RSS_UDP6;
break;
case IPV4_FLOW:
proto = ENA_ADMIN_RSS_IP4;
break;
case IPV6_FLOW:
proto = ENA_ADMIN_RSS_IP6;
break;
case ETHER_FLOW:
proto = ENA_ADMIN_RSS_NOT_IP;
break;
case AH_V4_FLOW:
case ESP_V4_FLOW:
case AH_V6_FLOW:
case ESP_V6_FLOW:
case SCTP_V4_FLOW:
case AH_ESP_V4_FLOW:
return -EOPNOTSUPP;
default:
return -EINVAL;
}
hash_fields = ena_flow_data_to_flow_hash(cmd->data);
return ena_com_fill_hash_ctrl(ena_dev, proto, hash_fields);
}
static int ena_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *info)
{
struct ena_adapter *adapter = netdev_priv(netdev);
int rc = 0;
switch (info->cmd) {
case ETHTOOL_SRXFH:
rc = ena_set_rss_hash(adapter->ena_dev, info);
break;
case ETHTOOL_SRXCLSRLDEL:
case ETHTOOL_SRXCLSRLINS:
default:
netif_err(adapter, drv, netdev,
"Command parameter %d is not supported\n", info->cmd);
rc = -EOPNOTSUPP;
}
return (rc == -EPERM) ? -EOPNOTSUPP : rc;
}
static int ena_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *info,
u32 *rules)
{
struct ena_adapter *adapter = netdev_priv(netdev);
int rc = 0;
switch (info->cmd) {
case ETHTOOL_GRXRINGS:
info->data = adapter->num_queues;
rc = 0;
break;
case ETHTOOL_GRXFH:
rc = ena_get_rss_hash(adapter->ena_dev, info);
break;
case ETHTOOL_GRXCLSRLCNT:
case ETHTOOL_GRXCLSRULE:
case ETHTOOL_GRXCLSRLALL:
default:
netif_err(adapter, drv, netdev,
"Command parameter %d is not supported\n", info->cmd);
rc = -EOPNOTSUPP;
}
return (rc == -EPERM) ? -EOPNOTSUPP : rc;
}
static u32 ena_get_rxfh_indir_size(struct net_device *netdev)
{
return ENA_RX_RSS_TABLE_SIZE;
}
static u32 ena_get_rxfh_key_size(struct net_device *netdev)
{
return ENA_HASH_KEY_SIZE;
}
static int ena_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
u8 *hfunc)
{
struct ena_adapter *adapter = netdev_priv(netdev);
enum ena_admin_hash_functions ena_func;
u8 func;
int rc;
rc = ena_com_indirect_table_get(adapter->ena_dev, indir);
if (rc)
return rc;
rc = ena_com_get_hash_function(adapter->ena_dev, &ena_func, key);
if (rc)
return rc;
switch (ena_func) {
case ENA_ADMIN_TOEPLITZ:
func = ETH_RSS_HASH_TOP;
break;
case ENA_ADMIN_CRC32:
func = ETH_RSS_HASH_XOR;
break;
default:
netif_err(adapter, drv, netdev,
"Command parameter is not supported\n");
return -EOPNOTSUPP;
}
if (hfunc)
*hfunc = func;
return rc;
}
static int ena_set_rxfh(struct net_device *netdev, const u32 *indir,
const u8 *key, const u8 hfunc)
{
struct ena_adapter *adapter = netdev_priv(netdev);
struct ena_com_dev *ena_dev = adapter->ena_dev;
enum ena_admin_hash_functions func;
int rc, i;
if (indir) {
for (i = 0; i < ENA_RX_RSS_TABLE_SIZE; i++) {
rc = ena_com_indirect_table_fill_entry(ena_dev,
ENA_IO_RXQ_IDX(indir[i]),
i);
if (unlikely(rc)) {
netif_err(adapter, drv, netdev,
"Cannot fill indirect table (index is too large)\n");
return rc;
}
}
rc = ena_com_indirect_table_set(ena_dev);
if (rc) {
netif_err(adapter, drv, netdev,
"Cannot set indirect table\n");
return rc == -EPERM ? -EOPNOTSUPP : rc;
}
}
switch (hfunc) {
case ETH_RSS_HASH_TOP:
func = ENA_ADMIN_TOEPLITZ;
break;
case ETH_RSS_HASH_XOR:
func = ENA_ADMIN_CRC32;
break;
default:
netif_err(adapter, drv, netdev, "Unsupported hfunc %d\n",
hfunc);
return -EOPNOTSUPP;
}
if (key) {
rc = ena_com_fill_hash_function(ena_dev, func, key,
ENA_HASH_KEY_SIZE,
0xFFFFFFFF);
if (unlikely(rc)) {
netif_err(adapter, drv, netdev, "Cannot fill key\n");
return rc == -EPERM ? -EOPNOTSUPP : rc;
}
}
return 0;
}
static void ena_get_channels(struct net_device *netdev,
struct ethtool_channels *channels)
{
struct ena_adapter *adapter = netdev_priv(netdev);
channels->max_rx = ENA_MAX_NUM_IO_QUEUES;
channels->max_tx = ENA_MAX_NUM_IO_QUEUES;
channels->max_other = 0;
channels->max_combined = 0;
channels->rx_count = adapter->num_queues;
channels->tx_count = adapter->num_queues;
channels->other_count = 0;
channels->combined_count = 0;
}
static int ena_get_tunable(struct net_device *netdev,
const struct ethtool_tunable *tuna, void *data)
{
struct ena_adapter *adapter = netdev_priv(netdev);
int ret = 0;
switch (tuna->id) {
case ETHTOOL_RX_COPYBREAK:
*(u32 *)data = adapter->rx_copybreak;
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
static int ena_set_tunable(struct net_device *netdev,
const struct ethtool_tunable *tuna,
const void *data)
{
struct ena_adapter *adapter = netdev_priv(netdev);
int ret = 0;
u32 len;
switch (tuna->id) {
case ETHTOOL_RX_COPYBREAK:
len = *(u32 *)data;
if (len > adapter->netdev->mtu) {
ret = -EINVAL;
break;
}
adapter->rx_copybreak = len;
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
static const struct ethtool_ops ena_ethtool_ops = {
.get_link_ksettings = ena_get_link_ksettings,
.get_drvinfo = ena_get_drvinfo,
.get_msglevel = ena_get_msglevel,
.set_msglevel = ena_set_msglevel,
.get_link = ethtool_op_get_link,
.get_coalesce = ena_get_coalesce,
.set_coalesce = ena_set_coalesce,
.get_ringparam = ena_get_ringparam,
.get_sset_count = ena_get_sset_count,
.get_strings = ena_get_strings,
.get_ethtool_stats = ena_get_ethtool_stats,
.get_rxnfc = ena_get_rxnfc,
.set_rxnfc = ena_set_rxnfc,
.get_rxfh_indir_size = ena_get_rxfh_indir_size,
.get_rxfh_key_size = ena_get_rxfh_key_size,
.get_rxfh = ena_get_rxfh,
.set_rxfh = ena_set_rxfh,
.get_channels = ena_get_channels,
.get_tunable = ena_get_tunable,
.set_tunable = ena_set_tunable,
};
void ena_set_ethtool_ops(struct net_device *netdev)
{
netdev->ethtool_ops = &ena_ethtool_ops;
}
static void ena_dump_stats_ex(struct ena_adapter *adapter, u8 *buf)
{
struct net_device *netdev = adapter->netdev;
u8 *strings_buf;
u64 *data_buf;
int strings_num;
int i, rc;
strings_num = ena_get_sset_count(netdev, ETH_SS_STATS);
if (strings_num <= 0) {
netif_err(adapter, drv, netdev, "Can't get stats num\n");
return;
}
strings_buf = devm_kzalloc(&adapter->pdev->dev,
strings_num * ETH_GSTRING_LEN,
GFP_ATOMIC);
if (!strings_buf) {
netif_err(adapter, drv, netdev,
"failed to alloc strings_buf\n");
return;
}
data_buf = devm_kzalloc(&adapter->pdev->dev,
strings_num * sizeof(u64),
GFP_ATOMIC);
if (!data_buf) {
netif_err(adapter, drv, netdev,
"failed to allocate data buf\n");
devm_kfree(&adapter->pdev->dev, strings_buf);
return;
}
ena_get_strings(netdev, ETH_SS_STATS, strings_buf);
ena_get_ethtool_stats(netdev, NULL, data_buf);
/* If there is a buffer, dump stats, otherwise print them to dmesg */
if (buf)
for (i = 0; i < strings_num; i++) {
rc = snprintf(buf, ETH_GSTRING_LEN + sizeof(u64),
"%s %llu\n",
strings_buf + i * ETH_GSTRING_LEN,
data_buf[i]);
buf += rc;
}
else
for (i = 0; i < strings_num; i++)
netif_err(adapter, drv, netdev, "%s: %llu\n",
strings_buf + i * ETH_GSTRING_LEN,
data_buf[i]);
devm_kfree(&adapter->pdev->dev, strings_buf);
devm_kfree(&adapter->pdev->dev, data_buf);
}
void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf)
{
if (!buf)
return;
ena_dump_stats_ex(adapter, buf);
}
void ena_dump_stats_to_dmesg(struct ena_adapter *adapter)
{
ena_dump_stats_ex(adapter, NULL);
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,324 @@
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef ENA_H
#define ENA_H
#include <linux/bitops.h>
#include <linux/etherdevice.h>
#include <linux/inetdevice.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include "ena_com.h"
#include "ena_eth_com.h"
#define DRV_MODULE_VER_MAJOR 1
#define DRV_MODULE_VER_MINOR 0
#define DRV_MODULE_VER_SUBMINOR 2
#define DRV_MODULE_NAME "ena"
#ifndef DRV_MODULE_VERSION
#define DRV_MODULE_VERSION \
__stringify(DRV_MODULE_VER_MAJOR) "." \
__stringify(DRV_MODULE_VER_MINOR) "." \
__stringify(DRV_MODULE_VER_SUBMINOR)
#endif
#define DEVICE_NAME "Elastic Network Adapter (ENA)"
/* 1 for AENQ + ADMIN */
#define ENA_MAX_MSIX_VEC(io_queues) (1 + (io_queues))
#define ENA_REG_BAR 0
#define ENA_MEM_BAR 2
#define ENA_BAR_MASK (BIT(ENA_REG_BAR) | BIT(ENA_MEM_BAR))
#define ENA_DEFAULT_RING_SIZE (1024)
#define ENA_TX_WAKEUP_THRESH (MAX_SKB_FRAGS + 2)
#define ENA_DEFAULT_RX_COPYBREAK (128 - NET_IP_ALIGN)
/* limit the buffer size to 600 bytes to handle MTU changes from very
* small to very large, in which case the number of buffers per packet
* could exceed ENA_PKT_MAX_BUFS
*/
#define ENA_DEFAULT_MIN_RX_BUFF_ALLOC_SIZE 600
#define ENA_MIN_MTU 128
#define ENA_NAME_MAX_LEN 20
#define ENA_IRQNAME_SIZE 40
#define ENA_PKT_MAX_BUFS 19
#define ENA_RX_RSS_TABLE_LOG_SIZE 7
#define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE)
#define ENA_HASH_KEY_SIZE 40
/* The number of tx packet completions that will be handled each NAPI poll
* cycle is ring_size / ENA_TX_POLL_BUDGET_DIVIDER.
*/
#define ENA_TX_POLL_BUDGET_DIVIDER 4
/* Refill Rx queue when number of available descriptors is below
* QUEUE_SIZE / ENA_RX_REFILL_THRESH_DIVIDER
*/
#define ENA_RX_REFILL_THRESH_DIVIDER 8
/* Number of TX queues to check for missing TX completions per timer service run */
#define ENA_MONITORED_TX_QUEUES 4
/* Max number of timed-out TX packets before triggering a device reset */
#define MAX_NUM_OF_TIMEOUTED_PACKETS 32
#define ENA_TX_RING_IDX_NEXT(idx, ring_size) (((idx) + 1) & ((ring_size) - 1))
#define ENA_RX_RING_IDX_NEXT(idx, ring_size) (((idx) + 1) & ((ring_size) - 1))
#define ENA_RX_RING_IDX_ADD(idx, n, ring_size) \
(((idx) + (n)) & ((ring_size) - 1))
#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
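/* TX and RX submission queues are interleaved on even/odd indices, e.g.
 * queue pair 3 uses SQ 6 for TX and SQ 7 for RX. The ring-index macros
 * above assume ring_size is a power of 2, so with ring_size = 1024,
 * ENA_TX_RING_IDX_NEXT(1023, 1024) wraps back to 0.
 */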
#define ENA_MGMNT_IRQ_IDX 0
#define ENA_IO_IRQ_FIRST_IDX 1
#define ENA_IO_IRQ_IDX(q) (ENA_IO_IRQ_FIRST_IDX + (q))
/* The ENA device should send a keep-alive message every second.
 * We wait for 3 seconds just to be on the safe side.
 */
#define ENA_DEVICE_KALIVE_TIMEOUT (3 * HZ)
#define ENA_MMIO_DISABLE_REG_READ BIT(0)
struct ena_irq {
irq_handler_t handler;
void *data;
int cpu;
u32 vector;
cpumask_t affinity_hint_mask;
char name[ENA_IRQNAME_SIZE];
};
struct ena_napi {
struct napi_struct napi ____cacheline_aligned;
struct ena_ring *tx_ring;
struct ena_ring *rx_ring;
u32 qid;
};
struct ena_tx_buffer {
struct sk_buff *skb;
/* num of ena desc for this specific skb
* (includes data desc and metadata desc)
*/
u32 tx_descs;
/* num of buffers used by this skb */
u32 num_of_bufs;
/* Save the last jiffies to detect missing tx packets */
unsigned long last_jiffies;
struct ena_com_buf bufs[ENA_PKT_MAX_BUFS];
} ____cacheline_aligned;
struct ena_rx_buffer {
struct sk_buff *skb;
struct page *page;
u32 page_offset;
struct ena_com_buf ena_buf;
} ____cacheline_aligned;
struct ena_stats_tx {
u64 cnt;
u64 bytes;
u64 queue_stop;
u64 prepare_ctx_err;
u64 queue_wakeup;
u64 dma_mapping_err;
u64 linearize;
u64 linearize_failed;
u64 napi_comp;
u64 tx_poll;
u64 doorbells;
u64 missing_tx_comp;
u64 bad_req_id;
};
struct ena_stats_rx {
u64 cnt;
u64 bytes;
u64 refil_partial;
u64 bad_csum;
u64 page_alloc_fail;
u64 skb_alloc_fail;
u64 dma_mapping_err;
u64 bad_desc_num;
u64 rx_copybreak_pkt;
};
struct ena_ring {
/* Holds the free request ids for TX out-of-order completions */
u16 *free_tx_ids;
union {
struct ena_tx_buffer *tx_buffer_info;
struct ena_rx_buffer *rx_buffer_info;
};
/* cached pointers, to avoid dereferencing the adapter on the hot path */
struct device *dev;
struct pci_dev *pdev;
struct napi_struct *napi;
struct net_device *netdev;
struct ena_com_dev *ena_dev;
struct ena_adapter *adapter;
struct ena_com_io_cq *ena_com_io_cq;
struct ena_com_io_sq *ena_com_io_sq;
u16 next_to_use;
u16 next_to_clean;
u16 rx_copybreak;
u16 qid;
u16 mtu;
u16 sgl_size;
/* The maximum header length the device can handle */
u8 tx_max_header_size;
/* cpu for TPH */
int cpu;
/* number of tx/rx_buffer_info's entries */
int ring_size;
enum ena_admin_placement_policy_type tx_mem_queue_type;
struct ena_com_rx_buf_info ena_bufs[ENA_PKT_MAX_BUFS];
u32 smoothed_interval;
u32 per_napi_packets;
u32 per_napi_bytes;
enum ena_intr_moder_level moder_tbl_idx;
struct u64_stats_sync syncp;
union {
struct ena_stats_tx tx_stats;
struct ena_stats_rx rx_stats;
};
} ____cacheline_aligned;
struct ena_stats_dev {
u64 tx_timeout;
u64 io_suspend;
u64 io_resume;
u64 wd_expired;
u64 interface_up;
u64 interface_down;
u64 admin_q_pause;
};
enum ena_flags_t {
ENA_FLAG_DEVICE_RUNNING,
ENA_FLAG_DEV_UP,
ENA_FLAG_LINK_UP,
ENA_FLAG_MSIX_ENABLED,
ENA_FLAG_TRIGGER_RESET
};
/* adapter specific private data structure */
struct ena_adapter {
struct ena_com_dev *ena_dev;
/* OS defined structs */
struct net_device *netdev;
struct pci_dev *pdev;
/* rx packets shorter than this len will be copied to the skb
 * header
 */
u32 rx_copybreak;
u32 max_mtu;
int num_queues;
struct msix_entry *msix_entries;
int msix_vecs;
u32 tx_usecs, rx_usecs; /* interrupt moderation */
u32 tx_frames, rx_frames; /* interrupt moderation */
u32 tx_ring_size;
u32 rx_ring_size;
u32 msg_enable;
u16 max_tx_sgl_size;
u16 max_rx_sgl_size;
u8 mac_addr[ETH_ALEN];
char name[ENA_NAME_MAX_LEN];
unsigned long flags;
/* TX */
struct ena_ring tx_ring[ENA_MAX_NUM_IO_QUEUES]
____cacheline_aligned_in_smp;
/* RX */
struct ena_ring rx_ring[ENA_MAX_NUM_IO_QUEUES]
____cacheline_aligned_in_smp;
struct ena_napi ena_napi[ENA_MAX_NUM_IO_QUEUES];
struct ena_irq irq_tbl[ENA_MAX_MSIX_VEC(ENA_MAX_NUM_IO_QUEUES)];
/* timer service */
struct work_struct reset_task;
struct work_struct suspend_io_task;
struct work_struct resume_io_task;
struct timer_list timer_service;
bool wd_state;
unsigned long last_keep_alive_jiffies;
struct u64_stats_sync syncp;
struct ena_stats_dev dev_stats;
/* last queue index that was checked for uncompleted tx packets */
u32 last_monitored_tx_qid;
};
void ena_set_ethtool_ops(struct net_device *netdev);
void ena_dump_stats_to_dmesg(struct ena_adapter *adapter);
void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
int ena_get_sset_count(struct net_device *netdev, int sset);
#endif /* !(ENA_H) */

View File

@@ -0,0 +1,67 @@
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef ENA_PCI_ID_TBL_H_
#define ENA_PCI_ID_TBL_H_
#ifndef PCI_VENDOR_ID_AMAZON
#define PCI_VENDOR_ID_AMAZON 0x1d0f
#endif
#ifndef PCI_DEV_ID_ENA_PF
#define PCI_DEV_ID_ENA_PF 0x0ec2
#endif
#ifndef PCI_DEV_ID_ENA_LLQ_PF
#define PCI_DEV_ID_ENA_LLQ_PF 0x1ec2
#endif
#ifndef PCI_DEV_ID_ENA_VF
#define PCI_DEV_ID_ENA_VF 0xec20
#endif
#ifndef PCI_DEV_ID_ENA_LLQ_VF
#define PCI_DEV_ID_ENA_LLQ_VF 0xec21
#endif
#define ENA_PCI_ID_TABLE_ENTRY(devid) \
{PCI_DEVICE(PCI_VENDOR_ID_AMAZON, devid)},
static const struct pci_device_id ena_pci_tbl[] = {
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_PF)
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_LLQ_PF)
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_VF)
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_LLQ_VF)
{ }
};
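/* A sketch of how this table is typically consumed (the driver does
 * this in ena_netdev.c, which is suppressed from this diff):
 *
 * MODULE_DEVICE_TABLE(pci, ena_pci_tbl);
 * ...
 * static struct pci_driver ena_pci_driver = {
 *         .name     = DRV_MODULE_NAME,
 *         .id_table = ena_pci_tbl,
 *         .probe    = ena_probe,
 *         .remove   = ena_remove,
 * };
 */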
#endif /* ENA_PCI_ID_TBL_H_ */

View File

@@ -0,0 +1,133 @@
/*
* Copyright 2015 - 2016 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _ENA_REGS_H_
#define _ENA_REGS_H_
/* ena_registers offsets */
#define ENA_REGS_VERSION_OFF 0x0
#define ENA_REGS_CONTROLLER_VERSION_OFF 0x4
#define ENA_REGS_CAPS_OFF 0x8
#define ENA_REGS_CAPS_EXT_OFF 0xc
#define ENA_REGS_AQ_BASE_LO_OFF 0x10
#define ENA_REGS_AQ_BASE_HI_OFF 0x14
#define ENA_REGS_AQ_CAPS_OFF 0x18
#define ENA_REGS_ACQ_BASE_LO_OFF 0x20
#define ENA_REGS_ACQ_BASE_HI_OFF 0x24
#define ENA_REGS_ACQ_CAPS_OFF 0x28
#define ENA_REGS_AQ_DB_OFF 0x2c
#define ENA_REGS_ACQ_TAIL_OFF 0x30
#define ENA_REGS_AENQ_CAPS_OFF 0x34
#define ENA_REGS_AENQ_BASE_LO_OFF 0x38
#define ENA_REGS_AENQ_BASE_HI_OFF 0x3c
#define ENA_REGS_AENQ_HEAD_DB_OFF 0x40
#define ENA_REGS_AENQ_TAIL_OFF 0x44
#define ENA_REGS_INTR_MASK_OFF 0x4c
#define ENA_REGS_DEV_CTL_OFF 0x54
#define ENA_REGS_DEV_STS_OFF 0x58
#define ENA_REGS_MMIO_REG_READ_OFF 0x5c
#define ENA_REGS_MMIO_RESP_LO_OFF 0x60
#define ENA_REGS_MMIO_RESP_HI_OFF 0x64
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_OFF 0x68
/* version register */
#define ENA_REGS_VERSION_MINOR_VERSION_MASK 0xff
#define ENA_REGS_VERSION_MAJOR_VERSION_SHIFT 8
#define ENA_REGS_VERSION_MAJOR_VERSION_MASK 0xff00
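/* Illustrative decode of the version register (reg_bar stands for the
 * ioremapped ENA_REG_BAR, a hypothetical local variable):
 *
 * u32 ver = readl(reg_bar + ENA_REGS_VERSION_OFF);
 * u8 major = (ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >>
 *            ENA_REGS_VERSION_MAJOR_VERSION_SHIFT;
 * u8 minor = ver & ENA_REGS_VERSION_MINOR_VERSION_MASK;
 */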
/* controller_version register */
#define ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK 0xff
#define ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT 8
#define ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK 0xff00
#define ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT 16
#define ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK 0xff0000
#define ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT 24
#define ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK 0xff000000
/* caps register */
#define ENA_REGS_CAPS_CONTIGUOUS_QUEUE_REQUIRED_MASK 0x1
#define ENA_REGS_CAPS_RESET_TIMEOUT_SHIFT 1
#define ENA_REGS_CAPS_RESET_TIMEOUT_MASK 0x3e
#define ENA_REGS_CAPS_DMA_ADDR_WIDTH_SHIFT 8
#define ENA_REGS_CAPS_DMA_ADDR_WIDTH_MASK 0xff00
/* aq_caps register */
#define ENA_REGS_AQ_CAPS_AQ_DEPTH_MASK 0xffff
#define ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_SHIFT 16
#define ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_MASK 0xffff0000
/* acq_caps register */
#define ENA_REGS_ACQ_CAPS_ACQ_DEPTH_MASK 0xffff
#define ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_SHIFT 16
#define ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_MASK 0xffff0000
/* aenq_caps register */
#define ENA_REGS_AENQ_CAPS_AENQ_DEPTH_MASK 0xffff
#define ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_SHIFT 16
#define ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_MASK 0xffff0000
/* dev_ctl register */
#define ENA_REGS_DEV_CTL_DEV_RESET_MASK 0x1
#define ENA_REGS_DEV_CTL_AQ_RESTART_SHIFT 1
#define ENA_REGS_DEV_CTL_AQ_RESTART_MASK 0x2
#define ENA_REGS_DEV_CTL_QUIESCENT_SHIFT 2
#define ENA_REGS_DEV_CTL_QUIESCENT_MASK 0x4
#define ENA_REGS_DEV_CTL_IO_RESUME_SHIFT 3
#define ENA_REGS_DEV_CTL_IO_RESUME_MASK 0x8
/* dev_sts register */
#define ENA_REGS_DEV_STS_READY_MASK 0x1
#define ENA_REGS_DEV_STS_AQ_RESTART_IN_PROGRESS_SHIFT 1
#define ENA_REGS_DEV_STS_AQ_RESTART_IN_PROGRESS_MASK 0x2
#define ENA_REGS_DEV_STS_AQ_RESTART_FINISHED_SHIFT 2
#define ENA_REGS_DEV_STS_AQ_RESTART_FINISHED_MASK 0x4
#define ENA_REGS_DEV_STS_RESET_IN_PROGRESS_SHIFT 3
#define ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK 0x8
#define ENA_REGS_DEV_STS_RESET_FINISHED_SHIFT 4
#define ENA_REGS_DEV_STS_RESET_FINISHED_MASK 0x10
#define ENA_REGS_DEV_STS_FATAL_ERROR_SHIFT 5
#define ENA_REGS_DEV_STS_FATAL_ERROR_MASK 0x20
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_IN_PROGRESS_SHIFT 6
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_IN_PROGRESS_MASK 0x40
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_ACHIEVED_SHIFT 7
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_ACHIEVED_MASK 0x80
/* mmio_reg_read register */
#define ENA_REGS_MMIO_REG_READ_REQ_ID_MASK 0xffff
#define ENA_REGS_MMIO_REG_READ_REG_OFF_SHIFT 16
#define ENA_REGS_MMIO_REG_READ_REG_OFF_MASK 0xffff0000
/* rss_ind_entry_update register */
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_INDEX_MASK 0xffff
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_CQ_IDX_SHIFT 16
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_CQ_IDX_MASK 0xffff0000
#endif /* _ENA_REGS_H_ */