Merge branches 'acpi-resources', 'acpi-battery', 'acpi-doc' and 'acpi-pnp'

* acpi-resources:
  x86/PCI/ACPI: Make all resources except [io 0xcf8-0xcff] available on PCI bus

* acpi-battery:
  ACPI / SBS: Add 5 us delay to fix SBS hangs on MacBook

* acpi-doc:
  ACPI / documentation: Fix ambiguity in the GPIO properties document
  ACPI / documentation: fix a sentence about GPIO resources

* acpi-pnp:
  ACPI / PNP: add two IDs to list for PNPACPI device enumeration
Rafael J. Wysocki 2015-05-07 21:24:34 +02:00
commit 9a5d9315e4
230 changed files with 2712 additions and 1661 deletions


@ -253,7 +253,7 @@ input driver:
GPIO support
~~~~~~~~~~~~
ACPI 5 introduced two new resources to describe GPIO connections: GpioIo
and GpioInt. These resources are used be used to pass GPIO numbers used by
and GpioInt. These resources can be used to pass GPIO numbers used by
the device to the driver. ACPI 5.1 extended this with _DSD (Device
Specific Data) which made it possible to name the GPIOs among other things.


@ -1,9 +1,9 @@
_DSD Device Properties Related to GPIO
--------------------------------------
With the release of ACPI 5.1 and the _DSD configuration objecte names
can finally be given to GPIOs (and other things as well) returned by
_CRS. Previously, we were only able to use an integer index to find
With the release of ACPI 5.1, the _DSD configuration object finally
allows names to be given to GPIOs (and other things as well) returned
by _CRS. Previously, we were only able to use an integer index to find
the corresponding GPIO, which is pretty error prone (it depends on
the _CRS output ordering, for example).
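
To show what the naming buys on the consumer side, here is a minimal driver-side sketch, assuming a GPIO named "reset" has been declared via _DSD for the device; the gpiod call is the descriptor-based API of this kernel series, and its exact signature varies between versions:

        struct gpio_desc *reset;

        /* request the GPIO by its _DSD name rather than a _CRS index */
        reset = devm_gpiod_get(&pdev->dev, "reset", GPIOD_OUT_LOW);
        if (IS_ERR(reset))
                return PTR_ERR(reset);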


@ -3787,6 +3787,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
READ_CAPACITY_16 command);
f = NO_REPORT_OPCODES (don't use report opcodes
command, uas only);
g = MAX_SECTORS_240 (don't transfer more than
240 sectors at a time, uas only);
h = CAPACITY_HEURISTICS (decrease the
reported device capacity by one
sector if the number is odd);
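
As a hedged usage note, the new flag is applied from the kernel command line in the usual VID:PID:Flags form; the device IDs below are made up for illustration:

        usb-storage.quirks=0bc2:2312:g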


@ -119,9 +119,9 @@ Most notably, in the x509.genkey file, the req_distinguished_name section
should be altered from the default:
[ req_distinguished_name ]
O = Magrathea
CN = Glacier signing key
emailAddress = slartibartfast@magrathea.h2g2
#O = Unspecified company
CN = Build time autogenerated kernel key
#emailAddress = unspecified.user@unspecified.company
The generated RSA key size can also be set with:


@ -18,3 +18,12 @@ platform_labels - INTEGER
Possible values: 0 - 1048575
Default: 0
conf/<interface>/input - BOOL
Control whether packets can be input on this interface.
If disabled, packets will be discarded without further
processing.
0 - disabled (default)
not 0 - enabled
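
A small userspace sketch of flipping this switch for one interface (equivalent to sysctl net.mpls.conf.eth0.input=1; the interface name and path are examples only):

        #include <fcntl.h>
        #include <unistd.h>

        static int enable_mpls_input(const char *path)
        {
                /* e.g. "/proc/sys/net/mpls/conf/eth0/input" */
                int fd = open(path, O_WRONLY);

                if (fd < 0)
                        return -1;
                if (write(fd, "1", 1) != 1) {
                        close(fd);
                        return -1;
                }
                return close(fd);
        }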


@ -282,7 +282,7 @@ following is true:
- The current CPU's queue head counter >= the recorded tail counter
value in rps_dev_flow[i]
- The current CPU is unset (equal to RPS_NO_CPU)
- The current CPU is unset (>= nr_cpu_ids)
- The current CPU is offline
After this check, the packet is sent to the (possibly updated) current
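
Condensing that decision into code, a sketch with the three conditions in the order listed above; the parameters are stand-ins for the kernel's per-CPU state, not the exact get_rps_cpu() internals:

        #include <stdbool.h>

        /* true if the flow may be steered to the newly selected CPU */
        static bool may_switch_cpu(unsigned int cur_cpu, unsigned int nr_cpu_ids,
                                   bool cur_cpu_online,
                                   unsigned int queue_head, unsigned int last_qtail)
        {
                return (int)(queue_head - last_qtail) >= 0 || /* head >= recorded tail */
                       cur_cpu >= nr_cpu_ids ||               /* CPU unset */
                       !cur_cpu_online;                       /* CPU offline */
        }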


@ -74,23 +74,22 @@ Causes of transaction aborts
Syscalls
========
Syscalls made from within an active transaction will not be performed and the
transaction will be doomed by the kernel with the failure code TM_CAUSE_SYSCALL
| TM_CAUSE_PERSISTENT.
Performing syscalls from within transaction is not recommended, and can lead
to unpredictable results.
Syscalls made from within a suspended transaction are performed as normal and
the transaction is not explicitly doomed by the kernel. However, what the
kernel does to perform the syscall may result in the transaction being doomed
by the hardware. The syscall is performed in suspended mode so any side
effects will be persistent, independent of transaction success or failure. No
guarantees are provided by the kernel about which syscalls will affect
transaction success.
Syscalls do not by design abort transactions, but beware: The kernel code will
not be running in transactional state. The effect of syscalls will always
remain visible, but depending on the call they may abort your transaction as a
side-effect, read soon-to-be-aborted transactional data that should not remain
invisible, etc. If you constantly retry a transaction that constantly aborts
itself by calling a syscall, you'll have a livelock & make no progress.
Care must be taken when relying on syscalls to abort during active transactions
if the calls are made via a library. Libraries may cache values (which may
give the appearance of success) or perform operations that cause transaction
failure before entering the kernel (which may produce different failure codes).
Examples are glibc's getpid() and lazy symbol resolution.
Simple syscalls (e.g. sigprocmask()) "could" be OK. Even things like write()
from, say, printf() should be OK as long as the kernel does not access any
memory that was accessed transactionally.
Consider any syscalls that happen to work as debug-only -- not recommended for
production use. Best to queue them up till after the transaction is over.
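
A minimal sketch of the "queue them up" advice, assuming powerpc64 user space built with GCC's -mhtm builtins; the transaction itself only touches memory, and the write() side effect runs after commit:

        #include <htmintrin.h>
        #include <unistd.h>

        static int bump_then_log(int *shared)
        {
                if (__builtin_tbegin(0)) {
                        (*shared)++;            /* transactional work only */
                        __builtin_tend(0);
                        /* deferred side effect, after the commit */
                        return write(2, "done\n", 5) == 5 ? 0 : -1;
                }
                return -1;                      /* aborted; caller may retry */
        }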
Signals
@ -177,7 +176,8 @@ kernel aborted a transaction:
TM_CAUSE_RESCHED Thread was rescheduled.
TM_CAUSE_TLBI Software TLB invalid.
TM_CAUSE_FAC_UNAV FP/VEC/VSX unavailable trap.
TM_CAUSE_SYSCALL Syscall from active transaction.
TM_CAUSE_SYSCALL Currently unused; future syscalls that must abort
transactions for consistency will use this.
TM_CAUSE_SIGNAL Signal delivered.
TM_CAUSE_MISC Currently unused.
TM_CAUSE_ALIGNMENT Alignment fault.
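
For completeness, a failure handler can inspect the cause; a hedged sketch (GCC -mhtm, and it assumes the code byte sits in the topmost bits of TEXASR, which is how these values are reported):

        #include <htmintrin.h>
        #include <asm/tm.h>     /* TM_CAUSE_* values */

        static int abort_was_transient(void)
        {
                unsigned long texasr = __builtin_get_texasr();
                unsigned char cause = texasr >> 56;

                return cause == TM_CAUSE_RESCHED; /* worth retrying */
        }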


@ -3413,6 +3413,13 @@ F: drivers/gpu/drm/rcar-du/
F: drivers/gpu/drm/shmobile/
F: include/linux/platform_data/shmob_drm.h
DRM DRIVERS FOR ROCKCHIP
M: Mark Yao <mark.yao@rock-chips.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
F: drivers/gpu/drm/rockchip/
F: Documentation/devicetree/bindings/video/rockchip*
DSBR100 USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org
@ -10523,7 +10530,6 @@ F: include/linux/virtio_console.h
F: include/uapi/linux/virtio_console.h
VIRTIO CORE, NET AND BLOCK DRIVERS
M: Rusty Russell <rusty@rustcorp.com.au>
M: "Michael S. Tsirkin" <mst@redhat.com>
L: virtualization@lists.linux-foundation.org
S: Maintained


@ -1,7 +1,7 @@
VERSION = 4
PATCHLEVEL = 1
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Hurr durr I'ma sheep
# *DOCUMENTATION*


@ -31,6 +31,7 @@ config ARM64
select GENERIC_EARLY_IOREMAP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD


@ -65,6 +65,14 @@ do { \
do { \
compiletime_assert_atomic_type(*p); \
switch (sizeof(*p)) { \
case 1: \
asm volatile ("stlrb %w1, %0" \
: "=Q" (*p) : "r" (v) : "memory"); \
break; \
case 2: \
asm volatile ("stlrh %w1, %0" \
: "=Q" (*p) : "r" (v) : "memory"); \
break; \
case 4: \
asm volatile ("stlr %w1, %0" \
: "=Q" (*p) : "r" (v) : "memory"); \
@ -81,6 +89,14 @@ do { \
typeof(*p) ___p1; \
compiletime_assert_atomic_type(*p); \
switch (sizeof(*p)) { \
case 1: \
asm volatile ("ldarb %w0, %1" \
: "=r" (___p1) : "Q" (*p) : "memory"); \
break; \
case 2: \
asm volatile ("ldarh %w0, %1" \
: "=r" (___p1) : "Q" (*p) : "memory"); \
break; \
case 4: \
asm volatile ("ldar %w0, %1" \
: "=r" (___p1) : "Q" (*p) : "memory"); \


@ -1310,7 +1310,7 @@ static const struct of_device_id armpmu_of_device_ids[] = {
static int armpmu_device_probe(struct platform_device *pdev)
{
int i, *irqs;
int i, irq, *irqs;
if (!cpu_pmu)
return -ENODEV;
@ -1319,6 +1319,11 @@ static int armpmu_device_probe(struct platform_device *pdev)
if (!irqs)
return -ENOMEM;
/* Don't bother with PPIs; they're already affine */
irq = platform_get_irq(pdev, 0);
if (irq >= 0 && irq_is_percpu(irq))
return 0;
for (i = 0; i < pdev->num_resources; ++i) {
struct device_node *dn;
int cpu;
@ -1327,7 +1332,7 @@ static int armpmu_device_probe(struct platform_device *pdev)
i);
if (!dn) {
pr_warn("Failed to parse %s/interrupt-affinity[%d]\n",
of_node_full_name(dn), i);
of_node_full_name(pdev->dev.of_node), i);
break;
}


@ -67,8 +67,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags)
*ret_page = phys_to_page(phys);
ptr = (void *)val;
if (flags & __GFP_ZERO)
memset(ptr, 0, size);
memset(ptr, 0, size);
}
return ptr;
@ -105,7 +104,6 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
struct page *page;
void *addr;
size = PAGE_ALIGN(size);
page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
get_order(size));
if (!page)
@ -113,8 +111,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
*dma_handle = phys_to_dma(dev, page_to_phys(page));
addr = page_address(page);
if (flags & __GFP_ZERO)
memset(addr, 0, size);
memset(addr, 0, size);
return addr;
} else {
return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
@ -195,6 +192,8 @@ static void __dma_free(struct device *dev, size_t size,
{
void *swiotlb_addr = phys_to_virt(dma_to_phys(dev, dma_handle));
size = PAGE_ALIGN(size);
if (!is_device_dma_coherent(dev)) {
if (__free_from_pool(vaddr, size))
return;


@ -11,7 +11,7 @@
#define TM_CAUSE_RESCHED 0xde
#define TM_CAUSE_TLBI 0xdc
#define TM_CAUSE_FAC_UNAV 0xda
#define TM_CAUSE_SYSCALL 0xd8
#define TM_CAUSE_SYSCALL 0xd8 /* future use */
#define TM_CAUSE_MISC 0xd6 /* future use */
#define TM_CAUSE_SIGNAL 0xd4
#define TM_CAUSE_ALIGNMENT 0xd2


@ -749,21 +749,24 @@ int pcibios_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state stat
eeh_unfreeze_pe(pe, false);
eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED);
eeh_pe_dev_traverse(pe, eeh_restore_dev_state, dev);
eeh_pe_state_clear(pe, EEH_PE_ISOLATED);
break;
case pcie_hot_reset:
eeh_pe_state_mark(pe, EEH_PE_ISOLATED);
eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE);
eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev);
eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED);
eeh_ops->reset(pe, EEH_RESET_HOT);
break;
case pcie_warm_reset:
eeh_pe_state_mark(pe, EEH_PE_ISOLATED);
eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE);
eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev);
eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED);
eeh_ops->reset(pe, EEH_RESET_FUNDAMENTAL);
break;
default:
eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED);
eeh_pe_state_clear(pe, EEH_PE_ISOLATED | EEH_PE_CFG_BLOCKED);
return -EINVAL;
};
@ -1058,6 +1061,9 @@ void eeh_add_device_early(struct pci_dn *pdn)
if (!edev || !eeh_enabled())
return;
if (!eeh_has_flag(EEH_PROBE_MODE_DEVTREE))
return;
/* USB Bus children of PCI devices will not have BUID's */
phb = edev->phb;
if (NULL == phb ||
@ -1112,6 +1118,9 @@ void eeh_add_device_late(struct pci_dev *dev)
return;
}
if (eeh_has_flag(EEH_PROBE_MODE_DEV))
eeh_ops->probe(pdn, NULL);
/*
* The EEH cache might not be removed correctly because of
* unbalanced kref to the device during unplug time, which


@ -34,7 +34,6 @@
#include <asm/ftrace.h>
#include <asm/hw_irq.h>
#include <asm/context_tracking.h>
#include <asm/tm.h>
/*
* System calls.
@ -146,24 +145,6 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
andi. r11,r10,_TIF_SYSCALL_DOTRACE
bne syscall_dotrace
.Lsyscall_dotrace_cont:
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
BEGIN_FTR_SECTION
b 1f
END_FTR_SECTION_IFCLR(CPU_FTR_TM)
extrdi. r11, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
beq+ 1f
/* Doom the transaction and don't perform the syscall: */
mfmsr r11
li r12, 1
rldimi r11, r12, MSR_TM_LG, 63-MSR_TM_LG
mtmsrd r11, 0
li r11, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
TABORT(R11)
b .Lsyscall_exit
1:
#endif
cmpldi 0,r0,NR_syscalls
bge- syscall_enosys


@ -501,9 +501,11 @@ BEGIN_FTR_SECTION
CHECK_HMI_INTERRUPT
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
ld r1,PACAR1(r13)
ld r6,_CCR(r1)
ld r4,_MSR(r1)
ld r5,_NIP(r1)
addi r1,r1,INT_FRAME_SIZE
mtcr r6
mtspr SPRN_SRR1,r4
mtspr SPRN_SRR0,r5
rfid


@ -12,6 +12,7 @@
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/anon_inodes.h>
#include <linux/spinlock.h>
#include <asm/uaccess.h>
#include <asm/kvm_book3s.h>
@ -20,7 +21,6 @@
#include <asm/xics.h>
#include <asm/debug.h>
#include <asm/time.h>
#include <asm/spinlock.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>


@ -2693,7 +2693,6 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
hose->last_busno = 0xff;
}
hose->private_data = phb;
hose->controller_ops = pnv_pci_controller_ops;
phb->hub_id = hub_id;
phb->opal_id = phb_id;
phb->type = ioda_type;
@ -2812,6 +2811,7 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
pnv_pci_controller_ops.enable_device_hook = pnv_pci_enable_device_hook;
pnv_pci_controller_ops.window_alignment = pnv_pci_window_alignment;
pnv_pci_controller_ops.reset_secondary_bus = pnv_pci_reset_secondary_bus;
hose->controller_ops = pnv_pci_controller_ops;
#ifdef CONFIG_PCI_IOV
ppc_md.pcibios_fixup_sriov = pnv_pci_ioda_fixup_iov_resources;


@ -412,6 +412,10 @@ static ssize_t dlpar_cpu_probe(const char *buf, size_t count)
if (rc)
return -EINVAL;
rc = dlpar_acquire_drc(drc_index);
if (rc)
return -EINVAL;
parent = of_find_node_by_path("/cpus");
if (!parent)
return -ENODEV;
@ -422,12 +426,6 @@ static ssize_t dlpar_cpu_probe(const char *buf, size_t count)
of_node_put(parent);
rc = dlpar_acquire_drc(drc_index);
if (rc) {
dlpar_free_cc_nodes(dn);
return -EINVAL;
}
rc = dlpar_attach_node(dn);
if (rc) {
dlpar_release_drc(drc_index);


@ -115,7 +115,7 @@ config S390
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select HAVE_BPF_JIT if PACK_STACK && HAVE_MARCH_Z9_109_FEATURES
select HAVE_BPF_JIT if PACK_STACK && HAVE_MARCH_Z196_FEATURES
select HAVE_CMPXCHG_DOUBLE
select HAVE_CMPXCHG_LOCAL
select HAVE_DEBUG_KMEMLEAK


@ -3,9 +3,10 @@
*
* Support for s390 cryptographic instructions.
*
* Copyright IBM Corp. 2003, 2007
* Copyright IBM Corp. 2003, 2015
* Author(s): Thomas Spatzier
* Jan Glauber (jan.glauber@de.ibm.com)
* Harald Freudenberger (freude@de.ibm.com)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
@ -28,15 +29,17 @@
#define CRYPT_S390_MSA 0x1
#define CRYPT_S390_MSA3 0x2
#define CRYPT_S390_MSA4 0x4
#define CRYPT_S390_MSA5 0x8
/* s390 cryptographic operations */
enum crypt_s390_operations {
CRYPT_S390_KM = 0x0100,
CRYPT_S390_KMC = 0x0200,
CRYPT_S390_KIMD = 0x0300,
CRYPT_S390_KLMD = 0x0400,
CRYPT_S390_KMAC = 0x0500,
CRYPT_S390_KMCTR = 0x0600
CRYPT_S390_KM = 0x0100,
CRYPT_S390_KMC = 0x0200,
CRYPT_S390_KIMD = 0x0300,
CRYPT_S390_KLMD = 0x0400,
CRYPT_S390_KMAC = 0x0500,
CRYPT_S390_KMCTR = 0x0600,
CRYPT_S390_PPNO = 0x0700
};
/*
@ -138,6 +141,16 @@ enum crypt_s390_kmac_func {
KMAC_TDEA_192 = CRYPT_S390_KMAC | 3
};
/*
* function codes for PPNO (PERFORM PSEUDORANDOM NUMBER
* OPERATION) instruction
*/
enum crypt_s390_ppno_func {
PPNO_QUERY = CRYPT_S390_PPNO | 0,
PPNO_SHA512_DRNG_GEN = CRYPT_S390_PPNO | 3,
PPNO_SHA512_DRNG_SEED = CRYPT_S390_PPNO | 0x83
};
/**
* crypt_s390_km:
* @func: the function code passed to KM; see crypt_s390_km_func
@ -162,11 +175,11 @@ static inline int crypt_s390_km(long func, void *param,
int ret;
asm volatile(
"0: .insn rre,0xb92e0000,%3,%1 \n" /* KM opcode */
"1: brc 1,0b \n" /* handle partial completion */
"0: .insn rre,0xb92e0000,%3,%1\n" /* KM opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "=d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest)
: "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
if (ret < 0)
@ -198,11 +211,11 @@ static inline int crypt_s390_kmc(long func, void *param,
int ret;
asm volatile(
"0: .insn rre,0xb92f0000,%3,%1 \n" /* KMC opcode */
"1: brc 1,0b \n" /* handle partial completion */
"0: .insn rre,0xb92f0000,%3,%1\n" /* KMC opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "=d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest)
: "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
if (ret < 0)
@ -233,11 +246,11 @@ static inline int crypt_s390_kimd(long func, void *param,
int ret;
asm volatile(
"0: .insn rre,0xb93e0000,%1,%1 \n" /* KIMD opcode */
"1: brc 1,0b \n" /* handle partial completion */
"0: .insn rre,0xb93e0000,%1,%1\n" /* KIMD opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "=d" (ret), "+a" (__src), "+d" (__src_len)
: "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
if (ret < 0)
@ -267,11 +280,11 @@ static inline int crypt_s390_klmd(long func, void *param,
int ret;
asm volatile(
"0: .insn rre,0xb93f0000,%1,%1 \n" /* KLMD opcode */
"1: brc 1,0b \n" /* handle partial completion */
"0: .insn rre,0xb93f0000,%1,%1\n" /* KLMD opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "=d" (ret), "+a" (__src), "+d" (__src_len)
: "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
if (ret < 0)
@ -302,11 +315,11 @@ static inline int crypt_s390_kmac(long func, void *param,
int ret;
asm volatile(
"0: .insn rre,0xb91e0000,%1,%1 \n" /* KLAC opcode */
"1: brc 1,0b \n" /* handle partial completion */
"0: .insn rre,0xb91e0000,%1,%1\n" /* KLAC opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "=d" (ret), "+a" (__src), "+d" (__src_len)
: "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
if (ret < 0)
@ -340,11 +353,11 @@ static inline int crypt_s390_kmctr(long func, void *param, u8 *dest,
int ret = -1;
asm volatile(
"0: .insn rrf,0xb92d0000,%3,%1,%4,0 \n" /* KMCTR opcode */
"1: brc 1,0b \n" /* handle partial completion */
"0: .insn rrf,0xb92d0000,%3,%1,%4,0\n" /* KMCTR opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "+d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest),
"+a" (__ctr)
: "d" (__func), "a" (__param) : "cc", "memory");
@ -353,6 +366,47 @@ static inline int crypt_s390_kmctr(long func, void *param, u8 *dest,
return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len;
}
/**
* crypt_s390_ppno:
* @func: the function code passed to PPNO; see crypt_s390_ppno_func
* @param: address of parameter block; see POP for details on each func
* @dest: address of destination memory area
* @dest_len: size of destination memory area in bytes
* @seed: address of seed data
* @seed_len: size of seed data in bytes
*
* Executes the PPNO (PERFORM PSEUDORANDOM NUMBER OPERATION)
* operation of the CPU.
*
* Returns -1 for failure, 0 for the query func, number of random
* bytes stored in dest buffer for generate function
*/
static inline int crypt_s390_ppno(long func, void *param,
u8 *dest, long dest_len,
const u8 *seed, long seed_len)
{
register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
register void *__param asm("1") = param; /* param block (240 bytes) */
register u8 *__dest asm("2") = dest; /* buf for recv random bytes */
register long __dest_len asm("3") = dest_len; /* requested random bytes */
register const u8 *__seed asm("4") = seed; /* buf with seed data */
register long __seed_len asm("5") = seed_len; /* bytes in seed buf */
int ret = -1;
asm volatile (
"0: .insn rre,0xb93c0000,%1,%5\n" /* PPNO opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "+d" (ret), "+a"(__dest), "+d"(__dest_len)
: "d"(__func), "a"(__param), "a"(__seed), "d"(__seed_len)
: "cc", "memory");
if (ret < 0)
return ret;
return (func & CRYPT_S390_FUNC_MASK) ? dest_len - __dest_len : 0;
}
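
A hedged caller sketch for the new interface, seeding and then pulling 64 bytes from the SHA-512 DRNG; the 240-byte parameter block size comes from the register comments above, and error handling is kept minimal:

        static int example_sha512_drng(const u8 *seed, u8 *out)
        {
                static u8 param[240];   /* PPNO parameter block */
                int n;

                n = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED, param,
                                    NULL, 0, seed, 64);
                if (n < 0)
                        return n;
                n = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN, param,
                                    out, 64, NULL, 0);
                return (n == 64) ? 0 : -1;
        }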
/**
* crypt_s390_func_available:
* @func: the function code of the specific function; 0 if op in general
@ -373,6 +427,9 @@ static inline int crypt_s390_func_available(int func,
return 0;
if (facility_mask & CRYPT_S390_MSA4 && !test_facility(77))
return 0;
if (facility_mask & CRYPT_S390_MSA5 && !test_facility(57))
return 0;
switch (func & CRYPT_S390_OP_MASK) {
case CRYPT_S390_KM:
ret = crypt_s390_km(KM_QUERY, &status, NULL, NULL, 0);
@ -390,8 +447,12 @@ static inline int crypt_s390_func_available(int func,
ret = crypt_s390_kmac(KMAC_QUERY, &status, NULL, 0);
break;
case CRYPT_S390_KMCTR:
ret = crypt_s390_kmctr(KMCTR_QUERY, &status, NULL, NULL, 0,
NULL);
ret = crypt_s390_kmctr(KMCTR_QUERY, &status,
NULL, NULL, 0, NULL);
break;
case CRYPT_S390_PPNO:
ret = crypt_s390_ppno(PPNO_QUERY, &status,
NULL, 0, NULL, 0);
break;
default:
return 0;
@ -419,15 +480,14 @@ static inline int crypt_s390_pcc(long func, void *param)
int ret = -1;
asm volatile(
"0: .insn rre,0xb92c0000,0,0 \n" /* PCC opcode */
"1: brc 1,0b \n" /* handle partial completion */
"0: .insn rre,0xb92c0000,0,0\n" /* PCC opcode */
"1: brc 1,0b\n" /* handle partial completion */
" la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
: "+d" (ret)
: "d" (__func), "a" (__param) : "cc", "memory");
return ret;
}
#endif /* _CRYPTO_ARCH_S390_CRYPT_S390_H */


@ -1,106 +1,529 @@
/*
* Copyright IBM Corp. 2006, 2007
* Copyright IBM Corp. 2006, 2015
* Author(s): Jan Glauber <jan.glauber@de.ibm.com>
* Harald Freudenberger <freude@de.ibm.com>
* Driver for the s390 pseudo random number generator
*/
#define KMSG_COMPONENT "prng"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
#include <linux/fs.h>
#include <linux/fips.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/mutex.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <asm/debug.h>
#include <asm/uaccess.h>
#include <asm/timex.h>
#include "crypt_s390.h"
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Jan Glauber <jan.glauber@de.ibm.com>");
MODULE_AUTHOR("IBM Corporation");
MODULE_DESCRIPTION("s390 PRNG interface");
static int prng_chunk_size = 256;
module_param(prng_chunk_size, int, S_IRUSR | S_IRGRP | S_IROTH);
#define PRNG_MODE_AUTO 0
#define PRNG_MODE_TDES 1
#define PRNG_MODE_SHA512 2
static unsigned int prng_mode = PRNG_MODE_AUTO;
module_param_named(mode, prng_mode, int, 0);
MODULE_PARM_DESC(prng_mode, "PRNG mode: 0 - auto, 1 - TDES, 2 - SHA512");
#define PRNG_CHUNKSIZE_TDES_MIN 8
#define PRNG_CHUNKSIZE_TDES_MAX (64*1024)
#define PRNG_CHUNKSIZE_SHA512_MIN 64
#define PRNG_CHUNKSIZE_SHA512_MAX (64*1024)
static unsigned int prng_chunk_size = 256;
module_param_named(chunksize, prng_chunk_size, int, 0);
MODULE_PARM_DESC(prng_chunk_size, "PRNG read chunk size in bytes");
static int prng_entropy_limit = 4096;
module_param(prng_entropy_limit, int, S_IRUSR | S_IRGRP | S_IROTH | S_IWUSR);
MODULE_PARM_DESC(prng_entropy_limit,
"PRNG add entropy after that much bytes were produced");
#define PRNG_RESEED_LIMIT_TDES 4096
#define PRNG_RESEED_LIMIT_TDES_LOWER 4096
#define PRNG_RESEED_LIMIT_SHA512 100000
#define PRNG_RESEED_LIMIT_SHA512_LOWER 10000
static unsigned int prng_reseed_limit;
module_param_named(reseed_limit, prng_reseed_limit, int, 0);
MODULE_PARM_DESC(prng_reseed_limit, "PRNG reseed limit");
/*
* Any one who considers arithmetical methods of producing random digits is,
* of course, in a state of sin. -- John von Neumann
*/
struct s390_prng_data {
unsigned long count; /* how many bytes were produced */
char *buf;
static int prng_errorflag;
#define PRNG_GEN_ENTROPY_FAILED 1
#define PRNG_SELFTEST_FAILED 2
#define PRNG_INSTANTIATE_FAILED 3
#define PRNG_SEED_FAILED 4
#define PRNG_RESEED_FAILED 5
#define PRNG_GEN_FAILED 6
struct prng_ws_s {
u8 parm_block[32];
u32 reseed_counter;
u64 byte_counter;
};
static struct s390_prng_data *p;
/* copied from libica, use a non-zero initial parameter block */
static unsigned char parm_block[32] = {
0x0F,0x2B,0x8E,0x63,0x8C,0x8E,0xD2,0x52,0x64,0xB7,0xA0,0x7B,0x75,0x28,0xB8,0xF4,
0x75,0x5F,0xD2,0xA6,0x8D,0x97,0x11,0xFF,0x49,0xD8,0x23,0xF3,0x7E,0x21,0xEC,0xA0,
struct ppno_ws_s {
u32 res;
u32 reseed_counter;
u64 stream_bytes;
u8 V[112];
u8 C[112];
};
static int prng_open(struct inode *inode, struct file *file)
struct prng_data_s {
struct mutex mutex;
union {
struct prng_ws_s prngws;
struct ppno_ws_s ppnows;
};
u8 *buf;
u32 rest;
u8 *prev;
};
static struct prng_data_s *prng_data;
/* initial parameter block for tdes mode, copied from libica */
static const u8 initial_parm_block[32] __initconst = {
0x0F, 0x2B, 0x8E, 0x63, 0x8C, 0x8E, 0xD2, 0x52,
0x64, 0xB7, 0xA0, 0x7B, 0x75, 0x28, 0xB8, 0xF4,
0x75, 0x5F, 0xD2, 0xA6, 0x8D, 0x97, 0x11, 0xFF,
0x49, 0xD8, 0x23, 0xF3, 0x7E, 0x21, 0xEC, 0xA0 };
/*** helper functions ***/
static int generate_entropy(u8 *ebuf, size_t nbytes)
{
return nonseekable_open(inode, file);
int n, ret = 0;
u8 *pg, *h, hash[32];
pg = (u8 *) __get_free_page(GFP_KERNEL);
if (!pg) {
prng_errorflag = PRNG_GEN_ENTROPY_FAILED;
return -ENOMEM;
}
while (nbytes) {
/* fill page with urandom bytes */
get_random_bytes(pg, PAGE_SIZE);
/* exor page with stckf values */
for (n = 0; n < PAGE_SIZE / sizeof(u64); n++) {
u64 *p = ((u64 *)pg) + n;
*p ^= get_tod_clock_fast();
}
n = (nbytes < sizeof(hash)) ? nbytes : sizeof(hash);
if (n < sizeof(hash))
h = hash;
else
h = ebuf;
/* generate sha256 from this page */
if (crypt_s390_kimd(KIMD_SHA_256, h,
pg, PAGE_SIZE) != PAGE_SIZE) {
prng_errorflag = PRNG_GEN_ENTROPY_FAILED;
ret = -EIO;
goto out;
}
if (n < sizeof(hash))
memcpy(ebuf, hash, n);
ret += n;
ebuf += n;
nbytes -= n;
}
out:
free_page((unsigned long)pg);
return ret;
}
static void prng_add_entropy(void)
/*** tdes functions ***/
static void prng_tdes_add_entropy(void)
{
__u64 entropy[4];
unsigned int i;
int ret;
for (i = 0; i < 16; i++) {
ret = crypt_s390_kmc(KMC_PRNG, parm_block, (char *)entropy,
(char *)entropy, sizeof(entropy));
ret = crypt_s390_kmc(KMC_PRNG, prng_data->prngws.parm_block,
(char *)entropy, (char *)entropy,
sizeof(entropy));
BUG_ON(ret < 0 || ret != sizeof(entropy));
memcpy(parm_block, entropy, sizeof(entropy));
memcpy(prng_data->prngws.parm_block, entropy, sizeof(entropy));
}
}
static void prng_seed(int nbytes)
static void prng_tdes_seed(int nbytes)
{
char buf[16];
int i = 0;
BUG_ON(nbytes > 16);
BUG_ON(nbytes > sizeof(buf));
get_random_bytes(buf, nbytes);
/* Add the entropy */
while (nbytes >= 8) {
*((__u64 *)parm_block) ^= *((__u64 *)(buf+i));
prng_add_entropy();
*((__u64 *)prng_data->prngws.parm_block) ^= *((__u64 *)(buf+i));
prng_tdes_add_entropy();
i += 8;
nbytes -= 8;
}
prng_add_entropy();
prng_tdes_add_entropy();
prng_data->prngws.reseed_counter = 0;
}
static ssize_t prng_read(struct file *file, char __user *ubuf, size_t nbytes,
loff_t *ppos)
{
int chunk, n;
int ret = 0;
int tmp;
/* nbytes can be arbitrary length, we split it into chunks */
static int __init prng_tdes_instantiate(void)
{
int datalen;
pr_debug("prng runs in TDES mode with "
"chunksize=%d and reseed_limit=%u\n",
prng_chunk_size, prng_reseed_limit);
/* memory allocation, prng_data struct init, mutex init */
datalen = sizeof(struct prng_data_s) + prng_chunk_size;
prng_data = kzalloc(datalen, GFP_KERNEL);
if (!prng_data) {
prng_errorflag = PRNG_INSTANTIATE_FAILED;
return -ENOMEM;
}
mutex_init(&prng_data->mutex);
prng_data->buf = ((u8 *)prng_data) + sizeof(struct prng_data_s);
memcpy(prng_data->prngws.parm_block, initial_parm_block, 32);
/* initialize the PRNG, add 128 bits of entropy */
prng_tdes_seed(16);
return 0;
}
static void prng_tdes_deinstantiate(void)
{
pr_debug("The prng module stopped "
"after running in triple DES mode\n");
kzfree(prng_data);
}
/*** sha512 functions ***/
static int __init prng_sha512_selftest(void)
{
/* NIST DRBG testvector for Hash Drbg, Sha-512, Count #0 */
static const u8 seed[] __initconst = {
0x6b, 0x50, 0xa7, 0xd8, 0xf8, 0xa5, 0x5d, 0x7a,
0x3d, 0xf8, 0xbb, 0x40, 0xbc, 0xc3, 0xb7, 0x22,
0xd8, 0x70, 0x8d, 0xe6, 0x7f, 0xda, 0x01, 0x0b,
0x03, 0xc4, 0xc8, 0x4d, 0x72, 0x09, 0x6f, 0x8c,
0x3e, 0xc6, 0x49, 0xcc, 0x62, 0x56, 0xd9, 0xfa,
0x31, 0xdb, 0x7a, 0x29, 0x04, 0xaa, 0xf0, 0x25 };
static const u8 V0[] __initconst = {
0x00, 0xad, 0xe3, 0x6f, 0x9a, 0x01, 0xc7, 0x76,
0x61, 0x34, 0x35, 0xf5, 0x4e, 0x24, 0x74, 0x22,
0x21, 0x9a, 0x29, 0x89, 0xc7, 0x93, 0x2e, 0x60,
0x1e, 0xe8, 0x14, 0x24, 0x8d, 0xd5, 0x03, 0xf1,
0x65, 0x5d, 0x08, 0x22, 0x72, 0xd5, 0xad, 0x95,
0xe1, 0x23, 0x1e, 0x8a, 0xa7, 0x13, 0xd9, 0x2b,
0x5e, 0xbc, 0xbb, 0x80, 0xab, 0x8d, 0xe5, 0x79,
0xab, 0x5b, 0x47, 0x4e, 0xdd, 0xee, 0x6b, 0x03,
0x8f, 0x0f, 0x5c, 0x5e, 0xa9, 0x1a, 0x83, 0xdd,
0xd3, 0x88, 0xb2, 0x75, 0x4b, 0xce, 0x83, 0x36,
0x57, 0x4b, 0xf1, 0x5c, 0xca, 0x7e, 0x09, 0xc0,
0xd3, 0x89, 0xc6, 0xe0, 0xda, 0xc4, 0x81, 0x7e,
0x5b, 0xf9, 0xe1, 0x01, 0xc1, 0x92, 0x05, 0xea,
0xf5, 0x2f, 0xc6, 0xc6, 0xc7, 0x8f, 0xbc, 0xf4 };
static const u8 C0[] __initconst = {
0x00, 0xf4, 0xa3, 0xe5, 0xa0, 0x72, 0x63, 0x95,
0xc6, 0x4f, 0x48, 0xd0, 0x8b, 0x5b, 0x5f, 0x8e,
0x6b, 0x96, 0x1f, 0x16, 0xed, 0xbc, 0x66, 0x94,
0x45, 0x31, 0xd7, 0x47, 0x73, 0x22, 0xa5, 0x86,
0xce, 0xc0, 0x4c, 0xac, 0x63, 0xb8, 0x39, 0x50,
0xbf, 0xe6, 0x59, 0x6c, 0x38, 0x58, 0x99, 0x1f,
0x27, 0xa7, 0x9d, 0x71, 0x2a, 0xb3, 0x7b, 0xf9,
0xfb, 0x17, 0x86, 0xaa, 0x99, 0x81, 0xaa, 0x43,
0xe4, 0x37, 0xd3, 0x1e, 0x6e, 0xe5, 0xe6, 0xee,
0xc2, 0xed, 0x95, 0x4f, 0x53, 0x0e, 0x46, 0x8a,
0xcc, 0x45, 0xa5, 0xdb, 0x69, 0x0d, 0x81, 0xc9,
0x32, 0x92, 0xbc, 0x8f, 0x33, 0xe6, 0xf6, 0x09,
0x7c, 0x8e, 0x05, 0x19, 0x0d, 0xf1, 0xb6, 0xcc,
0xf3, 0x02, 0x21, 0x90, 0x25, 0xec, 0xed, 0x0e };
static const u8 random[] __initconst = {
0x95, 0xb7, 0xf1, 0x7e, 0x98, 0x02, 0xd3, 0x57,
0x73, 0x92, 0xc6, 0xa9, 0xc0, 0x80, 0x83, 0xb6,
0x7d, 0xd1, 0x29, 0x22, 0x65, 0xb5, 0xf4, 0x2d,
0x23, 0x7f, 0x1c, 0x55, 0xbb, 0x9b, 0x10, 0xbf,
0xcf, 0xd8, 0x2c, 0x77, 0xa3, 0x78, 0xb8, 0x26,
0x6a, 0x00, 0x99, 0x14, 0x3b, 0x3c, 0x2d, 0x64,
0x61, 0x1e, 0xee, 0xb6, 0x9a, 0xcd, 0xc0, 0x55,
0x95, 0x7c, 0x13, 0x9e, 0x8b, 0x19, 0x0c, 0x7a,
0x06, 0x95, 0x5f, 0x2c, 0x79, 0x7c, 0x27, 0x78,
0xde, 0x94, 0x03, 0x96, 0xa5, 0x01, 0xf4, 0x0e,
0x91, 0x39, 0x6a, 0xcf, 0x8d, 0x7e, 0x45, 0xeb,
0xdb, 0xb5, 0x3b, 0xbf, 0x8c, 0x97, 0x52, 0x30,
0xd2, 0xf0, 0xff, 0x91, 0x06, 0xc7, 0x61, 0x19,
0xae, 0x49, 0x8e, 0x7f, 0xbc, 0x03, 0xd9, 0x0f,
0x8e, 0x4c, 0x51, 0x62, 0x7a, 0xed, 0x5c, 0x8d,
0x42, 0x63, 0xd5, 0xd2, 0xb9, 0x78, 0x87, 0x3a,
0x0d, 0xe5, 0x96, 0xee, 0x6d, 0xc7, 0xf7, 0xc2,
0x9e, 0x37, 0xee, 0xe8, 0xb3, 0x4c, 0x90, 0xdd,
0x1c, 0xf6, 0xa9, 0xdd, 0xb2, 0x2b, 0x4c, 0xbd,
0x08, 0x6b, 0x14, 0xb3, 0x5d, 0xe9, 0x3d, 0xa2,
0xd5, 0xcb, 0x18, 0x06, 0x69, 0x8c, 0xbd, 0x7b,
0xbb, 0x67, 0xbf, 0xe3, 0xd3, 0x1f, 0xd2, 0xd1,
0xdb, 0xd2, 0xa1, 0xe0, 0x58, 0xa3, 0xeb, 0x99,
0xd7, 0xe5, 0x1f, 0x1a, 0x93, 0x8e, 0xed, 0x5e,
0x1c, 0x1d, 0xe2, 0x3a, 0x6b, 0x43, 0x45, 0xd3,
0x19, 0x14, 0x09, 0xf9, 0x2f, 0x39, 0xb3, 0x67,
0x0d, 0x8d, 0xbf, 0xb6, 0x35, 0xd8, 0xe6, 0xa3,
0x69, 0x32, 0xd8, 0x10, 0x33, 0xd1, 0x44, 0x8d,
0x63, 0xb4, 0x03, 0xdd, 0xf8, 0x8e, 0x12, 0x1b,
0x6e, 0x81, 0x9a, 0xc3, 0x81, 0x22, 0x6c, 0x13,
0x21, 0xe4, 0xb0, 0x86, 0x44, 0xf6, 0x72, 0x7c,
0x36, 0x8c, 0x5a, 0x9f, 0x7a, 0x4b, 0x3e, 0xe2 };
int ret = 0;
u8 buf[sizeof(random)];
struct ppno_ws_s ws;
memset(&ws, 0, sizeof(ws));
/* initial seed */
ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED,
&ws, NULL, 0,
seed, sizeof(seed));
if (ret < 0) {
pr_err("The prng self test seed operation for the "
"SHA-512 mode failed with rc=%d\n", ret);
prng_errorflag = PRNG_SELFTEST_FAILED;
return -EIO;
}
/* check working states V and C */
if (memcmp(ws.V, V0, sizeof(V0)) != 0
|| memcmp(ws.C, C0, sizeof(C0)) != 0) {
pr_err("The prng self test state test "
"for the SHA-512 mode failed\n");
prng_errorflag = PRNG_SELFTEST_FAILED;
return -EIO;
}
/* generate random bytes */
ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
&ws, buf, sizeof(buf),
NULL, 0);
if (ret < 0) {
pr_err("The prng self test generate operation for "
"the SHA-512 mode failed with rc=%d\n", ret);
prng_errorflag = PRNG_SELFTEST_FAILED;
return -EIO;
}
ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
&ws, buf, sizeof(buf),
NULL, 0);
if (ret < 0) {
pr_err("The prng self test generate operation for "
"the SHA-512 mode failed with rc=%d\n", ret);
prng_errorflag = PRNG_SELFTEST_FAILED;
return -EIO;
}
/* check against expected data */
if (memcmp(buf, random, sizeof(random)) != 0) {
pr_err("The prng self test data test "
"for the SHA-512 mode failed\n");
prng_errorflag = PRNG_SELFTEST_FAILED;
return -EIO;
}
return 0;
}
static int __init prng_sha512_instantiate(void)
{
int ret, datalen;
u8 seed[64];
pr_debug("prng runs in SHA-512 mode "
"with chunksize=%d and reseed_limit=%u\n",
prng_chunk_size, prng_reseed_limit);
/* memory allocation, prng_data struct init, mutex init */
datalen = sizeof(struct prng_data_s) + prng_chunk_size;
if (fips_enabled)
datalen += prng_chunk_size;
prng_data = kzalloc(datalen, GFP_KERNEL);
if (!prng_data) {
prng_errorflag = PRNG_INSTANTIATE_FAILED;
return -ENOMEM;
}
mutex_init(&prng_data->mutex);
prng_data->buf = ((u8 *)prng_data) + sizeof(struct prng_data_s);
/* selftest */
ret = prng_sha512_selftest();
if (ret)
goto outfree;
/* generate initial seed bytestring, first 48 bytes of entropy */
ret = generate_entropy(seed, 48);
if (ret != 48)
goto outfree;
/* followed by 16 bytes of unique nonce */
get_tod_clock_ext(seed + 48);
/* initial seed of the ppno drng */
ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED,
&prng_data->ppnows, NULL, 0,
seed, sizeof(seed));
if (ret < 0) {
prng_errorflag = PRNG_SEED_FAILED;
ret = -EIO;
goto outfree;
}
/* if fips mode is enabled, generate a first block of random
bytes for the FIPS 140-2 Conditional Self Test */
if (fips_enabled) {
prng_data->prev = prng_data->buf + prng_chunk_size;
ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
&prng_data->ppnows,
prng_data->prev,
prng_chunk_size,
NULL, 0);
if (ret < 0 || ret != prng_chunk_size) {
prng_errorflag = PRNG_GEN_FAILED;
ret = -EIO;
goto outfree;
}
}
return 0;
outfree:
kfree(prng_data);
return ret;
}
static void prng_sha512_deinstantiate(void)
{
pr_debug("The prng module stopped after running in SHA-512 mode\n");
kzfree(prng_data);
}
static int prng_sha512_reseed(void)
{
int ret;
u8 seed[32];
/* generate 32 bytes of fresh entropy */
ret = generate_entropy(seed, sizeof(seed));
if (ret != sizeof(seed))
return ret;
/* do a reseed of the ppno drng with this bytestring */
ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED,
&prng_data->ppnows, NULL, 0,
seed, sizeof(seed));
if (ret) {
prng_errorflag = PRNG_RESEED_FAILED;
return -EIO;
}
return 0;
}
static int prng_sha512_generate(u8 *buf, size_t nbytes)
{
int ret;
/* reseed needed ? */
if (prng_data->ppnows.reseed_counter > prng_reseed_limit) {
ret = prng_sha512_reseed();
if (ret)
return ret;
}
/* PPNO generate */
ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
&prng_data->ppnows, buf, nbytes,
NULL, 0);
if (ret < 0 || ret != nbytes) {
prng_errorflag = PRNG_GEN_FAILED;
return -EIO;
}
/* FIPS 140-2 Conditional Self Test */
if (fips_enabled) {
if (!memcmp(prng_data->prev, buf, nbytes)) {
prng_errorflag = PRNG_GEN_FAILED;
return -EILSEQ;
}
memcpy(prng_data->prev, buf, nbytes);
}
return ret;
}
/*** file io functions ***/
static int prng_open(struct inode *inode, struct file *file)
{
return nonseekable_open(inode, file);
}
static ssize_t prng_tdes_read(struct file *file, char __user *ubuf,
size_t nbytes, loff_t *ppos)
{
int chunk, n, tmp, ret = 0;
/* lock prng_data struct */
if (mutex_lock_interruptible(&prng_data->mutex))
return -ERESTARTSYS;
while (nbytes) {
/* same as in extract_entropy_user in random.c */
if (need_resched()) {
if (signal_pending(current)) {
if (ret == 0)
ret = -ERESTARTSYS;
break;
}
/* give mutex free before calling schedule() */
mutex_unlock(&prng_data->mutex);
schedule();
/* occupy mutex again */
if (mutex_lock_interruptible(&prng_data->mutex)) {
if (ret == 0)
ret = -ERESTARTSYS;
return ret;
}
}
/*
@ -112,12 +535,11 @@ static ssize_t prng_read(struct file *file, char __user *ubuf, size_t nbytes,
/* PRNG only likes multiples of 8 bytes */
n = (chunk + 7) & -8;
if (p->count > prng_entropy_limit)
prng_seed(8);
if (prng_data->prngws.reseed_counter > prng_reseed_limit)
prng_tdes_seed(8);
/* if the CPU supports PRNG stckf is present too */
asm volatile(".insn s,0xb27c0000,%0"
: "=m" (*((unsigned long long *)p->buf)) : : "cc");
*((unsigned long long *)prng_data->buf) = get_tod_clock_fast();
/*
* Beside the STCKF the input for the TDES-EDE is the output
@ -132,34 +554,258 @@ static ssize_t prng_read(struct file *file, char __user *ubuf, size_t nbytes,
* Note: you can still get strict X9.17 conformity by setting
* prng_chunk_size to 8 bytes.
*/
tmp = crypt_s390_kmc(KMC_PRNG, parm_block, p->buf, p->buf, n);
BUG_ON((tmp < 0) || (tmp != n));
tmp = crypt_s390_kmc(KMC_PRNG, prng_data->prngws.parm_block,
prng_data->buf, prng_data->buf, n);
if (tmp < 0 || tmp != n) {
ret = -EIO;
break;
}
p->count += n;
prng_data->prngws.byte_counter += n;
prng_data->prngws.reseed_counter += n;
if (copy_to_user(ubuf, p->buf, chunk))
if (copy_to_user(ubuf, prng_data->buf, chunk))
return -EFAULT;
nbytes -= chunk;
ret += chunk;
ubuf += chunk;
}
/* unlock prng_data struct */
mutex_unlock(&prng_data->mutex);
return ret;
}
static const struct file_operations prng_fops = {
static ssize_t prng_sha512_read(struct file *file, char __user *ubuf,
size_t nbytes, loff_t *ppos)
{
int n, ret = 0;
u8 *p;
/* if errorflag is set do nothing and return 'broken pipe' */
if (prng_errorflag)
return -EPIPE;
/* lock prng_data struct */
if (mutex_lock_interruptible(&prng_data->mutex))
return -ERESTARTSYS;
while (nbytes) {
if (need_resched()) {
if (signal_pending(current)) {
if (ret == 0)
ret = -ERESTARTSYS;
break;
}
/* give mutex free before calling schedule() */
mutex_unlock(&prng_data->mutex);
schedule();
/* occupy mutex again */
if (mutex_lock_interruptible(&prng_data->mutex)) {
if (ret == 0)
ret = -ERESTARTSYS;
return ret;
}
}
if (prng_data->rest) {
/* push left over random bytes from the previous read */
p = prng_data->buf + prng_chunk_size - prng_data->rest;
n = (nbytes < prng_data->rest) ?
nbytes : prng_data->rest;
prng_data->rest -= n;
} else {
/* generate one chunk of random bytes into read buf */
p = prng_data->buf;
n = prng_sha512_generate(p, prng_chunk_size);
if (n < 0) {
ret = n;
break;
}
if (nbytes < prng_chunk_size) {
n = nbytes;
prng_data->rest = prng_chunk_size - n;
} else {
n = prng_chunk_size;
prng_data->rest = 0;
}
}
if (copy_to_user(ubuf, p, n)) {
ret = -EFAULT;
break;
}
ubuf += n;
nbytes -= n;
ret += n;
}
/* unlock prng_data struct */
mutex_unlock(&prng_data->mutex);
return ret;
}
/*** sysfs stuff ***/
static const struct file_operations prng_sha512_fops = {
.owner = THIS_MODULE,
.open = &prng_open,
.release = NULL,
.read = &prng_read,
.read = &prng_sha512_read,
.llseek = noop_llseek,
};
static const struct file_operations prng_tdes_fops = {
.owner = THIS_MODULE,
.open = &prng_open,
.release = NULL,
.read = &prng_tdes_read,
.llseek = noop_llseek,
};
static struct miscdevice prng_dev = {
static struct miscdevice prng_sha512_dev = {
.name = "prandom",
.minor = MISC_DYNAMIC_MINOR,
.fops = &prng_fops,
.fops = &prng_sha512_fops,
};
static struct miscdevice prng_tdes_dev = {
.name = "prandom",
.minor = MISC_DYNAMIC_MINOR,
.fops = &prng_tdes_fops,
};
/* chunksize attribute (ro) */
static ssize_t prng_chunksize_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "%u\n", prng_chunk_size);
}
static DEVICE_ATTR(chunksize, 0444, prng_chunksize_show, NULL);
/* counter attribute (ro) */
static ssize_t prng_counter_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
u64 counter;
if (mutex_lock_interruptible(&prng_data->mutex))
return -ERESTARTSYS;
if (prng_mode == PRNG_MODE_SHA512)
counter = prng_data->ppnows.stream_bytes;
else
counter = prng_data->prngws.byte_counter;
mutex_unlock(&prng_data->mutex);
return snprintf(buf, PAGE_SIZE, "%llu\n", counter);
}
static DEVICE_ATTR(byte_counter, 0444, prng_counter_show, NULL);
/* errorflag attribute (ro) */
static ssize_t prng_errorflag_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "%d\n", prng_errorflag);
}
static DEVICE_ATTR(errorflag, 0444, prng_errorflag_show, NULL);
/* mode attribute (ro) */
static ssize_t prng_mode_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (prng_mode == PRNG_MODE_TDES)
return snprintf(buf, PAGE_SIZE, "TDES\n");
else
return snprintf(buf, PAGE_SIZE, "SHA512\n");
}
static DEVICE_ATTR(mode, 0444, prng_mode_show, NULL);
/* reseed attribute (w) */
static ssize_t prng_reseed_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
if (mutex_lock_interruptible(&prng_data->mutex))
return -ERESTARTSYS;
prng_sha512_reseed();
mutex_unlock(&prng_data->mutex);
return count;
}
static DEVICE_ATTR(reseed, 0200, NULL, prng_reseed_store);
/* reseed limit attribute (rw) */
static ssize_t prng_reseed_limit_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "%u\n", prng_reseed_limit);
}
static ssize_t prng_reseed_limit_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
unsigned limit;
if (sscanf(buf, "%u\n", &limit) != 1)
return -EINVAL;
if (prng_mode == PRNG_MODE_SHA512) {
if (limit < PRNG_RESEED_LIMIT_SHA512_LOWER)
return -EINVAL;
} else {
if (limit < PRNG_RESEED_LIMIT_TDES_LOWER)
return -EINVAL;
}
prng_reseed_limit = limit;
return count;
}
static DEVICE_ATTR(reseed_limit, 0644,
prng_reseed_limit_show, prng_reseed_limit_store);
/* strength attribute (ro) */
static ssize_t prng_strength_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, "256\n");
}
static DEVICE_ATTR(strength, 0444, prng_strength_show, NULL);
static struct attribute *prng_sha512_dev_attrs[] = {
&dev_attr_errorflag.attr,
&dev_attr_chunksize.attr,
&dev_attr_byte_counter.attr,
&dev_attr_mode.attr,
&dev_attr_reseed.attr,
&dev_attr_reseed_limit.attr,
&dev_attr_strength.attr,
NULL
};
static struct attribute *prng_tdes_dev_attrs[] = {
&dev_attr_chunksize.attr,
&dev_attr_byte_counter.attr,
&dev_attr_mode.attr,
NULL
};
static struct attribute_group prng_sha512_dev_attr_group = {
.attrs = prng_sha512_dev_attrs
};
static struct attribute_group prng_tdes_dev_attr_group = {
.attrs = prng_tdes_dev_attrs
};
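
Once registered by prng_init() below, these attributes appear under the misc device's sysfs node; assuming the usual misc-device layout, a manual reseed and a counter read would look like:

        echo 1 > /sys/class/misc/prandom/reseed
        cat /sys/class/misc/prandom/byte_counter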
/*** module init and exit ***/
static int __init prng_init(void)
{
@ -169,43 +815,105 @@ static int __init prng_init(void)
if (!crypt_s390_func_available(KMC_PRNG, CRYPT_S390_MSA))
return -EOPNOTSUPP;
if (prng_chunk_size < 8)
return -EINVAL;
p = kmalloc(sizeof(struct s390_prng_data), GFP_KERNEL);
if (!p)
return -ENOMEM;
p->count = 0;
p->buf = kmalloc(prng_chunk_size, GFP_KERNEL);
if (!p->buf) {
ret = -ENOMEM;
goto out_free;
/* choose prng mode */
if (prng_mode != PRNG_MODE_TDES) {
/* check for MSA5 support for PPNO operations */
if (!crypt_s390_func_available(PPNO_SHA512_DRNG_GEN,
CRYPT_S390_MSA5)) {
if (prng_mode == PRNG_MODE_SHA512) {
pr_err("The prng module cannot "
"start in SHA-512 mode\n");
return -EOPNOTSUPP;
}
prng_mode = PRNG_MODE_TDES;
} else
prng_mode = PRNG_MODE_SHA512;
}
/* initialize the PRNG, add 128 bits of entropy */
prng_seed(16);
if (prng_mode == PRNG_MODE_SHA512) {
ret = misc_register(&prng_dev);
if (ret)
goto out_buf;
return 0;
/* SHA512 mode */
out_buf:
kfree(p->buf);
out_free:
kfree(p);
if (prng_chunk_size < PRNG_CHUNKSIZE_SHA512_MIN
|| prng_chunk_size > PRNG_CHUNKSIZE_SHA512_MAX)
return -EINVAL;
prng_chunk_size = (prng_chunk_size + 0x3f) & ~0x3f;
if (prng_reseed_limit == 0)
prng_reseed_limit = PRNG_RESEED_LIMIT_SHA512;
else if (prng_reseed_limit < PRNG_RESEED_LIMIT_SHA512_LOWER)
return -EINVAL;
ret = prng_sha512_instantiate();
if (ret)
goto out;
ret = misc_register(&prng_sha512_dev);
if (ret) {
prng_sha512_deinstantiate();
goto out;
}
ret = sysfs_create_group(&prng_sha512_dev.this_device->kobj,
&prng_sha512_dev_attr_group);
if (ret) {
misc_deregister(&prng_sha512_dev);
prng_sha512_deinstantiate();
goto out;
}
} else {
/* TDES mode */
if (prng_chunk_size < PRNG_CHUNKSIZE_TDES_MIN
|| prng_chunk_size > PRNG_CHUNKSIZE_TDES_MAX)
return -EINVAL;
prng_chunk_size = (prng_chunk_size + 0x07) & ~0x07;
if (prng_reseed_limit == 0)
prng_reseed_limit = PRNG_RESEED_LIMIT_TDES;
else if (prng_reseed_limit < PRNG_RESEED_LIMIT_TDES_LOWER)
return -EINVAL;
ret = prng_tdes_instantiate();
if (ret)
goto out;
ret = misc_register(&prng_tdes_dev);
if (ret) {
prng_tdes_deinstantiate();
goto out;
}
ret = sysfs_create_group(&prng_tdes_dev.this_device->kobj,
&prng_tdes_dev_attr_group);
if (ret) {
misc_deregister(&prng_tdes_dev);
prng_tdes_deinstantiate();
goto out;
}
}
out:
return ret;
}
static void __exit prng_exit(void)
{
/* wipe me */
kzfree(p->buf);
kfree(p);
misc_deregister(&prng_dev);
if (prng_mode == PRNG_MODE_SHA512) {
sysfs_remove_group(&prng_sha512_dev.this_device->kobj,
&prng_sha512_dev_attr_group);
misc_deregister(&prng_sha512_dev);
prng_sha512_deinstantiate();
} else {
sysfs_remove_group(&prng_tdes_dev.this_device->kobj,
&prng_tdes_dev_attr_group);
misc_deregister(&prng_tdes_dev);
prng_tdes_deinstantiate();
}
}
module_init(prng_init);
module_exit(prng_exit);


@ -26,6 +26,9 @@
/* Not more than 2GB */
#define KEXEC_CONTROL_MEMORY_LIMIT (1UL<<31)
/* Allocate control page with GFP_DMA */
#define KEXEC_CONTROL_MEMORY_GFP GFP_DMA
/* Maximum address we can use for the crash control pages */
#define KEXEC_CRASH_CONTROL_MEMORY_LIMIT (-1UL)


@ -14,7 +14,9 @@ typedef struct {
unsigned long asce_bits;
unsigned long asce_limit;
unsigned long vdso_base;
/* The mmu context has extended page tables. */
/* The mmu context allocates 4K page tables. */
unsigned int alloc_pgste:1;
/* The mmu context uses extended page tables. */
unsigned int has_pgste:1;
/* The mmu context uses storage keys. */
unsigned int use_skey:1;


@ -20,8 +20,11 @@ static inline int init_new_context(struct task_struct *tsk,
mm->context.flush_mm = 0;
mm->context.asce_bits = _ASCE_TABLE_LENGTH | _ASCE_USER_BITS;
mm->context.asce_bits |= _ASCE_TYPE_REGION3;
#ifdef CONFIG_PGSTE
mm->context.alloc_pgste = page_table_allocate_pgste;
mm->context.has_pgste = 0;
mm->context.use_skey = 0;
#endif
mm->context.asce_limit = STACK_TOP_MAX;
crst_table_init((unsigned long *) mm->pgd, pgd_entry_type(mm));
return 0;


@ -21,6 +21,7 @@ void crst_table_free(struct mm_struct *, unsigned long *);
unsigned long *page_table_alloc(struct mm_struct *);
void page_table_free(struct mm_struct *, unsigned long *);
void page_table_free_rcu(struct mmu_gather *, unsigned long *, unsigned long);
extern int page_table_allocate_pgste;
int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
unsigned long key, bool nq);


@ -12,12 +12,9 @@
#define _ASM_S390_PGTABLE_H
/*
* The Linux memory management assumes a three-level page table setup. For
* s390 31 bit we "fold" the mid level into the top-level page table, so
* that we physically have the same two-level page table as the s390 mmu
* expects in 31 bit mode. For s390 64 bit we use three of the five levels
* the hardware provides (region first and region second tables are not
* used).
* The Linux memory management assumes a three-level page table setup.
* For s390 64 bit we use up to four of the five levels the hardware
* provides (region first tables are not used).
*
* The "pgd_xxx()" functions are trivial for a folded two-level
* setup: the pgd is never bad, and a pmd always exists (as it's folded
@ -101,8 +98,8 @@ extern unsigned long zero_page_mask;
#ifndef __ASSEMBLY__
/*
* The vmalloc and module area will always be on the topmost area of the kernel
* mapping. We reserve 96MB (31bit) / 128GB (64bit) for vmalloc and modules.
* The vmalloc and module area will always be on the topmost area of the
* kernel mapping. We reserve 128GB (64bit) for vmalloc and modules.
* On 64 bit kernels we have a 2GB area at the top of the vmalloc area where
* modules will reside. That makes sure that inter module branches always
* happen without trampolines and in addition the placement within a 2GB frame
@ -131,38 +128,6 @@ static inline int is_module_addr(void *addr)
}
/*
* A 31 bit pagetable entry of S390 has following format:
* | PFRA | | OS |
* 0 0IP0
* 00000000001111111111222222222233
* 01234567890123456789012345678901
*
* I Page-Invalid Bit: Page is not available for address-translation
* P Page-Protection Bit: Store access not possible for page
*
* A 31 bit segmenttable entry of S390 has following format:
* | P-table origin | |PTL
* 0 IC
* 00000000001111111111222222222233
* 01234567890123456789012345678901
*
* I Segment-Invalid Bit: Segment is not available for address-translation
* C Common-Segment Bit: Segment is not private (PoP 3-30)
* PTL Page-Table-Length: Page-table length (PTL+1*16 entries -> up to 256)
*
* The 31 bit segmenttable origin of S390 has following format:
*
* |S-table origin | | STL |
* X **GPS
* 00000000001111111111222222222233
* 01234567890123456789012345678901
*
* X Space-Switch event:
* G Segment-Invalid Bit: *
* P Private-Space Bit: Segment is not private (PoP 3-30)
* S Storage-Alteration:
* STL Segment-Table-Length: Segment-table length (STL+1*16 entries -> up to 2048)
*
* A 64 bit pagetable entry of S390 has following format:
* | PFRA |0IPC| OS |
* 0000000000111111111122222222223333333333444444444455555555556666
@ -220,7 +185,6 @@ static inline int is_module_addr(void *addr)
/* Software bits in the page table entry */
#define _PAGE_PRESENT 0x001 /* SW pte present bit */
#define _PAGE_TYPE 0x002 /* SW pte type bit */
#define _PAGE_YOUNG 0x004 /* SW pte young bit */
#define _PAGE_DIRTY 0x008 /* SW pte dirty bit */
#define _PAGE_READ 0x010 /* SW pte read bit */
@ -240,31 +204,34 @@ static inline int is_module_addr(void *addr)
* table lock held.
*
* The following table gives the different possible bit combinations for
* the pte hardware and software bits in the last 12 bits of a pte:
* the pte hardware and software bits in the last 12 bits of a pte
* (. unassigned bit, x don't care, t swap type):
*
* 842100000000
* 000084210000
* 000000008421
* .IR...wrdytp
* empty .10...000000
* swap .10...xxxx10
* file .11...xxxxx0
* prot-none, clean, old .11...000001
* prot-none, clean, young .11...000101
* prot-none, dirty, old .10...001001
* prot-none, dirty, young .10...001101
* read-only, clean, old .11...010001
* read-only, clean, young .01...010101
* read-only, dirty, old .11...011001
* read-only, dirty, young .01...011101
* read-write, clean, old .11...110001
* read-write, clean, young .01...110101
* read-write, dirty, old .10...111001
* read-write, dirty, young .00...111101
* .IR.uswrdy.p
* empty .10.00000000
* swap .11..ttttt.0
* prot-none, clean, old .11.xx0000.1
* prot-none, clean, young .11.xx0001.1
* prot-none, dirty, old .10.xx0010.1
* prot-none, dirty, young .10.xx0011.1
* read-only, clean, old .11.xx0100.1
* read-only, clean, young .01.xx0101.1
* read-only, dirty, old .11.xx0110.1
* read-only, dirty, young .01.xx0111.1
* read-write, clean, old .11.xx1100.1
* read-write, clean, young .01.xx1101.1
* read-write, dirty, old .10.xx1110.1
* read-write, dirty, young .00.xx1111.1
* HW-bits: R read-only, I invalid
* SW-bits: p present, y young, d dirty, r read, w write, s special,
* u unused, l large
*
* pte_present is true for the bit pattern .xx...xxxxx1, (pte & 0x001) == 0x001
* pte_none is true for the bit pattern .10...xxxx00, (pte & 0x603) == 0x400
* pte_swap is true for the bit pattern .10...xxxx10, (pte & 0x603) == 0x402
* pte_none is true for the bit pattern .10.00000000, pte == 0x400
* pte_swap is true for the bit pattern .11..ooooo.0, (pte & 0x201) == 0x200
* pte_present is true for the bit pattern .xx.xxxxxx.1, (pte & 0x001) == 0x001
*/
/* Bits in the segment/region table address-space-control-element */
@ -335,6 +302,8 @@ static inline int is_module_addr(void *addr)
* read-write, dirty, young 11..0...0...11
* The segment table origin is used to distinguish empty (origin==0) from
* read-write, old segment table entries (origin!=0)
* HW-bits: R read-only, I invalid
* SW-bits: y young, d dirty, r read, w write
*/
#define _SEGMENT_ENTRY_SPLIT_BIT 11 /* THP splitting bit number */
@ -423,6 +392,15 @@ static inline int mm_has_pgste(struct mm_struct *mm)
return 0;
}
static inline int mm_alloc_pgste(struct mm_struct *mm)
{
#ifdef CONFIG_PGSTE
if (unlikely(mm->context.alloc_pgste))
return 1;
#endif
return 0;
}
/*
* In the case that a guest uses storage keys
* faults should no longer be backed by zero pages
@ -582,10 +560,9 @@ static inline int pte_none(pte_t pte)
static inline int pte_swap(pte_t pte)
{
/* Bit pattern: (pte & 0x603) == 0x402 */
return (pte_val(pte) & (_PAGE_INVALID | _PAGE_PROTECT |
_PAGE_TYPE | _PAGE_PRESENT))
== (_PAGE_INVALID | _PAGE_TYPE);
/* Bit pattern: (pte & 0x201) == 0x200 */
return (pte_val(pte) & (_PAGE_PROTECT | _PAGE_PRESENT))
== _PAGE_PROTECT;
}
static inline int pte_special(pte_t pte)
@ -1586,51 +1563,51 @@ static inline int has_transparent_hugepage(void)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
/*
* 31 bit swap entry format:
* A page-table entry has some bits we have to treat in a special way.
* Bits 0, 20 and bit 23 have to be zero, otherwise an specification
* exception will occur instead of a page translation exception. The
* specifiation exception has the bad habit not to store necessary
* information in the lowcore.
* Bits 21, 22, 30 and 31 are used to indicate the page type.
* A swap pte is indicated by bit pattern (pte & 0x603) == 0x402
* This leaves the bits 1-19 and bits 24-29 to store type and offset.
* We use the 5 bits from 25-29 for the type and the 20 bits from 1-19
* plus 24 for the offset.
* 0| offset |0110|o|type |00|
* 0 0000000001111111111 2222 2 22222 33
* 0 1234567890123456789 0123 4 56789 01
*
* 64 bit swap entry format:
* A page-table entry has some bits we have to treat in a special way.
* Bits 52 and bit 55 have to be zero, otherwise a specification
* exception will occur instead of a page translation exception. The
* specification exception has the bad habit not to store necessary
* information in the lowcore.
* Bits 53, 54, 62 and 63 are used to indicate the page type.
* A swap pte is indicated by bit pattern (pte & 0x603) == 0x402
* This leaves the bits 0-51 and bits 56-61 to store type and offset.
* We use the 5 bits from 57-61 for the type and the 53 bits from 0-51
* plus 56 for the offset.
* | offset |0110|o|type |00|
* 0000000000111111111122222222223333333333444444444455 5555 5 55566 66
* 0123456789012345678901234567890123456789012345678901 2345 6 78901 23
* Bits 54 and 63 are used to indicate the page type.
* A swap pte is indicated by bit pattern (pte & 0x201) == 0x200
* This leaves the bits 0-51 and bits 56-62 to store type and offset.
* We use the 5 bits from 57-61 for the type and the 52 bits from 0-51
* for the offset.
* | offset |01100|type |00|
* |0000000000111111111122222222223333333333444444444455|55555|55566|66|
* |0123456789012345678901234567890123456789012345678901|23456|78901|23|
*/
#define __SWP_OFFSET_MASK (~0UL >> 11)
#define __SWP_OFFSET_MASK ((1UL << 52) - 1)
#define __SWP_OFFSET_SHIFT 12
#define __SWP_TYPE_MASK ((1UL << 5) - 1)
#define __SWP_TYPE_SHIFT 2
static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
{
pte_t pte;
offset &= __SWP_OFFSET_MASK;
pte_val(pte) = _PAGE_INVALID | _PAGE_TYPE | ((type & 0x1f) << 2) |
((offset & 1UL) << 7) | ((offset & ~1UL) << 11);
pte_val(pte) = _PAGE_INVALID | _PAGE_PROTECT;
pte_val(pte) |= (offset & __SWP_OFFSET_MASK) << __SWP_OFFSET_SHIFT;
pte_val(pte) |= (type & __SWP_TYPE_MASK) << __SWP_TYPE_SHIFT;
return pte;
}
#define __swp_type(entry) (((entry).val >> 2) & 0x1f)
#define __swp_offset(entry) (((entry).val >> 11) | (((entry).val >> 7) & 1))
#define __swp_entry(type,offset) ((swp_entry_t) { pte_val(mk_swap_pte((type),(offset))) })
static inline unsigned long __swp_type(swp_entry_t entry)
{
return (entry.val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK;
}
static inline unsigned long __swp_offset(swp_entry_t entry)
{
return (entry.val >> __SWP_OFFSET_SHIFT) & __SWP_OFFSET_MASK;
}
static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset)
{
return (swp_entry_t) { pte_val(mk_swap_pte(type, offset)) };
}
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
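
A quick worked instance of the new layout, using _PAGE_INVALID == 0x400 and _PAGE_PROTECT == 0x200 as the bit-pattern comments above imply: for type 3 and offset 0x1234,

        pte = 0x400 | 0x200 | (0x1234 << 12) | (3 << 2) = 0x000000000123460c

and (pte & 0x201) == 0x200, so pte_swap() accepts it; (pte >> 2) & 0x1f recovers type 3 and (pte >> 12) & __SWP_OFFSET_MASK recovers offset 0x1234.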


@ -14,20 +14,23 @@ static inline pmd_t __pte_to_pmd(pte_t pte)
/*
* Convert encoding pte bits pmd bits
* .IR...wrdytp dy..R...I...wr
* empty .10...000000 -> 00..0...1...00
* prot-none, clean, old .11...000001 -> 00..1...1...00
* prot-none, clean, young .11...000101 -> 01..1...1...00
* prot-none, dirty, old .10...001001 -> 10..1...1...00
* prot-none, dirty, young .10...001101 -> 11..1...1...00
* read-only, clean, old .11...010001 -> 00..1...1...01
* read-only, clean, young .01...010101 -> 01..1...0...01
* read-only, dirty, old .11...011001 -> 10..1...1...01
* read-only, dirty, young .01...011101 -> 11..1...0...01
* read-write, clean, old .11...110001 -> 00..0...1...11
* read-write, clean, young .01...110101 -> 01..0...0...11
* read-write, dirty, old .10...111001 -> 10..0...1...11
* read-write, dirty, young .00...111101 -> 11..0...0...11
* lIR.uswrdy.p dy..R...I...wr
* empty 010.000000.0 -> 00..0...1...00
* prot-none, clean, old 111.000000.1 -> 00..1...1...00
* prot-none, clean, young 111.000001.1 -> 01..1...1...00
* prot-none, dirty, old 111.000010.1 -> 10..1...1...00
* prot-none, dirty, young 111.000011.1 -> 11..1...1...00
* read-only, clean, old 111.000100.1 -> 00..1...1...01
* read-only, clean, young 101.000101.1 -> 01..1...0...01
* read-only, dirty, old 111.000110.1 -> 10..1...1...01
* read-only, dirty, young 101.000111.1 -> 11..1...0...01
* read-write, clean, old 111.001100.1 -> 00..1...1...11
* read-write, clean, young 101.001101.1 -> 01..1...0...11
* read-write, dirty, old 110.001110.1 -> 10..0...1...11
* read-write, dirty, young 100.001111.1 -> 11..0...0...11
* HW-bits: R read-only, I invalid
* SW-bits: p present, y young, d dirty, r read, w write, s special,
* u unused, l large
*/
if (pte_present(pte)) {
pmd_val(pmd) = pte_val(pte) & PAGE_MASK;
@ -48,20 +51,23 @@ static inline pte_t __pmd_to_pte(pmd_t pmd)
/*
* Convert encoding pmd bits pte bits
* dy..R...I...wr .IR...wrdytp
* empty 00..0...1...00 -> .10...001100
* prot-none, clean, old 00..0...1...00 -> .10...000001
* prot-none, clean, young 01..0...1...00 -> .10...000101
* prot-none, dirty, old 10..0...1...00 -> .10...001001
* prot-none, dirty, young 11..0...1...00 -> .10...001101
* read-only, clean, old 00..1...1...01 -> .11...010001
* read-only, clean, young 01..1...1...01 -> .11...010101
* read-only, dirty, old 10..1...1...01 -> .11...011001
* read-only, dirty, young 11..1...1...01 -> .11...011101
* read-write, clean, old 00..0...1...11 -> .10...110001
* read-write, clean, young 01..0...1...11 -> .10...110101
* read-write, dirty, old 10..0...1...11 -> .10...111001
* read-write, dirty, young 11..0...1...11 -> .10...111101
* dy..R...I...wr lIR.uswrdy.p
* empty 00..0...1...00 -> 010.000000.0
* prot-none, clean, old 00..1...1...00 -> 111.000000.1
* prot-none, clean, young 01..1...1...00 -> 111.000001.1
* prot-none, dirty, old 10..1...1...00 -> 111.000010.1
* prot-none, dirty, young 11..1...1...00 -> 111.000011.1
* read-only, clean, old 00..1...1...01 -> 111.000100.1
* read-only, clean, young 01..1...0...01 -> 101.000101.1
* read-only, dirty, old 10..1...1...01 -> 111.000110.1
* read-only, dirty, young 11..1...0...01 -> 101.000111.1
* read-write, clean, old 00..1...1...11 -> 111.001100.1
* read-write, clean, young 01..1...0...11 -> 101.001101.1
* read-write, dirty, old 10..0...1...11 -> 110.001110.1
* read-write, dirty, young 11..0...0...11 -> 100.001111.1
* HW-bits: R read-only, I invalid
* SW-bits: p present, y young, d dirty, r read, w write, s special,
* u unused, l large
*/
if (pmd_present(pmd)) {
pte_val(pte) = pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN_LARGE;
@ -70,8 +76,8 @@ static inline pte_t __pmd_to_pte(pmd_t pmd)
pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_WRITE) << 4;
pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_INVALID) << 5;
pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_PROTECT);
pmd_val(pmd) |= (pte_val(pte) & _PAGE_DIRTY) << 10;
pmd_val(pmd) |= (pte_val(pte) & _PAGE_YOUNG) << 10;
pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_DIRTY) >> 10;
pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_YOUNG) >> 10;
} else
pte_val(pte) = _PAGE_INVALID;
return pte;

View file

@ -18,6 +18,7 @@
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/swapops.h>
#include <linux/sysctl.h>
#include <linux/ksm.h>
#include <linux/mman.h>
@ -920,6 +921,40 @@ unsigned long get_guest_storage_key(struct mm_struct *mm, unsigned long addr)
}
EXPORT_SYMBOL(get_guest_storage_key);
static int page_table_allocate_pgste_min = 0;
static int page_table_allocate_pgste_max = 1;
int page_table_allocate_pgste = 0;
EXPORT_SYMBOL(page_table_allocate_pgste);
static struct ctl_table page_table_sysctl[] = {
{
.procname = "allocate_pgste",
.data = &page_table_allocate_pgste,
.maxlen = sizeof(int),
.mode = S_IRUGO | S_IWUSR,
.proc_handler = proc_dointvec,
.extra1 = &page_table_allocate_pgste_min,
.extra2 = &page_table_allocate_pgste_max,
},
{ }
};
static struct ctl_table page_table_sysctl_dir[] = {
{
.procname = "vm",
.maxlen = 0,
.mode = 0555,
.child = page_table_sysctl,
},
{ }
};
static int __init page_table_register_sysctl(void)
{
return register_sysctl_table(page_table_sysctl_dir) ? 0 : -ENOMEM;
}
__initcall(page_table_register_sysctl);
#else /* CONFIG_PGSTE */
static inline int page_table_with_pgste(struct page *page)
@ -963,7 +998,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
struct page *uninitialized_var(page);
unsigned int mask, bit;
if (mm_has_pgste(mm))
if (mm_alloc_pgste(mm))
return page_table_alloc_pgste(mm);
/* Allocate fragments of a 4K page as 1K/2K page table */
spin_lock_bh(&mm->context.list_lock);
@ -1165,116 +1200,25 @@ static inline void thp_split_mm(struct mm_struct *mm)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
static unsigned long page_table_realloc_pmd(struct mmu_gather *tlb,
struct mm_struct *mm, pud_t *pud,
unsigned long addr, unsigned long end)
{
unsigned long next, *table, *new;
struct page *page;
spinlock_t *ptl;
pmd_t *pmd;
pmd = pmd_offset(pud, addr);
do {
next = pmd_addr_end(addr, end);
again:
if (pmd_none_or_clear_bad(pmd))
continue;
table = (unsigned long *) pmd_deref(*pmd);
page = pfn_to_page(__pa(table) >> PAGE_SHIFT);
if (page_table_with_pgste(page))
continue;
/* Allocate new page table with pgstes */
new = page_table_alloc_pgste(mm);
if (!new)
return -ENOMEM;
ptl = pmd_lock(mm, pmd);
if (likely((unsigned long *) pmd_deref(*pmd) == table)) {
/* Nuke pmd entry pointing to the "short" page table */
pmdp_flush_lazy(mm, addr, pmd);
pmd_clear(pmd);
/* Copy ptes from old table to new table */
memcpy(new, table, PAGE_SIZE/2);
clear_table(table, _PAGE_INVALID, PAGE_SIZE/2);
/* Establish new table */
pmd_populate(mm, pmd, (pte_t *) new);
/* Free old table with rcu, there might be a walker! */
page_table_free_rcu(tlb, table, addr);
new = NULL;
}
spin_unlock(ptl);
if (new) {
page_table_free_pgste(new);
goto again;
}
} while (pmd++, addr = next, addr != end);
return addr;
}
static unsigned long page_table_realloc_pud(struct mmu_gather *tlb,
struct mm_struct *mm, pgd_t *pgd,
unsigned long addr, unsigned long end)
{
unsigned long next;
pud_t *pud;
pud = pud_offset(pgd, addr);
do {
next = pud_addr_end(addr, end);
if (pud_none_or_clear_bad(pud))
continue;
next = page_table_realloc_pmd(tlb, mm, pud, addr, next);
if (unlikely(IS_ERR_VALUE(next)))
return next;
} while (pud++, addr = next, addr != end);
return addr;
}
static unsigned long page_table_realloc(struct mmu_gather *tlb, struct mm_struct *mm,
unsigned long addr, unsigned long end)
{
unsigned long next;
pgd_t *pgd;
pgd = pgd_offset(mm, addr);
do {
next = pgd_addr_end(addr, end);
if (pgd_none_or_clear_bad(pgd))
continue;
next = page_table_realloc_pud(tlb, mm, pgd, addr, next);
if (unlikely(IS_ERR_VALUE(next)))
return next;
} while (pgd++, addr = next, addr != end);
return 0;
}
/*
* switch on pgstes for its userspace process (for kvm)
*/
int s390_enable_sie(void)
{
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
struct mmu_gather tlb;
struct mm_struct *mm = current->mm;
/* Do we have pgstes? if yes, we are done */
if (mm_has_pgste(tsk->mm))
if (mm_has_pgste(mm))
return 0;
/* Fail if the page tables are 2K */
if (!mm_alloc_pgste(mm))
return -EINVAL;
down_write(&mm->mmap_sem);
mm->context.has_pgste = 1;
/* split thp mappings and disable thp for future mappings */
thp_split_mm(mm);
/* Reallocate the page tables with pgstes */
tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);
if (!page_table_realloc(&tlb, mm, 0, TASK_SIZE))
mm->context.has_pgste = 1;
tlb_finish_mmu(&tlb, 0, TASK_SIZE);
up_write(&mm->mmap_sem);
return mm->context.has_pgste ? 0 : -ENOMEM;
return 0;
}
EXPORT_SYMBOL_GPL(s390_enable_sie);
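
Since s390_enable_sie() now fails with -EINVAL when the mm was created with 2K page tables, a KVM user space has to flip the new vm.allocate_pgste sysctl before its process is started. A hypothetical launcher-side helper, sketched under the assumption that the sysctl path is the one registered above (an /etc/sysctl.d drop-in achieves the same thing):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Enable PGSTE allocation for processes created after this point,
 * so their page tables are SIE-capable from the start. */
static int enable_pgste_allocation(void)
{
	int fd = open("/proc/sys/vm/allocate_pgste", O_WRONLY);
	ssize_t n;

	if (fd < 0) {
		perror("open /proc/sys/vm/allocate_pgste");
		return -1;
	}
	n = write(fd, "1", 1);
	close(fd);
	return n == 1 ? 0 : -1;
}

int main(void)
{
	return enable_pgste_allocation() ? 1 : 0;
}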

View file

@ -774,7 +774,7 @@ static void __init zone_sizes_init(void)
* though, there'll be no lowmem, so we just alloc_bootmem
* the memmap. There will be no percpu memory either.
*/
if (i != 0 && cpumask_test_cpu(i, &isolnodes)) {
if (i != 0 && node_isset(i, isolnodes)) {
node_memmap_pfn[i] =
alloc_bootmem_pfn(0, memmap_size, 0);
BUG_ON(node_percpu[i] != 0);

View file

@ -95,7 +95,6 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
struct pvclock_vsyscall_time_info {
struct pvclock_vcpu_time_info pvti;
u32 migrate_count;
} __attribute__((__aligned__(SMP_CACHE_BYTES)));
#define PVTI_SIZE sizeof(struct pvclock_vsyscall_time_info)

View file

@ -141,46 +141,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
set_normalized_timespec(ts, now.tv_sec, now.tv_nsec);
}
static struct pvclock_vsyscall_time_info *pvclock_vdso_info;
static struct pvclock_vsyscall_time_info *
pvclock_get_vsyscall_user_time_info(int cpu)
{
if (!pvclock_vdso_info) {
BUG();
return NULL;
}
return &pvclock_vdso_info[cpu];
}
struct pvclock_vcpu_time_info *pvclock_get_vsyscall_time_info(int cpu)
{
return &pvclock_get_vsyscall_user_time_info(cpu)->pvti;
}
#ifdef CONFIG_X86_64
static int pvclock_task_migrate(struct notifier_block *nb, unsigned long l,
void *v)
{
struct task_migration_notifier *mn = v;
struct pvclock_vsyscall_time_info *pvti;
pvti = pvclock_get_vsyscall_user_time_info(mn->from_cpu);
/* this is NULL when pvclock vsyscall is not initialized */
if (unlikely(pvti == NULL))
return NOTIFY_DONE;
pvti->migrate_count++;
return NOTIFY_DONE;
}
static struct notifier_block pvclock_migrate = {
.notifier_call = pvclock_task_migrate,
};
/*
* Initialize the generic pvclock vsyscall state. This will allocate
* a/some page(s) for the per-vcpu pvclock information, set up a
@ -194,17 +155,12 @@ int __init pvclock_init_vsyscall(struct pvclock_vsyscall_time_info *i,
WARN_ON (size != PVCLOCK_VSYSCALL_NR_PAGES*PAGE_SIZE);
pvclock_vdso_info = i;
for (idx = 0; idx <= (PVCLOCK_FIXMAP_END-PVCLOCK_FIXMAP_BEGIN); idx++) {
__set_fixmap(PVCLOCK_FIXMAP_BEGIN + idx,
__pa(i) + (idx*PAGE_SIZE),
PAGE_KERNEL_VVAR);
}
register_task_migration_notifier(&pvclock_migrate);
return 0;
}
#endif

View file

@ -1669,12 +1669,28 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
&guest_hv_clock, sizeof(guest_hv_clock))))
return 0;
/*
* The interface expects us to write an even number signaling that the
* update is finished. Since the guest won't see the intermediate
* state, we just increase by 2 at the end.
/* This VCPU is paused, but it's legal for a guest to read another
* VCPU's kvmclock, so we really have to follow the specification where
* it says that version is odd if data is being modified, and even after
* it is consistent.
*
* Version field updates must be kept separate. This is because
* kvm_write_guest_cached might use a "rep movs" instruction, and
* writes within a string instruction are weakly ordered. So there
* are three writes overall.
*
* As a small optimization, only write the version field in the first
* and third write. The vcpu->pv_time cache is still valid, because the
* version field is the first in the struct.
*/
vcpu->hv_clock.version = guest_hv_clock.version + 2;
BUILD_BUG_ON(offsetof(struct pvclock_vcpu_time_info, version) != 0);
vcpu->hv_clock.version = guest_hv_clock.version + 1;
kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
&vcpu->hv_clock,
sizeof(vcpu->hv_clock.version));
smp_wmb();
/* retain PVCLOCK_GUEST_STOPPED if set in guest copy */
pvclock_flags = (guest_hv_clock.flags & PVCLOCK_GUEST_STOPPED);
@ -1695,6 +1711,13 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
&vcpu->hv_clock,
sizeof(vcpu->hv_clock));
smp_wmb();
vcpu->hv_clock.version++;
kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
&vcpu->hv_clock,
sizeof(vcpu->hv_clock.version));
return 0;
}
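
The protocol followed here is essentially a seqcount: the reader retries while the version is odd or has changed across the data read. A stand-alone sketch of the guest-side loop, with the struct reduced to the fields that matter and the ordering modeled with C11-style fences (assumptions for illustration, not the kernel's implementation):

#include <stdint.h>
#include <stdio.h>

struct pvclock_info {
	uint32_t version;	/* odd while the host is updating */
	uint64_t system_time;
};

static uint64_t read_system_time(const volatile struct pvclock_info *p)
{
	uint32_t v;
	uint64_t t;

	do {
		v = p->version;
		__atomic_thread_fence(__ATOMIC_ACQUIRE);
		t = p->system_time;
		__atomic_thread_fence(__ATOMIC_ACQUIRE);
		/* retry if an update was in flight or raced with us */
	} while ((v & 1) || v != p->version);
	return t;
}

int main(void)
{
	volatile struct pvclock_info info = {
		.version = 2, .system_time = 123456789,
	};

	printf("system_time = %llu\n",
	       (unsigned long long)read_system_time(&info));
	return 0;
}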

View file

@ -82,15 +82,18 @@ static notrace cycle_t vread_pvclock(int *mode)
cycle_t ret;
u64 last;
u32 version;
u32 migrate_count;
u8 flags;
unsigned cpu, cpu1;
/*
* When looping to get a consistent (time-info, tsc) pair, we
* also need to deal with the possibility we can switch vcpus,
* so make sure we always re-fetch time-info for the current vcpu.
* Note: hypervisor must guarantee that:
* 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
* 2. that per-CPU pvclock time info is updated if the
* underlying CPU changes.
* 3. that version is increased whenever underlying CPU
* changes.
*
*/
do {
cpu = __getcpu() & VGETCPU_CPU_MASK;
@ -99,27 +102,20 @@ static notrace cycle_t vread_pvclock(int *mode)
* __getcpu() calls (Gleb).
*/
/* Make sure migrate_count will change if we leave the VCPU. */
do {
pvti = get_pvti(cpu);
migrate_count = pvti->migrate_count;
cpu1 = cpu;
cpu = __getcpu() & VGETCPU_CPU_MASK;
} while (unlikely(cpu != cpu1));
pvti = get_pvti(cpu);
version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
/*
* Test we're still on the cpu as well as the version.
* - We must read TSC of pvti's VCPU.
* - KVM doesn't follow the versioning protocol, so data could
* change before version if we left the VCPU.
* We could have been migrated just after the first
* vgetcpu but before fetching the version, so we
* wouldn't notice a version change.
*/
smp_rmb();
} while (unlikely((pvti->pvti.version & 1) ||
pvti->pvti.version != version ||
pvti->migrate_count != migrate_count));
cpu1 = __getcpu() & VGETCPU_CPU_MASK;
} while (unlikely(cpu != cpu1 ||
(pvti->pvti.version & 1) ||
pvti->pvti.version != version));
if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
*mode = VCLOCK_NONE;

View file

@ -304,6 +304,8 @@ static const struct acpi_device_id acpi_pnp_device_ids[] = {
{"PNPb006"},
/* cs423x-pnpbios */
{"CSC0100"},
{"CSC0103"},
{"CSC0110"},
{"CSC0000"},
{"GIM0100"}, /* Guillemot Turtlebeach something appears to be cs4232 compatible */
/* es18xx-pnpbios */

View file

@ -684,7 +684,7 @@ static int acpi_sbs_add(struct acpi_device *device)
if (!sbs_manager_broken) {
result = acpi_manager_get_info(sbs);
if (!result) {
sbs->manager_present = 0;
sbs->manager_present = 1;
for (id = 0; id < MAX_SBS_BAT; ++id)
if ((sbs->batteries_supported & (1 << id)))
acpi_battery_add(sbs, id);

View file

@ -14,6 +14,7 @@
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/dmi.h>
#include "sbshc.h"
#define PREFIX "ACPI: "
@ -87,6 +88,8 @@ enum acpi_smb_offset {
ACPI_SMB_ALARM_DATA = 0x26, /* 2 bytes alarm data */
};
static bool macbook;
static inline int smb_hc_read(struct acpi_smb_hc *hc, u8 address, u8 *data)
{
return ec_read(hc->offset + address, data);
@ -132,6 +135,8 @@ static int acpi_smbus_transaction(struct acpi_smb_hc *hc, u8 protocol,
}
mutex_lock(&hc->lock);
if (macbook)
udelay(5);
if (smb_hc_read(hc, ACPI_SMB_PROTOCOL, &temp))
goto end;
if (temp) {
@ -257,12 +262,29 @@ extern int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
acpi_handle handle, acpi_ec_query_func func,
void *data);
static int macbook_dmi_match(const struct dmi_system_id *d)
{
pr_debug("Detected MacBook, enabling workaround\n");
macbook = true;
return 0;
}
static struct dmi_system_id acpi_smbus_dmi_table[] = {
{ macbook_dmi_match, "Apple MacBook", {
DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
DMI_MATCH(DMI_PRODUCT_NAME, "MacBook") },
},
{ },
};
static int acpi_smbus_hc_add(struct acpi_device *device)
{
int status;
unsigned long long val;
struct acpi_smb_hc *hc;
dmi_check_system(acpi_smbus_dmi_table);
if (!device)
return -EINVAL;

View file

@ -2264,6 +2264,11 @@ static bool rbd_img_obj_end_request(struct rbd_obj_request *obj_request)
result, xferred);
if (!img_request->result)
img_request->result = result;
/*
* Need to end I/O on the entire obj_request worth of
* bytes in case of error.
*/
xferred = obj_request->length;
}
/* Image object requests don't own their page array */

View file

@ -158,9 +158,18 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
int entered_state;
struct cpuidle_state *target_state = &drv->states[index];
bool broadcast = !!(target_state->flags & CPUIDLE_FLAG_TIMER_STOP);
ktime_t time_start, time_end;
s64 diff;
/*
* Tell the time framework to switch to a broadcast timer because our
* local timer will be shut down. If a local timer is used from another
* CPU as a broadcast timer, this call may fail if it is not available.
*/
if (broadcast && tick_broadcast_enter())
return -EBUSY;
trace_cpu_idle_rcuidle(index, dev->cpu);
time_start = ktime_get();
@ -169,6 +178,13 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
time_end = ktime_get();
trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
if (broadcast) {
if (WARN_ON_ONCE(!irqs_disabled()))
local_irq_disable();
tick_broadcast_exit();
}
if (!cpuidle_state_is_coupled(dev, drv, entered_state))
local_irq_enable();
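
The rule the hunks above encode: if the target state stops the local timer, the broadcast device must be armed before entering and disarmed (with interrupts off) after leaving, and the state is refused outright when no broadcast device is available. A stand-alone model, with tick_broadcast_enter()/exit() stubbed out (the real ones arm and disarm a shared wakeup timer):

#include <stdbool.h>
#include <stdio.h>

static bool broadcast_available = true;	/* stub state, for illustration */

static int tick_broadcast_enter(void)
{
	return broadcast_available ? 0 : -1;
}

static void tick_broadcast_exit(void)
{
}

static int enter_idle_state(bool stops_local_timer)
{
	/* Refuse the deep state up front instead of losing wakeups. */
	if (stops_local_timer && tick_broadcast_enter())
		return -16;	/* -EBUSY, as in the patch */

	/* ... the architecture's idle enter routine would run here ... */

	if (stops_local_timer)
		tick_broadcast_exit();
	return 0;
}

int main(void)
{
	printf("with broadcast device: %d\n", enter_idle_state(true));
	broadcast_available = false;
	printf("without broadcast device: %d\n", enter_idle_state(true));
	return 0;
}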

View file

@ -437,6 +437,7 @@ config IMG_MDC_DMA
config XGENE_DMA
tristate "APM X-Gene DMA support"
depends on ARCH_XGENE || COMPILE_TEST
select DMA_ENGINE
select DMA_ENGINE_RAID
select ASYNC_TX_ENABLE_CHANNEL_SWITCH

View file

@ -571,11 +571,15 @@ struct dma_chan *dma_get_any_slave_channel(struct dma_device *device)
chan = private_candidate(&mask, device, NULL, NULL);
if (chan) {
dma_cap_set(DMA_PRIVATE, device->cap_mask);
device->privatecnt++;
err = dma_chan_get(chan);
if (err) {
pr_debug("%s: failed to get %s: (%d)\n",
__func__, dma_chan_name(chan), err);
chan = NULL;
if (--device->privatecnt == 0)
dma_cap_clear(DMA_PRIVATE, device->cap_mask);
}
}

View file

@ -673,6 +673,7 @@ static struct dma_chan *usb_dmac_of_xlate(struct of_phandle_args *dma_spec,
* Power management
*/
#ifdef CONFIG_PM
static int usb_dmac_runtime_suspend(struct device *dev)
{
struct usb_dmac *dmac = dev_get_drvdata(dev);
@ -690,6 +691,7 @@ static int usb_dmac_runtime_resume(struct device *dev)
return usb_dmac_init(dmac);
}
#endif /* CONFIG_PM */
static const struct dev_pm_ops usb_dmac_pm = {
SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume,

View file

@ -6074,6 +6074,8 @@ enum skl_disp_power_wells {
#define GTFIFOCTL 0x120008
#define GT_FIFO_FREE_ENTRIES_MASK 0x7f
#define GT_FIFO_NUM_RESERVED_ENTRIES 20
#define GT_FIFO_CTL_BLOCK_ALL_POLICY_STALL (1 << 12)
#define GT_FIFO_CTL_RC6_POLICY_STALL (1 << 11)
#define HSW_IDICR 0x9008
#define IDIHASHMSK(x) (((x) & 0x3f) << 16)

View file

@ -360,6 +360,14 @@ static void __intel_uncore_early_sanitize(struct drm_device *dev,
__raw_i915_write32(dev_priv, GTFIFODBG,
__raw_i915_read32(dev_priv, GTFIFODBG));
/* WaDisableShadowRegForCpd:chv */
if (IS_CHERRYVIEW(dev)) {
__raw_i915_write32(dev_priv, GTFIFOCTL,
__raw_i915_read32(dev_priv, GTFIFOCTL) |
GT_FIFO_CTL_BLOCK_ALL_POLICY_STALL |
GT_FIFO_CTL_RC6_POLICY_STALL);
}
intel_uncore_forcewake_reset(dev, restore_forcewake);
}

View file

@ -580,6 +580,9 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc,
else
radeon_crtc->pll_flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
/* if there is no audio, set MINM_OVER_MAXP */
if (!drm_detect_monitor_audio(radeon_connector_edid(connector)))
radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
if (rdev->family < CHIP_RV770)
radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
/* use frac fb div on APUs */

View file

@ -1761,17 +1761,15 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
int encoder_mode = atombios_get_encoder_mode(encoder);
DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n",
radeon_encoder->encoder_id, mode, radeon_encoder->devices,
radeon_encoder->active_device);
if (connector && (radeon_audio != 0) &&
if ((radeon_audio != 0) &&
((encoder_mode == ATOM_ENCODER_MODE_HDMI) ||
(ENCODER_MODE_IS_DP(encoder_mode) &&
drm_detect_monitor_audio(radeon_connector_edid(connector)))))
ENCODER_MODE_IS_DP(encoder_mode)))
radeon_audio_dpms(encoder, mode);
switch (radeon_encoder->encoder_id) {

View file

@ -295,28 +295,3 @@ void dce6_dp_audio_set_dto(struct radeon_device *rdev,
WREG32(DCCG_AUDIO_DTO1_MODULE, clock);
}
}
void dce6_dp_enable(struct drm_encoder *encoder, bool enable)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
if (!dig || !dig->afmt)
return;
if (enable) {
WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset,
EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission */
EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */
EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */
EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */
} else {
WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
}
dig->afmt->enabled = enable;
}

View file

@ -219,13 +219,9 @@ void evergreen_set_avi_packet(struct radeon_device *rdev, u32 offset,
WREG32(AFMT_AVI_INFO3 + offset,
frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24));
WREG32_OR(HDMI_INFOFRAME_CONTROL0 + offset,
HDMI_AVI_INFO_SEND | /* enable AVI info frames */
HDMI_AVI_INFO_CONT); /* required for audio info values to be updated */
WREG32_P(HDMI_INFOFRAME_CONTROL1 + offset,
HDMI_AVI_INFO_LINE(2), /* anything other than 0 */
~HDMI_AVI_INFO_LINE_MASK);
HDMI_AVI_INFO_LINE(2), /* anything other than 0 */
~HDMI_AVI_INFO_LINE_MASK);
}
void dce4_hdmi_audio_set_dto(struct radeon_device *rdev,
@ -370,9 +366,13 @@ void dce4_set_audio_packet(struct drm_encoder *encoder, u32 offset)
WREG32(AFMT_AUDIO_PACKET_CONTROL2 + offset,
AFMT_AUDIO_CHANNEL_ENABLE(0xff));
WREG32(HDMI_AUDIO_PACKET_CONTROL + offset,
HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be sufficient for all audio modes and small enough for all hblanks */
/* allow 60958 channel status and send audio packets fields to be updated */
WREG32(AFMT_AUDIO_PACKET_CONTROL + offset,
AFMT_AUDIO_SAMPLE_SEND | AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE);
WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + offset,
AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE);
}
@ -398,17 +398,26 @@ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable)
return;
if (enable) {
WREG32(HDMI_INFOFRAME_CONTROL1 + dig->afmt->offset,
HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
WREG32(HDMI_AUDIO_PACKET_CONTROL + dig->afmt->offset,
HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be sufficient for all audio modes and small enough for all hblanks */
WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
HDMI_AVI_INFO_SEND | /* enable AVI info frames */
HDMI_AVI_INFO_CONT | /* required for audio info values to be updated */
HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
AFMT_AUDIO_SAMPLE_SEND);
} else {
WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
HDMI_AVI_INFO_SEND | /* enable AVI info frames */
HDMI_AVI_INFO_CONT); /* required for audio info values to be updated */
WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
~AFMT_AUDIO_SAMPLE_SEND);
}
} else {
WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
~AFMT_AUDIO_SAMPLE_SEND);
WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 0);
}
@ -424,20 +433,24 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
if (!dig || !dig->afmt)
return;
if (enable) {
if (enable && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_connector_atom_dig *dig_connector;
uint32_t val;
WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
AFMT_AUDIO_SAMPLE_SEND);
WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
if (radeon_connector->con_priv) {
if (!ASIC_IS_DCE6(rdev) && radeon_connector->con_priv) {
dig_connector = radeon_connector->con_priv;
val = RREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset);
val &= ~EVERGREEN_DP_SEC_N_BASE_MULTIPLE(0xf);
@ -457,6 +470,8 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */
} else {
WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
~AFMT_AUDIO_SAMPLE_SEND);
}
dig->afmt->enabled = enable;

View file

@ -228,12 +228,13 @@ void r600_set_avi_packet(struct radeon_device *rdev, u32 offset,
WREG32(HDMI0_AVI_INFO3 + offset,
frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24));
WREG32_OR(HDMI0_INFOFRAME_CONTROL0 + offset,
HDMI0_AVI_INFO_SEND | /* enable AVI info frames */
HDMI0_AVI_INFO_CONT); /* send AVI info frames every frame/field */
WREG32_OR(HDMI0_INFOFRAME_CONTROL1 + offset,
HDMI0_AVI_INFO_LINE(2)); /* anything other than 0 */
HDMI0_AVI_INFO_LINE(2)); /* anything other than 0 */
WREG32_OR(HDMI0_INFOFRAME_CONTROL0 + offset,
HDMI0_AVI_INFO_SEND | /* enable AVI info frames */
HDMI0_AVI_INFO_CONT); /* send AVI info frames every frame/field */
}
/*

View file

@ -102,7 +102,6 @@ static void radeon_audio_dp_mode_set(struct drm_encoder *encoder,
void r600_hdmi_enable(struct drm_encoder *encoder, bool enable);
void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable);
void evergreen_dp_enable(struct drm_encoder *encoder, bool enable);
void dce6_dp_enable(struct drm_encoder *encoder, bool enable);
static const u32 pin_offsets[7] =
{
@ -240,7 +239,7 @@ static struct radeon_audio_funcs dce6_dp_funcs = {
.set_avi_packet = evergreen_set_avi_packet,
.set_audio_packet = dce4_set_audio_packet,
.mode_set = radeon_audio_dp_mode_set,
.dpms = dce6_dp_enable,
.dpms = evergreen_dp_enable,
};
static void radeon_audio_interface_init(struct radeon_device *rdev)
@ -461,30 +460,33 @@ void radeon_audio_detect(struct drm_connector *connector,
if (!connector || !connector->encoder)
return;
if (!radeon_encoder_is_digital(connector->encoder))
return;
rdev = connector->encoder->dev->dev_private;
radeon_encoder = to_radeon_encoder(connector->encoder);
dig = radeon_encoder->enc_priv;
if (!dig->afmt)
return;
if (status == connector_status_connected) {
struct radeon_connector *radeon_connector;
int sink_type;
if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) {
radeon_encoder->audio = NULL;
return;
}
radeon_connector = to_radeon_connector(connector);
sink_type = radeon_dp_getsinktype(radeon_connector);
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort &&
sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
radeon_dp_getsinktype(radeon_connector) ==
CONNECTOR_OBJECT_ID_DISPLAYPORT)
radeon_encoder->audio = rdev->audio.dp_funcs;
else
radeon_encoder->audio = rdev->audio.hdmi_funcs;
dig->afmt->pin = radeon_audio_get_pin(connector->encoder);
radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
} else {
radeon_audio_enable(rdev, dig->afmt->pin, 0);
dig->afmt->pin = NULL;
}
} else {
radeon_audio_enable(rdev, dig->afmt->pin, 0);
dig->afmt->pin = NULL;

View file

@ -1379,8 +1379,10 @@ out:
/* updated in get modes as well since we need to know if it's analog or digital */
radeon_connector_update_scratch_regs(connector, ret);
if (radeon_audio != 0)
if (radeon_audio != 0) {
radeon_connector_get_edid(connector);
radeon_audio_detect(connector, ret);
}
exit:
pm_runtime_mark_last_busy(connector->dev->dev);
@ -1717,8 +1719,10 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
radeon_connector_update_scratch_regs(connector, ret);
if (radeon_audio != 0)
if (radeon_audio != 0) {
radeon_connector_get_edid(connector);
radeon_audio_detect(connector, ret);
}
out:
pm_runtime_mark_last_busy(connector->dev->dev);

View file

@ -88,7 +88,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
p->dma_reloc_idx = 0;
/* FIXME: we assume that each reloc uses 4 dwords */
p->nrelocs = chunk->length_dw / 4;
p->relocs = kcalloc(p->nrelocs, sizeof(struct radeon_bo_list), GFP_KERNEL);
p->relocs = drm_calloc_large(p->nrelocs, sizeof(struct radeon_bo_list));
if (p->relocs == NULL) {
return -ENOMEM;
}
@ -428,7 +428,7 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo
}
}
kfree(parser->track);
kfree(parser->relocs);
drm_free_large(parser->relocs);
drm_free_large(parser->vm_bos);
for (i = 0; i < parser->nchunks; i++)
drm_free_large(parser->chunks[i].kdata);

View file

@ -135,7 +135,7 @@ static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn,
while (it) {
struct radeon_mn_node *node;
struct radeon_bo *bo;
int r;
long r;
node = container_of(it, struct radeon_mn_node, it);
it = interval_tree_iter_next(it, start, end);
@ -144,19 +144,19 @@ static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn,
r = radeon_bo_reserve(bo, true);
if (r) {
DRM_ERROR("(%d) failed to reserve user bo\n", r);
DRM_ERROR("(%ld) failed to reserve user bo\n", r);
continue;
}
r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
true, false, MAX_SCHEDULE_TIMEOUT);
if (r)
DRM_ERROR("(%d) failed to wait for user bo\n", r);
if (r <= 0)
DRM_ERROR("(%ld) failed to wait for user bo\n", r);
radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
if (r)
DRM_ERROR("(%d) failed to validate user bo\n", r);
DRM_ERROR("(%ld) failed to validate user bo\n", r);
radeon_bo_unreserve(bo);
}

View file

@ -473,6 +473,23 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
}
mutex_lock(&vm->mutex);
soffset /= RADEON_GPU_PAGE_SIZE;
eoffset /= RADEON_GPU_PAGE_SIZE;
if (soffset || eoffset) {
struct interval_tree_node *it;
it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1);
if (it && it != &bo_va->it) {
struct radeon_bo_va *tmp;
tmp = container_of(it, struct radeon_bo_va, it);
/* bo and tmp overlap, invalid offset */
dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with "
"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
soffset, tmp->bo, tmp->it.start, tmp->it.last);
mutex_unlock(&vm->mutex);
return -EINVAL;
}
}
if (bo_va->it.start || bo_va->it.last) {
if (bo_va->addr) {
/* add a clone of the bo_va to clear the old address */
@ -490,6 +507,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
spin_lock(&vm->status_lock);
list_add(&tmp->vm_status, &vm->freed);
spin_unlock(&vm->status_lock);
bo_va->addr = 0;
}
interval_tree_remove(&bo_va->it, &vm->va);
@ -497,21 +516,7 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
bo_va->it.last = 0;
}
soffset /= RADEON_GPU_PAGE_SIZE;
eoffset /= RADEON_GPU_PAGE_SIZE;
if (soffset || eoffset) {
struct interval_tree_node *it;
it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1);
if (it) {
struct radeon_bo_va *tmp;
tmp = container_of(it, struct radeon_bo_va, it);
/* bo and tmp overlap, invalid offset */
dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with "
"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
soffset, tmp->bo, tmp->it.start, tmp->it.last);
mutex_unlock(&vm->mutex);
return -EINVAL;
}
bo_va->it.start = soffset;
bo_va->it.last = eoffset - 1;
interval_tree_insert(&bo_va->it, &vm->va);
@ -1107,7 +1112,8 @@ void radeon_vm_bo_rmv(struct radeon_device *rdev,
list_del(&bo_va->bo_list);
mutex_lock(&vm->mutex);
interval_tree_remove(&bo_va->it, &vm->va);
if (bo_va->it.start || bo_va->it.last)
interval_tree_remove(&bo_va->it, &vm->va);
spin_lock(&vm->status_lock);
list_del(&bo_va->vm_status);
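
The point of moving the interval-tree lookup in front of the unmap path is "validate before mutate": a conflicting request must leave the old mapping intact. A minimal model of that ordering, with a linear overlap scan standing in for the kernel's interval tree:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct range {
	unsigned long start, last;	/* inclusive, like it.start/it.last */
	bool mapped;
};

static bool conflicts(const struct range *va, size_t n,
		      unsigned long start, unsigned long last,
		      const struct range *self)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (va[i].mapped && &va[i] != self &&
		    va[i].start <= last && start <= va[i].last)
			return true;
	return false;
}

static int set_addr(struct range *va, size_t n, struct range *bo_va,
		    unsigned long start, unsigned long last)
{
	/* Check first: on conflict the old mapping stays untouched. */
	if (conflicts(va, n, start, last, bo_va))
		return -22;	/* -EINVAL */
	bo_va->start = start;
	bo_va->last = last;
	bo_va->mapped = true;
	return 0;
}

int main(void)
{
	struct range va[2] = { { 0x1000, 0x1fff, true }, { 0, 0, false } };

	printf("conflicting map: %d\n", set_addr(va, 2, &va[1], 0x1800, 0x27ff));
	printf("second mapping still absent: %d\n", va[1].mapped);
	return 0;
}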

View file

@ -2924,6 +2924,7 @@ struct si_dpm_quirk {
static struct si_dpm_quirk si_dpm_quirk_list[] = {
/* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */
{ PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 },
{ PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 },
{ 0, 0, 0, 0 },
};

View file

@ -1409,7 +1409,7 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
struct vop *vop;
struct resource *res;
size_t alloc_size;
int ret;
int ret, irq;
of_id = of_match_device(vop_driver_dt_match, dev);
vop_data = of_id->data;
@ -1445,11 +1445,12 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
return ret;
}
vop->irq = platform_get_irq(pdev, 0);
if (vop->irq < 0) {
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(dev, "cannot find irq for vop\n");
return vop->irq;
return irq;
}
vop->irq = (unsigned int)irq;
spin_lock_init(&vop->reg_lock);
spin_lock_init(&vop->irq_lock);

View file

@ -1298,21 +1298,22 @@ static int table_load(struct dm_ioctl *param, size_t param_size)
goto err_unlock_md_type;
}
if (dm_get_md_type(md) == DM_TYPE_NONE)
if (dm_get_md_type(md) == DM_TYPE_NONE) {
/* Initial table load: acquire type of table. */
dm_set_md_type(md, dm_table_get_type(t));
else if (dm_get_md_type(md) != dm_table_get_type(t)) {
/* setup md->queue to reflect md's type (may block) */
r = dm_setup_md_queue(md);
if (r) {
DMWARN("unable to set up device queue for new table.");
goto err_unlock_md_type;
}
} else if (dm_get_md_type(md) != dm_table_get_type(t)) {
DMWARN("can't change device type after initial table load.");
r = -EINVAL;
goto err_unlock_md_type;
}
/* setup md->queue to reflect md's type (may block) */
r = dm_setup_md_queue(md);
if (r) {
DMWARN("unable to set up device queue for new table.");
goto err_unlock_md_type;
}
dm_unlock_md_type(md);
/* stage inactive table */

View file

@ -1082,18 +1082,26 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
dm_put(md);
}
static void free_rq_clone(struct request *clone)
static void free_rq_clone(struct request *clone, bool must_be_mapped)
{
struct dm_rq_target_io *tio = clone->end_io_data;
struct mapped_device *md = tio->md;
WARN_ON_ONCE(must_be_mapped && !clone->q);
blk_rq_unprep_clone(clone);
if (clone->q->mq_ops)
if (md->type == DM_TYPE_MQ_REQUEST_BASED)
/* stacked on blk-mq queue(s) */
tio->ti->type->release_clone_rq(clone);
else if (!md->queue->mq_ops)
/* request_fn queue stacked on request_fn queue(s) */
free_clone_request(md, clone);
/*
* NOTE: for the blk-mq queue stacked on request_fn queue(s) case:
* no need to call free_clone_request() because we leverage blk-mq by
* allocating the clone at the end of the blk-mq pdu (see: clone_rq)
*/
if (!md->queue->mq_ops)
free_rq_tio(tio);
@ -1124,7 +1132,7 @@ static void dm_end_request(struct request *clone, int error)
rq->sense_len = clone->sense_len;
}
free_rq_clone(clone);
free_rq_clone(clone, true);
if (!rq->q->mq_ops)
blk_end_request_all(rq, error);
else
@ -1143,7 +1151,7 @@ static void dm_unprep_request(struct request *rq)
}
if (clone)
free_rq_clone(clone);
free_rq_clone(clone, false);
}
/*
@ -2662,9 +2670,6 @@ static int dm_init_request_based_queue(struct mapped_device *md)
{
struct request_queue *q = NULL;
if (md->queue->elevator)
return 0;
/* Fully initialize the queue */
q = blk_init_allocated_queue(md->queue, dm_request_fn, NULL);
if (!q)

View file

@ -82,6 +82,8 @@
#include <net/bond_3ad.h>
#include <net/bond_alb.h>
#include "bonding_priv.h"
/*---------------------------- Module parameters ----------------------------*/
/* monitor all links that often (in milliseconds). <=0 disables monitoring */
@ -4542,6 +4544,8 @@ unsigned int bond_get_num_tx_queues(void)
int bond_create(struct net *net, const char *name)
{
struct net_device *bond_dev;
struct bonding *bond;
struct alb_bond_info *bond_info;
int res;
rtnl_lock();
@ -4555,6 +4559,14 @@ int bond_create(struct net *net, const char *name)
return -ENOMEM;
}
/*
* Initialize rx_hashtbl_used_head to RLB_NULL_INDEX.
* It is set to 0 by default, which is wrong.
*/
bond = netdev_priv(bond_dev);
bond_info = &(BOND_ALB_INFO(bond));
bond_info->rx_hashtbl_used_head = RLB_NULL_INDEX;
dev_net_set(bond_dev, net);
bond_dev->rtnl_link_ops = &bond_link_ops;

View file

@ -4,6 +4,7 @@
#include <net/netns/generic.h>
#include <net/bonding.h>
#include "bonding_priv.h"
static void *bond_info_seq_start(struct seq_file *seq, loff_t *pos)
__acquires(RCU)

View file

@ -0,0 +1,25 @@
/*
* Bond several ethernet interfaces into a Cisco, running 'Etherchannel'.
*
* Portions are (c) Copyright 1995 Simon "Guru Aleph-Null" Janes
* NCM: Network and Communications Management, Inc.
*
* BUT, I'm the one who modified it for ethernet, so:
* (c) Copyright 1999, Thomas Davis, tadavis@lbl.gov
*
* This software may be used and distributed according to the terms
* of the GNU Public License, incorporated herein by reference.
*
*/
#ifndef _BONDING_PRIV_H
#define _BONDING_PRIV_H
#define DRV_VERSION "3.7.1"
#define DRV_RELDATE "April 27, 2011"
#define DRV_NAME "bonding"
#define DRV_DESCRIPTION "Ethernet Channel Bonding Driver"
#define bond_version DRV_DESCRIPTION ": v" DRV_VERSION " (" DRV_RELDATE ")\n"
#endif

View file

@ -112,7 +112,7 @@ config PCH_CAN
config CAN_GRCAN
tristate "Aeroflex Gaisler GRCAN and GRHCAN CAN devices"
depends on OF
depends on OF && HAS_DMA
---help---
Say Y here if you want to use Aeroflex Gaisler GRCAN or GRHCAN.
Note that the driver supports little endian, even though little

View file

@ -1102,7 +1102,7 @@ static void kvaser_usb_rx_can_err(const struct kvaser_usb_net_priv *priv,
if (msg->u.rx_can_header.flag & (MSG_FLAG_ERROR_FRAME |
MSG_FLAG_NERR)) {
netdev_err(priv->netdev, "Unknow error (flags: 0x%02x)\n",
netdev_err(priv->netdev, "Unknown error (flags: 0x%02x)\n",
msg->u.rx_can_header.flag);
stats->rx_errors++;

View file

@ -523,7 +523,7 @@ static int etherh_addr(char *addr, struct expansion_card *ec)
char *s;
if (!ecard_readchunk(&cd, ec, 0xf5, 0)) {
printk(KERN_ERR "%s: unable to read podule description string\n",
printk(KERN_ERR "%s: unable to read module description string\n",
dev_name(&ec->dev));
goto no_addr;
}

View file

@ -58,15 +58,12 @@ struct msgdma_extended_desc {
/* Tx buffer control flags
*/
#define MSGDMA_DESC_CTL_TX_FIRST (MSGDMA_DESC_CTL_GEN_SOP | \
MSGDMA_DESC_CTL_TR_ERR_IRQ | \
MSGDMA_DESC_CTL_GO)
#define MSGDMA_DESC_CTL_TX_MIDDLE (MSGDMA_DESC_CTL_TR_ERR_IRQ | \
MSGDMA_DESC_CTL_GO)
#define MSGDMA_DESC_CTL_TX_MIDDLE (MSGDMA_DESC_CTL_GO)
#define MSGDMA_DESC_CTL_TX_LAST (MSGDMA_DESC_CTL_GEN_EOP | \
MSGDMA_DESC_CTL_TR_COMP_IRQ | \
MSGDMA_DESC_CTL_TR_ERR_IRQ | \
MSGDMA_DESC_CTL_GO)
#define MSGDMA_DESC_CTL_TX_SINGLE (MSGDMA_DESC_CTL_GEN_SOP | \

View file

@ -391,6 +391,12 @@ static int tse_rx(struct altera_tse_private *priv, int limit)
"RCV pktstatus %08X pktlength %08X\n",
pktstatus, pktlength);
/* DMA transfer from TSE starts with 2 additional bytes for
* IP payload alignment. Status returned by get_rx_status()
* contains DMA transfer length. Packet is 2 bytes shorter.
*/
pktlength -= 2;
count++;
next_entry = (++priv->rx_cons) % priv->rx_ring_size;
@ -777,6 +783,8 @@ static int init_phy(struct net_device *dev)
struct altera_tse_private *priv = netdev_priv(dev);
struct phy_device *phydev;
struct device_node *phynode;
bool fixed_link = false;
int rc = 0;
/* Avoid init phy in case of no phy present */
if (!priv->phy_iface)
@ -789,13 +797,32 @@ static int init_phy(struct net_device *dev)
phynode = of_parse_phandle(priv->device->of_node, "phy-handle", 0);
if (!phynode) {
netdev_dbg(dev, "no phy-handle found\n");
if (!priv->mdio) {
netdev_err(dev,
"No phy-handle nor local mdio specified\n");
return -ENODEV;
/* check if a fixed-link is defined in device-tree */
if (of_phy_is_fixed_link(priv->device->of_node)) {
rc = of_phy_register_fixed_link(priv->device->of_node);
if (rc < 0) {
netdev_err(dev, "cannot register fixed PHY\n");
return rc;
}
/* In the case of a fixed PHY, the DT node associated
* to the PHY is the Ethernet MAC DT node.
*/
phynode = of_node_get(priv->device->of_node);
fixed_link = true;
netdev_dbg(dev, "fixed-link detected\n");
phydev = of_phy_connect(dev, phynode,
&altera_tse_adjust_link,
0, priv->phy_iface);
} else {
netdev_dbg(dev, "no phy-handle found\n");
if (!priv->mdio) {
netdev_err(dev, "No phy-handle nor local mdio specified\n");
return -ENODEV;
}
phydev = connect_local_phy(dev);
}
phydev = connect_local_phy(dev);
} else {
netdev_dbg(dev, "phy-handle found\n");
phydev = of_phy_connect(dev, phynode,
@ -819,10 +846,10 @@ static int init_phy(struct net_device *dev)
/* Broken HW is sometimes missing the pull-up resistor on the
* MDIO line, which results in reads to non-existent devices returning
* 0 rather than 0xffff. Catch this here and treat 0 as a non-existent
* device as well.
* device as well. If a fixed-link is used, the phy_id is always 0.
* Note: phydev->phy_id is the result of reading the UID PHY registers.
*/
if (phydev->phy_id == 0) {
if ((phydev->phy_id == 0) && !fixed_link) {
netdev_err(dev, "Bad PHY UID 0x%08x\n", phydev->phy_id);
phy_disconnect(phydev);
return -ENODEV;

View file

@ -179,7 +179,7 @@ config SUNLANCE
config AMD_XGBE
tristate "AMD 10GbE Ethernet driver"
depends on (OF_NET || ACPI) && HAS_IOMEM
depends on (OF_NET || ACPI) && HAS_IOMEM && HAS_DMA
select PHYLIB
select AMD_XGBE_PHY
select BITREVERSE

View file

@ -25,8 +25,7 @@ config ARC_EMAC_CORE
config ARC_EMAC
tristate "ARC EMAC support"
select ARC_EMAC_CORE
depends on OF_IRQ
depends on OF_NET
depends on OF_IRQ && OF_NET && HAS_DMA
---help---
On some legacy ARC (Synopsys) FPGA boards such as ARCAngel4/ML50x
non-standard on-chip ethernet device ARC EMAC 10/100 is used.
@ -35,7 +34,7 @@ config ARC_EMAC
config EMAC_ROCKCHIP
tristate "Rockchip EMAC support"
select ARC_EMAC_CORE
depends on OF_IRQ && OF_NET && REGULATOR
depends on OF_IRQ && OF_NET && REGULATOR && HAS_DMA
---help---
Support for Rockchip RK3066/RK3188 EMAC ethernet controllers.
This selects Rockchip SoC glue layer support for the

View file

@ -129,7 +129,7 @@ s32 atl1e_restart_autoneg(struct atl1e_hw *hw);
#define TWSI_CTRL_LD_SLV_ADDR_SHIFT 8
#define TWSI_CTRL_SW_LDSTART 0x800
#define TWSI_CTRL_HW_LDSTART 0x1000
#define TWSI_CTRL_SMB_SLV_ADDR_MASK 0x0x7F
#define TWSI_CTRL_SMB_SLV_ADDR_MASK 0x7F
#define TWSI_CTRL_SMB_SLV_ADDR_SHIFT 15
#define TWSI_CTRL_LD_EXIST 0x400000
#define TWSI_CTRL_READ_FREQ_SEL_MASK 0x3

View file

@ -543,7 +543,7 @@ struct bcm_sysport_tx_counters {
u32 jbr; /* RO # of xmited jabber count*/
u32 bytes; /* RO # of xmited byte count */
u32 pok; /* RO # of xmited good pkt */
u32 uc; /* RO (0x0x4f0)# of xmited unitcast pkt */
u32 uc; /* RO (0x4f0) # of xmited unicast pkt */
};
struct bcm_sysport_mib {

View file

@ -1260,7 +1260,7 @@ static int bgmac_poll(struct napi_struct *napi, int weight)
/* Poll again if more events arrived in the meantime */
if (bgmac_read(bgmac, BGMAC_INT_STATUS) & (BGMAC_IS_TX0 | BGMAC_IS_RX))
return handled;
return weight;
if (handled < weight) {
napi_complete(napi);

View file

@ -521,6 +521,7 @@ struct bnx2x_fp_txdata {
};
enum bnx2x_tpa_mode_t {
TPA_MODE_DISABLED,
TPA_MODE_LRO,
TPA_MODE_GRO
};
@ -589,7 +590,6 @@ struct bnx2x_fastpath {
/* TPA related */
struct bnx2x_agg_info *tpa_info;
u8 disable_tpa;
#ifdef BNX2X_STOP_ON_ERROR
u64 tpa_queue_used;
#endif
@ -1545,9 +1545,7 @@ struct bnx2x {
#define USING_MSIX_FLAG (1 << 5)
#define USING_MSI_FLAG (1 << 6)
#define DISABLE_MSI_FLAG (1 << 7)
#define TPA_ENABLE_FLAG (1 << 8)
#define NO_MCP_FLAG (1 << 9)
#define GRO_ENABLE_FLAG (1 << 10)
#define MF_FUNC_DIS (1 << 11)
#define OWN_CNIC_IRQ (1 << 12)
#define NO_ISCSI_OOO_FLAG (1 << 13)

View file

@ -947,10 +947,10 @@ static int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget)
u16 frag_size, pages;
#ifdef BNX2X_STOP_ON_ERROR
/* sanity check */
if (fp->disable_tpa &&
if (fp->mode == TPA_MODE_DISABLED &&
(CQE_TYPE_START(cqe_fp_type) ||
CQE_TYPE_STOP(cqe_fp_type)))
BNX2X_ERR("START/STOP packet while disable_tpa type %x\n",
BNX2X_ERR("START/STOP packet while TPA disabled, type %x\n",
CQE_TYPE(cqe_fp_type));
#endif
@ -1396,7 +1396,7 @@ void bnx2x_init_rx_rings(struct bnx2x *bp)
DP(NETIF_MSG_IFUP,
"mtu %d rx_buf_size %d\n", bp->dev->mtu, fp->rx_buf_size);
if (!fp->disable_tpa) {
if (fp->mode != TPA_MODE_DISABLED) {
/* Fill the per-aggregation pool */
for (i = 0; i < MAX_AGG_QS(bp); i++) {
struct bnx2x_agg_info *tpa_info =
@ -1410,7 +1410,7 @@ void bnx2x_init_rx_rings(struct bnx2x *bp)
BNX2X_ERR("Failed to allocate TPA skb pool for queue[%d] - disabling TPA on this queue!\n",
j);
bnx2x_free_tpa_pool(bp, fp, i);
fp->disable_tpa = 1;
fp->mode = TPA_MODE_DISABLED;
break;
}
dma_unmap_addr_set(first_buf, mapping, 0);
@ -1438,7 +1438,7 @@ void bnx2x_init_rx_rings(struct bnx2x *bp)
ring_prod);
bnx2x_free_tpa_pool(bp, fp,
MAX_AGG_QS(bp));
fp->disable_tpa = 1;
fp->mode = TPA_MODE_DISABLED;
ring_prod = 0;
break;
}
@ -1560,7 +1560,7 @@ static void bnx2x_free_rx_skbs(struct bnx2x *bp)
bnx2x_free_rx_bds(fp);
if (!fp->disable_tpa)
if (fp->mode != TPA_MODE_DISABLED)
bnx2x_free_tpa_pool(bp, fp, MAX_AGG_QS(bp));
}
}
@ -2477,17 +2477,19 @@ static void bnx2x_bz_fp(struct bnx2x *bp, int index)
/* set the tpa flag for each queue. The tpa flag determines the queue
* minimal size so it must be set prior to queue memory allocation
*/
fp->disable_tpa = !(bp->flags & TPA_ENABLE_FLAG ||
(bp->flags & GRO_ENABLE_FLAG &&
bnx2x_mtu_allows_gro(bp->dev->mtu)));
if (bp->flags & TPA_ENABLE_FLAG)
if (bp->dev->features & NETIF_F_LRO)
fp->mode = TPA_MODE_LRO;
else if (bp->flags & GRO_ENABLE_FLAG)
else if (bp->dev->features & NETIF_F_GRO &&
bnx2x_mtu_allows_gro(bp->dev->mtu))
fp->mode = TPA_MODE_GRO;
else
fp->mode = TPA_MODE_DISABLED;
/* We don't want TPA on an FCoE L2 ring */
if (IS_FCOE_FP(fp))
fp->disable_tpa = 1;
/* We don't want TPA if it's disabled in bp
* or if this is an FCoE L2 ring.
*/
if (bp->disable_tpa || IS_FCOE_FP(fp))
fp->mode = TPA_MODE_DISABLED;
}
int bnx2x_load_cnic(struct bnx2x *bp)
@ -2608,7 +2610,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
/*
* Zero fastpath structures preserving invariants like napi, which are
* allocated only once, fp index, max_cos, bp pointer.
* Also set fp->disable_tpa and txdata_ptr.
* Also set fp->mode and txdata_ptr.
*/
DP(NETIF_MSG_IFUP, "num queues: %d", bp->num_queues);
for_each_queue(bp, i)
@ -3247,7 +3249,7 @@ int bnx2x_low_latency_recv(struct napi_struct *napi)
if ((bp->state == BNX2X_STATE_CLOSED) ||
(bp->state == BNX2X_STATE_ERROR) ||
(bp->flags & (TPA_ENABLE_FLAG | GRO_ENABLE_FLAG)))
(bp->dev->features & (NETIF_F_LRO | NETIF_F_GRO)))
return LL_FLUSH_FAILED;
if (!bnx2x_fp_lock_poll(fp))
@ -4543,7 +4545,7 @@ alloc_mem_err:
* In these cases we disable the queue
* Min size is different for OOO, TPA and non-TPA queues
*/
if (ring_size < (fp->disable_tpa ?
if (ring_size < (fp->mode == TPA_MODE_DISABLED ?
MIN_RX_SIZE_NONTPA : MIN_RX_SIZE_TPA)) {
/* release memory allocated for this queue */
bnx2x_free_fp_mem_at(bp, index);
@ -4809,66 +4811,71 @@ netdev_features_t bnx2x_fix_features(struct net_device *dev,
{
struct bnx2x *bp = netdev_priv(dev);
if (pci_num_vf(bp->pdev)) {
netdev_features_t changed = dev->features ^ features;
/* Revert the requested changes in features if they
* would require internal reload of PF in bnx2x_set_features().
*/
if (!(features & NETIF_F_RXCSUM) && !bp->disable_tpa) {
features &= ~NETIF_F_RXCSUM;
features |= dev->features & NETIF_F_RXCSUM;
}
if (changed & NETIF_F_LOOPBACK) {
features &= ~NETIF_F_LOOPBACK;
features |= dev->features & NETIF_F_LOOPBACK;
}
}
/* TPA requires Rx CSUM offloading */
if (!(features & NETIF_F_RXCSUM)) {
features &= ~NETIF_F_LRO;
features &= ~NETIF_F_GRO;
}
/* Note: do not disable SW GRO in kernel when HW GRO is off */
if (bp->disable_tpa)
features &= ~NETIF_F_LRO;
return features;
}
int bnx2x_set_features(struct net_device *dev, netdev_features_t features)
{
struct bnx2x *bp = netdev_priv(dev);
u32 flags = bp->flags;
u32 changes;
netdev_features_t changes = features ^ dev->features;
bool bnx2x_reload = false;
int rc;
if (features & NETIF_F_LRO)
flags |= TPA_ENABLE_FLAG;
else
flags &= ~TPA_ENABLE_FLAG;
if (features & NETIF_F_GRO)
flags |= GRO_ENABLE_FLAG;
else
flags &= ~GRO_ENABLE_FLAG;
if (features & NETIF_F_LOOPBACK) {
if (bp->link_params.loopback_mode != LOOPBACK_BMAC) {
bp->link_params.loopback_mode = LOOPBACK_BMAC;
bnx2x_reload = true;
}
} else {
if (bp->link_params.loopback_mode != LOOPBACK_NONE) {
bp->link_params.loopback_mode = LOOPBACK_NONE;
bnx2x_reload = true;
/* VFs or non SRIOV PFs should be able to change loopback feature */
if (!pci_num_vf(bp->pdev)) {
if (features & NETIF_F_LOOPBACK) {
if (bp->link_params.loopback_mode != LOOPBACK_BMAC) {
bp->link_params.loopback_mode = LOOPBACK_BMAC;
bnx2x_reload = true;
}
} else {
if (bp->link_params.loopback_mode != LOOPBACK_NONE) {
bp->link_params.loopback_mode = LOOPBACK_NONE;
bnx2x_reload = true;
}
}
}
changes = flags ^ bp->flags;
/* if GRO is changed while LRO is enabled, don't force a reload */
if ((changes & GRO_ENABLE_FLAG) && (flags & TPA_ENABLE_FLAG))
changes &= ~GRO_ENABLE_FLAG;
if ((changes & NETIF_F_GRO) && (features & NETIF_F_LRO))
changes &= ~NETIF_F_GRO;
/* if GRO is changed while HW TPA is off, don't force a reload */
if ((changes & GRO_ENABLE_FLAG) && bp->disable_tpa)
changes &= ~GRO_ENABLE_FLAG;
if ((changes & NETIF_F_GRO) && bp->disable_tpa)
changes &= ~NETIF_F_GRO;
if (changes)
bnx2x_reload = true;
bp->flags = flags;
if (bnx2x_reload) {
if (bp->recovery_state == BNX2X_RECOVERY_DONE)
return bnx2x_reload_if_running(dev);
if (bp->recovery_state == BNX2X_RECOVERY_DONE) {
dev->features = features;
rc = bnx2x_reload_if_running(dev);
return rc ? rc : 1;
}
/* else: bnx2x_nic_load() will be called at end of recovery */
}
@ -4931,6 +4938,11 @@ int bnx2x_resume(struct pci_dev *pdev)
}
bp = netdev_priv(dev);
if (pci_num_vf(bp->pdev)) {
DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n");
return -EPERM;
}
if (bp->recovery_state != BNX2X_RECOVERY_DONE) {
BNX2X_ERR("Handling parity error recovery. Try again later\n");
return -EAGAIN;
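
Taken together, the fix_features rules above reduce to two dependencies: LRO and GRO (both TPA flavors) require Rx checksum offload, and LRO is force-cleared when TPA is disabled in hardware while software GRO is left alone. A stand-alone model of that mask fix-up (the feature bits are illustrative, not the kernel's NETIF_F_* values):

#include <stdio.h>

#define F_RXCSUM	(1u << 0)
#define F_LRO		(1u << 1)
#define F_GRO		(1u << 2)

static unsigned int fix_features(unsigned int requested, int hw_tpa_disabled)
{
	/* TPA requires Rx CSUM offloading */
	if (!(requested & F_RXCSUM))
		requested &= ~(F_LRO | F_GRO);
	/* do not disable SW GRO in kernel when HW TPA is off */
	if (hw_tpa_disabled)
		requested &= ~F_LRO;
	return requested;
}

int main(void)
{
	printf("no rxcsum: %#x\n", fix_features(F_LRO | F_GRO, 0));
	printf("hw tpa off: %#x\n", fix_features(F_RXCSUM | F_LRO | F_GRO, 1));
	return 0;
}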

View file

@ -969,7 +969,7 @@ static inline void bnx2x_free_rx_sge_range(struct bnx2x *bp,
{
int i;
if (fp->disable_tpa)
if (fp->mode == TPA_MODE_DISABLED)
return;
for (i = 0; i < last; i++)

View file

@ -1843,6 +1843,12 @@ static int bnx2x_set_ringparam(struct net_device *dev,
"set ring params command parameters: rx_pending = %d, tx_pending = %d\n",
ering->rx_pending, ering->tx_pending);
if (pci_num_vf(bp->pdev)) {
DP(BNX2X_MSG_IOV,
"VFs are enabled, can not change ring parameters\n");
return -EPERM;
}
if (bp->recovery_state != BNX2X_RECOVERY_DONE) {
DP(BNX2X_MSG_ETHTOOL,
"Handling parity error recovery. Try again later\n");
@ -2899,6 +2905,12 @@ static void bnx2x_self_test(struct net_device *dev,
u8 is_serdes, link_up;
int rc, cnt = 0;
if (pci_num_vf(bp->pdev)) {
DP(BNX2X_MSG_IOV,
"VFs are enabled, can not perform self test\n");
return;
}
if (bp->recovery_state != BNX2X_RECOVERY_DONE) {
netdev_err(bp->dev,
"Handling parity error recovery. Try again later\n");
@ -3468,6 +3480,11 @@ static int bnx2x_set_channels(struct net_device *dev,
channels->rx_count, channels->tx_count, channels->other_count,
channels->combined_count);
if (pci_num_vf(bp->pdev)) {
DP(BNX2X_MSG_IOV, "VFs are enabled, can not set channels\n");
return -EPERM;
}
/* We don't support separate rx / tx channels.
* We don't allow setting 'other' channels.
*/

View file

@ -3128,7 +3128,7 @@ static unsigned long bnx2x_get_q_flags(struct bnx2x *bp,
__set_bit(BNX2X_Q_FLG_FORCE_DEFAULT_PRI, &flags);
}
if (!fp->disable_tpa) {
if (fp->mode != TPA_MODE_DISABLED) {
__set_bit(BNX2X_Q_FLG_TPA, &flags);
__set_bit(BNX2X_Q_FLG_TPA_IPV6, &flags);
if (fp->mode == TPA_MODE_GRO)
@ -3176,7 +3176,7 @@ static void bnx2x_pf_rx_q_prep(struct bnx2x *bp,
u16 sge_sz = 0;
u16 tpa_agg_size = 0;
if (!fp->disable_tpa) {
if (fp->mode != TPA_MODE_DISABLED) {
pause->sge_th_lo = SGE_TH_LO(bp);
pause->sge_th_hi = SGE_TH_HI(bp);
@ -3304,7 +3304,7 @@ static void bnx2x_pf_init(struct bnx2x *bp)
/* This flag is relevant for E1x only.
* E2 doesn't have a TPA configuration in a function level.
*/
flags |= (bp->flags & TPA_ENABLE_FLAG) ? FUNC_FLG_TPA : 0;
flags |= (bp->dev->features & NETIF_F_LRO) ? FUNC_FLG_TPA : 0;
func_init.func_flgs = flags;
func_init.pf_id = BP_FUNC(bp);
@ -12107,11 +12107,8 @@ static int bnx2x_init_bp(struct bnx2x *bp)
/* Set TPA flags */
if (bp->disable_tpa) {
bp->flags &= ~(TPA_ENABLE_FLAG | GRO_ENABLE_FLAG);
bp->dev->hw_features &= ~NETIF_F_LRO;
bp->dev->features &= ~NETIF_F_LRO;
} else {
bp->flags |= (TPA_ENABLE_FLAG | GRO_ENABLE_FLAG);
bp->dev->features |= NETIF_F_LRO;
}
if (CHIP_IS_E1(bp))
@ -13371,6 +13368,12 @@ static int bnx2x_init_one(struct pci_dev *pdev,
bool is_vf;
int cnic_cnt;
/* Management FW 'remembers' living interfaces. Allow it some time
* to forget previously living interfaces, allowing a proper re-load.
*/
if (is_kdump_kernel())
msleep(5000);
/* An estimated maximum supported CoS number according to the chip
* version.
* We will try to roughly estimate the maximum number of CoSes this chip

View file

@ -594,7 +594,7 @@ int bnx2x_vfpf_setup_q(struct bnx2x *bp, struct bnx2x_fastpath *fp,
bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SETUP_Q, sizeof(*req));
/* select tpa mode to request */
if (!fp->disable_tpa) {
if (fp->mode != TPA_MODE_DISABLED) {
flags |= VFPF_QUEUE_FLG_TPA;
flags |= VFPF_QUEUE_FLG_TPA_IPV6;
if (fp->mode == TPA_MODE_GRO)

View file

@ -18129,7 +18129,9 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
rtnl_lock();
tp->pcierr_recovery = true;
/* We needn't recover from permanent error */
if (state == pci_channel_io_frozen)
tp->pcierr_recovery = true;
/* We probably don't have netdev yet */
if (!netdev || !netif_running(netdev))

View file

@ -707,6 +707,9 @@ static void gem_rx_refill(struct macb *bp)
/* properly align Ethernet header */
skb_reserve(skb, NET_IP_ALIGN);
} else {
bp->rx_ring[entry].addr &= ~MACB_BIT(RX_USED);
bp->rx_ring[entry].ctrl = 0;
}
}
@ -1473,9 +1476,9 @@ static void macb_init_rings(struct macb *bp)
for (i = 0; i < TX_RING_SIZE; i++) {
bp->queues[0].tx_ring[i].addr = 0;
bp->queues[0].tx_ring[i].ctrl = MACB_BIT(TX_USED);
bp->queues[0].tx_head = 0;
bp->queues[0].tx_tail = 0;
}
bp->queues[0].tx_head = 0;
bp->queues[0].tx_tail = 0;
bp->queues[0].tx_ring[TX_RING_SIZE - 1].ctrl |= MACB_BIT(TX_WRAP);
bp->rx_tail = 0;

View file

@ -492,7 +492,7 @@ int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr,
memoffset = (mtype * (edc_size * 1024 * 1024));
else {
mc_size = EXT_MEM0_SIZE_G(t4_read_reg(adap,
MA_EXT_MEMORY1_BAR_A));
MA_EXT_MEMORY0_BAR_A));
memoffset = (MEM_MC0 * edc_size + mc_size) * 1024 * 1024;
}

View file

@ -4846,7 +4846,8 @@ err:
}
static int be_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
struct net_device *dev, u32 filter_mask)
struct net_device *dev, u32 filter_mask,
int nlflags)
{
struct be_adapter *adapter = netdev_priv(dev);
int status = 0;
@ -4868,7 +4869,7 @@ static int be_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
return ndo_dflt_bridge_getlink(skb, pid, seq, dev,
hsw_mode == PORT_FWD_TYPE_VEPA ?
BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB,
0, 0);
0, 0, nlflags);
}
#ifdef CONFIG_BE2NET_VXLAN

View file

@ -988,7 +988,10 @@ fec_restart(struct net_device *ndev)
rcntl |= 0x40000000 | 0x00000020;
/* RGMII, RMII or MII */
if (fep->phy_interface == PHY_INTERFACE_MODE_RGMII)
if (fep->phy_interface == PHY_INTERFACE_MODE_RGMII ||
fep->phy_interface == PHY_INTERFACE_MODE_RGMII_ID ||
fep->phy_interface == PHY_INTERFACE_MODE_RGMII_RXID ||
fep->phy_interface == PHY_INTERFACE_MODE_RGMII_TXID)
rcntl |= (1 << 6);
else if (fep->phy_interface == PHY_INTERFACE_MODE_RMII)
rcntl |= (1 << 8);
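
The fix treats all four RGMII variants alike, which reads more naturally as a predicate. A sketch under the assumption that the RGMII modes are declared consecutively, as they are in the kernel's PHY mode enum (later kernels grew a phy_interface_mode_is_rgmii() helper of the same shape):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative subset of the PHY mode enum; the four RGMII variants
 * are consecutive, matching the kernel's declaration order. */
enum phy_mode {
	MODE_MII,
	MODE_RMII,
	MODE_RGMII,
	MODE_RGMII_ID,
	MODE_RGMII_RXID,
	MODE_RGMII_TXID,
};

static bool mode_is_rgmii(enum phy_mode mode)
{
	return mode >= MODE_RGMII && mode <= MODE_RGMII_TXID;
}

int main(void)
{
	printf("rmii: %d, rgmii-txid: %d\n",
	       mode_is_rgmii(MODE_RMII), mode_is_rgmii(MODE_RGMII_TXID));
	return 0;
}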

View file

@ -3347,7 +3347,7 @@ static int ehea_register_memory_hooks(void)
{
int ret = 0;
if (atomic_inc_and_test(&ehea_memory_hooks_registered))
if (atomic_inc_return(&ehea_memory_hooks_registered) > 1)
return 0;
ret = ehea_create_busmap();
@ -3381,12 +3381,14 @@ out3:
out2:
unregister_reboot_notifier(&ehea_reboot_nb);
out:
atomic_dec(&ehea_memory_hooks_registered);
return ret;
}
static void ehea_unregister_memory_hooks(void)
{
if (atomic_read(&ehea_memory_hooks_registered))
/* Only remove the hooks if we've registered them */
if (atomic_read(&ehea_memory_hooks_registered) == 0)
return;
unregister_reboot_notifier(&ehea_reboot_nb);


@@ -1238,7 +1238,7 @@ static int ibmveth_change_mtu(struct net_device *dev, int new_mtu)
                 return -EINVAL;
 
         for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
-                if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size)
+                if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size)
                         break;
 
         if (i == IBMVETH_NUM_BUFF_POOLS)
@@ -1257,7 +1257,7 @@ static int ibmveth_change_mtu(struct net_device *dev, int new_mtu)
         for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
                 adapter->rx_buff_pool[i].active = 1;
 
-                if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size) {
+                if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size) {
                         dev->mtu = new_mtu;
                         vio_cmo_set_dev_desired(viodev,
                                                 ibmveth_get_desired_dma


@@ -8053,10 +8053,10 @@ static int i40e_ndo_bridge_setlink(struct net_device *dev,
 #ifdef HAVE_BRIDGE_FILTER
 static int i40e_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
                                    struct net_device *dev,
-                                   u32 __always_unused filter_mask)
+                                   u32 __always_unused filter_mask, int nlflags)
 #else
 static int i40e_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
-                                   struct net_device *dev)
+                                   struct net_device *dev, int nlflags)
 #endif /* HAVE_BRIDGE_FILTER */
 {
         struct i40e_netdev_priv *np = netdev_priv(dev);
@@ -8078,7 +8078,8 @@ static int i40e_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
         if (!veb)
                 return 0;
 
-        return ndo_dflt_bridge_getlink(skb, pid, seq, dev, veb->bridge_mode);
+        return ndo_dflt_bridge_getlink(skb, pid, seq, dev, veb->bridge_mode,
+                                       nlflags);
 }
 #endif /* HAVE_BRIDGE_ATTRIBS */


@@ -8044,7 +8044,7 @@ static int ixgbe_ndo_bridge_setlink(struct net_device *dev,
 
 static int ixgbe_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
                                     struct net_device *dev,
-                                    u32 filter_mask)
+                                    u32 filter_mask, int nlflags)
 {
         struct ixgbe_adapter *adapter = netdev_priv(dev);
@@ -8052,7 +8052,7 @@ static int ixgbe_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
                 return 0;
 
         return ndo_dflt_bridge_getlink(skb, pid, seq, dev,
-                                       adapter->bridge_mode, 0, 0);
+                                       adapter->bridge_mode, 0, 0, nlflags);
 }
 
 static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)


@@ -1508,7 +1508,8 @@ static int pxa168_eth_probe(struct platform_device *pdev)
         np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
         if (!np) {
                 dev_err(&pdev->dev, "missing phy-handle\n");
-                return -EINVAL;
+                err = -EINVAL;
+                goto err_netdev;
         }
         of_property_read_u32(np, "reg", &pep->phy_addr);
         pep->phy_intf = of_get_phy_mode(pdev->dev.of_node);
@@ -1526,7 +1527,7 @@ static int pxa168_eth_probe(struct platform_device *pdev)
         pep->smi_bus = mdiobus_alloc();
         if (pep->smi_bus == NULL) {
                 err = -ENOMEM;
-                goto err_base;
+                goto err_netdev;
         }
         pep->smi_bus->priv = pep;
         pep->smi_bus->name = "pxa168_eth smi";
@@ -1551,13 +1552,10 @@ err_mdiobus:
         mdiobus_unregister(pep->smi_bus);
 err_free_mdio:
         mdiobus_free(pep->smi_bus);
-err_base:
-        iounmap(pep->base);
 err_netdev:
         free_netdev(dev);
 err_clk:
-        clk_disable(clk);
-        clk_put(clk);
+        clk_disable_unprepare(clk);
         return err;
 }
@@ -1574,13 +1572,9 @@ static int pxa168_eth_remove(struct platform_device *pdev)
         if (pep->phy)
                 phy_disconnect(pep->phy);
         if (pep->clk) {
-                clk_disable(pep->clk);
-                clk_put(pep->clk);
-                pep->clk = NULL;
+                clk_disable_unprepare(pep->clk);
         }
-        iounmap(pep->base);
-        pep->base = NULL;
 
         mdiobus_unregister(pep->smi_bus);
         mdiobus_free(pep->smi_bus);
         unregister_netdev(dev);
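
The four pxa168 hunks reshape the probe and remove paths into the canonical goto-unwind form, where each label releases exactly what was acquired before the failure point, in reverse order, and the clock is dropped with clk_disable_unprepare() to pair with its prepare/enable. A schematic of that shape with hypothetical resource names:

    /* Hypothetical acquire/release pairs standing in for
     * clk_prepare_enable(), alloc_etherdev(), mdiobus_alloc(), etc. */
    static int acquire_clock(void)   { return 0; }
    static int acquire_netdev(void)  { return 0; }
    static int acquire_mdiobus(void) { return 0; }
    static void release_netdev(void) { }
    static void release_clock(void)  { }

    static int demo_probe(void)
    {
        int err;

        err = acquire_clock();
        if (err)
            return err;         /* nothing to unwind yet */

        err = acquire_netdev();
        if (err)
            goto err_clk;

        err = acquire_mdiobus();
        if (err)
            goto err_netdev;    /* not err_clk: the netdev exists now */

        return 0;

    err_netdev:
        release_netdev();       /* free_netdev() in the driver */
    err_clk:
        release_clock();        /* clk_disable_unprepare() */
        return err;
    }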


@@ -1102,20 +1102,21 @@ static int mlx4_en_check_rxfh_func(struct net_device *dev, u8 hfunc)
         struct mlx4_en_priv *priv = netdev_priv(dev);
 
         /* check if requested function is supported by the device */
-        if ((hfunc == ETH_RSS_HASH_TOP &&
-             !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP)) ||
-            (hfunc == ETH_RSS_HASH_XOR &&
-             !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR)))
-                return -EINVAL;
+        if (hfunc == ETH_RSS_HASH_TOP) {
+                if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP))
+                        return -EINVAL;
+                if (!(dev->features & NETIF_F_RXHASH))
+                        en_warn(priv, "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
+                return 0;
+        } else if (hfunc == ETH_RSS_HASH_XOR) {
+                if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR))
+                        return -EINVAL;
+                if (dev->features & NETIF_F_RXHASH)
+                        en_warn(priv, "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
+                return 0;
+        }
 
-        priv->rss_hash_fn = hfunc;
-        if (hfunc == ETH_RSS_HASH_TOP && !(dev->features & NETIF_F_RXHASH))
-                en_warn(priv,
-                        "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
-        if (hfunc == ETH_RSS_HASH_XOR && (dev->features & NETIF_F_RXHASH))
-                en_warn(priv,
-                        "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
-        return 0;
+        return -EINVAL;
 }
 
 static int mlx4_en_get_rxfh(struct net_device *dev, u32 *ring_index, u8 *key,
@@ -1189,6 +1190,8 @@ static int mlx4_en_set_rxfh(struct net_device *dev, const u32 *ring_index,
                 priv->prof->rss_rings = rss_rings;
         if (key)
                 memcpy(priv->rss_key, key, MLX4_EN_RSS_KEY_SIZE);
+        if (hfunc != ETH_RSS_HASH_NO_CHANGE)
+                priv->rss_hash_fn = hfunc;
 
         if (port_up) {
                 err = mlx4_en_start_port(dev);
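
The set_rxfh hunk matters because ethtool passes ETH_RSS_HASH_NO_CHANGE (value 0) when the caller only updates the key or the indirection table; storing hfunc unconditionally would have silently wiped the configured hash function. A tiny sketch of the contract, with a stand-in constant and driver struct:

    /* RSS_HASH_NO_CHANGE mirrors ETH_RSS_HASH_NO_CHANGE (0) from
     * <linux/ethtool.h>; demo_priv stands in for the driver state. */
    #define RSS_HASH_NO_CHANGE 0

    struct demo_priv { unsigned char rss_hash_fn; };

    static void demo_set_rxfh(struct demo_priv *priv, unsigned char hfunc)
    {
        /* key/indirection-table updates would happen here */
        if (hfunc != RSS_HASH_NO_CHANGE)
            priv->rss_hash_fn = hfunc;  /* only commit real requests */
    }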


@@ -1467,6 +1467,7 @@ static void mlx4_en_service_task(struct work_struct *work)
                 if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS)
                         mlx4_en_ptp_overflow_check(mdev);
 
+                mlx4_en_recover_from_oom(priv);
                 queue_delayed_work(mdev->workqueue, &priv->service_task,
                                    SERVICE_TASK_DELAY);
         }
@@ -1721,7 +1722,7 @@ mac_err:
 cq_err:
         while (rx_index--) {
                 mlx4_en_deactivate_cq(priv, priv->rx_cq[rx_index]);
-                mlx4_en_free_affinity_hint(priv, i);
+                mlx4_en_free_affinity_hint(priv, rx_index);
         }
         for (i = 0; i < priv->rx_ring_num; i++)
                 mlx4_en_deactivate_rx_ring(priv, priv->rx_ring[i]);


@@ -244,6 +244,12 @@ static int mlx4_en_prepare_rx_desc(struct mlx4_en_priv *priv,
         return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc, gfp);
 }
 
+static inline bool mlx4_en_is_ring_empty(struct mlx4_en_rx_ring *ring)
+{
+        BUG_ON((u32)(ring->prod - ring->cons) > ring->actual_size);
+        return ring->prod == ring->cons;
+}
+
 static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
 {
         *ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
@@ -315,8 +321,7 @@ static void mlx4_en_free_rx_buf(struct mlx4_en_priv *priv,
                 ring->cons, ring->prod);
 
         /* Unmap and free Rx buffers */
-        BUG_ON((u32) (ring->prod - ring->cons) > ring->actual_size);
-        while (ring->cons != ring->prod) {
+        while (!mlx4_en_is_ring_empty(ring)) {
                 index = ring->cons & ring->size_mask;
                 en_dbg(DRV, priv, "Processing descriptor:%d\n", index);
                 mlx4_en_free_rx_desc(priv, ring, index);
@@ -491,6 +496,23 @@ err_allocator:
         return err;
 }
 
+/* We recover from out of memory by scheduling our napi poll
+ * function (mlx4_en_process_cq), which tries to allocate
+ * all missing RX buffers (call to mlx4_en_refill_rx_buffers).
+ */
+void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
+{
+        int ring;
+
+        if (!priv->port_up)
+                return;
+
+        for (ring = 0; ring < priv->rx_ring_num; ring++) {
+                if (mlx4_en_is_ring_empty(priv->rx_ring[ring]))
+                        napi_reschedule(&priv->rx_cq[ring]->napi);
+        }
+}
+
 void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
                              struct mlx4_en_rx_ring **pring,
                              u32 size, u16 stride)
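
The new mlx4_en_recover_from_oom() leans on a NAPI property: napi_reschedule() re-queues the poll routine, and the poll routine already refills whatever RX buffers are missing, so an allocation failure only needs a later kick rather than a dedicated recovery path. The same pattern in miniature (napi_reschedule() is the real kernel API; the ring type and helpers are stand-ins):

    #include <linux/netdevice.h>

    struct demo_ring { unsigned int prod, cons; };

    static bool demo_ring_empty(const struct demo_ring *ring)
    {
        return ring->prod == ring->cons;    /* nothing left posted */
    }

    /* Called from a periodic worker: kick NAPI for any ring that ran
     * completely dry so its poll() re-attempts the allocations. */
    static void demo_recover_from_oom(struct demo_ring *rings,
                                      struct napi_struct *napis, int nrings)
    {
        int i;

        for (i = 0; i < nrings; i++)
            if (demo_ring_empty(&rings[i]))
                napi_reschedule(&napis[i]);
    }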


@@ -143,8 +143,10 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
         ring->hwtstamp_tx_type = priv->hwtstamp_config.tx_type;
         ring->queue_index = queue_index;
 
-        if (queue_index < priv->num_tx_rings_p_up && cpu_online(queue_index))
-                cpumask_set_cpu(queue_index, &ring->affinity_mask);
+        if (queue_index < priv->num_tx_rings_p_up)
+                cpumask_set_cpu_local_first(queue_index,
+                                            priv->mdev->dev->numa_node,
+                                            &ring->affinity_mask);
 
         *pring = ring;
         return 0;
@@ -213,7 +215,7 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
         err = mlx4_qp_to_ready(mdev->dev, &ring->wqres.mtt, &ring->context,
                                &ring->qp, &ring->qp_state);
 
-        if (!user_prio && cpu_online(ring->queue_index))
+        if (!cpumask_empty(&ring->affinity_mask))
                 netif_set_xps_queue(priv->dev, &ring->affinity_mask,
                                     ring->queue_index);
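
These TX-ring hunks swap a plain cpumask_set_cpu() guarded by cpu_online() for cpumask_set_cpu_local_first(), which selects the i-th CPU counting NUMA-local CPUs before remote ones, and then key the XPS setup on whether the mask ended up non-empty. A sketch under the assumption of the 4.1-era cpumask API (this helper was later replaced upstream by cpumask_local_spread()):

    #include <linux/cpumask.h>
    #include <linux/netdevice.h>

    /* Sketch: NUMA-aware TX queue affinity; demo_* names are
     * stand-ins, the two cpumask/XPS APIs are the real ones. */
    static void demo_set_tx_affinity(struct net_device *dev, int queue_index,
                                     int numa_node, struct cpumask *mask)
    {
        cpumask_set_cpu_local_first(queue_index, numa_node, mask);
        if (!cpumask_empty(mask))
            netif_set_xps_queue(dev, mask, queue_index);
    }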


@@ -56,11 +56,13 @@ MODULE_PARM_DESC(enable_qos, "Enable Enhanced QoS support (default: on)");
 #define MLX4_GET(dest, source, offset)                        \
         do {                                                  \
                 void *__p = (char *) (source) + (offset);     \
+                u64 val;                                      \
                 switch (sizeof (dest)) {                      \
                 case 1: (dest) = *(u8 *) __p;       break;    \
                 case 2: (dest) = be16_to_cpup(__p); break;    \
                 case 4: (dest) = be32_to_cpup(__p); break;    \
-                case 8: (dest) = be64_to_cpup(__p); break;    \
+                case 8: val = get_unaligned((u64 *)__p);      \
+                        (dest) = be64_to_cpu(val);  break;    \
                 default: __buggy_use_of_MLX4_GET();           \
                 }                                             \
         } while (0)
@@ -1605,9 +1607,17 @@ static void get_board_id(void *vsd, char *board_id)
                  * swaps each 4-byte word before passing it back to
                  * us. Therefore we need to swab it before printing.
                  */
-                for (i = 0; i < 4; ++i)
-                        ((u32 *) board_id)[i] =
-                                swab32(*(u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4));
+                u32 *bid_u32 = (u32 *)board_id;
+
+                for (i = 0; i < 4; ++i) {
+                        u32 *addr;
+                        u32 val;
+
+                        addr = (u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4);
+                        val = get_unaligned(addr);
+                        val = swab32(val);
+                        put_unaligned(val, &bid_u32[i]);
+                }
         }
 }
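
Both mlx4/fw.c hunks target unaligned access: on strict-alignment architectures, dereferencing a u64 pointer that is only 4-byte aligned can fault, so the loads and stores are funneled through get_unaligned()/put_unaligned() from <asm/unaligned.h>. The read side of the idiom, sketched (read_be64_at() is a hypothetical helper; the kernel also offers get_unaligned_be64(), which fuses the load and the byte swap):

    #include <asm/unaligned.h>
    #include <asm/byteorder.h>
    #include <linux/types.h>

    static u64 read_be64_at(const void *buf, unsigned int offset)
    {
        u64 raw = get_unaligned((const u64 *)(buf + offset)); /* safe load */

        return be64_to_cpu(raw);  /* big-endian wire value to host order */
    }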


@@ -774,6 +774,7 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
 void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv,
                                 struct mlx4_en_tx_ring *ring);
 void mlx4_en_set_num_rx_rings(struct mlx4_en_dev *mdev);
+void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv);
 int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
                            struct mlx4_en_rx_ring **pring,
                            u32 size, u16 stride, int node);


@@ -69,11 +69,7 @@
 #include <net/ip.h>
 #include <net/tcp.h>
 #include <asm/byteorder.h>
-#include <asm/io.h>
 #include <asm/processor.h>
-#ifdef CONFIG_MTRR
-#include <asm/mtrr.h>
-#endif
 #include <net/busy_poll.h>
 
 #include "myri10ge_mcp.h"
@@ -242,8 +238,7 @@ struct myri10ge_priv {
         unsigned int rdma_tags_available;
         int intr_coal_delay;
         __be32 __iomem *intr_coal_delay_ptr;
-        int mtrr;
-        int wc_enabled;
+        int wc_cookie;
         int down_cnt;
         wait_queue_head_t down_wq;
         struct work_struct watchdog_work;
@@ -1905,7 +1900,7 @@ static const char myri10ge_gstrings_main_stats[][ETH_GSTRING_LEN] = {
         "tx_aborted_errors", "tx_carrier_errors", "tx_fifo_errors",
         "tx_heartbeat_errors", "tx_window_errors",
         /* device-specific stats */
-        "tx_boundary", "WC", "irq", "MSI", "MSIX",
+        "tx_boundary", "irq", "MSI", "MSIX",
         "read_dma_bw_MBs", "write_dma_bw_MBs", "read_write_dma_bw_MBs",
         "serial_number", "watchdog_resets",
 #ifdef CONFIG_MYRI10GE_DCA
@@ -1984,7 +1979,6 @@ myri10ge_get_ethtool_stats(struct net_device *netdev,
                 data[i] = ((u64 *)&link_stats)[i];
 
         data[i++] = (unsigned int)mgp->tx_boundary;
-        data[i++] = (unsigned int)mgp->wc_enabled;
         data[i++] = (unsigned int)mgp->pdev->irq;
         data[i++] = (unsigned int)mgp->msi_enabled;
         data[i++] = (unsigned int)mgp->msix_enabled;
@@ -4040,14 +4034,7 @@ static int myri10ge_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
         mgp->board_span = pci_resource_len(pdev, 0);
         mgp->iomem_base = pci_resource_start(pdev, 0);
-        mgp->mtrr = -1;
-        mgp->wc_enabled = 0;
-#ifdef CONFIG_MTRR
-        mgp->mtrr = mtrr_add(mgp->iomem_base, mgp->board_span,
-                             MTRR_TYPE_WRCOMB, 1);
-        if (mgp->mtrr >= 0)
-                mgp->wc_enabled = 1;
-#endif
+        mgp->wc_cookie = arch_phys_wc_add(mgp->iomem_base, mgp->board_span);
         mgp->sram = ioremap_wc(mgp->iomem_base, mgp->board_span);
         if (mgp->sram == NULL) {
                 dev_err(&pdev->dev, "ioremap failed for %ld bytes at 0x%lx\n",
@@ -4146,14 +4133,14 @@
                 goto abort_with_state;
         }
         if (mgp->msix_enabled)
-                dev_info(dev, "%d MSI-X IRQs, tx bndry %d, fw %s, WC %s\n",
+                dev_info(dev, "%d MSI-X IRQs, tx bndry %d, fw %s, MTRR %s, WC Enabled\n",
                          mgp->num_slices, mgp->tx_boundary, mgp->fw_name,
-                         (mgp->wc_enabled ? "Enabled" : "Disabled"));
+                         (mgp->wc_cookie > 0 ? "Enabled" : "Disabled"));
         else
-                dev_info(dev, "%s IRQ %d, tx bndry %d, fw %s, WC %s\n",
+                dev_info(dev, "%s IRQ %d, tx bndry %d, fw %s, MTRR %s, WC Enabled\n",
                          mgp->msi_enabled ? "MSI" : "xPIC",
                          pdev->irq, mgp->tx_boundary, mgp->fw_name,
-                         (mgp->wc_enabled ? "Enabled" : "Disabled"));
+                         (mgp->wc_cookie > 0 ? "Enabled" : "Disabled"));
 
         board_number++;
         return 0;
@@ -4175,10 +4162,7 @@ abort_with_ioremap:
         iounmap(mgp->sram);
 
 abort_with_mtrr:
-#ifdef CONFIG_MTRR
-        if (mgp->mtrr >= 0)
-                mtrr_del(mgp->mtrr, mgp->iomem_base, mgp->board_span);
-#endif
+        arch_phys_wc_del(mgp->wc_cookie);
 
         dma_free_coherent(&pdev->dev, sizeof(*mgp->cmd),
                           mgp->cmd, mgp->cmd_bus);
@@ -4220,11 +4204,7 @@ static void myri10ge_remove(struct pci_dev *pdev)
         pci_restore_state(pdev);
 
         iounmap(mgp->sram);
-#ifdef CONFIG_MTRR
-        if (mgp->mtrr >= 0)
-                mtrr_del(mgp->mtrr, mgp->iomem_base, mgp->board_span);
-#endif
+        arch_phys_wc_del(mgp->wc_cookie);
         myri10ge_free_slices(mgp);
         kfree(mgp->msix_vectors);
         dma_free_coherent(&pdev->dev, sizeof(*mgp->cmd),
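
The myri10ge conversion replaces roughly twenty lines of CONFIG_MTRR-guarded mtrr_add()/mtrr_del() bookkeeping with the arch_phys_wc_add()/arch_phys_wc_del() pair, which compile to no-ops on architectures without MTRRs and pair naturally with ioremap_wc(). The resulting shape, sketched with a hypothetical device struct:

    #include <linux/io.h>
    #include <linux/errno.h>

    struct demo_dev {
        int wc_cookie;          /* <= 0 simply means no MTRR was added */
        void __iomem *sram;
    };

    static int demo_map(struct demo_dev *d, unsigned long base,
                        unsigned long span)
    {
        d->wc_cookie = arch_phys_wc_add(base, span);    /* best effort */
        d->sram = ioremap_wc(base, span);
        if (!d->sram) {
            arch_phys_wc_del(d->wc_cookie);             /* safe for <= 0 */
            return -ENOMEM;
        }
        return 0;
    }

    static void demo_unmap(struct demo_dev *d)
    {
        iounmap(d->sram);
        arch_phys_wc_del(d->wc_cookie);
    }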


@@ -135,7 +135,7 @@ void netxen_release_tx_buffers(struct netxen_adapter *adapter)
         int i, j;
         struct nx_host_tx_ring *tx_ring = adapter->tx_ring;
 
-        spin_lock(&adapter->tx_clean_lock);
+        spin_lock_bh(&adapter->tx_clean_lock);
         cmd_buf = tx_ring->cmd_buf_arr;
         for (i = 0; i < tx_ring->num_desc; i++) {
                 buffrag = cmd_buf->frag_array;
@@ -159,7 +159,7 @@ void netxen_release_tx_buffers(struct netxen_adapter *adapter)
                 }
                 cmd_buf++;
         }
-        spin_unlock(&adapter->tx_clean_lock);
+        spin_unlock_bh(&adapter->tx_clean_lock);
 }
 
 void netxen_free_sw_resources(struct netxen_adapter *adapter)
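
The netxen change is the classic bottom-half locking fix: tx_clean_lock is also taken from the softirq TX-completion path, so a process-context holder must use the _bh variants, otherwise a softirq arriving on the same CPU can spin forever on a lock its own CPU already holds. The pattern, sketched:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(tx_clean_lock);

    /* Process context: disable bottom halves on this CPU for the
     * critical section, since the softirq TX path takes this lock too. */
    static void demo_release_tx_buffers(void)
    {
        spin_lock_bh(&tx_clean_lock);
        /* ... walk the TX ring, unmap and free buffers ... */
        spin_unlock_bh(&tx_clean_lock);
    }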


@@ -4176,14 +4176,15 @@ static int rocker_port_bridge_setlink(struct net_device *dev,
 
 static int rocker_port_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
                                       struct net_device *dev,
-                                      u32 filter_mask)
+                                      u32 filter_mask, int nlflags)
 {
         struct rocker_port *rocker_port = netdev_priv(dev);
         u16 mode = BRIDGE_MODE_UNDEF;
         u32 mask = BR_LEARNING | BR_LEARNING_SYNC;
 
         return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode,
-                                       rocker_port->brport_flags, mask);
+                                       rocker_port->brport_flags, mask,
+                                       nlflags);
 }
 
 static int rocker_port_get_phys_port_name(struct net_device *dev,


@@ -1765,7 +1765,9 @@ static void netcp_ethss_link_state_action(struct gbe_priv *gbe_dev,
                                      ALE_PORT_STATE,
                                      ALE_PORT_STATE_FORWARD);
 
-                if (ndev && slave->open)
+                if (ndev && slave->open &&
+                    slave->link_interface != SGMII_LINK_MAC_PHY &&
+                    slave->link_interface != XGMII_LINK_MAC_PHY)
                         netif_carrier_on(ndev);
         } else {
                 writel(mac_control, GBE_REG_ADDR(slave, emac_regs,
@@ -1773,7 +1775,9 @@ static void netcp_ethss_link_state_action(struct gbe_priv *gbe_dev,
                 cpsw_ale_control_set(gbe_dev->ale, slave->port_num,
                                      ALE_PORT_STATE,
                                      ALE_PORT_STATE_DISABLE);
-                if (ndev)
+                if (ndev &&
+                    slave->link_interface != SGMII_LINK_MAC_PHY &&
+                    slave->link_interface != XGMII_LINK_MAC_PHY)
                         netif_carrier_off(ndev);
         }


@@ -128,7 +128,6 @@ struct ndis_tcp_ip_checksum_info;
 struct hv_netvsc_packet {
         /* Bookkeeping stuff */
         u32 status;
 
-        bool part_of_skb;
         bool is_data_pkt;
         bool xmit_more; /* from skb */
@@ -612,6 +611,15 @@ struct multi_send_data {
         u32 count; /* counter of batched packets */
 };
 
+/* The context of the netvsc device */
+struct net_device_context {
+        /* point back to our device context */
+        struct hv_device *device_ctx;
+        struct delayed_work dwork;
+        struct work_struct work;
+        u32 msg_enable; /* debug level */
+};
+
 /* Per netvsc device */
 struct netvsc_device {
         struct hv_device *dev;
@@ -667,6 +675,9 @@ struct netvsc_device {
         struct multi_send_data msd[NR_CPUS];
         u32 max_pkt; /* max number of pkt in one send, e.g. 8 */
         u32 pkt_align; /* alignment bytes, e.g. 8 */
+
+        /* The net device context */
+        struct net_device_context *nd_ctx;
 };
 
 /* NdisInitialize message */


@@ -889,11 +889,6 @@ int netvsc_send(struct hv_device *device,
         } else {
                 packet->page_buf_cnt = 0;
                 packet->total_data_buflen += msd_len;
-                if (!packet->part_of_skb) {
-                        skb = (struct sk_buff *)(unsigned long)packet->
-                                       send_completion_tid;
-                        packet->send_completion_tid = 0;
-                }
         }
 
         if (msdp->pkt)
@@ -1197,6 +1192,9 @@ int netvsc_device_add(struct hv_device *device, void *additional_info)
          */
         ndev = net_device->ndev;
 
+        /* Add netvsc_device context to netvsc_device */
+        net_device->nd_ctx = netdev_priv(ndev);
+
         /* Initialize the NetVSC channel extension */
         init_completion(&net_device->channel_init_wait);