
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Three conflicts, one of which, for marvell10g.c, is non-trivial and
requires some follow-up from Heiner or someone else.

The issue is that Heiner converted the marvell10g driver over to
use the generic c45 code as much as possible.

However, in 'net' a bug fix appeared which makes sure that a new
local mask (MDIO_AN_10GBT_CTRL_ADV_NBT_MASK) with value 0x01e0
is cleared.

Signed-off-by: David S. Miller <davem@davemloft.net>
hifive-unleashed-5.1
David S. Miller 2019-02-24 11:48:04 -08:00
commit 70f3522614
171 changed files with 1416 additions and 928 deletions

CREDITS
View File

@@ -842,10 +842,9 @@ D: ax25-utils maintainer.
 N: Helge Deller
 E: deller@gmx.de
-E: hdeller@redhat.de
-D: PA-RISC Linux hacker, LASI-, ASP-, WAX-, LCD/LED-driver
-S: Schimmelsrain 1
-S: D-69231 Rauenberg
+W: http://www.parisc-linux.org/
+D: PA-RISC Linux architecture maintainer
+D: LASI-, ASP-, WAX-, LCD/LED-driver
 S: Germany
 
 N: Jean Delvare
@@ -1361,7 +1360,7 @@ S: Stellenbosch, Western Cape
 S: South Africa
 
 N: Grant Grundler
-E: grundler@parisc-linux.org
+E: grantgrundler@gmail.com
 W: http://obmouse.sourceforge.net/
 W: http://www.parisc-linux.org/
 D: obmouse - rewrote Olivier Florent's Omnibook 600 "pop-up" mouse driver
@@ -2492,7 +2491,7 @@ S: Syracuse, New York 13206
 S: USA
 
 N: Kyle McMartin
-E: kyle@parisc-linux.org
+E: kyle@mcmartin.ca
 D: Linux/PARISC hacker
 D: AD1889 sound driver
 S: Ottawa, Canada
@@ -3780,14 +3779,13 @@ S: 21513 Conradia Ct
 S: Cupertino, CA 95014
 S: USA
 
-N: Thibaut Varene
-E: T-Bone@parisc-linux.org
-W: http://www.parisc-linux.org/~varenet/
-P: 1024D/B7D2F063 E67C 0D43 A75E 12A5 BB1C FA2F 1E32 C3DA B7D2 F063
+N: Thibaut Varène
+E: hacks+kernel@slashdirt.org
+W: http://hacks.slashdirt.org/
 D: PA-RISC port minion, PDC and GSCPS2 drivers, debuglocks and other bits
 D: Some ARM at91rm9200 bits, S1D13XXX FB driver, random patches here and there
 D: AD1889 sound driver
-S: Paris, France
+S: France
 
 N: Heikki Vatiainen
 E: hessu@cs.tut.fi

View File

@@ -1,9 +1,9 @@
 .. _readme:
 
-Linux kernel release 4.x <http://kernel.org/>
+Linux kernel release 5.x <http://kernel.org/>
 =============================================
 
-These are the release notes for Linux version 4. Read them carefully,
+These are the release notes for Linux version 5. Read them carefully,
 as they tell you what this is all about, explain how to install the
 kernel, and what to do if something goes wrong.
@@ -63,7 +63,7 @@ Installing the kernel source
 directory where you have permissions (e.g. your home directory) and
 unpack it::
 
-  xz -cd linux-4.X.tar.xz | tar xvf -
+  xz -cd linux-5.x.tar.xz | tar xvf -
 
 Replace "X" with the version number of the latest kernel.
@@ -72,26 +72,26 @@ Installing the kernel source
   files. They should match the library, and not get messed up by
   whatever the kernel-du-jour happens to be.
 
-- You can also upgrade between 4.x releases by patching. Patches are
+- You can also upgrade between 5.x releases by patching. Patches are
   distributed in the xz format. To install by patching, get all the
   newer patch files, enter the top level directory of the kernel source
-  (linux-4.X) and execute::
+  (linux-5.x) and execute::
 
-    xz -cd ../patch-4.x.xz | patch -p1
+    xz -cd ../patch-5.x.xz | patch -p1
 
-  Replace "x" for all versions bigger than the version "X" of your current
+  Replace "x" for all versions bigger than the version "x" of your current
   source tree, **in_order**, and you should be ok. You may want to remove
   the backup files (some-file-name~ or some-file-name.orig), and make sure
   that there are no failed patches (some-file-name# or some-file-name.rej).
   If there are, either you or I have made a mistake.
 
-  Unlike patches for the 4.x kernels, patches for the 4.x.y kernels
+  Unlike patches for the 5.x kernels, patches for the 5.x.y kernels
   (also known as the -stable kernels) are not incremental but instead apply
-  directly to the base 4.x kernel. For example, if your base kernel is 4.0
-  and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1
-  and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and
-  want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is,
-  patch -R) **before** applying the 4.0.3 patch. You can read more on this in
+  directly to the base 5.x kernel. For example, if your base kernel is 5.0
+  and you want to apply the 5.0.3 patch, you must not first apply the 5.0.1
+  and 5.0.2 patches. Similarly, if you are running kernel version 5.0.2 and
+  want to jump to 5.0.3, you must first reverse the 5.0.2 patch (that is,
+  patch -R) **before** applying the 5.0.3 patch. You can read more on this in
   :ref:`Documentation/process/applying-patches.rst <applying_patches>`.
 
   Alternatively, the script patch-kernel can be used to automate this
@@ -114,7 +114,7 @@ Installing the kernel source
 Software requirements
 ---------------------
 
-Compiling and running the 4.x kernels requires up-to-date
+Compiling and running the 5.x kernels requires up-to-date
 versions of various software packages. Consult
 :ref:`Documentation/process/changes.rst <changes>` for the minimum version numbers
 required and how to get updates for these packages. Beware that using
@@ -132,12 +132,12 @@ Build directory for the kernel
 place for the output files (including .config).
 Example::
 
-  kernel source code: /usr/src/linux-4.X
+  kernel source code: /usr/src/linux-5.x
   build directory:    /home/name/build/kernel
 
 To configure and build the kernel, use::
 
-  cd /usr/src/linux-4.X
+  cd /usr/src/linux-5.x
   make O=/home/name/build/kernel menuconfig
   make O=/home/name/build/kernel
   sudo make O=/home/name/build/kernel modules_install install

View File

@@ -520,16 +520,12 @@ Bridge VLAN filtering
 function that the driver has to call for each VLAN the given port is a member
 of. A switchdev object is used to carry the VID and bridge flags.
 
-- port_fdb_prepare: bridge layer function invoked when the bridge prepares the
-  installation of a Forwarding Database entry. If the operation is not
-  supported, this function should return -EOPNOTSUPP to inform the bridge code
-  to fallback to a software implementation. No hardware setup must be done in
-  this function. See port_fdb_add for this and details.
-
 - port_fdb_add: bridge layer function invoked when the bridge wants to install a
   Forwarding Database entry, the switch hardware should be programmed with the
   specified address in the specified VLAN Id in the forwarding database
-  associated with this VLAN ID
+  associated with this VLAN ID. If the operation is not supported, this
+  function should return -EOPNOTSUPP to inform the bridge code to fallback to
+  a software implementation.
 
 Note: VLAN ID 0 corresponds to the port private database, which, in the context
 of DSA, would be the its port-based VLAN, used by the associated bridge device.

View File

@@ -92,11 +92,11 @@ device.
 Switch ID
 ^^^^^^^^^
 
-The switchdev driver must implement the switchdev op switchdev_port_attr_get
-for SWITCHDEV_ATTR_ID_PORT_PARENT_ID for each port netdev, returning the same
-physical ID for each port of a switch. The ID must be unique between switches
-on the same system. The ID does not need to be unique between switches on
-different systems.
+The switchdev driver must implement the net_device operation
+ndo_get_port_parent_id for each port netdev, returning the same physical ID for
+each port of a switch. The ID must be unique between switches on the same
+system. The ID does not need to be unique between switches on different
+systems.
 
 The switch ID is used to locate ports on a switch and to know if aggregated
 ports belong to the same switch.

View File

@@ -216,14 +216,14 @@ You can use the ``interdiff`` program (http://cyberelk.net/tim/patchutils/) to
 generate a patch representing the differences between two patches and then
 apply the result.
 
-This will let you move from something like 4.7.2 to 4.7.3 in a single
+This will let you move from something like 5.7.2 to 5.7.3 in a single
 step. The -z flag to interdiff will even let you feed it patches in gzip or
 bzip2 compressed form directly without the use of zcat or bzcat or manual
 decompression.
 
-Here's how you'd go from 4.7.2 to 4.7.3 in a single step::
+Here's how you'd go from 5.7.2 to 5.7.3 in a single step::
 
-  interdiff -z ../patch-4.7.2.gz ../patch-4.7.3.gz | patch -p1
+  interdiff -z ../patch-5.7.2.gz ../patch-5.7.3.gz | patch -p1
 
 Although interdiff may save you a step or two you are generally advised to
 do the additional steps since interdiff can get things wrong in some cases.
@@ -245,62 +245,67 @@ The patches are available at http://kernel.org/
 Most recent patches are linked from the front page, but they also have
 specific homes.
 
-The 4.x.y (-stable) and 4.x patches live at
+The 5.x.y (-stable) and 5.x patches live at
 
-  https://www.kernel.org/pub/linux/kernel/v4.x/
+  https://www.kernel.org/pub/linux/kernel/v5.x/
 
-The -rc patches live at
+The -rc patches are not stored on the webserver but are generated on
+demand from git tags such as
 
-  https://www.kernel.org/pub/linux/kernel/v4.x/testing/
+  https://git.kernel.org/torvalds/p/v5.1-rc1/v5.0
+
+The stable -rc patches live at
+
+  https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/
 
-The 4.x kernels
+The 5.x kernels
 ===============
 
 These are the base stable releases released by Linus. The highest numbered
 release is the most recent.
 
 If regressions or other serious flaws are found, then a -stable fix patch
-will be released (see below) on top of this base. Once a new 4.x base
+will be released (see below) on top of this base. Once a new 5.x base
 kernel is released, a patch is made available that is a delta between the
-previous 4.x kernel and the new one.
+previous 5.x kernel and the new one.
 
-To apply a patch moving from 4.6 to 4.7, you'd do the following (note
-that such patches do **NOT** apply on top of 4.x.y kernels but on top of the
-base 4.x kernel -- if you need to move from 4.x.y to 4.x+1 you need to
-first revert the 4.x.y patch).
+To apply a patch moving from 5.6 to 5.7, you'd do the following (note
+that such patches do **NOT** apply on top of 5.x.y kernels but on top of the
+base 5.x kernel -- if you need to move from 5.x.y to 5.x+1 you need to
+first revert the 5.x.y patch).
 
 Here are some examples::
 
-  # moving from 4.6 to 4.7
+  # moving from 5.6 to 5.7
 
-  $ cd ~/linux-4.6                # change to kernel source dir
-  $ patch -p1 < ../patch-4.7      # apply the 4.7 patch
+  $ cd ~/linux-5.6                # change to kernel source dir
+  $ patch -p1 < ../patch-5.7      # apply the 5.7 patch
   $ cd ..
-  $ mv linux-4.6 linux-4.7        # rename source dir
+  $ mv linux-5.6 linux-5.7        # rename source dir
 
-  # moving from 4.6.1 to 4.7
+  # moving from 5.6.1 to 5.7
 
-  $ cd ~/linux-4.6.1              # change to kernel source dir
-  $ patch -p1 -R < ../patch-4.6.1 # revert the 4.6.1 patch
-  # source dir is now 4.6
+  $ cd ~/linux-5.6.1              # change to kernel source dir
+  $ patch -p1 -R < ../patch-5.6.1 # revert the 5.6.1 patch
+  # source dir is now 5.6
 
-  $ patch -p1 < ../patch-4.7      # apply new 4.7 patch
+  $ patch -p1 < ../patch-5.7      # apply new 5.7 patch
   $ cd ..
-  $ mv linux-4.6.1 linux-4.7      # rename source dir
+  $ mv linux-5.6.1 linux-5.7      # rename source dir
 
-The 4.x.y kernels
+The 5.x.y kernels
 =================
 
 Kernels with 3-digit versions are -stable kernels. They contain small(ish)
 critical fixes for security problems or significant regressions discovered
-in a given 4.x kernel.
+in a given 5.x kernel.
 
 This is the recommended branch for users who want the most recent stable
 kernel and are not interested in helping test development/experimental
 versions.
 
-If no 4.x.y kernel is available, then the highest numbered 4.x kernel is
+If no 5.x.y kernel is available, then the highest numbered 5.x kernel is
 the current stable kernel.
 
 .. note::
@@ -308,23 +313,23 @@ the current stable kernel.
 The -stable team usually do make incremental patches available as well
 as patches against the latest mainline release, but I only cover the
 non-incremental ones below. The incremental ones can be found at
-https://www.kernel.org/pub/linux/kernel/v4.x/incr/
+https://www.kernel.org/pub/linux/kernel/v5.x/incr/
 
-These patches are not incremental, meaning that for example the 4.7.3
-patch does not apply on top of the 4.7.2 kernel source, but rather on top
-of the base 4.7 kernel source.
+These patches are not incremental, meaning that for example the 5.7.3
+patch does not apply on top of the 5.7.2 kernel source, but rather on top
+of the base 5.7 kernel source.
 
-So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel
-source you have to first back out the 4.7.2 patch (so you are left with a
-base 4.7 kernel source) and then apply the new 4.7.3 patch.
+So, in order to apply the 5.7.3 patch to your existing 5.7.2 kernel
+source you have to first back out the 5.7.2 patch (so you are left with a
+base 5.7 kernel source) and then apply the new 5.7.3 patch.
 
 Here's a small example::
 
-  $ cd ~/linux-4.7.2              # change to the kernel source dir
-  $ patch -p1 -R < ../patch-4.7.2 # revert the 4.7.2 patch
-  $ patch -p1 < ../patch-4.7.3    # apply the new 4.7.3 patch
+  $ cd ~/linux-5.7.2              # change to the kernel source dir
+  $ patch -p1 -R < ../patch-5.7.2 # revert the 5.7.2 patch
+  $ patch -p1 < ../patch-5.7.3    # apply the new 5.7.3 patch
   $ cd ..
-  $ mv linux-4.7.2 linux-4.7.3    # rename the kernel source dir
+  $ mv linux-5.7.2 linux-5.7.3    # rename the kernel source dir
 
 The -rc kernels
 ===============
@@ -343,38 +348,38 @@ This is a good branch to run for people who want to help out testing
 development kernels but do not want to run some of the really experimental
 stuff (such people should see the sections about -next and -mm kernels below).
 
-The -rc patches are not incremental, they apply to a base 4.x kernel, just
-like the 4.x.y patches described above. The kernel version before the -rcN
+The -rc patches are not incremental, they apply to a base 5.x kernel, just
+like the 5.x.y patches described above. The kernel version before the -rcN
 suffix denotes the version of the kernel that this -rc kernel will eventually
 turn into.
 
-So, 4.8-rc5 means that this is the fifth release candidate for the 4.8
-kernel and the patch should be applied on top of the 4.7 kernel source.
+So, 5.8-rc5 means that this is the fifth release candidate for the 5.8
+kernel and the patch should be applied on top of the 5.7 kernel source.
 
 Here are 3 examples of how to apply these patches::
 
-  # first an example of moving from 4.7 to 4.8-rc3
+  # first an example of moving from 5.7 to 5.8-rc3
 
-  $ cd ~/linux-4.7                  # change to the 4.7 source dir
-  $ patch -p1 < ../patch-4.8-rc3    # apply the 4.8-rc3 patch
+  $ cd ~/linux-5.7                  # change to the 5.7 source dir
+  $ patch -p1 < ../patch-5.8-rc3    # apply the 5.8-rc3 patch
   $ cd ..
-  $ mv linux-4.7 linux-4.8-rc3      # rename the source dir
+  $ mv linux-5.7 linux-5.8-rc3      # rename the source dir
 
-  # now let's move from 4.8-rc3 to 4.8-rc5
+  # now let's move from 5.8-rc3 to 5.8-rc5
 
-  $ cd ~/linux-4.8-rc3              # change to the 4.8-rc3 dir
-  $ patch -p1 -R < ../patch-4.8-rc3 # revert the 4.8-rc3 patch
-  $ patch -p1 < ../patch-4.8-rc5    # apply the new 4.8-rc5 patch
+  $ cd ~/linux-5.8-rc3              # change to the 5.8-rc3 dir
+  $ patch -p1 -R < ../patch-5.8-rc3 # revert the 5.8-rc3 patch
+  $ patch -p1 < ../patch-5.8-rc5    # apply the new 5.8-rc5 patch
   $ cd ..
-  $ mv linux-4.8-rc3 linux-4.8-rc5  # rename the source dir
+  $ mv linux-5.8-rc3 linux-5.8-rc5  # rename the source dir
 
-  # finally let's try and move from 4.7.3 to 4.8-rc5
+  # finally let's try and move from 5.7.3 to 5.8-rc5
 
-  $ cd ~/linux-4.7.3                # change to the kernel source dir
-  $ patch -p1 -R < ../patch-4.7.3   # revert the 4.7.3 patch
-  $ patch -p1 < ../patch-4.8-rc5    # apply new 4.8-rc5 patch
+  $ cd ~/linux-5.7.3                # change to the kernel source dir
+  $ patch -p1 -R < ../patch-5.7.3   # revert the 5.7.3 patch
+  $ patch -p1 < ../patch-5.8-rc5    # apply new 5.8-rc5 patch
   $ cd ..
-  $ mv linux-4.7.3 linux-4.8-rc5    # rename the kernel source dir
+  $ mv linux-5.7.3 linux-5.8-rc5    # rename the kernel source dir
 
 The -mm patches and the linux-next tree

View File

@@ -4,7 +4,7 @@
 .. _it_readme:
 
-Rilascio del kernel Linux 4.x <http://kernel.org/>
+Rilascio del kernel Linux 5.x <http://kernel.org/>
 ===================================================
 
 .. warning::

View File

@@ -409,8 +409,7 @@ F: drivers/platform/x86/wmi.c
 F:	include/uapi/linux/wmi.h
 
 AD1889 ALSA SOUND DRIVER
-M:	Thibaut Varene <T-Bone@parisc-linux.org>
-W:	http://wiki.parisc-linux.org/AD1889
+W:	https://parisc.wiki.kernel.org/index.php/AD1889
 L:	linux-parisc@vger.kernel.org
 S:	Maintained
 F:	sound/pci/ad1889.*
@@ -2852,7 +2851,7 @@ R: Martin KaFai Lau <kafai@fb.com>
 R:	Song Liu <songliubraving@fb.com>
 R:	Yonghong Song <yhs@fb.com>
 L:	netdev@vger.kernel.org
-L:	linux-kernel@vger.kernel.org
+L:	bpf@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
 Q:	https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
@@ -2882,6 +2881,7 @@ N: bpf
 BPF JIT for ARM
 M:	Shubham Bansal <illusionist.neo@gmail.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/arm/net/
@@ -2890,18 +2890,21 @@ M: Daniel Borkmann <daniel@iogearbox.net>
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Zi Shen Lim <zlim.lnx@gmail.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/arm64/net/
 
 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M:	Paul Burton <paul.burton@mips.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/mips/net/
 
 BPF JIT for NFP NICs
 M:	Jakub Kicinski <jakub.kicinski@netronome.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/netronome/nfp/bpf/
@@ -2909,6 +2912,7 @@ BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M:	Sandipan Das <sandipan@linux.ibm.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/powerpc/net/
@@ -2922,6 +2926,7 @@ BPF JIT for S390
 M:	Martin Schwidefsky <schwidefsky@de.ibm.com>
 M:	Heiko Carstens <heiko.carstens@de.ibm.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/s390/net/
 X:	arch/s390/net/pnet.c
@@ -2929,12 +2934,14 @@ X: arch/s390/net/pnet.c
 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M:	David S. Miller <davem@davemloft.net>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/sparc/net/
 
 BPF JIT for X86 32-BIT
 M:	Wang YanQing <udknight@gmail.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/x86/net/bpf_jit_comp32.c
@@ -2942,6 +2949,7 @@ BPF JIT for X86 64-BIT
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/x86/net/
 X:	arch/x86/net/bpf_jit_comp32.c
@@ -3396,9 +3404,8 @@ F: Documentation/media/v4l-drivers/cafe_ccic*
 F:	drivers/media/platform/marvell-ccic/
 
 CAIF NETWORK LAYER
-M:	Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
 L:	netdev@vger.kernel.org
-S:	Supported
+S:	Orphan
 F:	Documentation/networking/caif/
 F:	drivers/net/caif/
 F:	include/uapi/linux/caif/
@@ -8501,6 +8508,7 @@ L7 BPF FRAMEWORK
 M:	John Fastabend <john.fastabend@gmail.com>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	include/linux/skmsg.h
 F:	net/core/skmsg.c
@@ -11503,7 +11511,7 @@ F: Documentation/blockdev/paride.txt
 F:	drivers/block/paride/
 
 PARISC ARCHITECTURE
-M:	"James E.J. Bottomley" <jejb@parisc-linux.org>
+M:	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
 M:	Helge Deller <deller@gmx.de>
 L:	linux-parisc@vger.kernel.org
 W:	http://www.parisc-linux.org/
@@ -16738,6 +16746,7 @@ M: Jesper Dangaard Brouer <hawk@kernel.org>
 M:	John Fastabend <john.fastabend@gmail.com>
 L:	netdev@vger.kernel.org
 L:	xdp-newbies@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	net/core/xdp.c
 F:	include/net/xdp.h
@@ -16751,6 +16760,7 @@ XDP SOCKETS (AF_XDP)
 M:	Björn Töpel <bjorn.topel@intel.com>
 M:	Magnus Karlsson <magnus.karlsson@intel.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	kernel/bpf/xskmap.c
 F:	net/xdp/

View File

@@ -191,7 +191,6 @@ config NR_CPUS
 
 config ARC_SMP_HALT_ON_RESET
 	bool "Enable Halt-on-reset boot mode"
-	default y if ARC_UBOOT_SUPPORT
 	help
 	  In SMP configuration cores can be configured as Halt-on-reset
 	  or they could all start at same time. For Halt-on-reset, non
@@ -407,6 +406,14 @@ config ARC_HAS_ACCL_REGS
 	  (also referred to as r58:r59). These can also be used by gcc as GPR so
 	  kernel needs to save/restore per process
 
+config ARC_IRQ_NO_AUTOSAVE
+	bool "Disable hardware autosave regfile on interrupts"
+	default n
+	help
+	  On HS cores, taken interrupt auto saves the regfile on stack.
+	  This is programmable and can be optionally disabled in which case
+	  software INTERRUPT_PROLOGUE/EPILGUE do the needed work
+
 endif	# ISA_ARCV2
 
 endmenu	# "ARC CPU Configuration"
@@ -515,17 +522,6 @@ config ARC_DBG_TLB_PARANOIA
 
 endif
 
-config ARC_UBOOT_SUPPORT
-	bool "Support uboot arg Handling"
-	help
-	  ARC Linux by default checks for uboot provided args as pointers to
-	  external cmdline or DTB. This however breaks in absence of uboot,
-	  when booting from Metaware debugger directly, as the registers are
-	  not zeroed out on reset by mdb and/or ARCv2 based cores. The bogus
-	  registers look like uboot args to kernel which then chokes.
-	  So only enable the uboot arg checking/processing if users are sure
-	  of uboot being in play.
-
 config ARC_BUILTIN_DTB_NAME
 	string "Built in DTB"
 	help

View File

@@ -31,7 +31,6 @@ CONFIG_ARC_CACHE_LINE_SHIFT=5
 # CONFIG_ARC_HAS_LLSC is not set
 CONFIG_ARC_KVADDR_SIZE=402
 CONFIG_ARC_EMUL_UNALIGNED=y
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_PREEMPT=y
 CONFIG_NET=y
 CONFIG_UNIX=y

View File

@@ -13,7 +13,6 @@ CONFIG_PARTITION_ADVANCED=y
 CONFIG_ARC_PLAT_AXS10X=y
 CONFIG_AXS103=y
 CONFIG_ISA_ARCV2=y
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38"
 CONFIG_PREEMPT=y
 CONFIG_NET=y

View File

@@ -15,8 +15,6 @@ CONFIG_AXS103=y
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 # CONFIG_ARC_TIMERS_64BIT is not set
-# CONFIG_ARC_SMP_HALT_ON_RESET is not set
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp"
 CONFIG_PREEMPT=y
 CONFIG_NET=y

View File

@@ -151,6 +151,14 @@ struct bcr_isa_arcv2 {
 #endif
 };
 
+struct bcr_uarch_build_arcv2 {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+	unsigned int pad:8, prod:8, maj:8, min:8;
+#else
+	unsigned int min:8, maj:8, prod:8, pad:8;
+#endif
+};
+
 struct bcr_mpy {
 #ifdef CONFIG_CPU_BIG_ENDIAN
 	unsigned int pad:8, x1616:8, dsp:4, cycles:2, type:2, ver:8;

View File

@@ -52,6 +52,17 @@
 #define cache_line_size()	SMP_CACHE_BYTES
 #define ARCH_DMA_MINALIGN	SMP_CACHE_BYTES
 
+/*
+ * Make sure slab-allocated buffers are 64-bit aligned when atomic64_t uses
+ * ARCv2 64-bit atomics (LLOCKD/SCONDD). This guarantess runtime 64-bit
+ * alignment for any atomic64_t embedded in buffer.
+ * Default ARCH_SLAB_MINALIGN is __alignof__(long long) which has a relaxed
+ * value of 4 (and not 8) in ARC ABI.
+ */
+#if defined(CONFIG_ARC_HAS_LL64) && defined(CONFIG_ARC_HAS_LLSC)
+#define ARCH_SLAB_MINALIGN	8
+#endif
+
 extern void arc_cache_init(void);
 extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
 extern void read_decode_cache_bcr(void);

View File

@@ -17,6 +17,33 @@
 	;
 	; Now manually save: r12, sp, fp, gp, r25
 
+#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
+.ifnc \called_from, exception
+	st.as	r9, [sp, -10]	; save r9 in it's final stack slot
+	sub	sp, sp, 12	; skip JLI, LDI, EI
+
+	PUSH	lp_count
+	PUSHAX	lp_start
+	PUSHAX	lp_end
+	PUSH	blink
+
+	PUSH	r11
+	PUSH	r10
+
+	sub	sp, sp, 4	; skip r9
+
+	PUSH	r8
+	PUSH	r7
+	PUSH	r6
+	PUSH	r5
+	PUSH	r4
+	PUSH	r3
+	PUSH	r2
+	PUSH	r1
+	PUSH	r0
+.endif
+#endif
+
 #ifdef CONFIG_ARC_HAS_ACCL_REGS
 	PUSH	r59
 	PUSH	r58
@@ -86,6 +113,33 @@
 	POP	r59
 #endif
 
+#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
+.ifnc \called_from, exception
+	POP	r0
+	POP	r1
+	POP	r2
+	POP	r3
+	POP	r4
+	POP	r5
+	POP	r6
+	POP	r7
+	POP	r8
+	POP	r9
+	POP	r10
+	POP	r11
+
+	POP	blink
+	POPAX	lp_end
+	POPAX	lp_start
+
+	POP	r9
+	mov	lp_count, r9
+
+	add	sp, sp, 12	; skip JLI, LDI, EI
+	ld.as	r9, [sp, -10]	; reload r9 which got clobbered
+.endif
+#endif
+
 .endm
 
 /*------------------------------------------------------------------------*/

@@ -207,7 +207,7 @@ raw_copy_from_user(void *to, const void __user *from, unsigned long n)
 	 */
 	  "=&r" (tmp), "+r" (to), "+r" (from)
 	:
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return n;
 }
@@ -433,7 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	 */
 	  "=&r" (tmp), "+r" (to), "+r" (from)
 	:
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return n;
 }
@@ -653,7 +653,7 @@ static inline unsigned long __arc_clear_user(void __user *to, unsigned long n)
 	"	.previous			\n"
 	: "+r"(d_char), "+r"(res)
 	: "i"(0)
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return res;
 }
@@ -686,7 +686,7 @@ __arc_strncpy_from_user(char *dst, const char __user *src, long count)
 	"	.previous			\n"
 	: "+r"(res), "+r"(dst), "+r"(src), "=r"(val)
 	: "g"(-EFAULT), "r"(count)
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return res;
 }

@@ -209,7 +209,9 @@ restore_regs:
 ;####### Return from Intr #######
 
 debug_marker_l1:
-	bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
+	btst	r0, STATUS_DE_BIT		; Z flag set if bit clear
+	bnz	.Lintr_ret_to_delay_slot	; branch if STATUS_DE_BIT set
 
 .Lisr_ret_fast_path:
 	; Handle special case #1: (Entry via Exception, Return via IRQ)

@@ -17,6 +17,7 @@
 #include <asm/entry.h>
 #include <asm/arcregs.h>
 #include <asm/cache.h>
+#include <asm/irqflags.h>
 
 .macro CPU_EARLY_SETUP
 
@@ -47,6 +48,15 @@
 	sr	r5, [ARC_REG_DC_CTRL]
 
 1:
+
+#ifdef CONFIG_ISA_ARCV2
+	; Unaligned access is disabled at reset, so re-enable early as
+	; gcc 7.3.1 (ARC GNU 2018.03) onwards generates unaligned access
+	; by default
+	lr	r5, [status32]
+	bset	r5, r5, STATUS_AD_BIT
+	kflag	r5
+#endif
 .endm
 
 	.section .init.text, "ax",@progbits
@@ -90,15 +100,13 @@ ENTRY(stext)
 	st.ab	0, [r5, 4]
 1:
 
-#ifdef CONFIG_ARC_UBOOT_SUPPORT
 	; Uboot - kernel ABI
 	;    r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
-	;    r1 = magic number  (board identity, unused as of now
+	;    r1 = magic number (always zero as of now)
 	;    r2 = pointer to uboot provided cmdline or external DTB in mem
-	; These are handled later in setup_arch()
+	; These are handled later in handle_uboot_args()
 	st	r0, [@uboot_tag]
 	st	r2, [@uboot_arg]
-#endif
 
 	; setup "current" tsk and optionally cache it in dedicated r25
 	mov	r9, @init_task

@@ -49,11 +49,13 @@ void arc_init_IRQ(void)
 
 	*(unsigned int *)&ictrl = 0;
 
+#ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE
 	ictrl.save_nr_gpr_pairs = 6;	/* r0 to r11 (r12 saved manually) */
 	ictrl.save_blink = 1;
 	ictrl.save_lp_regs = 1;		/* LP_COUNT, LP_START, LP_END */
 	ictrl.save_u_to_u = 0;		/* user ctxt saved on kernel stack */
 	ictrl.save_idx_regs = 1;	/* JLI, LDI, EI */
+#endif
 
 	WRITE_AUX(AUX_IRQ_CTRL, ictrl);
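The hunk above follows a common kernel idiom: a C bitfield struct is zeroed through a cast, selected fields are set, and the whole word is written to a control register. A userspace sketch of the same pattern, with an illustrative field layout (not the real `AUX_IRQ_CTRL` encoding) and `build_irq_ctrl` as a hypothetical helper:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative register layout; the real AUX_IRQ_CTRL bit positions differ. */
struct irq_ctrl {
	unsigned int save_nr_gpr_pairs:5;
	unsigned int save_blink:1;
	unsigned int save_lp_regs:1;
	unsigned int save_u_to_u:1;
	unsigned int save_idx_regs:1;
};

/* Build the 32-bit word that would be written to the register.
 * 'autosave' mirrors the #ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE guard:
 * when autosave is disabled, the register stays all-zero. */
static uint32_t build_irq_ctrl(int autosave)
{
	struct irq_ctrl ictrl;
	uint32_t word = 0;

	memset(&ictrl, 0, sizeof(ictrl));	/* *(unsigned int *)&ictrl = 0 */
	if (autosave) {
		ictrl.save_nr_gpr_pairs = 6;	/* r0..r11 */
		ictrl.save_blink = 1;
		ictrl.save_lp_regs = 1;
		ictrl.save_idx_regs = 1;
	}
	memcpy(&word, &ictrl, sizeof(word));	/* reinterpret as raw word */
	return word;
}
```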

@@ -199,20 +199,36 @@ static void read_arc_build_cfg_regs(void)
 		cpu->bpu.ret_stk = 4 << bpu.rse;
 
 		if (cpu->core.family >= 0x54) {
-			unsigned int exec_ctrl;
 
-			READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
-			cpu->extn.dual_enb = !(exec_ctrl & 1);
+			struct bcr_uarch_build_arcv2 uarch;
 
-			/* dual issue always present for this core */
-			cpu->extn.dual = 1;
+			/*
+			 * The first 0x54 core (uarch maj:min 0:1 or 0:2) was
+			 * dual issue only (HS4x). But the next uarch rev (1:0)
+			 * allows it to be configured for single issue (HS3x).
+			 * Ensure we fiddle with dual issue only on HS4x.
+			 */
+			READ_BCR(ARC_REG_MICRO_ARCH_BCR, uarch);
+
+			if (uarch.prod == 4) {
+				unsigned int exec_ctrl;
+
+				/* dual issue hardware always present */
+				cpu->extn.dual = 1;
+
+				READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
+
+				/* dual issue hardware enabled ? */
+				cpu->extn.dual_enb = !(exec_ctrl & 1);
+			}
 		}
 	}
 
 	READ_BCR(ARC_REG_AP_BCR, ap);
 	if (ap.ver) {
 		cpu->extn.ap_num = 2 << ap.num;
-		cpu->extn.ap_full = !!ap.min;
+		cpu->extn.ap_full = !ap.min;
 	}
 
 	READ_BCR(ARC_REG_SMART_BCR, bcr);
@@ -462,43 +478,78 @@ void setup_processor(void)
 	arc_chk_core_config();
 }
 
-static inline int is_kernel(unsigned long addr)
+static inline bool uboot_arg_invalid(unsigned long addr)
 {
-	if (addr >= (unsigned long)_stext && addr <= (unsigned long)_end)
-		return 1;
-	return 0;
+	/*
+	 * Check that it is an untranslated address (although MMU is not
+	 * enabled yet, it being a high address ensures this is not by fluke)
+	 */
+	if (addr < PAGE_OFFSET)
+		return true;
+
+	/* Check that address doesn't clobber resident kernel image */
+	return addr >= (unsigned long)_stext && addr <= (unsigned long)_end;
 }
+
+#define IGNORE_ARGS		"Ignore U-boot args: "
+
+/* uboot_tag values for U-boot - kernel ABI revision 0; see head.S */
+#define UBOOT_TAG_NONE		0
+#define UBOOT_TAG_CMDLINE	1
+#define UBOOT_TAG_DTB		2
+
+void __init handle_uboot_args(void)
+{
+	bool use_embedded_dtb = true;
+	bool append_cmdline = false;
+
+	/* check that we know this tag */
+	if (uboot_tag != UBOOT_TAG_NONE &&
+	    uboot_tag != UBOOT_TAG_CMDLINE &&
+	    uboot_tag != UBOOT_TAG_DTB) {
+		pr_warn(IGNORE_ARGS "invalid uboot tag: '%08x'\n", uboot_tag);
+		goto ignore_uboot_args;
+	}
+
+	if (uboot_tag != UBOOT_TAG_NONE &&
+	    uboot_arg_invalid((unsigned long)uboot_arg)) {
+		pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg);
+		goto ignore_uboot_args;
+	}
+
+	/* see if U-boot passed an external Device Tree blob */
+	if (uboot_tag == UBOOT_TAG_DTB) {
+		machine_desc = setup_machine_fdt((void *)uboot_arg);
+
+		/* external Device Tree blob is invalid - use embedded one */
+		use_embedded_dtb = !machine_desc;
+	}
+
+	if (uboot_tag == UBOOT_TAG_CMDLINE)
+		append_cmdline = true;
+
+ignore_uboot_args:
+
+	if (use_embedded_dtb) {
+		machine_desc = setup_machine_fdt(__dtb_start);
+		if (!machine_desc)
+			panic("Embedded DT invalid\n");
+	}
+
+	/*
+	 * NOTE: @boot_command_line is populated by setup_machine_fdt() so this
+	 * append processing can only happen after.
+	 */
+	if (append_cmdline) {
+		/* Ensure a whitespace between the 2 cmdlines */
+		strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
+		strlcat(boot_command_line, uboot_arg, COMMAND_LINE_SIZE);
+	}
+}
 
 void __init setup_arch(char **cmdline_p)
 {
-#ifdef CONFIG_ARC_UBOOT_SUPPORT
-	/* make sure that uboot passed pointer to cmdline/dtb is valid */
-	if (uboot_tag && is_kernel((unsigned long)uboot_arg))
-		panic("Invalid uboot arg\n");
-
-	/* See if u-boot passed an external Device Tree blob */
-	machine_desc = setup_machine_fdt(uboot_arg);	/* uboot_tag == 2 */
-	if (!machine_desc)
-#endif
-	{
-		/* No, so try the embedded one */
-		machine_desc = setup_machine_fdt(__dtb_start);
-		if (!machine_desc)
-			panic("Embedded DT invalid\n");
-
-		/*
-		 * If we are here, it is established that @uboot_arg didn't
-		 * point to DT blob. Instead if u-boot says it is cmdline,
-		 * append to embedded DT cmdline.
-		 * setup_machine_fdt() would have populated @boot_command_line
-		 */
-		if (uboot_tag == 1) {
-			/* Ensure a whitespace between the 2 cmdlines */
-			strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
-			strlcat(boot_command_line, uboot_arg,
-				COMMAND_LINE_SIZE);
-		}
-	}
+	handle_uboot_args();
 
 	/* Save unparsed command line copy for /proc/cmdline */
 	*cmdline_p = boot_command_line;
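The new `uboot_arg_invalid()` check above combines two conditions: the pointer must be a high (untranslated) address, and it must not fall inside the resident kernel image. A standalone userspace sketch of the same logic, where `PAGE_OFFSET`, `kstext` and `kend` are assumed placeholder values, not the real ARC link addresses:

```c
#include <stdbool.h>

/* Assumed stand-ins for the kernel's PAGE_OFFSET, _stext and _end. */
#define PAGE_OFFSET 0x80000000UL
static const unsigned long kstext = 0x80000000UL;
static const unsigned long kend   = 0x80800000UL;

static bool uboot_arg_invalid(unsigned long addr)
{
	/* must be an untranslated (high) address ... */
	if (addr < PAGE_OFFSET)
		return true;

	/* ... and must not point into the resident kernel image */
	return addr >= kstext && addr <= kend;
}
```

With these values, a low pointer (e.g. `0x1000`) and a pointer into the kernel image are both rejected, while a high pointer past `_end` is accepted.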

@@ -25,15 +25,11 @@
 #endif
 
 #ifdef CONFIG_ARC_HAS_LL64
-# define PREFETCH_READ(RX)	prefetch    [RX, 56]
-# define PREFETCH_WRITE(RX)	prefetchw   [RX, 64]
 # define LOADX(DST,RX)		ldd.ab	DST, [RX, 8]
 # define STOREX(SRC,RX)		std.ab	SRC, [RX, 8]
 # define ZOLSHFT		5
 # define ZOLAND			0x1F
 #else
-# define PREFETCH_READ(RX)	prefetch    [RX, 28]
-# define PREFETCH_WRITE(RX)	prefetchw   [RX, 32]
 # define LOADX(DST,RX)		ld.ab	DST, [RX, 4]
 # define STOREX(SRC,RX)		st.ab	SRC, [RX, 4]
 # define ZOLSHFT		4
@@ -41,8 +37,6 @@
 #endif
 
 ENTRY_CFI(memcpy)
-	prefetch [r1]		; Prefetch the read location
-	prefetchw [r0]		; Prefetch the write location
 	mov.f	0, r2
 ;;; if size is zero
 	jz.d	[blink]
@@ -72,8 +66,6 @@ ENTRY_CFI(memcpy)
 	lpnz	@.Lcopy32_64bytes
 	;; LOOP START
 	LOADX (r6, r1)
-	PREFETCH_READ (r1)
-	PREFETCH_WRITE (r3)
 	LOADX (r8, r1)
 	LOADX (r10, r1)
 	LOADX (r4, r1)
@@ -117,9 +109,7 @@ ENTRY_CFI(memcpy)
 	lpnz	@.Lcopy8bytes_1
 	;; LOOP START
 	ld.ab	r6, [r1, 4]
-	prefetch [r1, 28]	;Prefetch the next read location
 	ld.ab	r8, [r1,4]
-	prefetchw [r3, 32]	;Prefetch the next write location
 	SHIFT_1	(r7, r6, 24)
 	or	r7, r7, r5
@@ -162,9 +152,7 @@ ENTRY_CFI(memcpy)
 	lpnz	@.Lcopy8bytes_2
 	;; LOOP START
 	ld.ab	r6, [r1, 4]
-	prefetch [r1, 28]	;Prefetch the next read location
 	ld.ab	r8, [r1,4]
-	prefetchw [r3, 32]	;Prefetch the next write location
 	SHIFT_1	(r7, r6, 16)
 	or	r7, r7, r5
@@ -204,9 +192,7 @@ ENTRY_CFI(memcpy)
 	lpnz	@.Lcopy8bytes_3
 	;; LOOP START
 	ld.ab	r6, [r1, 4]
-	prefetch [r1, 28]	;Prefetch the next read location
 	ld.ab	r8, [r1,4]
-	prefetchw [r3, 32]	;Prefetch the next write location
 	SHIFT_1	(r7, r6, 8)
 	or	r7, r7, r5

@@ -9,6 +9,7 @@ menuconfig ARC_SOC_HSDK
 	bool "ARC HS Development Kit SOC"
 	depends on ISA_ARCV2
 	select ARC_HAS_ACCL_REGS
+	select ARC_IRQ_NO_AUTOSAVE
 	select CLK_HSDK
 	select RESET_HSDK
 	select HAVE_PCI

@@ -729,7 +729,7 @@
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii-txid";
+	phy-mode = "rgmii-id";
 };
 
 &tscadc {

@@ -651,13 +651,13 @@
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii-txid";
+	phy-mode = "rgmii-id";
 	dual_emac_res_vlan = <1>;
 };
 
 &cpsw_emac1 {
 	phy-handle = <&ethphy1>;
-	phy-mode = "rgmii-txid";
+	phy-mode = "rgmii-id";
 	dual_emac_res_vlan = <2>;
 };

@@ -144,30 +144,32 @@
 			status = "okay";
 		};
 
-		nand@d0000 {
+		nand-controller@d0000 {
 			status = "okay";
-			label = "pxa3xx_nand-0";
-			num-cs = <1>;
-			marvell,nand-keep-config;
-			nand-on-flash-bbt;
 
-			partitions {
-				compatible = "fixed-partitions";
-				#address-cells = <1>;
-				#size-cells = <1>;
+			nand@0 {
+				reg = <0>;
+				label = "pxa3xx_nand-0";
+				nand-rb = <0>;
+				nand-on-flash-bbt;
 
-				partition@0 {
-					label = "U-Boot";
-					reg = <0 0x800000>;
-				};
-				partition@800000 {
-					label = "Linux";
-					reg = <0x800000 0x800000>;
-				};
-				partition@1000000 {
-					label = "Filesystem";
-					reg = <0x1000000 0x3f000000>;
+				partitions {
+					compatible = "fixed-partitions";
+					#address-cells = <1>;
+					#size-cells = <1>;
 
+					partition@0 {
+						label = "U-Boot";
+						reg = <0 0x800000>;
+					};
+					partition@800000 {
+						label = "Linux";
+						reg = <0x800000 0x800000>;
+					};
+					partition@1000000 {
+						label = "Filesystem";
+						reg = <0x1000000 0x3f000000>;
+					};
 				};
 			};
 		};

@@ -160,12 +160,15 @@
 			status = "okay";
 		};
 
-		nand@d0000 {
+		nand-controller@d0000 {
 			status = "okay";
-			label = "pxa3xx_nand-0";
-			num-cs = <1>;
-			marvell,nand-keep-config;
-			nand-on-flash-bbt;
+
+			nand@0 {
+				reg = <0>;
+				label = "pxa3xx_nand-0";
+				nand-rb = <0>;
+				nand-on-flash-bbt;
+			};
 		};
 	};

@@ -81,49 +81,52 @@
 		};
 
-		nand@d0000 {
+		nand-controller@d0000 {
 			status = "okay";
-			label = "pxa3xx_nand-0";
-			num-cs = <1>;
-			marvell,nand-keep-config;
-			nand-on-flash-bbt;
 
-			partitions {
-				compatible = "fixed-partitions";
-				#address-cells = <1>;
-				#size-cells = <1>;
+			nand@0 {
+				reg = <0>;
+				label = "pxa3xx_nand-0";
+				nand-rb = <0>;
+				nand-on-flash-bbt;
 
-				partition@0 {
-					label = "u-boot";
-					reg = <0x00000000 0x000e0000>;
-					read-only;
-				};
+				partitions {
+					compatible = "fixed-partitions";
+					#address-cells = <1>;
+					#size-cells = <1>;
 
-				partition@e0000 {
-					label = "u-boot-env";
-					reg = <0x000e0000 0x00020000>;
-					read-only;
-				};
+					partition@0 {
+						label = "u-boot";
+						reg = <0x00000000 0x000e0000>;
+						read-only;
+					};
 
-				partition@100000 {
-					label = "u-boot-env2";
-					reg = <0x00100000 0x00020000>;
-					read-only;
-				};
+					partition@e0000 {
+						label = "u-boot-env";
+						reg = <0x000e0000 0x00020000>;
+						read-only;
+					};
 
-				partition@120000 {
-					label = "zImage";
-					reg = <0x00120000 0x00400000>;
-				};
+					partition@100000 {
+						label = "u-boot-env2";
+						reg = <0x00100000 0x00020000>;
+						read-only;
+					};
 
-				partition@520000 {
-					label = "initrd";
-					reg = <0x00520000 0x00400000>;
-				};
+					partition@120000 {
+						label = "zImage";
+						reg = <0x00120000 0x00400000>;
+					};
 
-				partition@e00000 {
-					label = "boot";
-					reg = <0x00e00000 0x3f200000>;
-				};
+					partition@520000 {
+						label = "initrd";
+						reg = <0x00520000 0x00400000>;
+					};
+
+					partition@e00000 {
+						label = "boot";
+						reg = <0x00e00000 0x3f200000>;
+					};
+				};
 			};
 		};
 	};

@@ -13,10 +13,25 @@
 		stdout-path = "serial0:115200n8";
 	};
 
-	memory@80000000 {
+	/*
+	 * Note that recent versions of the device tree compiler (starting with
+	 * version 1.4.2) warn about this node containing a reg property, but
+	 * missing a unit-address. However, the bootloader on these Chromebook
+	 * devices relies on the full name of this node to be exactly /memory.
+	 * Adding the unit-address causes the bootloader to create a /memory
+	 * node and write the memory bank configuration to that node, which in
+	 * turn leads the kernel to believe that the device has 2 GiB of
+	 * memory instead of the amount detected by the bootloader.
+	 *
+	 * The name of this node is effectively ABI and must not be changed.
+	 */
+	memory {
 		device_type = "memory";
 		reg = <0x0 0x80000000 0x0 0x80000000>;
 	};
 
+	/delete-node/ memory@80000000;
+
 	host1x@50000000 {
 		hdmi@54280000 {
 			status = "okay";

@@ -351,7 +351,7 @@
 	reg = <0>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&cp0_copper_eth_phy_reset>;
-	reset-gpios = <&cp1_gpio1 11 GPIO_ACTIVE_LOW>;
+	reset-gpios = <&cp0_gpio2 11 GPIO_ACTIVE_LOW>;
 	reset-assert-us = <10000>;
 };

@@ -36,4 +36,8 @@
 #include <arm_neon.h>
 #endif
 
+#ifdef CONFIG_CC_IS_CLANG
+#pragma clang diagnostic ignored "-Wincompatible-pointer-types"
+#endif
+
 #endif /* __ASM_NEON_INTRINSICS_H */

@@ -539,8 +539,7 @@ set_hcr:
 	/* GICv3 system register access */
 	mrs	x0, id_aa64pfr0_el1
 	ubfx	x0, x0, #24, #4
-	cmp	x0, #1
-	b.ne	3f
+	cbz	x0, 3f
 
 	mrs_s	x0, SYS_ICC_SRE_EL2
 	orr	x0, x0, #ICC_SRE_EL2_SRE	// Set ICC_SRE_EL2.SRE==1

@@ -1702,19 +1702,20 @@ void syscall_trace_exit(struct pt_regs *regs)
 }
 
 /*
- * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a
- * We also take into account DIT (bit 24), which is not yet documented, and
- * treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be
- * allocated an EL0 meaning in future.
+ * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
+ * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
+ * not described in ARM DDI 0487D.a.
+ * We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may
+ * be allocated an EL0 meaning in future.
  * Userspace cannot use these until they have an architectural meaning.
  * Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
  * We also reserve IL for the kernel; SS is handled dynamically.
  */
 #define SPSR_EL1_AARCH64_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
-	 GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
+	 GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
 #define SPSR_EL1_AARCH32_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
 
 static int valid_compat_regs(struct user_pt_regs *regs)
 {
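The point of the AArch64 mask change above is that `GENMASK_ULL(20, 10)` covered bit 12 (SSBS), so userspace could not set it; splitting the range into `(20, 13)` and `(11, 10)` carves bit 12 out of RES0. This can be checked with a userspace copy of `GENMASK_ULL()` (the macro below mirrors the kernel's definition; this is a verification sketch, not kernel code):

```c
/* Userspace copy of the kernel's GENMASK_ULL() */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* The new AArch64 RES0 mask from the hunk above */
#define SPSR_EL1_AARCH64_RES0_BITS \
	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
	 GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
```

Bit 12 is absent from the mask (so userspace may set SSBS) while its neighbours 13 and 11 remain RES0.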

@@ -339,6 +339,9 @@ void __init setup_arch(char **cmdline_p)
 	smp_init_cpus();
 	smp_build_mpidr_hash();
 
+	/* Init percpu seeds for random tags after cpus are set up. */
+	kasan_init_tags();
+
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Make sure init_thread_info.ttbr0 always generates translation

@@ -252,8 +252,6 @@ void __init kasan_init(void)
 	memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
-	kasan_init_tags();
-
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");

@@ -308,15 +308,29 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
 
 long do_syscall_trace_enter(struct pt_regs *regs)
 {
-	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
-	    tracehook_report_syscall_entry(regs)) {
+	if (test_thread_flag(TIF_SYSCALL_TRACE)) {
+		int rc = tracehook_report_syscall_entry(regs);
+
 		/*
-		 * Tracing decided this syscall should not happen or the
-		 * debugger stored an invalid system call number. Skip
-		 * the system call and the system call restart handling.
+		 * As tracesys_next does not set %r28 to -ENOSYS
+		 * when %r20 is set to -1, initialize it here.
 		 */
-		regs->gr[20] = -1UL;
-		goto out;
+		regs->gr[28] = -ENOSYS;
+
+		if (rc) {
+			/*
+			 * A nonzero return code from
+			 * tracehook_report_syscall_entry() tells us
+			 * to prevent the syscall execution. Skip
+			 * the syscall and the syscall restart handling.
+			 *
+			 * Note that the tracer may also just change
+			 * regs->gr[20] to an invalid syscall number,
+			 * that is handled by tracesys_next.
+			 */
+			regs->gr[20] = -1UL;
+			return -1;
+		}
 	}
 
 	/* Do the secure computing check after ptrace. */
@@ -340,7 +354,6 @@ long do_syscall_trace_enter(struct pt_regs *regs)
 			regs->gr[24] & 0xffffffff,
 			regs->gr[23] & 0xffffffff);
 
-out:
 	/*
 	 * Sign extend the syscall number to 64bit since it may have been
 	 * modified by a compat ptrace call

@@ -1593,6 +1593,8 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
 		pnv_pci_ioda2_setup_dma_pe(phb, pe);
 #ifdef CONFIG_IOMMU_API
+		iommu_register_group(&pe->table_group,
+				pe->phb->hose->global_number, pe->pe_number);
 		pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL);
 #endif
 	}

@@ -1147,6 +1147,8 @@ static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
 			return 0;
 
 		pe = &phb->ioda.pe_array[pdn->pe_number];
+		if (!pe->table_group.group)
+			return 0;
 		iommu_add_device(&pe->table_group, dev);
 		return 0;
 	case BUS_NOTIFY_DEL_DEVICE:

@@ -297,7 +297,7 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 	scb_s->crycbd = 0;
 
 	apie_h = vcpu->arch.sie_block->eca & ECA_APIE;
-	if (!apie_h && !key_msk)
+	if (!apie_h && (!key_msk || fmt_o == CRYCB_FORMAT0))
 		return 0;
 
 	if (!crycb_addr)

@@ -1,3 +1,3 @@
 ifneq ($(CONFIG_BUILTIN_DTB_SOURCE),"")
-obj-y += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o
+obj-$(CONFIG_USE_BUILTIN_DTB) += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o
 endif

@@ -299,6 +299,7 @@ union kvm_mmu_extended_role {
 		unsigned int cr4_smap:1;
 		unsigned int cr4_smep:1;
 		unsigned int cr4_la57:1;
+		unsigned int maxphyaddr:6;
 	};
 };
 
@@ -397,6 +398,7 @@ struct kvm_mmu {
 	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			   u64 *spte, const void *pte);
 	hpa_t root_hpa;
+	gpa_t root_cr3;
 	union kvm_mmu_role mmu_role;
 	u8 root_level;
 	u8 shadow_root_level;

@@ -335,6 +335,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
 	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
+	unsigned f_la57 = 0;
 
 	/* cpuid 1.edx */
 	const u32 kvm_cpuid_1_edx_x86_features =
@@ -489,7 +490,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			// TSC_ADJUST is emulated
 			entry->ebx |= F(TSC_ADJUST);
 			entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
+			f_la57 = entry->ecx & F(LA57);
 			cpuid_mask(&entry->ecx, CPUID_7_ECX);
+			/* Set LA57 based on hardware capability. */
+			entry->ecx |= f_la57;
 			entry->ecx |= f_umip;
 			/* PKU is not yet implemented for shadow paging. */
 			if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
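The LA57 fix above uses a save/mask/restore pattern: a bit that the host-derived mask would clear is latched before masking and OR'd back afterwards. A standalone sketch of just that pattern, with `F_LA57` at an illustrative bit position (not the real CPUID.7.0.ECX layout) and `apply_mask_keep_la57` as a hypothetical helper:

```c
#include <stdint.h>

#define F_LA57 (1u << 16)	/* illustrative bit position only */

/* Apply a supported-features mask to ecx, but preserve the LA57 bit
 * exactly as the hardware reported it. */
static uint32_t apply_mask_keep_la57(uint32_t ecx, uint32_t supported_mask)
{
	uint32_t f_la57 = ecx & F_LA57;	/* latch before masking */

	ecx &= supported_mask;		/* mask may clear LA57 */
	ecx |= f_la57;			/* restore hardware capability */
	return ecx;
}
```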

@@ -3555,6 +3555,7 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 					   &invalid_list);
 			mmu->root_hpa = INVALID_PAGE;
 		}
+		mmu->root_cr3 = 0;
 	}
 
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
@@ -3610,6 +3611,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
 	} else
 		BUG();
+	vcpu->arch.mmu->root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
 
 	return 0;
 }
@@ -3618,10 +3620,11 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu_page *sp;
 	u64 pdptr, pm_mask;
-	gfn_t root_gfn;
+	gfn_t root_gfn, root_cr3;
 	int i;
 
-	root_gfn = vcpu->arch.mmu->get_cr3(vcpu) >> PAGE_SHIFT;
+	root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
+	root_gfn = root_cr3 >> PAGE_SHIFT;
 
 	if (mmu_check_root(vcpu, root_gfn))
 		return 1;
@@ -3646,7 +3649,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		++sp->root_count;
 		spin_unlock(&vcpu->kvm->mmu_lock);
 		vcpu->arch.mmu->root_hpa = root;
-		return 0;
+		goto set_root_cr3;
 	}
 
 	/*
@@ -3712,6 +3715,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->lm_root);
 	}
 
+set_root_cr3:
+	vcpu->arch.mmu->root_cr3 = root_cr3;
+
 	return 0;
 }
@@ -4163,7 +4169,7 @@ static bool cached_root_available(struct kvm_vcpu *vcpu, gpa_t new_cr3,
 	struct kvm_mmu_root_info root;
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 
-	root.cr3 = mmu->get_cr3(vcpu);
+	root.cr3 = mmu->root_cr3;
 	root.hpa = mmu->root_hpa;
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
@@ -4176,6 +4182,7 @@ static bool cached_root_available(struct kvm_vcpu *vcpu, gpa_t new_cr3,
 	}
 
 	mmu->root_hpa = root.hpa;
+	mmu->root_cr3 = root.cr3;
 
 	return i < KVM_MMU_NUM_PREV_ROOTS;
 }
@@ -4770,6 +4777,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu)
 	ext.cr4_pse = !!is_pse(vcpu);
 	ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
 	ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
+	ext.maxphyaddr = cpuid_maxphyaddr(vcpu);
 
 	ext.valid = 1;
 
@@ -5516,11 +5524,13 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
 
 	vcpu->arch.root_mmu.root_hpa = INVALID_PAGE;
+	vcpu->arch.root_mmu.root_cr3 = 0;
 	vcpu->arch.root_mmu.translate_gpa = translate_gpa;
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 		vcpu->arch.root_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
 
 	vcpu->arch.guest_mmu.root_hpa = INVALID_PAGE;
+	vcpu->arch.guest_mmu.root_cr3 = 0;
 	vcpu->arch.guest_mmu.translate_gpa = translate_gpa;
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 		vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
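The thread running through all of these hunks is that the MMU now remembers which CR3 its active root was built for (`root_cr3`), rather than re-reading the guest's live CR3, which may have changed since the root was constructed. A minimal userspace model of that idea, with simplified stand-in types (not the real KVM structures):

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t gpa_t;
typedef uint64_t hpa_t;

/* Simplified stand-in for struct kvm_mmu */
struct mmu_model {
	hpa_t root_hpa;		/* physical address of the active root */
	gpa_t root_cr3;		/* CR3 value the active root corresponds to */
};

/* Analogous to the cached_root_available() comparison: a cached root is
 * only reusable if it was built for the CR3 being switched to. */
static bool root_matches(const struct mmu_model *mmu, gpa_t new_cr3)
{
	return mmu->root_cr3 == new_cr3;
}
```

Comparing against the recorded `root_cr3` avoids the bug where the live `get_cr3()` value no longer describes the root that is actually installed.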

@@ -95,7 +95,7 @@ static void __update_runtime_status(struct device *dev, enum rpm_status status)
 static void pm_runtime_deactivate_timer(struct device *dev)
 {
 	if (dev->power.timer_expires > 0) {
-		hrtimer_cancel(&dev->power.suspend_timer);
+		hrtimer_try_to_cancel(&dev->power.suspend_timer);
 		dev->power.timer_expires = 0;
 	}
 }

@@ -144,8 +144,7 @@ static void __init at91sam9x5_pmc_setup(struct device_node *np,
 		return;
 
 	at91sam9x5_pmc = pmc_data_allocate(PMC_MAIN + 1,
-					   nck(at91sam9x5_systemck),
-					   nck(at91sam9x35_periphck), 0);
+					   nck(at91sam9x5_systemck), 31, 0);
 	if (!at91sam9x5_pmc)
 		return;
 
@@ -210,7 +209,7 @@ static void __init at91sam9x5_pmc_setup(struct device_node *np,
 	parent_names[1] = "mainck";
 	parent_names[2] = "plladivck";
 	parent_names[3] = "utmick";
-	parent_names[4] = "mck";
+	parent_names[4] = "masterck";
 	for (i = 0; i < 2; i++) {
 		char name[6];

@@ -240,7 +240,7 @@ static void __init sama5d2_pmc_setup(struct device_node *np)
 	parent_names[1] = "mainck";
 	parent_names[2] = "plladivck";
 	parent_names[3] = "utmick";
-	parent_names[4] = "mck";
+	parent_names[4] = "masterck";
 	for (i = 0; i < 3; i++) {
 		char name[6];
@@ -291,7 +291,7 @@ static void __init sama5d2_pmc_setup(struct device_node *np)
 	parent_names[1] = "mainck";
 	parent_names[2] = "plladivck";
 	parent_names[3] = "utmick";
-	parent_names[4] = "mck";
+	parent_names[4] = "masterck";
 	parent_names[5] = "audiopll_pmcck";
 	for (i = 0; i < ARRAY_SIZE(sama5d2_gck); i++) {
 		hw = at91_clk_register_generated(regmap, &pmc_pcr_lock,

@@ -207,7 +207,7 @@ static void __init sama5d4_pmc_setup(struct device_node *np)
 	parent_names[1] = "mainck";
 	parent_names[2] = "plladivck";
 	parent_names[3] = "utmick";
-	parent_names[4] = "mck";
+	parent_names[4] = "masterck";
 	for (i = 0; i < 3; i++) {
 		char name[6];

@@ -264,9 +264,9 @@ static SUNXI_CCU_GATE(ahb1_mmc1_clk,	"ahb1-mmc1",	"ahb1",
 static SUNXI_CCU_GATE(ahb1_mmc2_clk,	"ahb1-mmc2",	"ahb1",
 		      0x060, BIT(10), 0);
 static SUNXI_CCU_GATE(ahb1_mmc3_clk,	"ahb1-mmc3",	"ahb1",
-		      0x060, BIT(12), 0);
+		      0x060, BIT(11), 0);
 static SUNXI_CCU_GATE(ahb1_nand1_clk,	"ahb1-nand1",	"ahb1",
-		      0x060, BIT(13), 0);
+		      0x060, BIT(12), 0);
 static SUNXI_CCU_GATE(ahb1_nand0_clk,	"ahb1-nand0",	"ahb1",
 		      0x060, BIT(13), 0);
 static SUNXI_CCU_GATE(ahb1_sdram_clk,	"ahb1-sdram",	"ahb1",

@@ -542,7 +542,7 @@ static struct ccu_reset_map sun8i_v3s_ccu_resets[] = {
 	[RST_BUS_OHCI0]		=  { 0x2c0, BIT(29) },
 
 	[RST_BUS_VE]		=  { 0x2c4, BIT(0) },
-	[RST_BUS_TCON0]		=  { 0x2c4, BIT(3) },
+	[RST_BUS_TCON0]		=  { 0x2c4, BIT(4) },
 	[RST_BUS_CSI]		=  { 0x2c4, BIT(8) },
 	[RST_BUS_DE]		=  { 0x2c4, BIT(12) },
 	[RST_BUS_DBG]		=  { 0x2c4, BIT(31) },

@@ -187,8 +187,8 @@ static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
 	cpufreq_cooling_unregister(priv->cdev);
 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
-	kfree(priv);
 	dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
+	kfree(priv);
 
 	return 0;
 }


@@ -30,6 +30,7 @@
 #define GPIO_REG_EDGE		0xA0
 
 struct mtk_gc {
+	struct irq_chip irq_chip;
 	struct gpio_chip chip;
 	spinlock_t lock;
 	int bank;
@@ -189,13 +190,6 @@ mediatek_gpio_irq_type(struct irq_data *d, unsigned int type)
 	return 0;
 }
 
-static struct irq_chip mediatek_gpio_irq_chip = {
-	.irq_unmask = mediatek_gpio_irq_unmask,
-	.irq_mask = mediatek_gpio_irq_mask,
-	.irq_mask_ack = mediatek_gpio_irq_mask,
-	.irq_set_type = mediatek_gpio_irq_type,
-};
-
 static int
 mediatek_gpio_xlate(struct gpio_chip *chip,
 		    const struct of_phandle_args *spec, u32 *flags)
@@ -254,6 +248,13 @@ mediatek_gpio_bank_probe(struct device *dev,
 		return ret;
 	}
 
+	rg->irq_chip.name = dev_name(dev);
+	rg->irq_chip.parent_device = dev;
+	rg->irq_chip.irq_unmask = mediatek_gpio_irq_unmask;
+	rg->irq_chip.irq_mask = mediatek_gpio_irq_mask;
+	rg->irq_chip.irq_mask_ack = mediatek_gpio_irq_mask;
+	rg->irq_chip.irq_set_type = mediatek_gpio_irq_type;
+
 	if (mtk->gpio_irq) {
 		/*
 		 * Manually request the irq here instead of passing
@@ -270,14 +271,14 @@ mediatek_gpio_bank_probe(struct device *dev,
 			return ret;
 		}
 
-		ret = gpiochip_irqchip_add(&rg->chip, &mediatek_gpio_irq_chip,
+		ret = gpiochip_irqchip_add(&rg->chip, &rg->irq_chip,
 					   0, handle_simple_irq, IRQ_TYPE_NONE);
 		if (ret) {
 			dev_err(dev, "failed to add gpiochip_irqchip\n");
 			return ret;
 		}
 
-		gpiochip_set_chained_irqchip(&rg->chip, &mediatek_gpio_irq_chip,
+		gpiochip_set_chained_irqchip(&rg->chip, &rg->irq_chip,
 					     mtk->gpio_irq, NULL);
 	}
 
@@ -310,7 +311,6 @@ mediatek_gpio_probe(struct platform_device *pdev)
 	mtk->gpio_irq = irq_of_parse_and_map(np, 0);
 	mtk->dev = dev;
 	platform_set_drvdata(pdev, mtk);
-	mediatek_gpio_irq_chip.name = dev_name(dev);
 
 	for (i = 0; i < MTK_BANK_CNT; i++) {
 		ret = mediatek_gpio_bank_probe(dev, np, i);


@@ -245,6 +245,7 @@ static bool pxa_gpio_has_pinctrl(void)
 {
 	switch (gpio_type) {
 	case PXA3XX_GPIO:
+	case MMP2_GPIO:
 		return false;
 
 	default:


@@ -212,6 +212,7 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
 	}
 
 	if (amdgpu_device_is_px(dev)) {
+		dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP);
 		pm_runtime_use_autosuspend(dev->dev);
 		pm_runtime_set_autosuspend_delay(dev->dev, 5000);
 		pm_runtime_set_active(dev->dev);


@@ -638,12 +638,14 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
 	struct ttm_bo_global *glob = adev->mman.bdev.glob;
 	struct amdgpu_vm_bo_base *bo_base;
 
+#if 0
 	if (vm->bulk_moveable) {
 		spin_lock(&glob->lru_lock);
 		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
 		spin_unlock(&glob->lru_lock);
 		return;
 	}
+#endif
 
 	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));


@@ -128,7 +128,7 @@ static const struct soc15_reg_golden golden_settings_sdma0_4_2_init[] = {
 static const struct soc15_reg_golden golden_settings_sdma0_4_2[] =
 {
-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831d07),
+	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CLK_CTRL, 0xffffffff, 0x3f000100),
 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0000773f, 0x00004002),
 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002),
@@ -158,7 +158,7 @@ static const struct soc15_reg_golden golden_settings_sdma0_4_2[] =
 };
 
 static const struct soc15_reg_golden golden_settings_sdma1_4_2[] = {
-	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07),
+	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100),
 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0000773f, 0x00004002),
 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002),


@@ -786,12 +786,13 @@ static int dm_suspend(void *handle)
 	struct amdgpu_display_manager *dm = &adev->dm;
 	int ret = 0;
 
-	WARN_ON(adev->dm.cached_state);
-	adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev);
-
 	s3_handle_mst(adev->ddev, true);
 
 	amdgpu_dm_irq_suspend(adev);
 
+	WARN_ON(adev->dm.cached_state);
+	adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev);
+
 	dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);


@@ -662,6 +662,11 @@ static void dce11_update_clocks(struct clk_mgr *clk_mgr,
 {
 	struct dce_clk_mgr *clk_mgr_dce = TO_DCE_CLK_MGR(clk_mgr);
 	struct dm_pp_power_level_change_request level_change_req;
+	int patched_disp_clk = context->bw.dce.dispclk_khz;
+
+	/*TODO: W/A for dal3 linux, investigate why this works */
+	if (!clk_mgr_dce->dfs_bypass_active)
+		patched_disp_clk = patched_disp_clk * 115 / 100;
 
 	level_change_req.power_level = dce_get_required_clocks_state(clk_mgr, context);
 	/* get max clock state from PPLIB */
@@ -671,9 +676,9 @@ static void dce11_update_clocks(struct clk_mgr *clk_mgr,
 		clk_mgr_dce->cur_min_clks_state = level_change_req.power_level;
 	}
 
-	if (should_set_clock(safe_to_lower, context->bw.dce.dispclk_khz, clk_mgr->clks.dispclk_khz)) {
-		context->bw.dce.dispclk_khz = dce_set_clock(clk_mgr, context->bw.dce.dispclk_khz);
-		clk_mgr->clks.dispclk_khz = context->bw.dce.dispclk_khz;
+	if (should_set_clock(safe_to_lower, patched_disp_clk, clk_mgr->clks.dispclk_khz)) {
+		context->bw.dce.dispclk_khz = dce_set_clock(clk_mgr, patched_disp_clk);
+		clk_mgr->clks.dispclk_khz = patched_disp_clk;
 	}
 	dce11_pplib_apply_display_requirements(clk_mgr->ctx->dc, context);
 }


@@ -37,6 +37,10 @@ void dce100_prepare_bandwidth(
 		struct dc *dc,
 		struct dc_state *context);
 
+void dce100_optimize_bandwidth(
+		struct dc *dc,
+		struct dc_state *context);
+
 bool dce100_enable_display_power_gating(struct dc *dc, uint8_t controller_id,
 					struct dc_bios *dcb,
 					enum pipe_gating_control power_gating);


@@ -77,6 +77,6 @@ void dce80_hw_sequencer_construct(struct dc *dc)
 	dc->hwss.enable_display_power_gating = dce100_enable_display_power_gating;
 	dc->hwss.pipe_control_lock = dce_pipe_control_lock;
 	dc->hwss.prepare_bandwidth = dce100_prepare_bandwidth;
-	dc->hwss.optimize_bandwidth = dce100_prepare_bandwidth;
+	dc->hwss.optimize_bandwidth = dce100_optimize_bandwidth;
 }


@@ -790,9 +790,22 @@ bool dce80_validate_bandwidth(
 	struct dc *dc,
 	struct dc_state *context)
 {
-	/* TODO implement when needed but for now hardcode max value*/
-	context->bw.dce.dispclk_khz = 681000;
-	context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ;
+	int i;
+	bool at_least_one_pipe = false;
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		if (context->res_ctx.pipe_ctx[i].stream)
+			at_least_one_pipe = true;
+	}
+
+	if (at_least_one_pipe) {
+		/* TODO implement when needed but for now hardcode max value*/
+		context->bw.dce.dispclk_khz = 681000;
+		context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ;
+	} else {
+		context->bw.dce.dispclk_khz = 0;
+		context->bw.dce.yclk_khz = 0;
+	}
 
 	return true;
 }


@@ -2658,8 +2658,8 @@ static void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
 		.mirror = pipe_ctx->plane_state->horizontal_mirror
 	};
 
-	pos_cpy.x -= pipe_ctx->plane_state->dst_rect.x;
-	pos_cpy.y -= pipe_ctx->plane_state->dst_rect.y;
+	pos_cpy.x_hotspot += pipe_ctx->plane_state->dst_rect.x;
+	pos_cpy.y_hotspot += pipe_ctx->plane_state->dst_rect.y;
 
 	if (pipe_ctx->plane_state->address.type
 			== PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)


@@ -336,8 +336,8 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
 				    bool *enabled, int width, int height)
 {
 	struct drm_i915_private *dev_priv = to_i915(fb_helper->dev);
-	unsigned long conn_configured, conn_seq, mask;
 	unsigned int count = min(fb_helper->connector_count, BITS_PER_LONG);
+	unsigned long conn_configured, conn_seq;
 	int i, j;
 	bool *save_enabled;
 	bool fallback = true, ret = true;
@@ -355,10 +355,9 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
 	drm_modeset_backoff(&ctx);
 
 	memcpy(save_enabled, enabled, count);
-	mask = GENMASK(count - 1, 0);
+	conn_seq = GENMASK(count - 1, 0);
 	conn_configured = 0;
 retry:
-	conn_seq = conn_configured;
 	for (i = 0; i < count; i++) {
 		struct drm_fb_helper_connector *fb_conn;
 		struct drm_connector *connector;
@@ -371,7 +370,8 @@ retry:
 		if (conn_configured & BIT(i))
 			continue;
 
-		if (conn_seq == 0 && !connector->has_tile)
+		/* First pass, only consider tiled connectors */
+		if (conn_seq == GENMASK(count - 1, 0) && !connector->has_tile)
 			continue;
 
 		if (connector->status == connector_status_connected)
@@ -475,8 +475,10 @@ retry:
 		conn_configured |= BIT(i);
 	}
 
-	if ((conn_configured & mask) != mask && conn_configured != conn_seq)
+	if (conn_configured != conn_seq) { /* repeat until no more are found */
+		conn_seq = conn_configured;
 		goto retry;
+	}
 
 	/*
 	 * If the BIOS didn't enable everything it could, fall back to have the


@@ -172,6 +172,7 @@ int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags)
 	}
 
 	if (radeon_is_px(dev)) {
+		dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP);
 		pm_runtime_use_autosuspend(dev->dev);
 		pm_runtime_set_autosuspend_delay(dev->dev, 5000);
 		pm_runtime_set_active(dev->dev);


@@ -783,6 +783,7 @@ void c4iw_init_dev_ucontext(struct c4iw_rdev *rdev,
 static int c4iw_rdev_open(struct c4iw_rdev *rdev)
 {
 	int err;
+	unsigned int factor;
 
 	c4iw_init_dev_ucontext(rdev, &rdev->uctx);
 
@@ -806,8 +807,18 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev)
 		return -EINVAL;
 	}
 
-	rdev->qpmask = rdev->lldi.udb_density - 1;
-	rdev->cqmask = rdev->lldi.ucq_density - 1;
+	/* This implementation requires a sge_host_page_size <= PAGE_SIZE. */
+	if (rdev->lldi.sge_host_page_size > PAGE_SIZE) {
+		pr_err("%s: unsupported sge host page size %u\n",
+		       pci_name(rdev->lldi.pdev),
+		       rdev->lldi.sge_host_page_size);
+		return -EINVAL;
+	}
+
+	factor = PAGE_SIZE / rdev->lldi.sge_host_page_size;
+	rdev->qpmask = (rdev->lldi.udb_density * factor) - 1;
+	rdev->cqmask = (rdev->lldi.ucq_density * factor) - 1;
+
 	pr_debug("dev %s stag start 0x%0x size 0x%0x num stags %d pbl start 0x%0x size 0x%0x rq start 0x%0x size 0x%0x qp qid start %u size %u cq qid start %u size %u srq size %u\n",
 		 pci_name(rdev->lldi.pdev), rdev->lldi.vr->stag.start,
 		 rdev->lldi.vr->stag.size, c4iw_num_stags(rdev),


@@ -3032,7 +3032,6 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
 {
 	struct srp_target_port *target = host_to_target(scmnd->device->host);
 	struct srp_rdma_ch *ch;
-	int i, j;
 	u8 status;
 
 	shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
@@ -3044,15 +3043,6 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
 	if (status)
 		return FAILED;
 
-	for (i = 0; i < target->ch_count; i++) {
-		ch = &target->ch[i];
-		for (j = 0; j < target->req_ring_size; ++j) {
-			struct srp_request *req = &ch->req_ring[j];
-
-			srp_finish_req(ch, req, scmnd->device, DID_RESET << 16);
-		}
-	}
-
 	return SUCCESS;
 }


@@ -212,7 +212,7 @@ static int powernv_flash_set_driver_info(struct device *dev,
 	 * Going to have to check what details I need to set and how to
 	 * get them
 	 */
-	mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node);
+	mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
 	mtd->type = MTD_NORFLASH;
 	mtd->flags = MTD_WRITEABLE;
 	mtd->size = size;


@@ -507,6 +507,7 @@ static int mtd_nvmem_add(struct mtd_info *mtd)
 {
 	struct nvmem_config config = {};
 
+	config.id = -1;
 	config.dev = &mtd->dev;
 	config.name = mtd->name;
 	config.owner = THIS_MODULE;


@@ -1183,29 +1183,22 @@ static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb)
 		}
 	}
 
-	/* Link-local multicast packets should be passed to the
-	 * stack on the link they arrive as well as pass them to the
-	 * bond-master device. These packets are mostly usable when
-	 * stack receives it with the link on which they arrive
-	 * (e.g. LLDP) they also must be available on master. Some of
-	 * the use cases include (but are not limited to): LLDP agents
-	 * that must be able to operate both on enslaved interfaces as
-	 * well as on bonds themselves; linux bridges that must be able
-	 * to process/pass BPDUs from attached bonds when any kind of
-	 * STP version is enabled on the network.
+	/*
+	 * For packets determined by bond_should_deliver_exact_match() call to
+	 * be suppressed we want to make an exception for link-local packets.
+	 * This is necessary for e.g. LLDP daemons to be able to monitor
+	 * inactive slave links without being forced to bind to them
+	 * explicitly.
+	 *
+	 * At the same time, packets that are passed to the bonding master
+	 * (including link-local ones) can have their originating interface
+	 * determined via PACKET_ORIGDEV socket option.
 	 */
-	if (is_link_local_ether_addr(eth_hdr(skb)->h_dest)) {
-		struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
-
-		if (nskb) {
-			nskb->dev = bond->dev;
-			nskb->queue_mapping = 0;
-			netif_rx(nskb);
-		}
-		return RX_HANDLER_PASS;
-	}
-
-	if (bond_should_deliver_exact_match(skb, slave, bond))
+	if (bond_should_deliver_exact_match(skb, slave, bond)) {
+		if (is_link_local_ether_addr(eth_hdr(skb)->h_dest))
+			return RX_HANDLER_PASS;
 		return RX_HANDLER_EXACT;
+	}
 
 	skb->dev = bond->dev;


@@ -1335,13 +1335,11 @@ static int atl2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct net_device *netdev;
 	struct atl2_adapter *adapter;
-	static int cards_found;
+	static int cards_found = 0;
 	unsigned long mmio_start;
 	int mmio_len;
 	int err;
 
-	cards_found = 0;
-
 	err = pci_enable_device(pdev);
 	if (err)
 		return err;


@@ -3907,7 +3907,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
 			if (len)
 				break;
 			/* on first few passes, just barely sleep */
-			if (i < DFLT_HWRM_CMD_TIMEOUT)
+			if (i < HWRM_SHORT_TIMEOUT_COUNTER)
 				usleep_range(HWRM_SHORT_MIN_TIMEOUT,
 					     HWRM_SHORT_MAX_TIMEOUT);
 			else
@@ -3930,7 +3930,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
 			dma_rmb();
 			if (*valid)
 				break;
-			udelay(1);
+			usleep_range(1, 5);
 		}
 
 		if (j >= HWRM_VALID_BIT_DELAY_USEC) {


@@ -581,7 +581,7 @@ struct nqe_cn {
 	 (HWRM_SHORT_TIMEOUT_COUNTER * HWRM_SHORT_MIN_TIMEOUT +	\
 	  ((n) - HWRM_SHORT_TIMEOUT_COUNTER) * HWRM_MIN_TIMEOUT))
 
-#define HWRM_VALID_BIT_DELAY_USEC	20
+#define HWRM_VALID_BIT_DELAY_USEC	150
 
 #define BNXT_HWRM_CHNL_CHIMP	0
 #define BNXT_HWRM_CHNL_KONG	1


@@ -271,7 +271,7 @@ struct xcast_addr_list {
 };
 
 struct nicvf_work {
-	struct delayed_work	work;
+	struct work_struct	work;
 	u8			mode;
 	struct xcast_addr_list	*mc;
 };
@@ -327,7 +327,11 @@ struct nicvf {
 	struct nicvf_work	rx_mode_work;
 	/* spinlock to protect workqueue arguments from concurrent access */
 	spinlock_t		rx_mode_wq_lock;
+	/* workqueue for handling kernel ndo_set_rx_mode() calls */
+	struct workqueue_struct	*nicvf_rx_mode_wq;
+	/* mutex to protect VF's mailbox contents from concurrent access */
+	struct mutex		rx_mode_mtx;
+	struct delayed_work	link_change_work;
 	/* PTP timestamp */
 	struct cavium_ptp	*ptp_clock;
 	/* Inbound timestamping is on */
@@ -575,10 +579,8 @@ struct set_ptp {
 
 struct xcast {
 	u8	msg;
-	union {
-		u8	mode;
-		u64	mac;
-	} data;
+	u8	mode;
+	u64	mac:48;
 };
 
 /* 128 bit shared memory between PF and each VF */


@@ -57,14 +57,8 @@ struct nicpf {
 #define	NIC_GET_BGX_FROM_VF_LMAC_MAP(map)	((map >> 4) & 0xF)
 #define	NIC_GET_LMAC_FROM_VF_LMAC_MAP(map)	(map & 0xF)
 	u8			*vf_lmac_map;
-	struct delayed_work	dwork;
-	struct workqueue_struct	*check_link;
-	u8			*link;
-	u8			*duplex;
-	u32			*speed;
 	u16			cpi_base[MAX_NUM_VFS_SUPPORTED];
 	u16			rssi_base[MAX_NUM_VFS_SUPPORTED];
-	bool			mbx_lock[MAX_NUM_VFS_SUPPORTED];
 
 	/* MSI-X */
 	u8			num_vec;
@@ -929,6 +923,35 @@ static void nic_config_timestamp(struct nicpf *nic, int vf, struct set_ptp *ptp)
 	nic_reg_write(nic, NIC_PF_PKIND_0_15_CFG | (pkind_idx << 3), pkind_val);
 }
 
+/* Get BGX LMAC link status and update corresponding VF
+ * if there is a change, valid only if internal L2 switch
+ * is not present otherwise VF link is always treated as up
+ */
+static void nic_link_status_get(struct nicpf *nic, u8 vf)
+{
+	union nic_mbx mbx = {};
+	struct bgx_link_status link;
+	u8 bgx, lmac;
+
+	mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
+
+	/* Get BGX, LMAC indices for the VF */
+	bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+
+	/* Get interface link status */
+	bgx_get_lmac_link_state(nic->node, bgx, lmac, &link);
+
+	/* Send a mbox message to VF with current link status */
+	mbx.link_status.link_up = link.link_up;
+	mbx.link_status.duplex = link.duplex;
+	mbx.link_status.speed = link.speed;
+	mbx.link_status.mac_type = link.mac_type;
+
+	/* reply with link status */
+	nic_send_msg_to_vf(nic, vf, &mbx);
+}
+
 /* Interrupt handler to handle mailbox messages from VFs */
 static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 {
@@ -941,8 +964,6 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 	int i;
 	int ret = 0;
 
-	nic->mbx_lock[vf] = true;
-
 	mbx_addr = nic_get_mbx_addr(vf);
 	mbx_data = (u64 *)&mbx;
 
@@ -957,12 +978,7 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 	switch (mbx.msg.msg) {
 	case NIC_MBOX_MSG_READY:
 		nic_mbx_send_ready(nic, vf);
-		if (vf < nic->num_vf_en) {
-			nic->link[vf] = 0;
-			nic->duplex[vf] = 0;
-			nic->speed[vf] = 0;
-		}
-		goto unlock;
+		return;
 	case NIC_MBOX_MSG_QS_CFG:
 		reg_addr = NIC_PF_QSET_0_127_CFG |
 			   (mbx.qs.num << NIC_QS_ID_SHIFT);
@@ -1031,7 +1047,7 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 		break;
 	case NIC_MBOX_MSG_RSS_SIZE:
 		nic_send_rss_size(nic, vf);
-		goto unlock;
+		return;
 	case NIC_MBOX_MSG_RSS_CFG:
 	case NIC_MBOX_MSG_RSS_CFG_CONT:
 		nic_config_rss(nic, &mbx.rss_cfg);
@@ -1039,7 +1055,7 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 	case NIC_MBOX_MSG_CFG_DONE:
 		/* Last message of VF config msg sequence */
 		nic_enable_vf(nic, vf, true);
-		goto unlock;
+		break;
 	case NIC_MBOX_MSG_SHUTDOWN:
 		/* First msg in VF teardown sequence */
 		if (vf >= nic->num_vf_en)
@@ -1049,19 +1065,19 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 		break;
 	case NIC_MBOX_MSG_ALLOC_SQS:
 		nic_alloc_sqs(nic, &mbx.sqs_alloc);
-		goto unlock;
+		return;
 	case NIC_MBOX_MSG_NICVF_PTR:
 		nic->nicvf[vf] = mbx.nicvf.nicvf;
 		break;
 	case NIC_MBOX_MSG_PNICVF_PTR:
 		nic_send_pnicvf(nic, vf);
-		goto unlock;
+		return;
 	case NIC_MBOX_MSG_SNICVF_PTR:
 		nic_send_snicvf(nic, &mbx.nicvf);
-		goto unlock;
+		return;
 	case NIC_MBOX_MSG_BGX_STATS:
 		nic_get_bgx_stats(nic, &mbx.bgx_stats);
-		goto unlock;
+		return;
 	case NIC_MBOX_MSG_LOOPBACK:
 		ret = nic_config_loopback(nic, &mbx.lbk);
 		break;
@@ -1070,7 +1086,7 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 		break;
 	case NIC_MBOX_MSG_PFC:
 		nic_pause_frame(nic, vf, &mbx.pfc);
-		goto unlock;
+		return;
 	case NIC_MBOX_MSG_PTP_CFG:
 		nic_config_timestamp(nic, vf, &mbx.ptp);
 		break;
@@ -1094,7 +1110,7 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
 		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
 		bgx_set_dmac_cam_filter(nic->node, bgx, lmac,
-					mbx.xcast.data.mac,
+					mbx.xcast.mac,
 					vf < NIC_VF_PER_MBX_REG ? vf :
 					vf - NIC_VF_PER_MBX_REG);
 		break;
@@ -1106,8 +1122,15 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 		}
 		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
 		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		bgx_set_xcast_mode(nic->node, bgx, lmac, mbx.xcast.data.mode);
+		bgx_set_xcast_mode(nic->node, bgx, lmac, mbx.xcast.mode);
 		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		if (vf >= nic->num_vf_en) {
+			ret = -1; /* NACK */
+			break;
+		}
+		nic_link_status_get(nic, vf);
+		return;
 	default:
 		dev_err(&nic->pdev->dev,
 			"Invalid msg from VF%d, msg 0x%x\n", vf, mbx.msg.msg);
@@ -1121,8 +1144,6 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 			mbx.msg.msg, vf);
 		nic_mbx_send_nack(nic, vf);
 	}
-unlock:
-	nic->mbx_lock[vf] = false;
 }
 
 static irqreturn_t nic_mbx_intr_handler(int irq, void *nic_irq)
@@ -1270,52 +1291,6 @@ static int nic_sriov_init(struct pci_dev *pdev, struct nicpf *nic)
 	return 0;
 }
 
-/* Poll for BGX LMAC link status and update corresponding VF
- * if there is a change, valid only if internal L2 switch
- * is not present otherwise VF link is always treated as up
- */
-static void nic_poll_for_link(struct work_struct *work)
-{
-	union nic_mbx mbx = {};
-	struct nicpf *nic;
-	struct bgx_link_status link;
-	u8 vf, bgx, lmac;
-
-	nic = container_of(work, struct nicpf, dwork.work);
-
-	mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
-
-	for (vf = 0; vf < nic->num_vf_en; vf++) {
-		/* Poll only if VF is UP */
-		if (!nic->vf_enabled[vf])
-			continue;
-
-		/* Get BGX, LMAC indices for the VF */
-		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-
-		/* Get interface link status */
-		bgx_get_lmac_link_state(nic->node, bgx, lmac, &link);
-
-		/* Inform VF only if link status changed */
-		if (nic->link[vf] == link.link_up)
-			continue;
-
-		if (!nic->mbx_lock[vf]) {
-			nic->link[vf] = link.link_up;
-			nic->duplex[vf] = link.duplex;
-			nic->speed[vf] = link.speed;
-
-			/* Send a mbox message to VF with current link status */
-			mbx.link_status.link_up = link.link_up;
-			mbx.link_status.duplex = link.duplex;
-			mbx.link_status.speed = link.speed;
-			mbx.link_status.mac_type = link.mac_type;
-			nic_send_msg_to_vf(nic, vf, &mbx);
-		}
-	}
-	queue_delayed_work(nic->check_link, &nic->dwork, HZ * 2);
-}
-
 static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct device *dev = &pdev->dev;
@@ -1384,18 +1359,6 @@ static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (!nic->vf_lmac_map)
 		goto err_release_regions;
 
-	nic->link = devm_kmalloc_array(dev, max_lmac, sizeof(u8), GFP_KERNEL);
-	if (!nic->link)
-		goto err_release_regions;
-
-	nic->duplex = devm_kmalloc_array(dev, max_lmac, sizeof(u8), GFP_KERNEL);
-	if (!nic->duplex)
-		goto err_release_regions;
-
-	nic->speed = devm_kmalloc_array(dev, max_lmac, sizeof(u32), GFP_KERNEL);
-	if (!nic->speed)
-		goto err_release_regions;
-
 	/* Initialize hardware */
 	nic_init_hw(nic);
 
@@ -1411,22 +1374,8 @@ static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (err)
 		goto err_unregister_interrupts;
 
-	/* Register a physical link status poll fn() */
-	nic->check_link = alloc_workqueue("check_link_status",
-					  WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
-	if (!nic->check_link) {
-		err = -ENOMEM;
-		goto err_disable_sriov;
-	}
-
-	INIT_DELAYED_WORK(&nic->dwork, nic_poll_for_link);
-	queue_delayed_work(nic->check_link, &nic->dwork, 0);
-
 	return 0;
 
-err_disable_sriov:
-	if (nic->flags & NIC_SRIOV_ENABLED)
-		pci_disable_sriov(pdev);
-
 err_unregister_interrupts:
 	nic_unregister_interrupts(nic);
 err_release_regions:
@@ -1447,12 +1396,6 @@ static void nic_remove(struct pci_dev *pdev)
 	if (nic->flags & NIC_SRIOV_ENABLED)
 		pci_disable_sriov(pdev);
 
-	if (nic->check_link) {
-		/* Destroy work Queue */
-		cancel_delayed_work_sync(&nic->dwork);
-		destroy_workqueue(nic->check_link);
-	}
-
 	nic_unregister_interrupts(nic);
 	pci_release_regions(pdev);


@@ -68,9 +68,6 @@ module_param(cpi_alg, int, 0444);
 MODULE_PARM_DESC(cpi_alg,
 		 "PFC algorithm (0=none, 1=VLAN, 2=VLAN16, 3=IP Diffserv)");
 
-/* workqueue for handling kernel ndo_set_rx_mode() calls */
-static struct workqueue_struct *nicvf_rx_mode_wq;
-
 static inline u8 nicvf_netdev_qidx(struct nicvf *nic, u8 qidx)
 {
 	if (nic->sqs_mode)
@@ -127,6 +124,9 @@ int nicvf_send_msg_to_pf(struct nicvf *nic, union nic_mbx *mbx)
 {
 	int timeout = NIC_MBOX_MSG_TIMEOUT;
 	int sleep = 10;
+	int ret = 0;
+
+	mutex_lock(&nic->rx_mode_mtx);
 
 	nic->pf_acked = false;
 	nic->pf_nacked = false;
@@ -139,7 +139,8 @@ int nicvf_send_msg_to_pf(struct nicvf *nic, union nic_mbx *mbx)
 			netdev_err(nic->netdev,
 				   "PF NACK to mbox msg 0x%02x from VF%d\n",
 				   (mbx->msg.msg & 0xFF), nic->vf_id);
-			return -EINVAL;
+			ret = -EINVAL;
+			break;
 		}
 		msleep(sleep);
 		if (nic->pf_acked)
@@ -149,10 +150,12 @@ int nicvf_send_msg_to_pf(struct nicvf *nic, union nic_mbx *mbx)
 			netdev_err(nic->netdev,
 				   "PF didn't ACK to mbox msg 0x%02x from VF%d\n",
 				   (mbx->msg.msg & 0xFF), nic->vf_id);
-			return -EBUSY;
+			ret = -EBUSY;
+			break;
 		}
 	}
-	return 0;
+	mutex_unlock(&nic->rx_mode_mtx);
+	return ret;
 }
 
 /* Checks if VF is able to comminicate with PF
@@ -172,6 +175,17 @@ static int nicvf_check_pf_ready(struct nicvf *nic)
 	return 1;
 }
 
+static void nicvf_send_cfg_done(struct nicvf *nic)
+{
+	union nic_mbx mbx = {};
+
+	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
+	if (nicvf_send_msg_to_pf(nic, &mbx)) {
+		netdev_err(nic->netdev,
+			   "PF didn't respond to CFG DONE msg\n");
+	}
+}
+
 static void nicvf_read_bgx_stats(struct nicvf *nic, struct bgx_stats_msg *bgx)
 {
 	if (bgx->rx)
@@ -228,21 +242,24 @@ static void nicvf_handle_mbx_intr(struct nicvf *nic)
 		break;
 	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
 		nic->pf_acked = true;
-		nic->link_up = mbx.link_status.link_up;
-		nic->duplex = mbx.link_status.duplex;
-		nic->speed = mbx.link_status.speed;
-		nic->mac_type = mbx.link_status.mac_type;
-		if (nic->link_up) {
-			netdev_info(nic->netdev, "Link is Up %d Mbps %s duplex\n",
-				    nic->speed,
-				    nic->duplex == DUPLEX_FULL ?
-				    "Full" : "Half");
-			netif_carrier_on(nic->netdev);
-			netif_tx_start_all_queues(nic->netdev);
-		} else {
-			netdev_info(nic->netdev, "Link is Down\n");
-			netif_carrier_off(nic->netdev);
-			netif_tx_stop_all_queues(nic->netdev);
+		if (nic->link_up != mbx.link_status.link_up) {
+			nic->link_up = mbx.link_status.link_up;
+			nic->duplex = mbx.link_status.duplex;
+			nic->speed = mbx.link_status.speed;
+			nic->mac_type = mbx.link_status.mac_type;
+			if (nic->link_up) {
+				netdev_info(nic->netdev,
+					    "Link is Up %d Mbps %s duplex\n",
+					    nic->speed,
+					    nic->duplex == DUPLEX_FULL ?
+					    "Full" : "Half");
+				netif_carrier_on(nic->netdev);
+				netif_tx_start_all_queues(nic->netdev);
+			} else {
+				netdev_info(nic->netdev, "Link is Down\n");
+				netif_carrier_off(nic->netdev);
+				netif_tx_stop_all_queues(nic->netdev);
+			}
 		}
 		break;
 	case NIC_MBOX_MSG_ALLOC_SQS:
@@ -1311,6 +1328,11 @@ int nicvf_stop(struct net_device *netdev)
 	struct nicvf_cq_poll *cq_poll = NULL;
 	union nic_mbx mbx = {};
 
+	cancel_delayed_work_sync(&nic->link_change_work);
+
+	/* wait till all queued set_rx_mode tasks completes */
+	drain_workqueue(nic->nicvf_rx_mode_wq);
+
 	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
 	nicvf_send_msg_to_pf(nic, &mbx);
@@ -1410,13 +1432,27 @@ static int nicvf_update_hw_max_frs(struct nicvf *nic, int mtu)
 	return nicvf_send_msg_to_pf(nic, &mbx);
 }
 
+static void nicvf_link_status_check_task(struct work_struct *work_arg)
+{
+	struct nicvf *nic = container_of(work_arg,
+					 struct nicvf,
+					 link_change_work.work);
+	union nic_mbx mbx = {};
+
+	mbx.msg.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
+	nicvf_send_msg_to_pf(nic, &mbx);
+	queue_delayed_work(nic->nicvf_rx_mode_wq,
+			   &nic->link_change_work, 2 * HZ);
+}
+
 int nicvf_open(struct net_device *netdev)
 {
 	int cpu, err, qidx;
 	struct nicvf *nic = netdev_priv(netdev);
 	struct queue_set *qs = nic->qs;
 	struct nicvf_cq_poll *cq_poll = NULL;
-	union nic_mbx mbx = {};
+
+	/* wait till all queued set_rx_mode tasks completes if any */
+	drain_workqueue(nic->nicvf_rx_mode_wq);
 
 	netif_carrier_off(netdev);
@@ -1512,8 +1548,12 @@ int nicvf_open(struct net_device *netdev)
 		nicvf_enable_intr(nic, NICVF_INTR_RBDR, qidx);
 
 	/* Send VF config done msg to PF */
-	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
-	nicvf_write_to_mbx(nic, &mbx);
+	nicvf_send_cfg_done(nic);
+
+	INIT_DELAYED_WORK(&nic->link_change_work,
+			  nicvf_link_status_check_task);
+	queue_delayed_work(nic->nicvf_rx_mode_wq,
+			   &nic->link_change_work, 0);
 
 	return 0;
 cleanup:
@@ -1941,15 +1981,17 @@ static void __nicvf_set_rx_mode_task(u8 mode, struct xcast_addr_list *mc_addrs,
 
 	/* flush DMAC filters and reset RX mode */
 	mbx.xcast.msg = NIC_MBOX_MSG_RESET_XCAST;
-	nicvf_send_msg_to_pf(nic, &mbx);
+	if (nicvf_send_msg_to_pf(nic, &mbx) < 0)
+		goto free_mc;
 
 	if (mode & BGX_XCAST_MCAST_FILTER) {
 		/* once enabling filtering, we need to signal to PF to add
 		 * its' own LMAC to the filter to accept packets for it.
 		 */
 		mbx.xcast.msg = NIC_MBOX_MSG_ADD_MCAST;
-		mbx.xcast.data.mac = 0;
-		nicvf_send_msg_to_pf(nic, &mbx);
+		mbx.xcast.mac = 0;
+		if (nicvf_send_msg_to_pf(nic, &mbx) < 0)
+			goto free_mc;
 	}
 
 	/* check if we have any specific MACs to be added to PF DMAC filter */
@@ -1957,23 +1999,25 @@ static void __nicvf_set_rx_mode_task(u8 mode, struct xcast_addr_list *mc_addrs,
 		/* now go through kernel list of MACs and add them one by one */
 		for (idx = 0; idx < mc_addrs->count; idx++) {
 			mbx.xcast.msg = NIC_MBOX_MSG_ADD_MCAST;
-			mbx.xcast.data.mac = mc_addrs->mc[idx];
-			nicvf_send_msg_to_pf(nic, &mbx);
+			mbx.xcast.mac = mc_addrs->mc[idx];
+			if (nicvf_send_msg_to_pf(nic, &mbx) < 0)
+				goto free_mc;
 		}
-		kfree(mc_addrs);
 	}
 
 	/* and finally set rx mode for PF accordingly */
 	mbx.xcast.msg = NIC_MBOX_MSG_SET_XCAST;
-	mbx.xcast.data.mode = mode;
+	mbx.xcast.mode = mode;
 	nicvf_send_msg_to_pf(nic, &mbx);
+free_mc:
+	kfree(mc_addrs);
 }
 
 static void nicvf_set_rx_mode_task(struct work_struct *work_arg)
 {
 	struct nicvf_work *vf_work = container_of(work_arg, struct nicvf_work,
-						  work.work);
+						  work);
 	struct nicvf *nic = container_of(vf_work, struct nicvf, rx_mode_work);
 	u8 mode;
 	struct xcast_addr_list *mc;
@@ -2030,7 +2074,7 @@ static void nicvf_set_rx_mode(struct net_device *netdev)
 	kfree(nic->rx_mode_work.mc);
 	nic->rx_mode_work.mc = mc_list;
 	nic->rx_mode_work.mode = mode;
-	queue_delayed_work(nicvf_rx_mode_wq, &nic->rx_mode_work.work, 0);
+	queue_work(nic->nicvf_rx_mode_wq, &nic->rx_mode_work.work);
 	spin_unlock(&nic->rx_mode_wq_lock);
 }
 
@@ -2187,8 +2231,12 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	INIT_WORK(&nic->reset_task, nicvf_reset_task);
 
-	INIT_DELAYED_WORK(&nic->rx_mode_work.work, nicvf_set_rx_mode_task);
+	nic->nicvf_rx_mode_wq = alloc_ordered_workqueue("nicvf_rx_mode_wq_VF%d",
+							WQ_MEM_RECLAIM,
+							nic->vf_id);
+	INIT_WORK(&nic->rx_mode_work.work, nicvf_set_rx_mode_task);
 	spin_lock_init(&nic->rx_mode_wq_lock);
+	mutex_init(&nic->rx_mode_mtx);
 
 	err = register_netdev(netdev);
 	if (err) {
@@ -2228,13 +2276,15 @@ static void nicvf_remove(struct pci_dev *pdev)
 	nic = netdev_priv(netdev);
 	pnetdev = nic->pnicvf->netdev;
 
-	cancel_delayed_work_sync(&nic->rx_mode_work.work);
-
 	/* Check if this Qset is assigned to different VF.
 	 * If yes, clean primary and all secondary Qsets.
 	 */
 	if (pnetdev && (pnetdev->reg_state == NETREG_REGISTERED))
 		unregister_netdev(pnetdev);
+	if (nic->nicvf_rx_mode_wq) {
+		destroy_workqueue(nic->nicvf_rx_mode_wq);
+		nic->nicvf_rx_mode_wq = NULL;
+	}
 	nicvf_unregister_interrupts(nic);
 	pci_set_drvdata(pdev, NULL);
 	if (nic->drv_stats)
@@ -2261,17 +2311,11 @@ static struct pci_driver nicvf_driver = {
 static int __init nicvf_init_module(void)
 {
 	pr_info("%s, ver %s\n", DRV_NAME, DRV_VERSION);
-	nicvf_rx_mode_wq = alloc_ordered_workqueue("nicvf_generic",
-						   WQ_MEM_RECLAIM);
 	return pci_register_driver(&nicvf_driver);
}
 
 static void __exit nicvf_cleanup_module(void)
 {
-	if (nicvf_rx_mode_wq) {
-		destroy_workqueue(nicvf_rx_mode_wq);
-		nicvf_rx_mode_wq = NULL;
-	}
 	pci_unregister_driver(&nicvf_driver);
 }
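The `nicvf_send_msg_to_pf()` rework above serializes the whole mailbox exchange under the new `rx_mode_mtx` and replaces early `return`s with `ret = …; break;` so the unlock runs on every path. A minimal userspace sketch of that single-exit locking pattern (hypothetical names; a pthread mutex standing in for the kernel mutex):

```c
#include <pthread.h>

enum mbx_reply { MBX_ACK, MBX_NACK, MBX_TIMEOUT };

static pthread_mutex_t mbx_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Serialize one request/response exchange; every outcome funnels
 * through the single unlock at the bottom instead of returning with
 * the lock held. */
int send_msg(enum mbx_reply (*wait_for_reply)(void))
{
	int ret = 0;

	pthread_mutex_lock(&mbx_mtx);
	switch (wait_for_reply()) {
	case MBX_ACK:
		break;			/* success, ret stays 0 */
	case MBX_NACK:
		ret = -22;		/* -EINVAL: peer rejected the message */
		break;
	case MBX_TIMEOUT:
		ret = -16;		/* -EBUSY: peer never answered */
		break;
	}
	pthread_mutex_unlock(&mbx_mtx);	/* reached on every path */
	return ret;
}

enum mbx_reply reply_ack(void)     { return MBX_ACK; }
enum mbx_reply reply_timeout(void) { return MBX_TIMEOUT; }
```

The single exit is what makes adding the mutex safe: a later `return` added inside the loop cannot silently leak the lock.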


@@ -1217,7 +1217,7 @@ static void bgx_init_hw(struct bgx *bgx)
 
 	/* Disable MAC steering (NCSI traffic) */
 	for (i = 0; i < RX_TRAFFIC_STEER_RULE_COUNT; i++)
-		bgx_reg_write(bgx, 0, BGX_CMR_RX_STREERING + (i * 8), 0x00);
+		bgx_reg_write(bgx, 0, BGX_CMR_RX_STEERING + (i * 8), 0x00);
 }
 
 static u8 bgx_get_lane2sds_cfg(struct bgx *bgx, struct lmac *lmac)


@@ -60,7 +60,7 @@
 #define  RX_DMACX_CAM_EN		BIT_ULL(48)
 #define  RX_DMACX_CAM_LMACID(x)		(((u64)x) << 49)
 #define  RX_DMAC_COUNT			32
-#define BGX_CMR_RX_STREERING		0x300
+#define BGX_CMR_RX_STEERING		0x300
 #define  RX_TRAFFIC_STEER_RULE_COUNT	8
 #define BGX_CMR_CHAN_MSK_AND		0x450
 #define BGX_CMR_BIST_STATUS		0x460


@@ -660,6 +660,7 @@ static void uld_init(struct adapter *adap, struct cxgb4_lld_info *lld)
 	lld->cclk_ps = 1000000000 / adap->params.vpd.cclk;
 	lld->udb_density = 1 << adap->params.sge.eq_qpp;
 	lld->ucq_density = 1 << adap->params.sge.iq_qpp;
+	lld->sge_host_page_size = 1 << (adap->params.sge.hps + 10);
 	lld->filt_mode = adap->params.tp.vlan_pri_map;
 	/* MODQ_REQ_MAP sets queues 0-3 to chan 0-3 */
 	for (i = 0; i < NCHAN; i++)


@@ -336,6 +336,7 @@ struct cxgb4_lld_info {
 	unsigned int cclk_ps;			/* Core clock period in psec */
 	unsigned short udb_density;		/* # of user DB/page */
 	unsigned short ucq_density;		/* # of user CQs/page */
+	unsigned int sge_host_page_size;	/* SGE host page size */
 	unsigned short filt_mode;		/* filter optional components */
 	unsigned short tx_modq[NCHAN];		/* maps each tx channel to a */
 						/* scheduler queue */


@@ -3289,8 +3289,11 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
 	      i40e_alloc_rx_buffers_zc(ring, I40E_DESC_UNUSED(ring)) :
 	      !i40e_alloc_rx_buffers(ring, I40E_DESC_UNUSED(ring));
 	if (!ok) {
+		/* Log this in case the user has forgotten to give the kernel
+		 * any buffers, even later in the application.
+		 */
 		dev_info(&vsi->back->pdev->dev,
-			 "Failed allocate some buffers on %sRx ring %d (pf_q %d)\n",
+			 "Failed to allocate some buffers on %sRx ring %d (pf_q %d)\n",
 			 ring->xsk_umem ? "UMEM enabled " : "",
 			 ring->queue_index, pf_q);
 	}
@@ -6725,8 +6728,13 @@ void i40e_down(struct i40e_vsi *vsi)
 
 	for (i = 0; i < vsi->num_queue_pairs; i++) {
 		i40e_clean_tx_ring(vsi->tx_rings[i]);
-		if (i40e_enabled_xdp_vsi(vsi))
+		if (i40e_enabled_xdp_vsi(vsi)) {
+			/* Make sure that in-progress ndo_xdp_xmit
+			 * calls are completed.
+			 */
+			synchronize_rcu();
 			i40e_clean_tx_ring(vsi->xdp_rings[i]);
+		}
 		i40e_clean_rx_ring(vsi->rx_rings[i]);
 	}
@@ -11855,6 +11863,14 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi,
 	if (old_prog)
 		bpf_prog_put(old_prog);
 
+	/* Kick start the NAPI context if there is an AF_XDP socket open
+	 * on that queue id. This so that receiving will start.
+	 */
+	if (need_reset && prog)
+		for (i = 0; i < vsi->num_queue_pairs; i++)
+			if (vsi->xdp_rings[i]->xsk_umem)
+				(void)i40e_xsk_async_xmit(vsi->netdev, i);
+
 	return 0;
 }
@@ -11915,8 +11931,13 @@ static void i40e_queue_pair_reset_stats(struct i40e_vsi *vsi, int queue_pair)
 static void i40e_queue_pair_clean_rings(struct i40e_vsi *vsi, int queue_pair)
 {
 	i40e_clean_tx_ring(vsi->tx_rings[queue_pair]);
-	if (i40e_enabled_xdp_vsi(vsi))
+	if (i40e_enabled_xdp_vsi(vsi)) {
+		/* Make sure that in-progress ndo_xdp_xmit calls are
+		 * completed.
+		 */
+		synchronize_rcu();
 		i40e_clean_tx_ring(vsi->xdp_rings[queue_pair]);
+	}
 	i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
 }


@@ -3709,6 +3709,7 @@ int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	struct i40e_netdev_priv *np = netdev_priv(dev);
 	unsigned int queue_index = smp_processor_id();
 	struct i40e_vsi *vsi = np->vsi;
+	struct i40e_pf *pf = vsi->back;
 	struct i40e_ring *xdp_ring;
 	int drops = 0;
 	int i;
@@ -3716,7 +3717,8 @@ int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	if (test_bit(__I40E_VSI_DOWN, vsi->state))
 		return -ENETDOWN;
 
-	if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
+	if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs ||
+	    test_bit(__I40E_CONFIG_BUSY, pf->state))
 		return -ENXIO;
 
 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))


@@ -112,6 +112,11 @@ static int i40e_xsk_umem_enable(struct i40e_vsi *vsi, struct xdp_umem *umem,
 		err = i40e_queue_pair_enable(vsi, qid);
 		if (err)
 			return err;
+
+		/* Kick start the NAPI context so that receiving will start */
+		err = i40e_xsk_async_xmit(vsi->netdev, qid);
+		if (err)
+			return err;
 	}
 
 	return 0;


@@ -3953,8 +3953,11 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
 		else
 			mrqc = IXGBE_MRQC_VMDQRSS64EN;
 
-		/* Enable L3/L4 for Tx Switched packets */
-		mrqc |= IXGBE_MRQC_L3L4TXSWEN;
+		/* Enable L3/L4 for Tx Switched packets only for X550,
+		 * older devices do not support this feature
+		 */
+		if (hw->mac.type >= ixgbe_mac_X550)
+			mrqc |= IXGBE_MRQC_L3L4TXSWEN;
 	} else {
 		if (tcs > 4)
 			mrqc = IXGBE_MRQC_RTRSS8TCEN;
@@ -10226,6 +10229,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
 	int i, frame_size = dev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
 	struct ixgbe_adapter *adapter = netdev_priv(dev);
 	struct bpf_prog *old_prog;
+	bool need_reset;
 
 	if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
 		return -EINVAL;
@@ -10248,9 +10252,10 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
 		return -ENOMEM;
 
 	old_prog = xchg(&adapter->xdp_prog, prog);
+	need_reset = (!!prog != !!old_prog);
 
 	/* If transitioning XDP modes reconfigure rings */
-	if (!!prog != !!old_prog) {
+	if (need_reset) {
 		int err = ixgbe_setup_tc(dev, adapter->hw_tcs);
 
 		if (err) {
@@ -10266,6 +10271,14 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
 	if (old_prog)
 		bpf_prog_put(old_prog);
 
+	/* Kick start the NAPI context if there is an AF_XDP socket open
+	 * on that queue id. This so that receiving will start.
+	 */
+	if (need_reset && prog)
+		for (i = 0; i < adapter->num_rx_queues; i++)
+			if (adapter->xdp_ring[i]->xsk_umem)
+				(void)ixgbe_xsk_async_xmit(adapter->netdev, i);
+
 	return 0;
 }
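The `need_reset` computed above captures whether XDP is transitioning between "no program" and "some program": only those transitions require ring reconfiguration, while swapping one loaded program for another does not. A sketch of that predicate in plain C (illustrative, not the driver's actual helper):

```c
#include <stdbool.h>

/* Rings need a full reconfigure only when the presence of an XDP
 * program changes; !! collapses each pointer to a boolean before
 * comparing, so prog-to-prog swaps compare equal. */
bool xdp_needs_reset(const void *old_prog, const void *new_prog)
{
	return !!old_prog != !!new_prog;
}
```

Computing the flag once, before `xchg()` publishes the new program, also lets the later "kick start NAPI" block reuse the same decision.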


@@ -144,11 +144,19 @@ static int ixgbe_xsk_umem_enable(struct ixgbe_adapter *adapter,
 		ixgbe_txrx_ring_disable(adapter, qid);
 
 	err = ixgbe_add_xsk_umem(adapter, umem, qid);
+	if (err)
+		return err;
 
-	if (if_running)
+	if (if_running) {
 		ixgbe_txrx_ring_enable(adapter, qid);
 
-	return err;
+		/* Kick start the NAPI context so that receiving will start */
+		err = ixgbe_xsk_async_xmit(adapter->netdev, qid);
+		if (err)
+			return err;
+	}
+
+	return 0;
 }
 
 static int ixgbe_xsk_umem_disable(struct ixgbe_adapter *adapter, u16 qid)
@@ -617,7 +625,8 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 	dma_addr_t dma;
 
 	while (budget-- > 0) {
-		if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
+		if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
+		    !netif_carrier_ok(xdp_ring->netdev)) {
 			work_done = false;
 			break;
 		}


@@ -2148,7 +2148,7 @@ err_drop_frame:
 		if (unlikely(!skb))
 			goto err_drop_frame_ret_pool;
 
-		dma_sync_single_range_for_cpu(dev->dev.parent,
+		dma_sync_single_range_for_cpu(&pp->bm_priv->pdev->dev,
 					      rx_desc->buf_phys_addr,
 					      MVNETA_MH_SIZE + NET_SKB_PAD,
 					      rx_bytes,


@@ -1291,15 +1291,10 @@ wrp_alu64_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 
 static int
 wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
-	      enum alu_op alu_op, bool skip)
+	      enum alu_op alu_op)
 {
 	const struct bpf_insn *insn = &meta->insn;
 
-	if (skip) {
-		meta->flags |= FLAG_INSN_SKIP_NOOP;
-		return 0;
-	}
-
 	wrp_alu_imm(nfp_prog, insn->dst_reg * 2, alu_op, insn->imm);
 	wrp_immed(nfp_prog, reg_both(insn->dst_reg * 2 + 1), 0);
 
@@ -2322,7 +2317,7 @@ static int xor_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 
 static int xor_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_XOR, !~meta->insn.imm);
+	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_XOR);
 }
 
 static int and_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2332,7 +2327,7 @@ static int and_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 
 static int and_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_AND, !~meta->insn.imm);
+	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_AND);
 }
 
 static int or_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2342,7 +2337,7 @@ static int or_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 
 static int or_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_OR, !meta->insn.imm);
+	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_OR);
 }
 
 static int add_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2352,7 +2347,7 @@ static int add_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 
 static int add_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_ADD, !meta->insn.imm);
+	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_ADD);
 }
 
 static int sub_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2362,7 +2357,7 @@ static int sub_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 
 static int sub_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_SUB, !meta->insn.imm);
+	return wrp_alu32_imm(nfp_prog, meta, ALU_OP_SUB);
 }
 
 static int mul_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)


@@ -433,6 +433,8 @@ static int ipvlan_nl_changelink(struct net_device *dev,
 
 	if (!data)
 		return 0;
+	if (!ns_capable(dev_net(ipvlan->phy_dev)->user_ns, CAP_NET_ADMIN))
+		return -EPERM;
 
 	if (data[IFLA_IPVLAN_MODE]) {
 		u16 nmode = nla_get_u16(data[IFLA_IPVLAN_MODE]);
@@ -535,6 +537,8 @@ int ipvlan_link_new(struct net *src_net, struct net_device *dev,
 		struct ipvl_dev *tmp = netdev_priv(phy_dev);
 
 		phy_dev = tmp->phy_dev;
+		if (!ns_capable(dev_net(phy_dev)->user_ns, CAP_NET_ADMIN))
+			return -EPERM;
 	} else if (!netif_is_ipvlan_port(phy_dev)) {
 		/* Exit early if the underlying link is invalid or busy */
 		if (phy_dev->type != ARPHRD_ETHER ||


@@ -373,7 +373,6 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
 	err = device_register(&bus->dev);
 	if (err) {
 		pr_err("mii_bus %s failed to register\n", bus->id);
-		put_device(&bus->dev);
 		return -EINVAL;
 	}


@@ -286,6 +286,13 @@ static struct phy_driver realtek_drvs[] = {
 		.name		= "RTL8366RB Gigabit Ethernet",
 		.features	= PHY_GBIT_FEATURES,
 		.config_init	= &rtl8366rb_config_init,
+		/* These interrupts are handled by the irq controller
+		 * embedded inside the RTL8366RB, they get unmasked when the
+		 * irq is requested and ACKed by reading the status register,
+		 * which is done by the irqchip code.
+		 */
+		.ack_interrupt	= genphy_no_ack_interrupt,
+		.config_intr	= genphy_no_config_intr,
 		.suspend	= genphy_suspend,
 		.resume		= genphy_resume,
 	},


@@ -1256,7 +1256,7 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
 	list_add_tail_rcu(&port->list, &team->port_list);
 	team_port_enable(team, port);
 	__team_compute_features(team);
-	__team_port_change_port_added(port, !!netif_carrier_ok(port_dev));
+	__team_port_change_port_added(port, !!netif_oper_up(port_dev));
 	__team_options_change_check(team);
 
 	netdev_info(dev, "Port device %s added\n", portname);
@@ -2915,7 +2915,7 @@ static int team_device_event(struct notifier_block *unused,
 
 	switch (event) {
 	case NETDEV_UP:
-		if (netif_carrier_ok(dev))
+		if (netif_oper_up(dev))
 			team_port_change_check(port, true);
 		break;
 	case NETDEV_DOWN:


@@ -1179,7 +1179,7 @@ static int vendor_mac_passthru_addr_read(struct r8152 *tp, struct sockaddr *sa)
 	} else {
 		/* test for RTL8153-BND and RTL8153-BD */
 		ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_MISC_1);
-		if ((ocp_data & BND_MASK) == 0 && (ocp_data & BD_MASK)) {
+		if ((ocp_data & BND_MASK) == 0 && (ocp_data & BD_MASK) == 0) {
 			netif_dbg(tp, probe, tp->netdev,
 				  "Invalid variant for MAC pass through\n");
 			return -ENODEV;
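The r8152 change above corrects an inverted condition: MAC pass-through must be rejected only when BOTH the BND and BD bits are clear, whereas the old truthiness test `(ocp_data & BD_MASK)` required BD to be set. A self-contained sketch of the corrected predicate (bit positions are illustrative, not the real USB_MISC_1 register layout):

```c
/* Hypothetical bit values standing in for the driver's BND_MASK and
 * BD_MASK; only the both-bits-clear logic is the point here. */
#define DEMO_BND_MASK 0x04
#define DEMO_BD_MASK  0x01

/* Returns nonzero when neither variant bit is set, i.e. the hardware
 * is not a variant that supports MAC pass-through. */
int invalid_variant(unsigned char ocp_data)
{
	return (ocp_data & DEMO_BND_MASK) == 0 &&
	       (ocp_data & DEMO_BD_MASK) == 0;
}
```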


@@ -1273,6 +1273,9 @@ static void vrf_setup(struct net_device *dev)
 
 	/* default to no qdisc; user can add if desired */
 	dev->priv_flags |= IFF_NO_QUEUE;
+
+	dev->min_mtu = 0;
+	dev->max_mtu = 0;
 }
 
 static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],


@@ -3557,7 +3557,7 @@ static int hwsim_get_radio_nl(struct sk_buff *msg, struct genl_info *info)
 			goto out_err;
 		}
 
-		genlmsg_reply(skb, info);
+		res = genlmsg_reply(skb, info);
 		break;
 	}


@@ -693,7 +693,7 @@ static const char * const sd_a_groups[] = {
 
 static const char * const sdxc_a_groups[] = {
 	"sdxc_d0_0_a", "sdxc_d13_0_a", "sdxc_d47_a", "sdxc_clk_a",
-	"sdxc_cmd_a", "sdxc_d0_1_a", "sdxc_d0_13_1_a"
+	"sdxc_cmd_a", "sdxc_d0_1_a", "sdxc_d13_1_a"
 };
 
 static const char * const pcm_a_groups[] = {


@@ -79,7 +79,7 @@ enum {
 		.intr_cfg_reg = 0,		\
 		.intr_status_reg = 0,		\
 		.intr_target_reg = 0,		\
-		.tile = NORTH,			\
+		.tile = SOUTH,			\
 		.mux_bit = -1,			\
 		.pull_bit = pull,		\
 		.drv_bit = drv,			\


@@ -1459,7 +1459,13 @@ static int iscsi_xmit_task(struct iscsi_conn *conn)
 	if (test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx))
 		return -ENODATA;
 
+	spin_lock_bh(&conn->session->back_lock);
+	if (conn->task == NULL) {
+		spin_unlock_bh(&conn->session->back_lock);
+		return -ENODATA;
+	}
 	__iscsi_get_task(task);
+	spin_unlock_bh(&conn->session->back_lock);
 	spin_unlock_bh(&conn->session->frwd_lock);
 	rc = conn->session->tt->xmit_task(task);
 	spin_lock_bh(&conn->session->frwd_lock);


@@ -828,6 +828,7 @@ static struct domain_device *sas_ex_discover_end_dev(
 	rphy = sas_end_device_alloc(phy->port);
 	if (!rphy)
 		goto out_free;
+	rphy->identify.phy_identifier = phy_id;
 
 	child->rphy = rphy;
 	get_device(&rphy->dev);
@@ -854,6 +855,7 @@ static struct domain_device *sas_ex_discover_end_dev(
 
 		child->rphy = rphy;
 		get_device(&rphy->dev);
+		rphy->identify.phy_identifier = phy_id;
 		sas_fill_in_rphy(child, rphy);
 
 		list_add_tail(&child->disco_list_node, &parent->port->disco_list);


@@ -655,6 +655,7 @@ static blk_status_t scsi_result_to_blk_status(struct scsi_cmnd *cmd, int result)
 		set_host_byte(cmd, DID_OK);
 		return BLK_STS_TARGET;
 	case DID_NEXUS_FAILURE:
+		set_host_byte(cmd, DID_OK);
 		return BLK_STS_NEXUS;
 	case DID_ALLOC_FAILURE:
 		set_host_byte(cmd, DID_OK);


@@ -142,10 +142,12 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector,
 		return -EOPNOTSUPP;
 
 	/*
-	 * Get a reply buffer for the number of requested zones plus a header.
-	 * For ATA, buffers must be aligned to 512B.
+	 * Get a reply buffer for the number of requested zones plus a header,
+	 * without exceeding the device maximum command size. For ATA disks,
+	 * buffers must be aligned to 512B.
 	 */
-	buflen = roundup((nrz + 1) * 64, 512);
+	buflen = min(queue_max_hw_sectors(disk->queue) << 9,
+		     roundup((nrz + 1) * 64, 512));
 	buf = kmalloc(buflen, gfp_mask);
 	if (!buf)
 		return -ENOMEM;
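The sd_zbc sizing above rounds the reply buffer up to a 512 B multiple (the ATA alignment requirement) but clamps it so it never exceeds the device's maximum command size. A sketch of the same arithmetic in plain C (hypothetical helper name; `max_hw_bytes` stands in for `queue_max_hw_sectors() << 9`):

```c
#include <stddef.h>

/* 64 B per zone descriptor plus one 64 B header, rounded up to 512 B,
 * capped at the device's maximum transfer size. */
size_t zbc_buflen(unsigned int nrz, size_t max_hw_bytes)
{
	size_t want = ((size_t)nrz + 1) * 64;
	size_t rounded = (want + 511) / 512 * 512;	/* roundup(want, 512) */

	return rounded < max_hw_bytes ? rounded : max_hw_bytes;
}
```

Without the clamp, a large zone-report request could ask the disk for a transfer bigger than the HBA allows, which is the bug the patch fixes.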


@@ -616,7 +616,8 @@ int __ceph_finish_cap_snap(struct ceph_inode_info *ci,
 			  capsnap->size);
 
 	spin_lock(&mdsc->snap_flush_lock);
-	list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list);
+	if (list_empty(&ci->i_snap_flush_item))
+		list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list);
 	spin_unlock(&mdsc->snap_flush_lock);
 	return 1;  /* caller may want to ceph_flush_snaps */
 }
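The guarded `list_add_tail()` above only queues the inode if its node is not already on `snap_flush_list`; re-adding a linked node would corrupt the list. A self-contained sketch of the idiom with a minimal circular list (modelled on, but not, the kernel's `list_head`):

```c
/* Minimal doubly linked circular list; a self-linked node means
 * "not on any list", exactly as after the kernel's list_del_init(). */
struct dlist { struct dlist *next, *prev; };

void dlist_init(struct dlist *h) { h->next = h->prev = h; }
int  dlist_empty(const struct dlist *h) { return h->next == h; }

void dlist_add_tail(struct dlist *n, struct dlist *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* Guarded add: a no-op when the node is already queued, so repeated
 * snapshot completions cannot double-link the same entry. */
void queue_once(struct dlist *item, struct dlist *list)
{
	if (dlist_empty(item))
		dlist_add_tail(item, list);
}
```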


@@ -44,6 +44,7 @@
 #include <linux/keyctl.h>
 #include <linux/key-type.h>
 #include <keys/user-type.h>
+#include <keys/request_key_auth-type.h>
 #include <linux/module.h>
 
 #include "internal.h"
@@ -59,7 +60,7 @@ static struct key_type key_type_id_resolver_legacy;
 struct idmap_legacy_upcalldata {
 	struct rpc_pipe_msg pipe_msg;
 	struct idmap_msg idmap_msg;
-	struct key_construction	*key_cons;
+	struct key	*authkey;
 	struct idmap *idmap;
 };
@@ -384,7 +385,7 @@ static const match_table_t nfs_idmap_tokens = {
 	{ Opt_find_err, NULL }
 };
 
-static int nfs_idmap_legacy_upcall(struct key_construction *, const char *, void *);
+static int nfs_idmap_legacy_upcall(struct key *, void *);
 static ssize_t idmap_pipe_downcall(struct file *, const char __user *,
 		size_t);
 static void idmap_release_pipe(struct inode *);
@@ -549,11 +550,12 @@ nfs_idmap_prepare_pipe_upcall(struct idmap *idmap,
 static void
 nfs_idmap_complete_pipe_upcall_locked(struct idmap *idmap, int ret)
 {
-	struct key_construction *cons = idmap->idmap_upcall_data->key_cons;
+	struct key *authkey = idmap->idmap_upcall_data->authkey;
 
 	kfree(idmap->idmap_upcall_data);
 	idmap->idmap_upcall_data = NULL;
-	complete_request_key(cons, ret);
+	complete_request_key(authkey, ret);
+	key_put(authkey);
 }
 
 static void
@@ -563,15 +565,14 @@ nfs_idmap_abort_pipe_upcall(struct idmap *idmap, int ret)
 	nfs_idmap_complete_pipe_upcall_locked(idmap, ret);
 }
 
-static int nfs_idmap_legacy_upcall(struct key_construction *cons,
-				   const char *op,
-				   void *aux)
+static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux)
 {
 	struct idmap_legacy_upcalldata *data;
+	struct request_key_auth *rka = get_request_key_auth(authkey);
 	struct rpc_pipe_msg *msg;
 	struct idmap_msg *im;
 	struct idmap *idmap = (struct idmap *)aux;
-	struct key *key = cons->key;
+	struct key *key = rka->target_key;
 	int ret = -ENOKEY;
 
 	if (!aux)
@@ -586,7 +587,7 @@ static int nfs_idmap_legacy_upcall(struct key_construction *cons,
 	msg = &data->pipe_msg;
 	im = &data->idmap_msg;
 	data->idmap = idmap;
-	data->key_cons = cons;
+	data->authkey = key_get(authkey);
 
 	ret = nfs_idmap_prepare_message(key->description, idmap, im, msg);
 	if (ret < 0)
@@ -604,7 +605,7 @@ static int nfs_idmap_legacy_upcall(struct key_construction *cons,
 out2:
 	kfree(data);
 out1:
-	complete_request_key(cons, ret);
+	complete_request_key(authkey, ret);
 	return ret;
 }
@@ -651,9 +652,10 @@ out:
 static ssize_t
 idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
 {
+	struct request_key_auth *rka;
 	struct rpc_inode *rpci = RPC_I(file_inode(filp));
 	struct idmap *idmap = (struct idmap *)rpci->private;
-	struct key_construction *cons;
+	struct key *authkey;
 	struct idmap_msg im;
 	size_t namelen_in;
 	int ret = -ENOKEY;
@@ -665,7 +667,8 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
 	if (idmap->idmap_upcall_data == NULL)
 		goto out_noupcall;
 
-	cons = idmap->idmap_upcall_data->key_cons;
+	authkey = idmap->idmap_upcall_data->authkey;
+	rka = get_request_key_auth(authkey);
 
 	if (mlen != sizeof(im)) {
 		ret = -ENOSPC;
@@ -690,9 +693,9 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
 
 	ret = nfs_idmap_read_and_verify_message(&im,
 			&idmap->idmap_upcall_data->idmap_msg,
-			cons->key, cons->authkey);
+			rka->target_key, authkey);
 	if (ret >= 0) {
-		key_set_timeout(cons->key, nfs_idmap_cache_timeout);
+		key_set_timeout(rka->target_key, nfs_idmap_cache_timeout);
 		ret = mlen;
 	}


@@ -1086,10 +1086,6 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
 		task_lock(p);
 		if (!p->vfork_done && process_shares_mm(p, mm)) {
-			pr_info("updating oom_score_adj for %d (%s) from %d to %d because it shares mm with %d (%s). Report if this is unexpected.\n",
-					task_pid_nr(p), p->comm,
-					p->signal->oom_score_adj, oom_adj,
-					task_pid_nr(task), task->comm);
 			p->signal->oom_score_adj = oom_adj;
 			if (!legacy && has_capability_noaudit(current, CAP_SYS_RESOURCE))
 				p->signal->oom_score_adj_min = (short)oom_adj;


@@ -0,0 +1,36 @@
+/* request_key authorisation token key type
+ *
+ * Copyright (C) 2005 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#ifndef _KEYS_REQUEST_KEY_AUTH_TYPE_H
+#define _KEYS_REQUEST_KEY_AUTH_TYPE_H
+
+#include <linux/key.h>
+
+/*
+ * Authorisation record for request_key().
+ */
+struct request_key_auth {
+	struct key		*target_key;
+	struct key		*dest_keyring;
+	const struct cred	*cred;
+	void			*callout_info;
+	size_t			callout_len;
+	pid_t			pid;
+	char			op[8];
+} __randomize_layout;
+
+static inline struct request_key_auth *get_request_key_auth(const struct key *key)
+{
+	return key->payload.data[0];
+}
+
+#endif /* _KEYS_REQUEST_KEY_AUTH_TYPE_H */
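The accessor added above is the pivot of the whole conversion: callers that used to receive a `struct key_construction` now receive only the auth key and recover the record (and the target key) from its payload. A minimal user-space sketch of that indirection, with simplified stand-in types (`payload_data0` is a hypothetical stand-in for `key->payload.data[0]`; none of this is kernel code):

```c
#include <stddef.h>

/* Sketch only: models how get_request_key_auth() recovers the
 * authorisation record from the auth key's payload, which is what the
 * converted nfs_idmap_legacy_upcall() relies on via rka->target_key. */

struct key {
	void *payload_data0;		/* stands in for key->payload.data[0] */
};

struct request_key_auth {
	struct key *target_key;		/* the key actually under construction */
	char op[8];
};

static struct request_key_auth *get_request_key_auth(const struct key *key)
{
	return key->payload_data0;
}
```

The design point this illustrates: since the record hangs off the auth key itself, a hook only needs to pin the auth key (`key_get()` in the patch) to keep the target key reachable for the later downcall.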

Some files were not shown because too many files have changed in this diff.