
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2021-01-14 18:34:50 -08:00
commit 1d9f03c0a1
397 changed files with 3935 additions and 2631 deletions

CREDITS

@ -710,6 +710,10 @@ S: Las Cuevas 2385 - Bo Guemes
S: Las Heras, Mendoza CP 5539
S: Argentina
N: Jay Cliburn
E: jcliburn@gmail.com
D: ATLX Ethernet drivers
N: Steven P. Cole
E: scole@lanl.gov
E: elenstev@mesatop.com
@ -1284,6 +1288,10 @@ D: Major kbuild rework during the 2.5 cycle
D: ISDN Maintainer
S: USA
N: Gerrit Renker
E: gerrit@erg.abdn.ac.uk
D: DCCP protocol support.
N: Philip Gladstone
E: philip@gladstonefamily.net
D: Kernel / timekeeping stuff
@ -2138,6 +2146,10 @@ E: seasons@falcon.sch.bme.hu
E: seasons@makosteszta.sote.hu
D: Original author of software suspend
N: Alexey Kuznetsov
E: kuznet@ms2.inr.ac.ru
D: Author and maintainer of large parts of the networking stack
N: Jaroslav Kysela
E: perex@perex.cz
W: https://www.perex.cz
@ -2696,6 +2708,10 @@ N: Wolfgang Muees
E: wolfgang@iksw-muees.de
D: Auerswald USB driver
N: Shrijeet Mukherjee
E: shrijeet@gmail.com
D: Network routing domains (VRF).
N: Paul Mundt
E: paul.mundt@gmail.com
D: SuperH maintainer
@ -4110,6 +4126,10 @@ S: B-1206 Jingmao Guojigongyu
S: 16 Baliqiao Nanjie, Beijing 101100
S: People's Repulic of China
N: Aviad Yehezkel
E: aviadye@nvidia.com
D: Kernel TLS implementation and offload support.
N: Victor Yodaiken
E: yodaiken@fsmlabs.com
D: RTLinux (RealTime Linux)
@ -4167,6 +4187,10 @@ S: 1507 145th Place SE #B5
S: Bellevue, Washington 98007
S: USA
N: Wensong Zhang
E: wensong@linux-vs.org
D: IP virtual server (IPVS).
N: Haojian Zhuang
E: haojian.zhuang@gmail.com
D: MMP support


@ -473,7 +473,7 @@ read-side critical sections that follow the idle period (the oval near
the bottom of the diagram above).
Plumbing this into the full grace-period execution is described
`below <#Forcing%20Quiescent%20States>`__.
`below <Forcing Quiescent States_>`__.
CPU-Hotplug Interface
^^^^^^^^^^^^^^^^^^^^^
@ -494,7 +494,7 @@ mask to detect CPUs having gone offline since the beginning of this
grace period.
Plumbing this into the full grace-period execution is described
`below <#Forcing%20Quiescent%20States>`__.
`below <Forcing Quiescent States_>`__.
Forcing Quiescent States
^^^^^^^^^^^^^^^^^^^^^^^^
@ -532,7 +532,7 @@ from other CPUs.
| RCU. But this diagram is complex enough as it is, so simplicity |
| overrode accuracy. You can think of it as poetic license, or you can |
| think of it as misdirection that is resolved in the |
| `stitched-together diagram <#Putting%20It%20All%20Together>`__. |
| `stitched-together diagram <Putting It All Together_>`__. |
+-----------------------------------------------------------------------+
Grace-Period Cleanup
@ -596,7 +596,7 @@ maintain ordering. For example, if the callback function wakes up a task
that runs on some other CPU, proper ordering must be in place in both the
callback function and the task being awakened. To see why this is
important, consider the top half of the `grace-period
cleanup <#Grace-Period%20Cleanup>`__ diagram. The callback might be
cleanup`_ diagram. The callback might be
running on a CPU corresponding to the leftmost leaf ``rcu_node``
structure, and awaken a task that is to run on a CPU corresponding to
the rightmost leaf ``rcu_node`` structure, and the grace-period kernel


@ -45,7 +45,7 @@ requirements:
#. `Other RCU Flavors`_
#. `Possible Future Changes`_
This is followed by a `summary <#Summary>`__, however, the answers to
This is followed by a summary_, however, the answers to
each quick quiz immediately follows the quiz. Select the big white space
with your mouse to see the answer.
@ -1096,7 +1096,7 @@ memory barriers.
| case, voluntary context switch) within an RCU read-side critical |
| section. However, sleeping locks may be used within userspace RCU |
| read-side critical sections, and also within Linux-kernel sleepable |
| RCU `(SRCU) <#Sleepable%20RCU>`__ read-side critical sections. In |
| RCU `(SRCU) <Sleepable RCU_>`__ read-side critical sections. In |
| addition, the -rt patchset turns spinlocks into a sleeping locks so |
| that the corresponding critical sections can be preempted, which also |
| means that these sleeplockified spinlocks (but not other sleeping |
@ -1186,7 +1186,7 @@ non-preemptible (``CONFIG_PREEMPT=n``) kernels, and thus `tiny
RCU <https://lkml.kernel.org/g/20090113221724.GA15307@linux.vnet.ibm.com>`__
was born. Josh Triplett has since taken over the small-memory banner
with his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__
project, which resulted in `SRCU <#Sleepable%20RCU>`__ becoming optional
project, which resulted in `SRCU <Sleepable RCU_>`__ becoming optional
for those kernels not needing it.
The remaining performance requirements are, for the most part,
@ -1457,8 +1457,8 @@ will vary as the value of ``HZ`` varies, and can also be changed using
the relevant Kconfig options and kernel boot parameters. RCU currently
does not do much sanity checking of these parameters, so please use
caution when changing them. Note that these forward-progress measures
are provided only for RCU, not for `SRCU <#Sleepable%20RCU>`__ or `Tasks
RCU <#Tasks%20RCU>`__.
are provided only for RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks
RCU`_.
RCU takes the following steps in ``call_rcu()`` to encourage timely
invocation of callbacks when any given non-\ ``rcu_nocbs`` CPU has
@ -1477,8 +1477,8 @@ encouragement was provided:
Again, these are default values when running at ``HZ=1000``, and can be
overridden. Again, these forward-progress measures are provided only for
RCU, not for `SRCU <#Sleepable%20RCU>`__ or `Tasks
RCU <#Tasks%20RCU>`__. Even for RCU, callback-invocation forward
RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks
RCU`_. Even for RCU, callback-invocation forward
progress for ``rcu_nocbs`` CPUs is much less well-developed, in part
because workloads benefiting from ``rcu_nocbs`` CPUs tend to invoke
``call_rcu()`` relatively infrequently. If workloads emerge that need
@ -1920,7 +1920,7 @@ Hotplug CPU
The Linux kernel supports CPU hotplug, which means that CPUs can come
and go. It is of course illegal to use any RCU API member from an
offline CPU, with the exception of `SRCU <#Sleepable%20RCU>`__ read-side
offline CPU, with the exception of `SRCU <Sleepable RCU_>`__ read-side
critical sections. This requirement was present from day one in
DYNIX/ptx, but on the other hand, the Linux kernel's CPU-hotplug
implementation is “interesting.”
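
To ground these requirements in code, the reader/updater pattern they
constrain looks like this minimal sketch (``struct foo`` and its fields are
illustrative, not taken from the text above):

.. code-block:: c

   struct foo {
           int data;
           struct rcu_head rcu;
   };

   static struct foo __rcu *gp;

   /* Reader: lockless; protected only by the read-side critical section. */
   static int read_foo(void)
   {
           struct foo *p;
           int val = -1;

           rcu_read_lock();
           p = rcu_dereference(gp);
           if (p)
                   val = p->data;
           rcu_read_unlock();
           return val;
   }

   /* Updater: publish the new version, then free the old one only after
    * a grace period has elapsed. */
   static void update_foo(struct foo *newp)
   {
           struct foo *old;

           /* Third argument: the caller serializes updaters. */
           old = rcu_replace_pointer(gp, newp, true);
           if (old)
                   kfree_rcu(old, rcu);
   }
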
@ -2177,7 +2177,7 @@ handles these states differently:
However, RCU must be reliably informed as to whether any given CPU is
currently in the idle loop, and, for ``NO_HZ_FULL``, also whether that
CPU is executing in usermode, as discussed
`earlier <#Energy%20Efficiency>`__. It also requires that the
`earlier <Energy Efficiency_>`__. It also requires that the
scheduling-clock interrupt be enabled when RCU needs it to be:
#. If a CPU is either idle or executing in usermode, and RCU believes it
@ -2294,7 +2294,7 @@ Performance, Scalability, Response Time, and Reliability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Expanding on the `earlier
discussion <#Performance%20and%20Scalability>`__, RCU is used heavily by
discussion <Performance and Scalability_>`__, RCU is used heavily by
hot code paths in performance-critical portions of the Linux kernel's
networking, security, virtualization, and scheduling code paths. RCU
must therefore use efficient implementations, especially in its


@ -23,7 +23,7 @@ Here is what the fields mean:
- ``name``
is an identifier string. A new /proc file will be created with this
``name below /proc/sys/fs/binfmt_misc``; cannot contain slashes ``/`` for
name below ``/proc/sys/fs/binfmt_misc``; cannot contain slashes ``/`` for
obvious reasons.
- ``type``
is the type of recognition. Give ``M`` for magic and ``E`` for extension.
@ -83,7 +83,7 @@ Here is what the fields mean:
``F`` - fix binary
The usual behaviour of binfmt_misc is to spawn the
binary lazily when the misc format file is invoked. However,
this doesn``t work very well in the face of mount namespaces and
this doesn't work very well in the face of mount namespaces and
changeroots, so the ``F`` mode opens the binary as soon as the
emulation is installed and uses the opened image to spawn the
emulator, meaning it is always available once installed,
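
Putting the fields together, a registration line has the form
``:name:type:offset:magic:mask:interpreter:flags``. A minimal user-space
sketch using a made-up ``myemu`` handler (extension match plus the ``F``
flag described above; the interpreter path is hypothetical):

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <string.h>
   #include <unistd.h>

   int main(void)
   {
           /* 'E' = match by extension ("bin"), 'F' = open eagerly. */
           const char *rule = ":myemu:E::bin::/usr/local/bin/myemu:F";
           int fd = open("/proc/sys/fs/binfmt_misc/register", O_WRONLY);

           if (fd < 0) {
                   perror("open");
                   return 1;
           }
           if (write(fd, rule, strlen(rule)) < 0)
                   perror("write");
           close(fd);
           return 0;
   }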


@ -154,7 +154,7 @@ get the boot configuration data.
Because of this "piggyback" method, there is no need to change or
update the boot loader and the kernel image itself as long as the boot
loader passes the correct initrd file size. If by any chance, the boot
loader passes a longer size, the kernel feils to find the bootconfig data.
loader passes a longer size, the kernel fails to find the bootconfig data.
To do this operation, Linux kernel provides "bootconfig" command under
tools/bootconfig, which allows admin to apply or delete the config file
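
Since the bootconfig data is piggybacked at the end of the initrd (followed
by a 4-byte size, a 4-byte checksum, and the 12-byte ``#BOOTCONFIG\n`` magic
word), its presence can be probed from the tail of the file. A minimal
sketch; the footer layout stated here is an assumption drawn from
tools/bootconfig:

.. code-block:: c

   #include <stdio.h>
   #include <string.h>

   #define BOOTCONFIG_MAGIC     "#BOOTCONFIG\n"
   #define BOOTCONFIG_MAGIC_LEN 12

   /* Returns 1 if the initrd ends with the bootconfig magic word. */
   static int has_bootconfig(FILE *initrd)
   {
           char tail[BOOTCONFIG_MAGIC_LEN];

           if (fseek(initrd, -BOOTCONFIG_MAGIC_LEN, SEEK_END))
                   return 0;
           if (fread(tail, 1, sizeof(tail), initrd) != sizeof(tail))
                   return 0;
           return !memcmp(tail, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN);
   }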


@ -3,8 +3,8 @@
The kernel's command-line parameters
====================================
The following is a consolidated list of the kernel parameters as
implemented by the __setup(), core_param() and module_param() macros
The following is a consolidated list of the kernel parameters as implemented
by the __setup(), early_param(), core_param() and module_param() macros
and sorted into English Dictionary order (defined as ignoring all
punctuation and sorting digits before letters in a case insensitive
manner), and with descriptions where known.


@ -1385,7 +1385,7 @@
ftrace_filter=[function-list]
[FTRACE] Limit the functions traced by the function
tracer at boot up. function-list is a comma separated
tracer at boot up. function-list is a comma-separated
list of functions. This list can be changed at run
time by the set_ftrace_filter file in the debugfs
tracing directory.
@ -1399,13 +1399,13 @@
ftrace_graph_filter=[function-list]
[FTRACE] Limit the top level callers functions traced
by the function graph tracer at boot up.
function-list is a comma separated list of functions
function-list is a comma-separated list of functions
that can be changed at run time by the
set_graph_function file in the debugfs tracing directory.
ftrace_graph_notrace=[function-list]
[FTRACE] Do not trace from the functions specified in
function-list. This list is a comma separated list of
function-list. This list is a comma-separated list of
functions that can be changed at run time by the
set_graph_notrace file in the debugfs tracing directory.
@ -2421,7 +2421,7 @@
when set.
Format: <int>
libata.force= [LIBATA] Force configurations. The format is comma
libata.force= [LIBATA] Force configurations. The format is comma-
separated list of "[ID:]VAL" where ID is
PORT[.DEVICE]. PORT and DEVICE are decimal numbers
matching port, link or device. Basically, it matches
@ -5145,7 +5145,7 @@
stacktrace_filter=[function-list]
[FTRACE] Limit the functions that the stack tracer
will trace at boot up. function-list is a comma separated
will trace at boot up. function-list is a comma-separated
list of functions. This list can be changed at run
time by the stack_trace_filter file in the debugfs
tracing directory. Note, this enables stack tracing
@ -5348,7 +5348,7 @@
trace_event=[event-list]
[FTRACE] Set and start specified trace events in order
to facilitate early boot debugging. The event-list is a
comma separated list of trace events to enable. See
comma-separated list of trace events to enable. See
also Documentation/trace/events.rst
trace_options=[option-list]


@ -184,7 +184,7 @@ pages either asynchronously or synchronously, depending on the state
of the system. When the system is not loaded, most of the memory is free
and allocation requests will be satisfied immediately from the free
pages supply. As the load increases, the amount of the free pages goes
down and when it reaches a certain threshold (high watermark), an
down and when it reaches a certain threshold (low watermark), an
allocation request will awaken the ``kswapd`` daemon. It will
asynchronously scan memory pages and either just free them if the data
they contain is available elsewhere, or evict to the backing storage


@ -53,7 +53,6 @@ How Linux keeps everything from happening at the same time. See
.. toctree::
:maxdepth: 1
atomic_ops
refcount-vs-atomic
irq/index
local_ops


@ -1,4 +1,6 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2020 Texas Instruments Incorporated
# Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/ti/k3-bcdma.yaml#
@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments K3 DMSS BCDMA Device Tree Bindings
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
- Peter Ujfalusi <peter.ujfalusi@gmail.com>
description: |
The Block Copy DMA (BCDMA) is intended to perform similar functions as the TR


@ -1,4 +1,6 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2020 Texas Instruments Incorporated
# Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/ti/k3-pktdma.yaml#
@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments K3 DMSS PKTDMA Device Tree Bindings
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
- Peter Ujfalusi <peter.ujfalusi@gmail.com>
description: |
The Packet DMA (PKTDMA) is intended to perform similar functions as the packet


@ -1,4 +1,6 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2019 Texas Instruments Incorporated
# Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/ti/k3-udma.yaml#
@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments K3 NAVSS Unified DMA Device Tree Bindings
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
- Peter Ujfalusi <peter.ujfalusi@gmail.com>
description: |
The UDMA-P is intended to perform similar (but significantly upgraded)


@ -163,6 +163,7 @@ allOf:
enum:
- renesas,etheravb-r8a774a1
- renesas,etheravb-r8a774b1
- renesas,etheravb-r8a774e1
- renesas,etheravb-r8a7795
- renesas,etheravb-r8a7796
- renesas,etheravb-r8a77961


@ -161,7 +161,8 @@ properties:
* snps,route-dcbcp, DCB Control Packets
* snps,route-up, Untagged Packets
* snps,route-multi-broad, Multicast & Broadcast Packets
* snps,priority, RX queue priority (Range 0x0 to 0xF)
* snps,priority, bitmask of the tagged frames priorities assigned to
the queue
snps,mtl-tx-config:
$ref: /schemas/types.yaml#/definitions/phandle
@ -188,7 +189,10 @@ properties:
* snps,idle_slope, unlock on WoL
* snps,high_credit, max write outstanding req. limit
* snps,low_credit, max read outstanding req. limit
* snps,priority, TX queue priority (Range 0x0 to 0xF)
* snps,priority, bitmask of the priorities assigned to the queue.
When a PFC frame is received with priorities matching the bitmask,
the queue is blocked from transmitting for the pause time specified
in the PFC frame.
snps,reset-gpio:
deprecated: true


@ -1,4 +1,6 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2020 Texas Instruments Incorporated
# Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
%YAML 1.2
---
$id: http://devicetree.org/schemas/sound/ti,j721e-cpb-audio.yaml#
@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments J721e Common Processor Board Audio Support
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
- Peter Ujfalusi <peter.ujfalusi@gmail.com>
description: |
The audio support on the board is using pcm3168a codec connected to McASP10


@ -1,4 +1,6 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2020 Texas Instruments Incorporated
# Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
%YAML 1.2
---
$id: http://devicetree.org/schemas/sound/ti,j721e-cpb-ivi-audio.yaml#
@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments J721e Common Processor Board Audio Support
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
- Peter Ujfalusi <peter.ujfalusi@gmail.com>
description: |
The Infotainment board plugs into the Common Processor Board, the support of the


@ -11,8 +11,12 @@ maintainers:
properties:
compatible:
items:
oneOf:
- const: ti,j721e-usb
- const: ti,am64-usb
- items:
- const: ti,j721e-usb
- const: ti,am64-usb
reg:
description: module registers


@ -48,12 +48,12 @@ or ``virtualenv``, depending on how your distribution packaged Python 3.
those versions, you should run ``pip install 'docutils==0.12'``.
#) It is recommended to use the RTD theme for html output. Depending
on the Sphinx version, it should be installed in separate,
on the Sphinx version, it should be installed separately,
with ``pip install sphinx_rtd_theme``.
#) Some ReST pages contain math expressions. Due to the way Sphinx work,
#) Some ReST pages contain math expressions. Due to the way Sphinx works,
those expressions are written using LaTeX notation. It needs texlive
installed with amdfonts and amsmath in order to evaluate them.
installed with amsfonts and amsmath in order to evaluate them.
In summary, if you want to install Sphinx version 1.7.9, you should do::
@ -128,7 +128,7 @@ Sphinx Build
============
The usual way to generate the documentation is to run ``make htmldocs`` or
``make pdfdocs``. There are also other formats available, see the documentation
``make pdfdocs``. There are also other formats available: see the documentation
section of ``make help``. The generated documentation is placed in
format-specific subdirectories under ``Documentation/output``.
@ -303,17 +303,17 @@ and *targets* (e.g. a ref to ``:ref:`last row <last row>``` / :ref:`last row
- head col 3
- head col 4
* - column 1
* - row 1
- field 1.1
- field 1.2 with autospan
* - column 2
* - row 2
- field 2.1
- :rspan:`1` :cspan:`1` field 2.2 - 3.3
* .. _`last row`:
- column 3
- row 3
Rendered as:
@ -325,17 +325,17 @@ Rendered as:
- head col 3
- head col 4
* - column 1
* - row 1
- field 1.1
- field 1.2 with autospan
* - column 2
* - row 2
- field 2.1
- :rspan:`1` :cspan:`1` field 2.2 - 3.3
* .. _`last row`:
- column 3
- row 3
Cross-referencing
-----------------
@ -361,7 +361,7 @@ Figures & Images
If you want to add an image, you should use the ``kernel-figure`` and
``kernel-image`` directives. E.g. to insert a figure with a scalable
image format use SVG (:ref:`svg_image_example`)::
image format, use SVG (:ref:`svg_image_example`)::
.. kernel-figure:: svg_image.svg
:alt: simple SVG image
@ -375,7 +375,7 @@ image format use SVG (:ref:`svg_image_example`)::
SVG image example
The kernel figure (and image) directive support **DOT** formatted files, see
The kernel figure (and image) directive supports **DOT** formatted files, see
* DOT: http://graphviz.org/pdf/dotguide.pdf
* Graphviz: http://www.graphviz.org/content/dot-language
@ -394,7 +394,7 @@ A simple example (:ref:`hello_dot_file`)::
DOT's hello world example
Embed *render* markups (or languages) like Graphviz's **DOT** is provided by the
Embedded *render* markups (or languages) like Graphviz's **DOT** are provided by the
``kernel-render`` directives.::
.. kernel-render:: DOT
@ -406,7 +406,7 @@ Embed *render* markups (or languages) like Graphviz's **DOT** is provided by the
}
How this will be rendered depends on the installed tools. If Graphviz is
installed, you will see an vector image. If not the raw markup is inserted as
installed, you will see a vector image. If not, the raw markup is inserted as
*literal-block* (:ref:`hello_dot_render`).
.. _hello_dot_render:
@ -421,8 +421,8 @@ installed, you will see an vector image. If not the raw markup is inserted as
The *render* directive has all the options known from the *figure* directive,
plus option ``caption``. If ``caption`` has a value, a *figure* node is
inserted. If not, a *image* node is inserted. A ``caption`` is also needed, if
you want to refer it (:ref:`hello_svg_render`).
inserted. If not, an *image* node is inserted. A ``caption`` is also needed, if
you want to refer to it (:ref:`hello_svg_render`).
Embedded **SVG**::


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0-or-later
Kernel driver sbtsi_temp
==================
========================
Supported hardware:


@ -598,7 +598,7 @@ more details, with real examples.
explicitly added to $(targets).
Assignments to $(targets) are without $(obj)/ prefix. if_changed may be
used in conjunction with custom rules as defined in "3.9 Custom Rules".
used in conjunction with custom rules as defined in "3.11 Custom Rules".
Note: It is a typical mistake to forget the FORCE prerequisite.
Another common pitfall is that whitespace is sometimes significant; for


@ -118,11 +118,11 @@ spinlock, but you may block holding a mutex. If you can't lock a mutex,
your task will suspend itself, and be woken up when the mutex is
released. This means the CPU can do something else while you are
waiting. There are many cases when you simply can't sleep (see
`What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__),
`What Functions Are Safe To Call From Interrupts?`_),
and so have to use a spinlock instead.
Neither type of lock is recursive: see
`Deadlock: Simple and Advanced <#deadlock>`__.
`Deadlock: Simple and Advanced`_.
Locks and Uniprocessor Kernels
------------------------------
@ -179,7 +179,7 @@ perfect world).
Note that you can also use spin_lock_irq() or
spin_lock_irqsave() here, which stop hardware interrupts
as well: see `Hard IRQ Context <#hard-irq-context>`__.
as well: see `Hard IRQ Context`_.
This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_bh_disable()
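
A minimal sketch of the interrupt-safe variant mentioned here (``my_lock``
and the critical section are placeholders):

.. code-block:: c

   #include <linux/spinlock.h>

   static DEFINE_SPINLOCK(my_lock);

   static void frob_shared_state(void)
   {
           unsigned long flags;

           /* Blocks hardware interrupts on this CPU too; the previous
            * interrupt state is restored on unlock. */
           spin_lock_irqsave(&my_lock, flags);
           /* ... touch data shared with an interrupt handler ... */
           spin_unlock_irqrestore(&my_lock, flags);
   }
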
@ -230,7 +230,7 @@ The Same Softirq
~~~~~~~~~~~~~~~~
The same softirq can run on the other CPUs: you can use a per-CPU array
(see `Per-CPU Data <#per-cpu-data>`__) for better performance. If you're
(see `Per-CPU Data`_) for better performance. If you're
going so far as to use a softirq, you probably care about scalable
performance enough to justify the extra complexity.
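
A sketch of the per-CPU pattern referred to above (the counter is
illustrative):

.. code-block:: c

   #include <linux/percpu.h>

   static DEFINE_PER_CPU(unsigned long, my_counter);

   /* Each softirq instance touches only its own CPU's slot, so no
    * cross-CPU locking is needed. */
   static void count_event(void)
   {
           this_cpu_inc(my_counter);
   }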


@ -10,18 +10,177 @@ Introduction
The following is a random collection of documentation regarding
network devices.
struct net_device allocation rules
==================================
struct net_device lifetime rules
================================
Network device structures need to persist even after module is unloaded and
must be allocated with alloc_netdev_mqs() and friends.
If device has registered successfully, it will be freed on last use
by free_netdev(). This is required to handle the pathologic case cleanly
(example: rmmod mydriver </sys/class/net/myeth/mtu )
by free_netdev(). This is required to handle the pathological case cleanly
(example: ``rmmod mydriver </sys/class/net/myeth/mtu``)
alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
alloc_netdev_mqs() / alloc_netdev() reserve extra space for driver
private data which gets freed when the network device is freed. If
separately allocated data is attached to the network device
(netdev_priv(dev)) then it is up to the module exit handler to free that.
(netdev_priv()) then it is up to the module exit handler to free that.
There are two groups of APIs for registering struct net_device.
First group can be used in normal contexts where ``rtnl_lock`` is not already
held: register_netdev(), unregister_netdev().
Second group can be used when ``rtnl_lock`` is already held:
register_netdevice(), unregister_netdevice(), free_netdev().
Simple drivers
--------------
Most drivers (especially device drivers) handle lifetime of struct net_device
in context where ``rtnl_lock`` is not held (e.g. driver probe and remove paths).
In that case the struct net_device registration is done using
the register_netdev(), and unregister_netdev() functions:
.. code-block:: c
   int probe()
   {
       struct my_device_priv *priv;
       int err;

       dev = alloc_netdev_mqs(...);
       if (!dev)
           return -ENOMEM;

       priv = netdev_priv(dev);

       /* ... do all device setup before calling register_netdev() ...
        */

       err = register_netdev(dev);
       if (err)
           goto err_undo;

       /* net_device is visible to the user! */

   err_undo:
       /* ... undo the device setup ... */
       free_netdev(dev);
       return err;
   }

   void remove()
   {
       unregister_netdev(dev);
       free_netdev(dev);
   }
Note that after calling register_netdev() the device is visible in the system.
Users can open it and start sending / receiving traffic immediately,
or run any other callback, so all initialization must be done prior to
registration.
unregister_netdev() closes the device and waits for all users to be done
with it. The memory of struct net_device itself may still be referenced
by sysfs but all operations on that device will fail.
free_netdev() can be called after unregister_netdev() returns or when
register_netdev() failed.
Device management under RTNL
----------------------------
Registering struct net_device while in context which already holds
the ``rtnl_lock`` requires extra care. In those scenarios most drivers
will want to make use of struct net_device's ``needs_free_netdev``
and ``priv_destructor`` members for freeing of state.
Example flow of netdev handling under ``rtnl_lock``:
.. code-block:: c
   static void my_setup(struct net_device *dev)
   {
       dev->needs_free_netdev = true;
   }

   static void my_destructor(struct net_device *dev)
   {
       some_obj_destroy(priv->obj);
       some_uninit(priv);
   }

   int create_link()
   {
       struct my_device_priv *priv;
       int err;

       ASSERT_RTNL();

       dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup);
       if (!dev)
           return -ENOMEM;

       priv = netdev_priv(dev);

       /* Implicit constructor */
       err = some_init(priv);
       if (err)
           goto err_free_dev;

       priv->obj = some_obj_create();
       if (!priv->obj) {
           err = -ENOMEM;
           goto err_some_uninit;
       }

       /* End of constructor, set the destructor: */
       dev->priv_destructor = my_destructor;

       err = register_netdevice(dev);
       if (err)
           /* register_netdevice() calls destructor on failure */
           goto err_free_dev;

       /* If anything fails now unregister_netdevice() (or unregister_netdev())
        * will take care of calling my_destructor and free_netdev().
        */

       return 0;

   err_some_uninit:
       some_uninit(priv);
   err_free_dev:
       free_netdev(dev);
       return err;
   }
If struct net_device.priv_destructor is set, it will be called by the core
some time after unregister_netdevice(); it will also be called if
register_netdevice() fails. The callback may be invoked with or without
``rtnl_lock`` held.
There is no explicit constructor callback; the driver "constructs" the private
netdev state after allocating it and before registration.
Setting struct net_device.needs_free_netdev makes the core call free_netdev()
automatically after unregister_netdevice() when all references to the device
are gone. It only takes effect after a successful call to register_netdevice(),
so if register_netdevice() fails the driver is responsible for calling
free_netdev().
free_netdev() is safe to call on error paths right after unregister_netdevice()
or when register_netdevice() fails. Parts of netdev (de)registration process
happen after ``rtnl_lock`` is released, therefore in those cases free_netdev()
will defer some of the processing until ``rtnl_lock`` is released.
Devices spawned from struct rtnl_link_ops should never free the
struct net_device directly.
.ndo_init and .ndo_uninit
~~~~~~~~~~~~~~~~~~~~~~~~~
``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device
registration and de-registration, under ``rtnl_lock``. Drivers can use
those e.g. when parts of their init process need to run under ``rtnl_lock``.
``.ndo_init`` runs before the device is visible in the system; ``.ndo_uninit``
runs during de-registration, after the device is closed but while other
subsystems may still hold outstanding references to the netdevice.
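
A minimal sketch of wiring these callbacks up (the handler bodies are
placeholders):

.. code-block:: c

   #include <linux/netdevice.h>

   static int my_init(struct net_device *dev)
   {
           /* Runs under rtnl_lock, before the device is visible. */
           return 0;
   }

   static void my_uninit(struct net_device *dev)
   {
           /* Runs under rtnl_lock during de-registration. */
   }

   static const struct net_device_ops my_netdev_ops = {
           .ndo_init   = my_init,
           .ndo_uninit = my_uninit,
   };
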
MTU
===


@ -530,7 +530,7 @@ TLS device feature flags only control adding of new TLS connection
offloads, old connections will remain active after flags are cleared.
TLS encryption cannot be offloaded to devices without checksum calculation
offload. Hence, TLS TX device feature flag requires NETIF_F_HW_CSUM being set.
offload. Hence, TLS TX device feature flag requires TX csum offload being set.
Disabling the latter implies clearing the former. Disabling TX checksum offload
should not affect old connections, and drivers should make sure checksum
calculation does not break for them.
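
As a sketch of how a driver might mirror that dependency in its
``ndo_fix_features`` hook (the core enforces the same rule; the function
name is hypothetical):

.. code-block:: c

   #include <linux/netdevice.h>

   static netdev_features_t my_fix_features(struct net_device *dev,
                                            netdev_features_t features)
   {
           /* TLS TX offload cannot work without TX checksum offload. */
           if (!(features & NETIF_F_HW_CSUM))
                   features &= ~NETIF_F_HW_TLS_TX;
           return features;
   }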


@ -249,10 +249,8 @@ features; most of these are found in the "kernel hacking" submenu. Several
of these options should be turned on for any kernel used for development or
testing purposes. In particular, you should turn on:
- ENABLE_MUST_CHECK and FRAME_WARN to get an
extra set of warnings for problems like the use of deprecated interfaces
or ignoring an important return value from a function. The output
generated by these warnings can be verbose, but one need not worry about
- FRAME_WARN to get warnings for stack frames larger than a given amount.
The output generated can be verbose, but one need not worry about
warnings from other parts of the kernel.
- DEBUG_OBJECTS will add code to track the lifetime of various objects


@ -1501,7 +1501,7 @@ Module for Digigram miXart8 sound cards.
This module supports multiple cards.
Note: One miXart8 board will be represented as 4 alsa cards.
See MIXART.txt for details.
See Documentation/sound/cards/mixart.rst for details.
When the driver is compiled as a module and the hotplug firmware
is supported, the firmware data is loaded via hotplug automatically.


@ -71,7 +71,7 @@ core/oss
The codes for PCM and mixer OSS emulation modules are stored in this
directory. The rawmidi OSS emulation is included in the ALSA rawmidi
code since it's quite small. The sequencer code is stored in
``core/seq/oss`` directory (see `below <#core-seq-oss>`__).
``core/seq/oss`` directory (see `below <core/seq/oss_>`__).
core/seq
~~~~~~~~
@ -382,7 +382,7 @@ where ``enable[dev]`` is the module option.
Each time the ``probe`` callback is called, check the availability of
the device. If not available, simply increment the device index and
returns. dev will be incremented also later (`step 7
<#set-the-pci-driver-data-and-return-zero>`__).
<7) Set the PCI driver data and return zero._>`__).
2) Create a card instance
~~~~~~~~~~~~~~~~~~~~~~~~~
@ -450,10 +450,10 @@ field contains the information shown in ``/proc/asound/cards``.
5) Create other components, such as mixer, MIDI, etc.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here you define the basic components such as `PCM <#PCM-Interface>`__,
mixer (e.g. `AC97 <#API-for-AC97-Codec>`__), MIDI (e.g.
`MPU-401 <#MIDI-MPU401-UART-Interface>`__), and other interfaces.
Also, if you want a `proc file <#Proc-Interface>`__, define it here,
Here you define the basic components such as `PCM <PCM Interface_>`__,
mixer (e.g. `AC97 <API for AC97 Codec_>`__), MIDI (e.g.
`MPU-401 <MIDI (MPU401-UART) Interface_>`__), and other interfaces.
Also, if you want a `proc file <Proc Interface_>`__, define it here,
too.
6) Register the card instance.
@ -941,7 +941,7 @@ The allocation of an interrupt source is done like this:
chip->irq = pci->irq;
where :c:func:`snd_mychip_interrupt()` is the interrupt handler
defined `later <#pcm-interface-interrupt-handler>`__. Note that
defined `later <PCM Interrupt Handler_>`__. Note that
``chip->irq`` should be defined only when :c:func:`request_irq()`
succeeded.
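
The allocation described here follows the usual pattern; a condensed sketch
(error handling abbreviated):

.. code-block:: c

   if (request_irq(pci->irq, snd_mychip_interrupt,
                   IRQF_SHARED, KBUILD_MODNAME, chip)) {
           dev_err(&pci->dev, "cannot grab irq %d\n", pci->irq);
           return -EBUSY;
   }
   chip->irq = pci->irq;   /* set only after request_irq() succeeds */
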
@ -3104,7 +3104,7 @@ processing the output stream in the irq handler.
If the MPU-401 interface shares its interrupt with the other logical
devices on the card, set ``MPU401_INFO_IRQ_HOOK`` (see
`below <#MIDI-Interrupt-Handler>`__).
`below <MIDI Interrupt Handler_>`__).
Usually, the port address corresponds to the command port and port + 1
corresponds to the data port. If not, you may change the ``cport``


@ -392,9 +392,14 @@ This ioctl is obsolete and has been removed.
Errors:
===== =============================
======= ==============================================================
EINTR an unmasked signal is pending
===== =============================
ENOEXEC the vcpu hasn't been initialized or the guest tried to execute
instructions from device memory (arm64)
ENOSYS data abort outside memslots with no syndrome info and
KVM_CAP_ARM_NISV_TO_USER not enabled (arm64)
EPERM SVE feature set but not finalized (arm64)
======= ==============================================================
This ioctl is used to run a guest virtual cpu. While there are no
explicit parameters, there is an implicit parameter block that can be


@ -820,7 +820,6 @@ M: Netanel Belgazal <netanel@amazon.com>
M: Arthur Kiyanovski <akiyano@amazon.com>
R: Guy Tzalik <gtzalik@amazon.com>
R: Saeed Bishara <saeedb@amazon.com>
R: Zorik Machulsky <zorik@amazon.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
@ -2942,7 +2941,6 @@ S: Maintained
F: drivers/hwmon/asus_atk0110.c
ATLX ETHERNET DRIVERS
M: Jay Cliburn <jcliburn@gmail.com>
M: Chris Snook <chris.snook@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
@ -3895,7 +3893,7 @@ F: drivers/mtd/nand/raw/cadence-nand-controller.c
CADENCE USB3 DRD IP DRIVER
M: Peter Chen <peter.chen@nxp.com>
M: Pawel Laszczak <pawell@cadence.com>
M: Roger Quadros <rogerq@ti.com>
R: Roger Quadros <rogerq@kernel.org>
R: Aswath Govindraju <a-govindraju@ti.com>
L: linux-usb@vger.kernel.org
S: Maintained
@ -4937,9 +4935,8 @@ F: Documentation/scsi/dc395x.rst
F: drivers/scsi/dc395x.*
DCCP PROTOCOL
M: Gerrit Renker <gerrit@erg.abdn.ac.uk>
L: dccp@vger.kernel.org
S: Maintained
S: Orphan
W: http://www.linuxfoundation.org/collaborate/workgroups/networking/dccp
F: include/linux/dccp.h
F: include/linux/tfrc.h
@ -7378,7 +7375,6 @@ L: linux-hardening@vger.kernel.org
S: Maintained
F: Documentation/kbuild/gcc-plugins.rst
F: scripts/Makefile.gcc-plugins
F: scripts/gcc-plugin.sh
F: scripts/gcc-plugins/
GCOV BASED KERNEL PROFILING
@ -9255,7 +9251,7 @@ F: tools/testing/selftests/sgx/*
K: \bSGX_
INTERCONNECT API
M: Georgi Djakov <georgi.djakov@linaro.org>
M: Georgi Djakov <djakov@kernel.org>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/interconnect/
@ -9288,7 +9284,7 @@ F: drivers/net/ethernet/sgi/ioc3-eth.c
IOMAP FILESYSTEM LIBRARY
M: Christoph Hellwig <hch@infradead.org>
M: Darrick J. Wong <darrick.wong@oracle.com>
M: Darrick J. Wong <djwong@kernel.org>
M: linux-xfs@vger.kernel.org
M: linux-fsdevel@vger.kernel.org
L: linux-xfs@vger.kernel.org
@ -9342,7 +9338,6 @@ W: http://www.adaptec.com/
F: drivers/scsi/ips*
IPVS
M: Wensong Zhang <wensong@linux-vs.org>
M: Simon Horman <horms@verge.net.au>
M: Julian Anastasov <ja@ssi.bg>
L: netdev@vger.kernel.org
@ -9791,7 +9786,7 @@ F: tools/testing/selftests/kvm/s390x/
KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
M: Paolo Bonzini <pbonzini@redhat.com>
R: Sean Christopherson <sean.j.christopherson@intel.com>
R: Sean Christopherson <seanjc@google.com>
R: Vitaly Kuznetsov <vkuznets@redhat.com>
R: Wanpeng Li <wanpengli@tencent.com>
R: Jim Mattson <jmattson@google.com>
@ -10275,7 +10270,6 @@ S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev
F: Documentation/atomic_bitops.txt
F: Documentation/atomic_t.txt
F: Documentation/core-api/atomic_ops.rst
F: Documentation/core-api/refcount-vs-atomic.rst
F: Documentation/litmus-tests/
F: Documentation/memory-barriers.txt
@ -12433,7 +12427,6 @@ F: tools/testing/selftests/net/ipsec.c
NETWORKING [IPv4/IPv6]
M: "David S. Miller" <davem@davemloft.net>
M: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
M: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
L: netdev@vger.kernel.org
S: Maintained
@ -12490,7 +12483,6 @@ F: net/ipv6/tcp*.c
NETWORKING [TLS]
M: Boris Pismenny <borisp@nvidia.com>
M: Aviad Yehezkel <aviadye@nvidia.com>
M: John Fastabend <john.fastabend@gmail.com>
M: Daniel Borkmann <daniel@iogearbox.net>
M: Jakub Kicinski <kuba@kernel.org>
@ -12865,7 +12857,7 @@ F: include/misc/ocxl*
F: include/uapi/misc/ocxl.h
OMAP AUDIO SUPPORT
M: Peter Ujfalusi <peter.ujfalusi@ti.com>
M: Peter Ujfalusi <peter.ujfalusi@gmail.com>
M: Jarkko Nikula <jarkko.nikula@bitmer.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
L: linux-omap@vger.kernel.org
@ -16725,6 +16717,8 @@ M: Samuel Thibault <samuel.thibault@ens-lyon.org>
L: speakup@linux-speakup.org
S: Odd Fixes
W: http://www.linux-speakup.org/
W: https://github.com/linux-speakup/speakup
B: https://github.com/linux-speakup/speakup/issues
F: drivers/accessibility/speakup/
SPEAR CLOCK FRAMEWORK SUPPORT
@ -17556,7 +17550,7 @@ F: arch/xtensa/
F: drivers/irqchip/irq-xtensa-*
TEXAS INSTRUMENTS ASoC DRIVERS
M: Peter Ujfalusi <peter.ujfalusi@ti.com>
M: Peter Ujfalusi <peter.ujfalusi@gmail.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Maintained
F: sound/soc/ti/
@ -17568,6 +17562,19 @@ S: Supported
F: Documentation/devicetree/bindings/iio/dac/ti,dac7612.txt
F: drivers/iio/dac/ti-dac7612.c
TEXAS INSTRUMENTS DMA DRIVERS
M: Peter Ujfalusi <peter.ujfalusi@gmail.com>
L: dmaengine@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/dma/ti-dma-crossbar.txt
F: Documentation/devicetree/bindings/dma/ti-edma.txt
F: Documentation/devicetree/bindings/dma/ti/
F: drivers/dma/ti/
X: drivers/dma/ti/cppi41.c
F: include/linux/dma/k3-udma-glue.h
F: include/linux/dma/ti-cppi5.h
F: include/linux/dma/k3-psil.h
TEXAS INSTRUMENTS' SYSTEM CONTROL INTERFACE (TISCI) PROTOCOL DRIVER
M: Nishanth Menon <nm@ti.com>
M: Tero Kristo <t-kristo@ti.com>
@ -17853,7 +17860,7 @@ F: Documentation/devicetree/bindings/net/nfc/trf7970a.txt
F: drivers/nfc/trf7970a.c
TI TWL4030 SERIES SOC CODEC DRIVER
M: Peter Ujfalusi <peter.ujfalusi@ti.com>
M: Peter Ujfalusi <peter.ujfalusi@gmail.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Maintained
F: sound/soc/codecs/twl4030*
@ -19073,7 +19080,6 @@ K: regulator_get_optional
VRF
M: David Ahern <dsahern@kernel.org>
M: Shrijeet Mukherjee <shrijeet@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/vrf.rst
@ -19520,7 +19526,7 @@ F: arch/x86/xen/*swiotlb*
F: drivers/xen/*swiotlb*
XFS FILESYSTEM
M: Darrick J. Wong <darrick.wong@oracle.com>
M: Darrick J. Wong <djwong@kernel.org>
M: linux-xfs@vger.kernel.org
L: linux-xfs@vger.kernel.org
S: Supported


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 11
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Kleptomaniac Octopus
# *DOCUMENTATION*


@ -1105,6 +1105,12 @@ config HAVE_ARCH_PFN_VALID
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
bool
config ARCH_SPLIT_ARG64
bool
help
If a 32-bit architecture requires 64-bit arguments to be split into
pairs of 32-bit arguments, select this option.
source "kernel/gcov/Kconfig"
source "scripts/gcc-plugins/Kconfig"


@ -10,6 +10,7 @@
#ifndef __ASSEMBLY__
#define clear_page(paddr) memset((paddr), 0, PAGE_SIZE)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#define copy_page(to, from) memcpy((to), (from), PAGE_SIZE)
struct vm_area_struct;


@ -307,7 +307,7 @@ resume_user_mode_begin:
mov r0, sp ; pt_regs for arg to do_signal()/do_notify_resume()
GET_CURR_THR_INFO_FLAGS r9
and.f 0, r9, TIF_SIGPENDING|TIF_NOTIFY_SIGNAL
and.f 0, r9, _TIF_SIGPENDING|_TIF_NOTIFY_SIGNAL
bz .Lchk_notify_resume
; Normal Trap/IRQ entry only saves Scratch (caller-saved) regs


@ -7,6 +7,7 @@ menuconfig ARC_SOC_HSDK
depends on ISA_ARCV2
select ARC_HAS_ACCL_REGS
select ARC_IRQ_NO_AUTOSAVE
select ARC_FPU_SAVE_RESTORE
select CLK_HSDK
select RESET_CONTROLLER
select RESET_HSDK


@ -494,3 +494,11 @@
clock-names = "sysclk";
};
};
&aes1_target {
status = "disabled";
};
&aes2_target {
status = "disabled";
};


@ -45,18 +45,21 @@
emac: gem@30000 {
compatible = "cadence,gem";
reg = <0x30000 0x10000>;
interrupt-parent = <&vic0>;
interrupts = <31>;
};
dmac1: dmac@40000 {
compatible = "snps,dw-dmac";
reg = <0x40000 0x10000>;
interrupt-parent = <&vic0>;
interrupts = <25>;
};
dmac2: dmac@50000 {
compatible = "snps,dw-dmac";
reg = <0x50000 0x10000>;
interrupt-parent = <&vic0>;
interrupts = <26>;
};
@ -233,6 +236,7 @@
axi2pico@c0000000 {
compatible = "picochip,axi2pico-pc3x2";
reg = <0xc0000000 0x10000>;
interrupt-parent = <&vic0>;
interrupts = <13 14 15 16 17 18 19 20 21>;
};
};


@ -329,6 +329,7 @@
panel@0 {
compatible = "samsung,s6e63m0";
reg = <0>;
max-brightness = <15>;
vdd3-supply = <&panel_reg_3v0>;
vci-supply = <&panel_reg_1v8>;
reset-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>;


@ -279,6 +279,7 @@ CONFIG_SERIAL_OMAP_CONSOLE=y
CONFIG_SERIAL_DEV_BUS=y
CONFIG_I2C_CHARDEV=y
CONFIG_SPI=y
CONFIG_SPI_GPIO=m
CONFIG_SPI_OMAP24XX=y
CONFIG_SPI_TI_QSPI=m
CONFIG_HSI=m
@ -296,7 +297,6 @@ CONFIG_GPIO_TWL4030=y
CONFIG_W1=m
CONFIG_HDQ_MASTER_OMAP=m
CONFIG_W1_SLAVE_DS250X=m
CONFIG_POWER_AVS=y
CONFIG_POWER_RESET=y
CONFIG_POWER_RESET_GPIO=y
CONFIG_BATTERY_BQ27XXX=m


@ -230,10 +230,12 @@ static int _omap_device_notifier_call(struct notifier_block *nb,
break;
case BUS_NOTIFY_BIND_DRIVER:
od = to_omap_device(pdev);
if (od && (od->_state == OMAP_DEVICE_STATE_ENABLED) &&
pm_runtime_status_suspended(dev)) {
if (od) {
od->_driver_status = BUS_NOTIFY_BIND_DRIVER;
pm_runtime_set_active(dev);
if (od->_state == OMAP_DEVICE_STATE_ENABLED &&
pm_runtime_status_suspended(dev)) {
pm_runtime_set_active(dev);
}
}
break;
case BUS_NOTIFY_ADD_DEVICE:


@ -71,7 +71,7 @@ static struct omap_voltdm_pmic omap_cpcap_iva = {
.vp_vstepmin = OMAP4_VP_VSTEPMIN_VSTEPMIN,
.vp_vstepmax = OMAP4_VP_VSTEPMAX_VSTEPMAX,
.vddmin = 900000,
.vddmax = 1350000,
.vddmax = 1375000,
.vp_timeout_us = OMAP4_VP_VLIMITTO_TIMEOUT_US,
.i2c_slave_addr = 0x44,
.volt_reg_addr = 0x0,


@ -10,7 +10,7 @@
#
# Copyright (C) 1995-2001 by Russell King
LDFLAGS_vmlinux :=--no-undefined -X -z norelro
LDFLAGS_vmlinux :=--no-undefined -X
ifeq ($(CONFIG_RELOCATABLE), y)
# Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
@ -115,16 +115,20 @@ KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__
# Prefer the baremetal ELF build target, but not all toolchains include
# it so fall back to the standard linux version if needed.
KBUILD_LDFLAGS += -EB $(call ld-option, -maarch64elfb, -maarch64linuxb)
KBUILD_LDFLAGS += -EB $(call ld-option, -maarch64elfb, -maarch64linuxb -z norelro)
UTS_MACHINE := aarch64_be
else
KBUILD_CPPFLAGS += -mlittle-endian
CHECKFLAGS += -D__AARCH64EL__
# Same as above, prefer ELF but fall back to linux target if needed.
KBUILD_LDFLAGS += -EL $(call ld-option, -maarch64elf, -maarch64linux)
KBUILD_LDFLAGS += -EL $(call ld-option, -maarch64elf, -maarch64linux -z norelro)
UTS_MACHINE := aarch64
endif
ifeq ($(CONFIG_LD_IS_LLD), y)
KBUILD_LDFLAGS += -z norelro
endif
CHECKFLAGS += -D__aarch64__
ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y)


@ -127,7 +127,7 @@
compatible = "snps,dw-apb-gpio-port";
gpio-controller;
#gpio-cells = <2>;
snps,nr-gpios = <32>;
ngpios = <32>;
reg = <0>;
interrupt-controller;
#interrupt-cells = <2>;
@ -145,7 +145,7 @@
compatible = "snps,dw-apb-gpio-port";
gpio-controller;
#gpio-cells = <2>;
snps,nr-gpios = <32>;
ngpios = <32>;
reg = <0>;
interrupt-controller;
#interrupt-cells = <2>;
@ -163,7 +163,7 @@
compatible = "snps,dw-apb-gpio-port";
gpio-controller;
#gpio-cells = <2>;
snps,nr-gpios = <8>;
ngpios = <8>;
reg = <0>;
interrupt-controller;
#interrupt-cells = <2>;


@ -17,6 +17,7 @@
#include <linux/jump_label.h>
#include <linux/kvm_types.h>
#include <linux/percpu.h>
#include <linux/psci.h>
#include <asm/arch_gicv3.h>
#include <asm/barrier.h>
#include <asm/cpufeature.h>
@ -240,6 +241,28 @@ struct kvm_host_data {
struct kvm_pmu_events pmu_events;
};
struct kvm_host_psci_config {
/* PSCI version used by host. */
u32 version;
/* Function IDs used by host if version is v0.1. */
struct psci_0_1_function_ids function_ids_0_1;
bool psci_0_1_cpu_suspend_implemented;
bool psci_0_1_cpu_on_implemented;
bool psci_0_1_cpu_off_implemented;
bool psci_0_1_migrate_implemented;
};
extern struct kvm_host_psci_config kvm_nvhe_sym(kvm_host_psci_config);
#define kvm_host_psci_config CHOOSE_NVHE_SYM(kvm_host_psci_config)
extern s64 kvm_nvhe_sym(hyp_physvirt_offset);
#define hyp_physvirt_offset CHOOSE_NVHE_SYM(hyp_physvirt_offset)
extern u64 kvm_nvhe_sym(hyp_cpu_logical_map)[NR_CPUS];
#define hyp_cpu_logical_map CHOOSE_NVHE_SYM(hyp_cpu_logical_map)
struct vcpu_reset_state {
unsigned long pc;
unsigned long r0;


@ -94,7 +94,8 @@
#endif /* CONFIG_ARM64_FORCE_52BIT */
extern phys_addr_t arm64_dma_phys_limit;
#define ARCH_LOW_ADDRESS_LIMIT (arm64_dma_phys_limit - 1)
extern phys_addr_t arm64_dma32_phys_limit;
#define ARCH_LOW_ADDRESS_LIMIT ((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1)
struct debug_info {
#ifdef CONFIG_HAVE_HW_BREAKPOINT


@ -176,10 +176,21 @@ static inline void __uaccess_enable_hw_pan(void)
* The Tag check override (TCO) bit disables temporarily the tag checking
* preventing the issue.
*/
static inline void uaccess_disable_privileged(void)
static inline void __uaccess_disable_tco(void)
{
asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(0),
ARM64_MTE, CONFIG_KASAN_HW_TAGS));
}
static inline void __uaccess_enable_tco(void)
{
asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(1),
ARM64_MTE, CONFIG_KASAN_HW_TAGS));
}
static inline void uaccess_disable_privileged(void)
{
__uaccess_disable_tco();
if (uaccess_ttbr0_disable())
return;
@ -189,8 +200,7 @@ static inline void uaccess_disable_privileged(void)
static inline void uaccess_enable_privileged(void)
{
asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(1),
ARM64_MTE, CONFIG_KASAN_HW_TAGS));
__uaccess_enable_tco();
if (uaccess_ttbr0_enable())
return;


@ -2568,7 +2568,7 @@ static void verify_hyp_capabilities(void)
int parange, ipa_max;
unsigned int safe_vmid_bits, vmid_bits;
if (!IS_ENABLED(CONFIG_KVM) || !IS_ENABLED(CONFIG_KVM_ARM_HOST))
if (!IS_ENABLED(CONFIG_KVM))
return;
safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);


@ -182,7 +182,6 @@ alternative_else_nop_endif
mrs_s \tmp2, SYS_GCR_EL1
bfi \tmp2, \tmp, #0, #16
msr_s SYS_GCR_EL1, \tmp2
isb
#endif
.endm
@ -194,6 +193,7 @@ alternative_else_nop_endif
ldr_l \tmp, gcr_kernel_excl
mte_set_gcr \tmp, \tmp2
isb
1:
#endif
.endm


@ -434,7 +434,7 @@ static void __init hyp_mode_check(void)
"CPU: CPUs started in inconsistent modes");
else
pr_info("CPU: All CPU(s) started at EL1\n");
if (IS_ENABLED(CONFIG_KVM))
if (IS_ENABLED(CONFIG_KVM) && !is_kernel_in_hyp_mode())
kvm_compute_layout();
}
@ -807,7 +807,6 @@ int arch_show_interrupts(struct seq_file *p, int prec)
unsigned int cpu, i;
for (i = 0; i < NR_IPI; i++) {
unsigned int irq = irq_desc_get_irq(ipi_desc[i]);
seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
prec >= 4 ? " " : "");
for_each_online_cpu(cpu)


@ -42,7 +42,6 @@
#include <asm/smp.h>
#include <asm/stack_pointer.h>
#include <asm/stacktrace.h>
#include <asm/exception.h>
#include <asm/system_misc.h>
#include <asm/sysreg.h>


@ -24,8 +24,7 @@ btildflags-$(CONFIG_ARM64_BTI_KERNEL) += -z force-bti
# routines, as x86 does (see 6f121e548f83 ("x86, vdso: Reimplement vdso.so
# preparation in build-time C")).
ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \
-Bsymbolic $(call ld-option, --no-eh-frame-hdr) --build-id=sha1 -n \
$(btildflags-y) -T
-Bsymbolic --build-id=sha1 -n $(btildflags-y) -T
ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO


@ -40,9 +40,6 @@ SECTIONS
PROVIDE (_etext = .);
PROVIDE (etext = .);
.eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
.eh_frame : { KEEP (*(.eh_frame)) } :text
.dynamic : { *(.dynamic) } :text :dynamic
.rodata : { *(.rodata*) } :text
@ -54,6 +51,7 @@ SECTIONS
*(.note.GNU-stack)
*(.data .data.* .gnu.linkonce.d.* .sdata*)
*(.bss .sbss .dynbss .dynsbss)
*(.eh_frame .eh_frame_hdr)
}
}
@ -66,7 +64,6 @@ PHDRS
text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
note PT_NOTE FLAGS(4); /* PF_R */
eh_frame_hdr PT_GNU_EH_FRAME;
}
/*


@ -49,14 +49,6 @@ if KVM
source "virt/kvm/Kconfig"
config KVM_ARM_PMU
bool "Virtual Performance Monitoring Unit (PMU) support"
depends on HW_PERF_EVENTS
default y
help
Adds support for a virtual Performance Monitoring Unit (PMU) in
virtual machines.
endif # KVM
endif # VIRTUALIZATION


@ -24,4 +24,4 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
vgic/vgic-its.o vgic/vgic-debug.o
kvm-$(CONFIG_KVM_ARM_PMU) += pmu-emul.o
kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o


@ -1129,9 +1129,10 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
if (!irqchip_in_kernel(vcpu->kvm))
goto no_vgic;
if (!vgic_initialized(vcpu->kvm))
return -ENODEV;
/*
* At this stage, we have the guarantee that the vgic is both
* available and initialized.
*/
if (!timer_irqs_are_valid(vcpu)) {
kvm_debug("incorrectly configured timer irqs\n");
return -EINVAL;


@ -65,10 +65,6 @@ static bool vgic_present;
static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
extern u64 kvm_nvhe_sym(__cpu_logical_map)[NR_CPUS];
extern u32 kvm_nvhe_sym(kvm_host_psci_version);
extern struct psci_0_1_function_ids kvm_nvhe_sym(kvm_host_psci_0_1_function_ids);
int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
@ -584,11 +580,9 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
* Map the VGIC hardware resources before running a vcpu the
* first time on this VM.
*/
if (unlikely(!vgic_ready(kvm))) {
ret = kvm_vgic_map_resources(kvm);
if (ret)
return ret;
}
ret = kvm_vgic_map_resources(kvm);
if (ret)
return ret;
} else {
/*
* Tell the rest of the code that there are userspace irqchip
@ -1574,12 +1568,12 @@ static struct notifier_block hyp_init_cpu_pm_nb = {
.notifier_call = hyp_init_cpu_pm_notifier,
};
static void __init hyp_cpu_pm_init(void)
static void hyp_cpu_pm_init(void)
{
if (!is_protected_kvm_enabled())
cpu_pm_register_notifier(&hyp_init_cpu_pm_nb);
}
static void __init hyp_cpu_pm_exit(void)
static void hyp_cpu_pm_exit(void)
{
if (!is_protected_kvm_enabled())
cpu_pm_unregister_notifier(&hyp_init_cpu_pm_nb);
@ -1604,9 +1598,12 @@ static void init_cpu_logical_map(void)
* allow any other CPUs from the `possible` set to boot.
*/
for_each_online_cpu(cpu)
kvm_nvhe_sym(__cpu_logical_map)[cpu] = cpu_logical_map(cpu);
hyp_cpu_logical_map[cpu] = cpu_logical_map(cpu);
}
#define init_psci_0_1_impl_state(config, what) \
config.psci_0_1_ ## what ## _implemented = psci_ops.what
static bool init_psci_relay(void)
{
/*
@ -1618,8 +1615,15 @@ static bool init_psci_relay(void)
return false;
}
kvm_nvhe_sym(kvm_host_psci_version) = psci_ops.get_version();
kvm_nvhe_sym(kvm_host_psci_0_1_function_ids) = get_psci_0_1_function_ids();
kvm_host_psci_config.version = psci_ops.get_version();
if (kvm_host_psci_config.version == PSCI_VERSION(0, 1)) {
kvm_host_psci_config.function_ids_0_1 = get_psci_0_1_function_ids();
init_psci_0_1_impl_state(kvm_host_psci_config, cpu_suspend);
init_psci_0_1_impl_state(kvm_host_psci_config, cpu_on);
init_psci_0_1_impl_state(kvm_host_psci_config, cpu_off);
init_psci_0_1_impl_state(kvm_host_psci_config, migrate);
}
return true;
}


@ -59,4 +59,13 @@ static inline void __adjust_pc(struct kvm_vcpu *vcpu)
}
}
/*
* Skip an instruction while host sysregs are live.
* Assumes host is always 64-bit.
*/
static inline void kvm_skip_host_instr(void)
{
write_sysreg_el2(read_sysreg_el2(SYS_ELR) + 4, SYS_ELR);
}
#endif


@ -157,11 +157,6 @@ static void default_host_smc_handler(struct kvm_cpu_context *host_ctxt)
__kvm_hyp_host_forward_smc(host_ctxt);
}
static void skip_host_instruction(void)
{
write_sysreg_el2(read_sysreg_el2(SYS_ELR) + 4, SYS_ELR);
}
static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
{
bool handled;
@ -170,11 +165,8 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
if (!handled)
default_host_smc_handler(host_ctxt);
/*
* Unlike HVC, the return address of an SMC is the instruction's PC.
* Move the return address past the instruction.
*/
skip_host_instruction();
/* SMC was trapped, move ELR past the current PC. */
kvm_skip_host_instr();
}
void handle_trap(struct kvm_cpu_context *host_ctxt)


@ -14,14 +14,14 @@
* Other CPUs should not be allowed to boot because their features were
* not checked against the finalized system capabilities.
*/
u64 __ro_after_init __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
u64 __ro_after_init hyp_cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
u64 cpu_logical_map(unsigned int cpu)
{
if (cpu >= ARRAY_SIZE(__cpu_logical_map))
if (cpu >= ARRAY_SIZE(hyp_cpu_logical_map))
hyp_panic();
return __cpu_logical_map[cpu];
return hyp_cpu_logical_map[cpu];
}
unsigned long __hyp_per_cpu_offset(unsigned int cpu)


@ -7,11 +7,8 @@
#include <asm/kvm_asm.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
#include <kvm/arm_hypercalls.h>
#include <linux/arm-smccc.h>
#include <linux/kvm_host.h>
#include <linux/psci.h>
#include <kvm/arm_psci.h>
#include <uapi/linux/psci.h>
#include <nvhe/trap_handler.h>
@ -22,9 +19,8 @@ void kvm_hyp_cpu_resume(unsigned long r0);
void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
/* Config options set by the host. */
__ro_after_init u32 kvm_host_psci_version;
__ro_after_init struct psci_0_1_function_ids kvm_host_psci_0_1_function_ids;
__ro_after_init s64 hyp_physvirt_offset;
struct kvm_host_psci_config __ro_after_init kvm_host_psci_config;
s64 __ro_after_init hyp_physvirt_offset;
#define __hyp_pa(x) ((phys_addr_t)((x)) + hyp_physvirt_offset)
@ -47,19 +43,16 @@ struct psci_boot_args {
static DEFINE_PER_CPU(struct psci_boot_args, cpu_on_args) = PSCI_BOOT_ARGS_INIT;
static DEFINE_PER_CPU(struct psci_boot_args, suspend_args) = PSCI_BOOT_ARGS_INIT;
static u64 get_psci_func_id(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(u64, func_id, host_ctxt, 0);
return func_id;
}
#define is_psci_0_1(what, func_id) \
(kvm_host_psci_config.psci_0_1_ ## what ## _implemented && \
(func_id) == kvm_host_psci_config.function_ids_0_1.what)
static bool is_psci_0_1_call(u64 func_id)
{
return (func_id == kvm_host_psci_0_1_function_ids.cpu_suspend) ||
(func_id == kvm_host_psci_0_1_function_ids.cpu_on) ||
(func_id == kvm_host_psci_0_1_function_ids.cpu_off) ||
(func_id == kvm_host_psci_0_1_function_ids.migrate);
return (is_psci_0_1(cpu_suspend, func_id) ||
is_psci_0_1(cpu_on, func_id) ||
is_psci_0_1(cpu_off, func_id) ||
is_psci_0_1(migrate, func_id));
}
static bool is_psci_0_2_call(u64 func_id)
@ -69,16 +62,6 @@ static bool is_psci_0_2_call(u64 func_id)
(PSCI_0_2_FN64(0) <= func_id && func_id <= PSCI_0_2_FN64(31));
}
static bool is_psci_call(u64 func_id)
{
switch (kvm_host_psci_version) {
case PSCI_VERSION(0, 1):
return is_psci_0_1_call(func_id);
default:
return is_psci_0_2_call(func_id);
}
}
static unsigned long psci_call(unsigned long fn, unsigned long arg0,
unsigned long arg1, unsigned long arg2)
{
@ -248,15 +231,14 @@ asmlinkage void __noreturn kvm_host_psci_cpu_entry(bool is_cpu_on)
static unsigned long psci_0_1_handler(u64 func_id, struct kvm_cpu_context *host_ctxt)
{
if ((func_id == kvm_host_psci_0_1_function_ids.cpu_off) ||
(func_id == kvm_host_psci_0_1_function_ids.migrate))
if (is_psci_0_1(cpu_off, func_id) || is_psci_0_1(migrate, func_id))
return psci_forward(host_ctxt);
else if (func_id == kvm_host_psci_0_1_function_ids.cpu_on)
if (is_psci_0_1(cpu_on, func_id))
return psci_cpu_on(func_id, host_ctxt);
else if (func_id == kvm_host_psci_0_1_function_ids.cpu_suspend)
if (is_psci_0_1(cpu_suspend, func_id))
return psci_cpu_suspend(func_id, host_ctxt);
else
return PSCI_RET_NOT_SUPPORTED;
return PSCI_RET_NOT_SUPPORTED;
}
static unsigned long psci_0_2_handler(u64 func_id, struct kvm_cpu_context *host_ctxt)
@ -298,20 +280,23 @@ static unsigned long psci_1_0_handler(u64 func_id, struct kvm_cpu_context *host_
bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt)
{
u64 func_id = get_psci_func_id(host_ctxt);
DECLARE_REG(u64, func_id, host_ctxt, 0);
unsigned long ret;
if (!is_psci_call(func_id))
return false;
switch (kvm_host_psci_version) {
switch (kvm_host_psci_config.version) {
case PSCI_VERSION(0, 1):
if (!is_psci_0_1_call(func_id))
return false;
ret = psci_0_1_handler(func_id, host_ctxt);
break;
case PSCI_VERSION(0, 2):
if (!is_psci_0_2_call(func_id))
return false;
ret = psci_0_2_handler(func_id, host_ctxt);
break;
default:
if (!is_psci_0_2_call(func_id))
return false;
ret = psci_1_0_handler(func_id, host_ctxt);
break;
}


@ -850,8 +850,6 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
return -EINVAL;
}
kvm_pmu_vcpu_reset(vcpu);
return 0;
}


@ -594,6 +594,10 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 pmcr, val;
/* No PMU available, PMCR_EL0 may UNDEF... */
if (!kvm_arm_support_pmu_v3())
return;
pmcr = read_sysreg(pmcr_el0);
/*
* Writable bits of PMCR_EL0 (ARMV8_PMU_PMCR_MASK) are reset to UNKNOWN
@ -919,7 +923,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
#define reg_to_encoding(x) \
sys_reg((u32)(x)->Op0, (u32)(x)->Op1, \
(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2);
(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
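
Dropping the trailing semicolon from reg_to_encoding() matters because the macro is used as an expression; a semicolon baked into a macro body breaks any such use. A minimal illustration of the pitfall:

#define BAD_DOUBLE(x)  ((x) + (x));	/* trailing ';' in the body */
#define GOOD_DOUBLE(x) ((x) + (x))

int demo(int v)
{
	/* return BAD_DOUBLE(v) + 1; expands to ((v) + (v)); + 1; -- a
	 * syntax error -- and "case BAD_DOUBLE(0):" would break too. */
	return GOOD_DOUBLE(v) + 1;
}
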
/* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
#define DBG_BCR_BVR_WCR_WVR_EL1(n) \


@ -34,17 +34,16 @@ static u64 __early_kern_hyp_va(u64 addr)
}
/*
* Store a hyp VA <-> PA offset into a hyp-owned variable.
* Store a hyp VA <-> PA offset into an EL2-owned variable.
*/
static void init_hyp_physvirt_offset(void)
{
extern s64 kvm_nvhe_sym(hyp_physvirt_offset);
u64 kern_va, hyp_va;
/* Compute the offset from the hyp VA and PA of a random symbol. */
kern_va = (u64)kvm_ksym_ref(__hyp_text_start);
kern_va = (u64)lm_alias(__hyp_text_start);
hyp_va = __early_kern_hyp_va(kern_va);
CHOOSE_NVHE_SYM(hyp_physvirt_offset) = (s64)__pa(kern_va) - (s64)hyp_va;
hyp_physvirt_offset = (s64)__pa(kern_va) - (s64)hyp_va;
}
/*


@ -419,7 +419,8 @@ int vgic_lazy_init(struct kvm *kvm)
* Map the MMIO regions depending on the VGIC model exposed to the guest,
* called on the first VCPU run.
* Also map the virtual CPU interface into the VM.
* v2/v3 derivatives call vgic_init if not already done.
* v2 calls vgic_init() if not already done.
* v3 and derivatives return an error if the VGIC is not initialized.
* vgic_ready() returns true if this function has succeeded.
* @kvm: kvm struct pointer
*/
@ -428,7 +429,13 @@ int kvm_vgic_map_resources(struct kvm *kvm)
struct vgic_dist *dist = &kvm->arch.vgic;
int ret = 0;
if (likely(vgic_ready(kvm)))
return 0;
mutex_lock(&kvm->lock);
if (vgic_ready(kvm))
goto out;
if (!irqchip_in_kernel(kvm))
goto out;
@ -439,6 +446,8 @@ int kvm_vgic_map_resources(struct kvm *kvm)
if (ret)
__kvm_vgic_destroy(kvm);
else
dist->ready = true;
out:
mutex_unlock(&kvm->lock);
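
The unlocked vgic_ready() fast path plus the locked re-check added here is the classic check-lock-recheck shape. A standalone sketch of the same structure, with a pthread mutex standing in for kvm->lock and illustrative names throughout:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool ready;

int map_resources(void)
{
	if (ready)			/* unlocked fast path */
		return 0;

	pthread_mutex_lock(&lock);
	if (ready) {			/* re-check: raced with another caller */
		pthread_mutex_unlock(&lock);
		return 0;
	}

	/* ... slow path: register regions, then publish ... */
	ready = true;
	pthread_mutex_unlock(&lock);
	return 0;
}
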


@ -306,20 +306,15 @@ int vgic_v2_map_resources(struct kvm *kvm)
struct vgic_dist *dist = &kvm->arch.vgic;
int ret = 0;
if (vgic_ready(kvm))
goto out;
if (IS_VGIC_ADDR_UNDEF(dist->vgic_dist_base) ||
IS_VGIC_ADDR_UNDEF(dist->vgic_cpu_base)) {
kvm_err("Need to set vgic cpu and dist addresses first\n");
ret = -ENXIO;
goto out;
return -ENXIO;
}
if (!vgic_v2_check_base(dist->vgic_dist_base, dist->vgic_cpu_base)) {
kvm_err("VGIC CPU and dist frames overlap\n");
ret = -EINVAL;
goto out;
return -EINVAL;
}
/*
@ -329,13 +324,13 @@ int vgic_v2_map_resources(struct kvm *kvm)
ret = vgic_init(kvm);
if (ret) {
kvm_err("Unable to initialize VGIC dynamic data structures\n");
goto out;
return ret;
}
ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V2);
if (ret) {
kvm_err("Unable to register VGIC MMIO regions\n");
goto out;
return ret;
}
if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
@ -344,14 +339,11 @@ int vgic_v2_map_resources(struct kvm *kvm)
KVM_VGIC_V2_CPU_SIZE, true);
if (ret) {
kvm_err("Unable to remap VGIC CPU to VCPU\n");
goto out;
return ret;
}
}
dist->ready = true;
out:
return ret;
return 0;
}
DEFINE_STATIC_KEY_FALSE(vgic_v2_cpuif_trap);


@ -500,29 +500,23 @@ int vgic_v3_map_resources(struct kvm *kvm)
int ret = 0;
int c;
if (vgic_ready(kvm))
goto out;
kvm_for_each_vcpu(c, vcpu, kvm) {
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
if (IS_VGIC_ADDR_UNDEF(vgic_cpu->rd_iodev.base_addr)) {
kvm_debug("vcpu %d redistributor base not set\n", c);
ret = -ENXIO;
goto out;
return -ENXIO;
}
}
if (IS_VGIC_ADDR_UNDEF(dist->vgic_dist_base)) {
kvm_err("Need to set vgic distributor addresses first\n");
ret = -ENXIO;
goto out;
return -ENXIO;
}
if (!vgic_v3_check_base(kvm)) {
kvm_err("VGIC redist and dist frames overlap\n");
ret = -EINVAL;
goto out;
return -EINVAL;
}
/*
@ -530,22 +524,19 @@ int vgic_v3_map_resources(struct kvm *kvm)
* the VGIC before we need to use it.
*/
if (!vgic_initialized(kvm)) {
ret = -EBUSY;
goto out;
return -EBUSY;
}
ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V3);
if (ret) {
kvm_err("Unable to register VGICv3 dist MMIO regions\n");
goto out;
return ret;
}
if (kvm_vgic_global_state.has_gicv4_1)
vgic_v4_configure_vsgis(kvm);
dist->ready = true;
out:
return ret;
return 0;
}
DEFINE_STATIC_KEY_FALSE(vgic_v3_cpuif_trap);


@ -59,7 +59,7 @@ EXPORT_SYMBOL(memstart_addr);
* bit addressable memory area.
*/
phys_addr_t arm64_dma_phys_limit __ro_after_init;
static phys_addr_t arm64_dma32_phys_limit __ro_after_init;
phys_addr_t arm64_dma32_phys_limit __ro_after_init;
#ifdef CONFIG_KEXEC_CORE
/*


@ -46,7 +46,7 @@
#endif
#ifdef CONFIG_KASAN_HW_TAGS
#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1
#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
#else
#define TCR_KASAN_HW_FLAGS 0
#endif


@ -260,10 +260,19 @@ __secondary_hold_acknowledge:
MachineCheck:
EXCEPTION_PROLOG_0
#ifdef CONFIG_PPC_CHRP
#ifdef CONFIG_VMAP_STACK
mtspr SPRN_SPRG_SCRATCH2,r1
mfspr r1, SPRN_SPRG_THREAD
lwz r1, RTAS_SP(r1)
cmpwi cr1, r1, 0
bne cr1, 7f
mfspr r1, SPRN_SPRG_SCRATCH2
#else
mfspr r11, SPRN_SPRG_THREAD
lwz r11, RTAS_SP(r11)
cmpwi cr1, r11, 0
bne cr1, 7f
#endif
#endif /* CONFIG_PPC_CHRP */
EXCEPTION_PROLOG_1 for_rtas=1
7: EXCEPTION_PROLOG_2


@ -85,7 +85,7 @@ SECTIONS
ALIGN_FUNCTION();
#endif
/* careful! __ftr_alt_* sections need to be close to .text */
*(.text.hot TEXT_MAIN .text.fixup .text.unlikely .fixup __ftr_alt_* .ref.text);
*(.text.hot .text.hot.* TEXT_MAIN .text.fixup .text.unlikely .text.unlikely.* .fixup __ftr_alt_* .ref.text);
#ifdef CONFIG_PPC64
*(.tramp.ftrace.text);
#endif


@ -19,6 +19,7 @@ config X86_32
select KMAP_LOCAL
select MODULES_USE_ELF_REL
select OLD_SIGACTION
select ARCH_SPLIT_ARG64
config X86_64
def_bool y


@ -16,6 +16,7 @@
#include <asm/hyperv-tlfs.h>
#include <asm/mshyperv.h>
#include <asm/idtentry.h>
#include <linux/kexec.h>
#include <linux/version.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
@ -26,6 +27,8 @@
#include <linux/syscore_ops.h>
#include <clocksource/hyperv_timer.h>
int hyperv_init_cpuhp;
void *hv_hypercall_pg;
EXPORT_SYMBOL_GPL(hv_hypercall_pg);
@ -401,6 +404,7 @@ void __init hyperv_init(void)
register_syscore_ops(&hv_syscore_ops);
hyperv_init_cpuhp = cpuhp;
return;
remove_cpuhp_state:


@ -66,11 +66,17 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
if (!hv_hypercall_pg)
goto do_native;
if (cpumask_empty(cpus))
return;
local_irq_save(flags);
/*
* Only check the mask _after_ interrupt has been disabled to avoid the
* mask changing under our feet.
*/
if (cpumask_empty(cpus)) {
local_irq_restore(flags);
return;
}
flush_pcpu = (struct hv_tlb_flush **)
this_cpu_ptr(hyperv_pcpu_input_arg);


@ -1010,9 +1010,21 @@ struct kvm_arch {
*/
bool tdp_mmu_enabled;
/* List of struct tdp_mmu_pages being used as roots */
/*
* List of struct kvm_mmu_pages being used as roots.
* All struct kvm_mmu_pages in the list should have
* tdp_mmu_page set.
* All struct kvm_mmu_pages in the list should have a positive
* root_count except when a thread holds the MMU lock and is removing
* an entry from the list.
*/
struct list_head tdp_mmu_roots;
/* List of struct tdp_mmu_pages not being used as roots */
/*
* List of struct kvm_mmu_pages not being used as roots.
* All struct kvm_mmu_pages in the list should have
* tdp_mmu_page set and a root_count of 0.
*/
struct list_head tdp_mmu_pages;
};
@ -1287,6 +1299,8 @@ struct kvm_x86_ops {
void (*migrate_timers)(struct kvm_vcpu *vcpu);
void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
};
struct kvm_x86_nested_ops {
@ -1468,6 +1482,7 @@ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in);
int kvm_emulate_cpuid(struct kvm_vcpu *vcpu);
int kvm_emulate_halt(struct kvm_vcpu *vcpu);
int kvm_vcpu_halt(struct kvm_vcpu *vcpu);
int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu);
int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);
void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);


@ -74,6 +74,8 @@ static inline void hv_disable_stimer0_percpu_irq(int irq) {}
#if IS_ENABLED(CONFIG_HYPERV)
extern int hyperv_init_cpuhp;
extern void *hv_hypercall_pg;
extern void __percpu **hyperv_pcpu_input_arg;


@ -135,14 +135,32 @@ static void hv_machine_shutdown(void)
{
if (kexec_in_progress && hv_kexec_handler)
hv_kexec_handler();
/*
* Call hv_cpu_die() on all the CPUs; otherwise the hypervisor may
* later corrupt the old VP Assist Pages and crash the kexec kernel.
*/
if (kexec_in_progress && hyperv_init_cpuhp > 0)
cpuhp_remove_state(hyperv_init_cpuhp);
/* The function calls stop_other_cpus(). */
native_machine_shutdown();
/* Disable the hypercall page when there is only 1 active CPU. */
if (kexec_in_progress)
hyperv_cleanup();
}
static void hv_machine_crash_shutdown(struct pt_regs *regs)
{
if (hv_crash_handler)
hv_crash_handler(regs);
/* The function calls crash_smp_send_stop(). */
native_machine_crash_shutdown(regs);
/* Disable the hypercall page when there is only 1 active CPU. */
hyperv_cleanup();
}
#endif /* CONFIG_KEXEC_CORE */
#endif /* CONFIG_HYPERV */


@ -167,9 +167,6 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
*repeat = 0;
*uniform = 1;
/* Make end inclusive instead of exclusive */
end--;
prev_match = MTRR_TYPE_INVALID;
for (i = 0; i < num_var_ranges; ++i) {
unsigned short start_state, end_state, inclusive;
@ -261,6 +258,9 @@ u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
int repeat;
u64 partial_end;
/* Make end inclusive instead of exclusive */
end--;
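	/*
	 * Hypothetical worked example: a lookup of [0x1000, 0x2000)
	 * becomes inclusive [0x1000, 0x1fff] here, once, so the
	 * fixed-range checks and the variable-range walk (including
	 * its *repeat retries) all see the same inclusive end, rather
	 * than only mtrr_type_lookup_variable() adjusting it.
	 */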
if (!mtrr_state_set)
return MTRR_TYPE_INVALID;


@ -525,89 +525,70 @@ static void rdtgroup_remove(struct rdtgroup *rdtgrp)
kfree(rdtgrp);
}
struct task_move_callback {
struct callback_head work;
struct rdtgroup *rdtgrp;
};
static void move_myself(struct callback_head *head)
static void _update_task_closid_rmid(void *task)
{
struct task_move_callback *callback;
struct rdtgroup *rdtgrp;
callback = container_of(head, struct task_move_callback, work);
rdtgrp = callback->rdtgrp;
/*
* If resource group was deleted before this task work callback
* was invoked, then assign the task to root group and free the
* resource group.
* If the task is still current on this CPU, update the PQR_ASSOC MSR.
* Otherwise, the MSR is updated when the task is scheduled in.
*/
if (atomic_dec_and_test(&rdtgrp->waitcount) &&
(rdtgrp->flags & RDT_DELETED)) {
current->closid = 0;
current->rmid = 0;
rdtgroup_remove(rdtgrp);
}
if (task == current)
resctrl_sched_in();
}
if (unlikely(current->flags & PF_EXITING))
goto out;
preempt_disable();
/* update PQR_ASSOC MSR to make resource group go into effect */
resctrl_sched_in();
preempt_enable();
out:
kfree(callback);
static void update_task_closid_rmid(struct task_struct *t)
{
if (IS_ENABLED(CONFIG_SMP) && task_curr(t))
smp_call_function_single(task_cpu(t), _update_task_closid_rmid, t, 1);
else
_update_task_closid_rmid(t);
}
static int __rdtgroup_move_task(struct task_struct *tsk,
struct rdtgroup *rdtgrp)
{
struct task_move_callback *callback;
int ret;
callback = kzalloc(sizeof(*callback), GFP_KERNEL);
if (!callback)
return -ENOMEM;
callback->work.func = move_myself;
callback->rdtgrp = rdtgrp;
/* If the task is already in rdtgrp, no need to move the task. */
if ((rdtgrp->type == RDTCTRL_GROUP && tsk->closid == rdtgrp->closid &&
tsk->rmid == rdtgrp->mon.rmid) ||
(rdtgrp->type == RDTMON_GROUP && tsk->rmid == rdtgrp->mon.rmid &&
tsk->closid == rdtgrp->mon.parent->closid))
return 0;
/*
* Take a refcount, so rdtgrp cannot be freed before the
* callback has been invoked.
* Set the task's closid/rmid before the PQR_ASSOC MSR update can
* consume them.
*
* For ctrl_mon groups, move both closid and rmid.
* For monitor groups, can move the tasks only from
* their parent CTRL group.
*/
atomic_inc(&rdtgrp->waitcount);
ret = task_work_add(tsk, &callback->work, TWA_RESUME);
if (ret) {
/*
* Task is exiting. Drop the refcount and free the callback.
* No need to check the refcount as the group cannot be
* deleted before the write function unlocks rdtgroup_mutex.
*/
atomic_dec(&rdtgrp->waitcount);
kfree(callback);
rdt_last_cmd_puts("Task exited\n");
} else {
/*
* For ctrl_mon groups move both closid and rmid.
* For monitor groups, can move the tasks only from
* their parent CTRL group.
*/
if (rdtgrp->type == RDTCTRL_GROUP) {
tsk->closid = rdtgrp->closid;
if (rdtgrp->type == RDTCTRL_GROUP) {
tsk->closid = rdtgrp->closid;
tsk->rmid = rdtgrp->mon.rmid;
} else if (rdtgrp->type == RDTMON_GROUP) {
if (rdtgrp->mon.parent->closid == tsk->closid) {
tsk->rmid = rdtgrp->mon.rmid;
} else if (rdtgrp->type == RDTMON_GROUP) {
if (rdtgrp->mon.parent->closid == tsk->closid) {
tsk->rmid = rdtgrp->mon.rmid;
} else {
rdt_last_cmd_puts("Can't move task to different control group\n");
ret = -EINVAL;
}
} else {
rdt_last_cmd_puts("Can't move task to different control group\n");
return -EINVAL;
}
}
return ret;
/*
* Ensure the task's closid and rmid are written before determining if
* the task is current; that check decides whether it must be interrupted.
*/
barrier();
/*
* By now, the task's closid and rmid are set. If the task is current
* on a CPU, the PQR_ASSOC MSR needs to be updated to make the resource
* group go into effect. If the task is not current, the MSR will be
* updated when the task is scheduled in.
*/
update_task_closid_rmid(tsk);
return 0;
}
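
The two comments above describe a publish-then-interrupt ordering. A miniature sketch of the same shape, where barrier() is the usual GCC compiler barrier and all names are illustrative:

#define barrier() asm volatile("" ::: "memory")

struct demo_task { int closid, rmid, on_cpu; };

void demo_move(struct demo_task *t, int closid, int rmid)
{
	/* 1. Publish the new IDs first. */
	t->closid = closid;
	t->rmid = rmid;

	/* 2. Keep the compiler from hoisting the on_cpu test above the
	 * stores; the IPI itself provides the cross-CPU ordering. */
	barrier();

	/* 3. Interrupt the task only if it is running right now;
	 * otherwise the next context switch loads the new IDs. */
	if (t->on_cpu) {
		/* stand-in for smp_call_function_single() */
	}
}
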
static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)


@ -305,14 +305,14 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
case 0xe4:
case 0xe5:
*exitinfo |= IOIO_TYPE_IN;
*exitinfo |= (u64)insn->immediate.value << 16;
*exitinfo |= (u8)insn->immediate.value << 16;
break;
/* OUT immediate opcodes */
case 0xe6:
case 0xe7:
*exitinfo |= IOIO_TYPE_OUT;
*exitinfo |= (u64)insn->immediate.value << 16;
*exitinfo |= (u8)insn->immediate.value << 16;
break;
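		/*
		 * Worked example: the instruction decoder sign-extends
		 * the one-byte immediate, so "in al, 0x90" shows up as
		 * 0xffffff90.  With the old (u64) cast that smeared
		 * into exitinfo as 0xffffffffff900000; (u8) truncates
		 * back to the real port, giving 0x900000.
		 */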
/* IN register opcodes */


@ -674,7 +674,7 @@ static bool pv_eoi_get_pending(struct kvm_vcpu *vcpu)
(unsigned long long)vcpu->arch.pv_eoi.msr_val);
return false;
}
return val & 0x1;
return val & KVM_PV_EOI_ENABLED;
}
static void pv_eoi_set_pending(struct kvm_vcpu *vcpu)
@ -2898,7 +2898,7 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
/* evaluate pending_events before reading the vector */
smp_rmb();
sipi_vector = apic->sipi_vector;
kvm_vcpu_deliver_sipi_vector(vcpu, sipi_vector);
kvm_x86_ops.vcpu_deliver_sipi_vector(vcpu, sipi_vector);
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
}
}


@ -49,7 +49,7 @@ static inline u64 rsvd_bits(int s, int e)
if (e < s)
return 0;
return ((1ULL << (e - s + 1)) - 1) << s;
return ((2ULL << (e - s)) - 1) << s;
}
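
The rewritten mask sidesteps an undefined shift by 64 when a reserved range spans all 64 bits: the new shift count is at most 63. A standalone check of an ordinary range and the full-width edge case:

#include <stdint.h>
#include <stdio.h>

/* Same shape as the patched helper; the old form is kept only in a
 * comment because 1ULL << 64 is undefined and cannot be executed. */
static uint64_t rsvd_bits(int s, int e)
{
	if (e < s)
		return 0;
	return ((2ULL << (e - s)) - 1) << s;	/* shift count <= 63 */
}

int main(void)
{
	/* Ordinary range, bits 3..6. */
	printf("%016llx\n", (unsigned long long)rsvd_bits(3, 6));  /* ...078 */
	/* Full-width range: the old (1ULL << 64) - 1 was undefined. */
	printf("%016llx\n", (unsigned long long)rsvd_bits(0, 63)); /* all f */
	return 0;
}
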
void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask);


@ -3493,26 +3493,25 @@ static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
* Return the level of the lowest level SPTE added to sptes.
* That SPTE may be non-present.
*/
static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level)
{
struct kvm_shadow_walk_iterator iterator;
int leaf = vcpu->arch.mmu->root_level;
int leaf = -1;
u64 spte;
walk_shadow_page_lockless_begin(vcpu);
for (shadow_walk_init(&iterator, vcpu, addr);
for (shadow_walk_init(&iterator, vcpu, addr),
*root_level = iterator.level;
shadow_walk_okay(&iterator);
__shadow_walk_next(&iterator, spte)) {
leaf = iterator.level;
spte = mmu_spte_get_lockless(iterator.sptep);
sptes[leaf - 1] = spte;
sptes[leaf] = spte;
if (!is_shadow_present_pte(spte))
break;
}
walk_shadow_page_lockless_end(vcpu);
@ -3520,14 +3519,12 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
return leaf;
}
/* return true if reserved bit is detected on spte. */
/* return true if reserved bit(s) are detected on a valid, non-MMIO SPTE. */
static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
{
u64 sptes[PT64_ROOT_MAX_LEVEL];
u64 sptes[PT64_ROOT_MAX_LEVEL + 1];
struct rsvd_bits_validate *rsvd_check;
int root = vcpu->arch.mmu->shadow_root_level;
int leaf;
int level;
int root, leaf, level;
bool reserved = false;
if (!VALID_PAGE(vcpu->arch.mmu->root_hpa)) {
@ -3536,35 +3533,45 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
}
if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes);
leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
else
leaf = get_walk(vcpu, addr, sptes);
leaf = get_walk(vcpu, addr, sptes, &root);
if (unlikely(leaf < 0)) {
*sptep = 0ull;
return reserved;
}
*sptep = sptes[leaf];
/*
* Skip reserved bits checks on the terminal leaf if it's not a valid
* SPTE. Note, this also (intentionally) skips MMIO SPTEs, which, by
* design, always have reserved bits set. The purpose of the checks is
* to detect reserved bits on non-MMIO SPTEs, i.e. buggy SPTEs.
*/
if (!is_shadow_present_pte(sptes[leaf]))
leaf++;
rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
for (level = root; level >= leaf; level--) {
if (!is_shadow_present_pte(sptes[level - 1]))
break;
for (level = root; level >= leaf; level--)
/*
* Use a bitwise-OR instead of a logical-OR to aggregate the
* reserved bit and EPT's invalid memtype/XWR checks to avoid
* adding a Jcc in the loop.
*/
reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level - 1]) |
__is_rsvd_bits_set(rsvd_check, sptes[level - 1],
level);
}
reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level]) |
__is_rsvd_bits_set(rsvd_check, sptes[level], level);
if (reserved) {
pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
__func__, addr);
for (level = root; level >= leaf; level--)
pr_err("------ spte 0x%llx level %d.\n",
sptes[level - 1], level);
sptes[level], level);
}
*sptep = sptes[leaf - 1];
return reserved;
}
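
The bitwise-OR trick noted in the comment generalizes. A toy comparison of the two forms, showing why |= lets both checks run unconditionally while a logical OR must preserve short-circuit branches:

#include <stdbool.h>

static bool check_a(unsigned long x) { return x & 1; }
static bool check_b(unsigned long x) { return x & 2; }

/* With ||, the compiler must preserve short-circuiting, typically one
 * conditional branch per term; with | both checks run unconditionally
 * and fold into the accumulator branch-free. */
bool scan_logical(const unsigned long *v, int n)
{
	bool bad = false;
	for (int i = 0; i < n; i++)
		bad = bad || check_a(v[i]) || check_b(v[i]);
	return bad;
}

bool scan_bitwise(const unsigned long *v, int n)
{
	bool bad = false;
	for (int i = 0; i < n; i++)
		bad |= check_a(v[i]) | check_b(v[i]);
	return bad;
}
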


@ -44,7 +44,48 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
}
#define for_each_tdp_mmu_root(_kvm, _root) \
static void tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root)
{
if (kvm_mmu_put_root(kvm, root))
kvm_tdp_mmu_free_root(kvm, root);
}
static inline bool tdp_mmu_next_root_valid(struct kvm *kvm,
struct kvm_mmu_page *root)
{
lockdep_assert_held(&kvm->mmu_lock);
if (list_entry_is_head(root, &kvm->arch.tdp_mmu_roots, link))
return false;
kvm_mmu_get_root(kvm, root);
return true;
}
static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
struct kvm_mmu_page *root)
{
struct kvm_mmu_page *next_root;
next_root = list_next_entry(root, link);
tdp_mmu_put_root(kvm, root);
return next_root;
}
/*
* Note: this iterator gets and puts references to the roots it iterates over.
* This makes it safe to release the MMU lock and yield within the loop, but
* if exiting the loop early, the caller must drop the reference to the most
* recent root. (Unless keeping a live reference is desirable.)
*/
#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \
for (_root = list_first_entry(&_kvm->arch.tdp_mmu_roots, \
typeof(*_root), link); \
tdp_mmu_next_root_valid(_kvm, _root); \
_root = tdp_mmu_next_root(_kvm, _root))
#define for_each_tdp_mmu_root(_kvm, _root) \
list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)
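
The get/put discipline the comment describes, reduced to a standalone sketch with illustrative names (this is not the kernel list API):

struct demo_root { struct demo_root *next; int refcount; };

/* Hold a reference across the loop body so the node cannot be freed
 * while the caller drops its lock; take the next node's reference
 * before releasing the current one. */
struct demo_root *demo_next(struct demo_root *r)
{
	struct demo_root *next = r->next;

	if (next)
		next->refcount++;	/* get next ... */
	r->refcount--;			/* ... then put current */
	return next;
}
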
bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
@ -447,18 +488,9 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
struct kvm_mmu_page *root;
bool flush = false;
for_each_tdp_mmu_root(kvm, root) {
/*
* Take a reference on the root so that it cannot be freed if
* this thread releases the MMU lock and yields in this loop.
*/
kvm_mmu_get_root(kvm, root);
for_each_tdp_mmu_root_yield_safe(kvm, root)
flush |= zap_gfn_range(kvm, root, start, end, true);
kvm_mmu_put_root(kvm, root);
}
return flush;
}
@ -619,13 +651,7 @@ static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start,
int ret = 0;
int as_id;
for_each_tdp_mmu_root(kvm, root) {
/*
* Take a reference on the root so that it cannot be freed if
* this thread releases the MMU lock and yields in this loop.
*/
kvm_mmu_get_root(kvm, root);
for_each_tdp_mmu_root_yield_safe(kvm, root) {
as_id = kvm_mmu_page_as_id(root);
slots = __kvm_memslots(kvm, as_id);
kvm_for_each_memslot(memslot, slots) {
@ -647,8 +673,6 @@ static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start,
ret |= handler(kvm, memslot, root, gfn_start,
gfn_end, data);
}
kvm_mmu_put_root(kvm, root);
}
return ret;
@ -838,21 +862,13 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, struct kvm_memory_slot *slot,
int root_as_id;
bool spte_set = false;
for_each_tdp_mmu_root(kvm, root) {
for_each_tdp_mmu_root_yield_safe(kvm, root) {
root_as_id = kvm_mmu_page_as_id(root);
if (root_as_id != slot->as_id)
continue;
/*
* Take a reference on the root so that it cannot be freed if
* this thread releases the MMU lock and yields in this loop.
*/
kvm_mmu_get_root(kvm, root);
spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
slot->base_gfn + slot->npages, min_level);
kvm_mmu_put_root(kvm, root);
}
return spte_set;
@ -906,21 +922,13 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot)
int root_as_id;
bool spte_set = false;
for_each_tdp_mmu_root(kvm, root) {
for_each_tdp_mmu_root_yield_safe(kvm, root) {
root_as_id = kvm_mmu_page_as_id(root);
if (root_as_id != slot->as_id)
continue;
/*
* Take a reference on the root so that it cannot be freed if
* this thread releases the MMU lock and yields in this loop.
*/
kvm_mmu_get_root(kvm, root);
spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
slot->base_gfn + slot->npages);
kvm_mmu_put_root(kvm, root);
}
return spte_set;
@ -1029,21 +1037,13 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
int root_as_id;
bool spte_set = false;
for_each_tdp_mmu_root(kvm, root) {
for_each_tdp_mmu_root_yield_safe(kvm, root) {
root_as_id = kvm_mmu_page_as_id(root);
if (root_as_id != slot->as_id)
continue;
/*
* Take a reference on the root so that it cannot be freed if
* this thread releases the MMU lock and yields in this loop.
*/
kvm_mmu_get_root(kvm, root);
spte_set |= set_dirty_gfn_range(kvm, root, slot->base_gfn,
slot->base_gfn + slot->npages);
kvm_mmu_put_root(kvm, root);
}
return spte_set;
}
@ -1089,21 +1089,13 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
struct kvm_mmu_page *root;
int root_as_id;
for_each_tdp_mmu_root(kvm, root) {
for_each_tdp_mmu_root_yield_safe(kvm, root) {
root_as_id = kvm_mmu_page_as_id(root);
if (root_as_id != slot->as_id)
continue;
/*
* Take a reference on the root so that it cannot be freed if
* this thread releases the MMU lock and yields in this loop.
*/
kvm_mmu_get_root(kvm, root);
zap_collapsible_spte_range(kvm, root, slot->base_gfn,
slot->base_gfn + slot->npages);
kvm_mmu_put_root(kvm, root);
}
}
@ -1160,16 +1152,19 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
* Return the level of the lowest level SPTE added to sptes.
* That SPTE may be non-present.
*/
int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
int *root_level)
{
struct tdp_iter iter;
struct kvm_mmu *mmu = vcpu->arch.mmu;
int leaf = vcpu->arch.mmu->shadow_root_level;
gfn_t gfn = addr >> PAGE_SHIFT;
int leaf = -1;
*root_level = vcpu->arch.mmu->shadow_root_level;
tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
leaf = iter.level;
sptes[leaf - 1] = iter.old_spte;
sptes[leaf] = iter.old_spte;
}
return leaf;


@ -44,5 +44,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
struct kvm_memory_slot *slot, gfn_t gfn);
int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes);
int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
int *root_level);
#endif /* __KVM_X86_MMU_TDP_MMU_H */


@ -199,6 +199,7 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm)
static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
if (!nested_svm_vmrun_msrpm(svm)) {
vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
vcpu->run->internal.suberror =
@ -595,6 +596,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
svm->nested.vmcb12_gpa = 0;
WARN_ON_ONCE(svm->nested.nested_run_pending);
kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, &svm->vcpu);
/* in case we halted in L2 */
svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE;
@ -754,6 +757,7 @@ void svm_leave_nested(struct vcpu_svm *svm)
leave_guest_mode(&svm->vcpu);
copy_vmcb_control_area(&vmcb->control, &hsave->control);
nested_svm_uninit_mmu_context(&svm->vcpu);
vmcb_mark_all_dirty(svm->vmcb);
}
kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, &svm->vcpu);
@ -1194,6 +1198,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
* in the registers, the save area of the nested state instead
* contains saved L1 state.
*/
svm->nested.nested_run_pending =
!!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING);
copy_vmcb_control_area(&hsave->control, &svm->vmcb->control);
hsave->save = *save;


@ -1563,6 +1563,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
goto vmgexit_err;
break;
case SVM_VMGEXIT_NMI_COMPLETE:
case SVM_VMGEXIT_AP_HLT_LOOP:
case SVM_VMGEXIT_AP_JUMP_TABLE:
case SVM_VMGEXIT_UNSUPPORTED_EVENT:
break;
@ -1888,6 +1889,9 @@ int sev_handle_vmgexit(struct vcpu_svm *svm)
case SVM_VMGEXIT_NMI_COMPLETE:
ret = svm_invoke_exit_handler(svm, SVM_EXIT_IRET);
break;
case SVM_VMGEXIT_AP_HLT_LOOP:
ret = kvm_emulate_ap_reset_hold(&svm->vcpu);
break;
case SVM_VMGEXIT_AP_JUMP_TABLE: {
struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
@ -2001,7 +2005,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
* of which one step is to perform a VMLOAD. Since hardware does not
* perform a VMSAVE on VMRUN, the host savearea must be updated.
*/
asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
asm volatile(__ex("vmsave %0") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
/*
* Certain MSRs are restored on VMEXIT, only save ones that aren't
@ -2040,3 +2044,21 @@ void sev_es_vcpu_put(struct vcpu_svm *svm)
wrmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]);
}
}
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
{
struct vcpu_svm *svm = to_svm(vcpu);
/* First SIPI: Use the values as initially set by the VMM */
if (!svm->received_first_sipi) {
svm->received_first_sipi = true;
return;
}
/*
* Subsequent SIPI: Return from an AP Reset Hold VMGEXIT, where
* the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a
* non-zero value.
*/
ghcb_set_sw_exit_info_2(svm->ghcb, 1);
}


@ -3677,8 +3677,6 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
return EXIT_FASTPATH_NONE;
}
void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);
static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
struct vcpu_svm *svm)
{
@ -4384,6 +4382,14 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
(vmcb_is_intercept(&svm->vmcb->control, INTERCEPT_INIT));
}
static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
{
if (!sev_es_guest(vcpu->kvm))
return kvm_vcpu_deliver_sipi_vector(vcpu, vector);
sev_vcpu_deliver_sipi_vector(vcpu, vector);
}
static void svm_vm_destroy(struct kvm *kvm)
{
avic_vm_destroy(kvm);
@ -4526,6 +4532,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.msr_filter_changed = svm_msr_filter_changed,
.complete_emulated_msr = svm_complete_emulated_msr,
.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
};
static struct kvm_x86_init_ops svm_init_ops __initdata = {


@ -185,6 +185,7 @@ struct vcpu_svm {
struct vmcb_save_area *vmsa;
struct ghcb *ghcb;
struct kvm_host_map ghcb_map;
bool received_first_sipi;
/* SEV-ES scratch area support */
void *ghcb_sa;
@ -591,6 +592,7 @@ void sev_es_init_vmcb(struct vcpu_svm *svm);
void sev_es_create_vcpu(struct vcpu_svm *svm);
void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu);
void sev_es_vcpu_put(struct vcpu_svm *svm);
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
/* vmenter.S */


@ -4442,6 +4442,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
/* trying to cancel vmlaunch/vmresume is a bug */
WARN_ON_ONCE(vmx->nested.nested_run_pending);
kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
/* Service the TLB flush request for L2 before switching to L1. */
if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
kvm_vcpu_flush_tlb_current(vcpu);


@ -7707,6 +7707,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.msr_filter_changed = vmx_msr_filter_changed,
.complete_emulated_msr = kvm_complete_insn_gp,
.cpu_dirty_log_size = vmx_cpu_dirty_log_size,
.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
};
static __init int hardware_setup(void)


@ -7976,17 +7976,22 @@ void kvm_arch_exit(void)
kmem_cache_destroy(x86_fpu_cache);
}
int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
static int __kvm_vcpu_halt(struct kvm_vcpu *vcpu, int state, int reason)
{
++vcpu->stat.halt_exits;
if (lapic_in_kernel(vcpu)) {
vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
vcpu->arch.mp_state = state;
return 1;
} else {
vcpu->run->exit_reason = KVM_EXIT_HLT;
vcpu->run->exit_reason = reason;
return 0;
}
}
int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
{
return __kvm_vcpu_halt(vcpu, KVM_MP_STATE_HALTED, KVM_EXIT_HLT);
}
EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
int kvm_emulate_halt(struct kvm_vcpu *vcpu)
@ -8000,6 +8005,14 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_emulate_halt);
int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
{
int ret = kvm_skip_emulated_instruction(vcpu);
return __kvm_vcpu_halt(vcpu, KVM_MP_STATE_AP_RESET_HOLD, KVM_EXIT_AP_RESET_HOLD) && ret;
}
EXPORT_SYMBOL_GPL(kvm_emulate_ap_reset_hold);
#ifdef CONFIG_X86_64
static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
unsigned long clock_type)
@ -8789,7 +8802,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (kvm_request_pending(vcpu)) {
if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) {
if (WARN_ON_ONCE(!is_guest_mode(vcpu)))
;
else if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) {
r = 0;
goto out;
}
@ -9094,6 +9109,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
kvm_apic_accept_events(vcpu);
switch(vcpu->arch.mp_state) {
case KVM_MP_STATE_HALTED:
case KVM_MP_STATE_AP_RESET_HOLD:
vcpu->arch.pv.pv_unhalted = false;
vcpu->arch.mp_state =
KVM_MP_STATE_RUNNABLE;
@ -9520,8 +9536,9 @@ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
kvm_load_guest_fpu(vcpu);
kvm_apic_accept_events(vcpu);
if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED &&
vcpu->arch.pv.pv_unhalted)
if ((vcpu->arch.mp_state == KVM_MP_STATE_HALTED ||
vcpu->arch.mp_state == KVM_MP_STATE_AP_RESET_HOLD) &&
vcpu->arch.pv.pv_unhalted)
mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
else
mp_state->mp_state = vcpu->arch.mp_state;
@ -10152,6 +10169,7 @@ void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
kvm_set_segment(vcpu, &cs, VCPU_SREG_CS);
kvm_rip_write(vcpu, 0);
}
EXPORT_SYMBOL_GPL(kvm_vcpu_deliver_sipi_vector);
int kvm_arch_hardware_enable(void)
{


@ -829,6 +829,8 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
}
free_page((unsigned long)pmd_sv);
pgtable_pmd_page_dtor(virt_to_page(pmd));
free_page((unsigned long)pmd);
return 1;


@ -6332,13 +6332,13 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
* limit 'something'.
*/
/* no more than 50% of tags for async I/O */
bfqd->word_depths[0][0] = max((1U << bt->sb.shift) >> 1, 1U);
bfqd->word_depths[0][0] = max(bt->sb.depth >> 1, 1U);
/*
* no more than 75% of tags for sync writes (25% extra tags
* w.r.t. async I/O, to prevent async I/O from starving sync
* writes)
*/
bfqd->word_depths[0][1] = max(((1U << bt->sb.shift) * 3) >> 2, 1U);
bfqd->word_depths[0][1] = max((bt->sb.depth * 3) >> 2, 1U);
/*
* In-word depths in case some bfq_queue is being weight-
@ -6348,9 +6348,9 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
* shortage.
*/
/* no more than ~18% of tags for async I/O */
bfqd->word_depths[1][0] = max(((1U << bt->sb.shift) * 3) >> 4, 1U);
bfqd->word_depths[1][0] = max((bt->sb.depth * 3) >> 4, 1U);
/* no more than ~37% of tags for sync writes (~20% extra tags) */
bfqd->word_depths[1][1] = max(((1U << bt->sb.shift) * 6) >> 4, 1U);
bfqd->word_depths[1][1] = max((bt->sb.depth * 6) >> 4, 1U);
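	/*
	 * Hypothetical worked example: with depth = 62 and shift = 6,
	 * the old (1U << shift) formulas budgeted 32 and 48 tags for
	 * the 50% and 75% limits; computing from the real depth gives
	 * 31 and 46, so a map whose depth is not a full power of two
	 * is no longer over-committed.
	 */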
for (i = 0; i < 2; i++)
for (j = 0; j < 2; j++)


@ -2551,8 +2551,8 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
bool use_debt, ioc_locked;
unsigned long flags;
/* bypass IOs if disabled or for root cgroup */
if (!ioc->enabled || !iocg->level)
/* bypass IOs if disabled, still initializing, or for root cgroup */
if (!ioc->enabled || !iocg || !iocg->level)
return;
/* calculate the absolute vtime cost */
@ -2679,14 +2679,14 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
struct bio *bio)
{
struct ioc_gq *iocg = blkg_to_iocg(bio->bi_blkg);
struct ioc *ioc = iocg->ioc;
struct ioc *ioc = rqos_to_ioc(rqos);
sector_t bio_end = bio_end_sector(bio);
struct ioc_now now;
u64 vtime, abs_cost, cost;
unsigned long flags;
/* bypass if disabled or for root cgroup */
if (!ioc->enabled || !iocg->level)
/* bypass if disabled, still initializing, or for root cgroup */
if (!ioc->enabled || !iocg || !iocg->level)
return;
abs_cost = calc_vtime_cost(bio, iocg, true);
@ -2863,6 +2863,12 @@ static int blk_iocost_init(struct request_queue *q)
ioc_refresh_params(ioc, true);
spin_unlock_irq(&ioc->lock);
/*
* rqos must be added before activation to allow iocg_pd_init() to
* look up the ioc from q. This means the rqos methods may get called
* before policy activation completes, so they can't assume the target
* bio has an associated iocg and must test for a NULL iocg.
*/
rq_qos_add(q, rqos);
ret = blkcg_activate_policy(q, &blkcg_policy_iocost);
if (ret) {


@ -246,6 +246,7 @@ static const char *const hctx_flag_name[] = {
HCTX_FLAG_NAME(BLOCKING),
HCTX_FLAG_NAME(NO_SCHED),
HCTX_FLAG_NAME(STACKING),
HCTX_FLAG_NAME(TAG_HCTX_SHARED),
};
#undef HCTX_FLAG_NAME


@ -246,15 +246,18 @@ struct block_device *disk_part_iter_next(struct disk_part_iter *piter)
part = rcu_dereference(ptbl->part[piter->idx]);
if (!part)
continue;
if (!bdev_nr_sectors(part) &&
!(piter->flags & DISK_PITER_INCL_EMPTY) &&
!(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
piter->idx == 0))
continue;
piter->part = bdgrab(part);
if (!piter->part)
continue;
if (!bdev_nr_sectors(part) &&
!(piter->flags & DISK_PITER_INCL_EMPTY) &&
!(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
piter->idx == 0)) {
bdput(piter->part);
piter->part = NULL;
continue;
}
piter->idx += inc;
break;
}


@ -354,7 +354,7 @@ static uint32_t derive_pub_key(const void *pub_key, uint32_t len, uint8_t *buf)
memcpy(cur, e, sizeof(e));
cur += sizeof(e);
/* Zero parameters to satisfy set_pub_key ABI. */
memset(cur, 0, SETKEY_PARAMS_SIZE);
memzero_explicit(cur, SETKEY_PARAMS_SIZE);
return cur - buf;
}
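
memzero_explicit() differs from memset() in that the compiler may not elide it as a dead store. A rough userspace approximation of the idea:

#include <string.h>

/* The empty asm with a "memory" clobber stops the compiler proving
 * the buffer is never read again and deleting the memset. */
void explicit_zero(void *p, size_t n)
{
	memset(p, 0, n);
	asm volatile("" : : "g"(p) : "memory");
}
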


@ -395,9 +395,6 @@ config ACPI_CONTAINER
This helps support hotplug of nodes, CPUs, and memory.
To compile this driver as a module, choose M here:
the module will be called container.
config ACPI_HOTPLUG_MEMORY
bool "Memory Hotplug"
depends on MEMORY_HOTPLUG
@ -411,9 +408,6 @@ config ACPI_HOTPLUG_MEMORY
removing memory devices at runtime, you need not enable
this driver.
To compile this driver as a module, choose M here:
the module will be called acpi_memhotplug.
config ACPI_HOTPLUG_IOAPIC
bool
depends on PCI


@ -105,18 +105,8 @@ static void lpi_device_get_constraints_amd(void)
for (i = 0; i < out_obj->package.count; i++) {
union acpi_object *package = &out_obj->package.elements[i];
struct lpi_device_info_amd info = { };
if (package->type == ACPI_TYPE_INTEGER) {
switch (i) {
case 0:
info.revision = package->integer.value;
break;
case 1:
info.count = package->integer.value;
break;
}
} else if (package->type == ACPI_TYPE_PACKAGE) {
if (package->type == ACPI_TYPE_PACKAGE) {
lpi_constraints_table = kcalloc(package->package.count,
sizeof(*lpi_constraints_table),
GFP_KERNEL);
@ -135,12 +125,10 @@ static void lpi_device_get_constraints_amd(void)
for (k = 0; k < info_obj->package.count; ++k) {
union acpi_object *obj = &info_obj->package.elements[k];
union acpi_object *obj_new;
list = &lpi_constraints_table[lpi_constraints_table_size];
list->min_dstate = -1;
obj_new = &obj[k];
switch (k) {
case 0:
dev_info.enabled = obj->integer.value;


@ -4414,6 +4414,12 @@ static inline bool fwnode_is_primary(struct fwnode_handle *fwnode)
*
* Set the device's firmware node pointer to @fwnode, but if a secondary
* firmware node of the device is present, preserve it.
*
* Valid fwnode cases are:
* - primary --> secondary --> -ENODEV
* - primary --> NULL
* - secondary --> -ENODEV
* - NULL
*/
void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
{
@ -4432,8 +4438,9 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
} else {
if (fwnode_is_primary(fn)) {
dev->fwnode = fn->secondary;
/* Set fn->secondary = NULL, so fn remains the primary fwnode */
if (!(parent && fn == parent->fwnode))
fn->secondary = ERR_PTR(-ENODEV);
fn->secondary = NULL;
} else {
dev->fwnode = NULL;
}


@ -445,6 +445,7 @@ config BLK_DEV_RBD
config BLK_DEV_RSXX
tristate "IBM Flash Adapter 900GB Full Height PCIe Device Driver"
depends on PCI
select CRC32
help
Device driver for IBM's high speed PCIe SSD
storage device: Flash Adapter 900GB Full Height.


@ -7,6 +7,7 @@ config BLK_DEV_RNBD_CLIENT
tristate "RDMA Network Block Device driver client"
depends on INFINIBAND_RTRS_CLIENT
select BLK_DEV_RNBD
select SG_POOL
help
RNBD client is a network block device driver using rdma transport.
