
Linux 4.20-rc4

-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAlv7H/seHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGhasH/AvNibkzTAPQR7jE
 r/R7NIaFUdPpaHYpLpoYIdSWkqpeWtN3q4eldJKWcpFCaE83EzDVu1PRw79rt/lm
 JIYbZ2SZWbTB7d2TLEiEF2bFXr+OF8MVc1tSAsP5xTMvV6mIz7+i2xr9R0D4YrKY
 ejd+ExF3HimuRf557Uey1MGsRIhedHsHjHbWjvR1MuKzvUv9nAL+1E2KJbfPXacp
 LGXElCYcPXoKw+GxMS7C28idgLoEJoS/eVieRiMGsGlgEe/MuaNMU+GRLMde9lI0
 c/G/4uumJTHVY7npOjuNt1+2FoLVt8oLlhPcB06R7FIwJN+TupLQifqBkIe2saT6
 R6ExonQ=
 =+/RR
 -----END PGP SIGNATURE-----

Merge v4.20-rc4 into drm-next

Requested by Boris Brezillon for some vc4 fixes that are needed for future vc4 work.

Signed-off-by: Dave Airlie <airlied@redhat.com>
Dave Airlie 2018-11-29 10:34:03 +10:00
commit 1ec28f8b8a
274 changed files with 2842 additions and 1644 deletions


@ -2204,6 +2204,10 @@ S: Post Office Box 371
S: North Little Rock, Arkansas 72115
S: USA
N: Christopher Li
E: sparse@chrisli.org
D: Sparse maintainer 2009 - 2018
N: Stephan Linz
E: linz@mazet.de
E: Stephan.Linz@gmx.de


@ -4713,6 +4713,8 @@
prevent spurious wakeup);
n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a
pause after every control message);
o = USB_QUIRK_HUB_SLOW_RESET (Hub needs extra
delay after resetting its port);
Example: quirks=0781:5580:bk,0a5c:5834:gij
usbhid.mousepoll=


@ -32,16 +32,17 @@ Disclosure and embargoed information
The security list is not a disclosure channel. For that, see Coordination
below.
Once a robust fix has been developed, our preference is to release the
fix in a timely fashion, treating it no differently than any of the other
thousands of changes and fixes the Linux kernel project releases every
month.
Once a robust fix has been developed, the release process starts. Fixes
for publicly known bugs are released immediately.
However, at the request of the reporter, we will postpone releasing the
fix for up to 5 business days after the date of the report or after the
embargo has lifted; whichever comes first. The only exception to that
rule is if the bug is publicly known, in which case the preference is to
release the fix as soon as it's available.
Although our preference is to release fixes for publicly undisclosed bugs
as soon as they become available, this may be postponed at the request of
the reporter or an affected party for up to 7 calendar days from the start
of the release process, with an exceptional extension to 14 calendar days
if it is agreed that the criticality of the bug requires more time. The
only valid reason for deferring the publication of a fix is to accommodate
the logistics of QA and large scale rollouts which require release
coordination.
Whilst embargoed information may be shared with trusted individuals in
order to develop a fix, such information will not be published alongside


@ -74,7 +74,8 @@ using :c:func:`xa_load`. xa_store will overwrite any entry with the
new entry and return the previous entry stored at that index. You can
use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
``NULL`` entry. There is no difference between an entry that has never
been stored to and one that has most recently had ``NULL`` stored to it.
been stored to, one that has been erased and one that has most recently
had ``NULL`` stored to it.
You can conditionally replace an entry at an index by using
:c:func:`xa_cmpxchg`. Like :c:func:`cmpxchg`, it will only succeed if
@ -105,23 +106,44 @@ may result in the entry being marked at some, but not all of the other
indices. Storing into one index may result in the entry retrieved by
some, but not all of the other indices changing.
Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
will not need to allocate memory. The :c:func:`xa_reserve` function
will store a reserved entry at the indicated index. Users of the normal
API will see this entry as containing ``NULL``. If you do not need to
use the reserved entry, you can call :c:func:`xa_release` to remove the
unused entry. If another user has stored to the entry in the meantime,
:c:func:`xa_release` will do nothing; if instead you want the entry to
become ``NULL``, you should use :c:func:`xa_erase`.
If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
will return ``true``.
Finally, you can remove all entries from an XArray by calling
:c:func:`xa_destroy`. If the XArray entries are pointers, you may wish
to free the entries first. You can do this by iterating over all present
entries in the XArray using the :c:func:`xa_for_each` iterator.
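(A minimal illustrative sketch of the normal API described above, not part of this patch; the array and item names are placeholders.)::

    #include <linux/xarray.h>

    static DEFINE_XARRAY(example_array);        /* hypothetical array */

    static void example_usage(void *item)
    {
            void *entry;

            /* Store item at index 5; the previous entry (if any) is returned. */
            entry = xa_store(&example_array, 5, item, GFP_KERNEL);
            /* Look it up again later. */
            entry = xa_load(&example_array, 5);
            if (entry != item)
                    pr_warn("unexpected entry\n");
            /* Erasing is equivalent to storing NULL at that index. */
            xa_erase(&example_array, 5);
    }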
ID assignment
-------------
Allocating XArrays
------------------
If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
the XArray changes to track whether entries are in use or not.
You can call :c:func:`xa_alloc` to store the entry at any unused index
in the XArray. If you need to modify the array from interrupt context,
you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
interrupts while allocating the ID. Unlike :c:func:`xa_store`, allocating
a ``NULL`` pointer does not delete an entry. Instead it reserves an
entry like :c:func:`xa_reserve` and you can release it using either
:c:func:`xa_erase` or :c:func:`xa_release`. To use ID assignment, the
XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
interrupts while allocating the ID.
Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
will mark the entry as being allocated. Unlike a normal XArray, storing
``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
you only want to free the entry if it's ``NULL``).
You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
is used to track whether an entry is free or not. The other marks are
available for your use.
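(Again purely as a sketch, not part of the patch: ID assignment with an allocating XArray might look roughly like the following. It assumes the xa_alloc() form current around this merge, taking the array, an ID pointer, a maximum, the entry and GFP flags; the names are placeholders.)::

    #include <linux/xarray.h>

    static DEFINE_XARRAY_ALLOC(example_ids);    /* hypothetical allocating array */

    static int example_assign_id(void *item, u32 *id)
    {
            int err;

            /* Ask the XArray to find a free index and store item there. */
            err = xa_alloc(&example_ids, id, UINT_MAX, item, GFP_KERNEL);
            if (err)
                    return err;

            /* Later, xa_erase(&example_ids, *id) frees the ID again. */
            return 0;
    }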
Memory allocation
-----------------
@ -158,6 +180,8 @@ Takes RCU read lock:
Takes xa_lock internally:
* :c:func:`xa_store`
* :c:func:`xa_store_bh`
* :c:func:`xa_store_irq`
* :c:func:`xa_insert`
* :c:func:`xa_erase`
* :c:func:`xa_erase_bh`
@ -167,6 +191,9 @@ Takes xa_lock internally:
* :c:func:`xa_alloc`
* :c:func:`xa_alloc_bh`
* :c:func:`xa_alloc_irq`
* :c:func:`xa_reserve`
* :c:func:`xa_reserve_bh`
* :c:func:`xa_reserve_irq`
* :c:func:`xa_destroy`
* :c:func:`xa_set_mark`
* :c:func:`xa_clear_mark`
@ -177,6 +204,7 @@ Assumes xa_lock held on entry:
* :c:func:`__xa_erase`
* :c:func:`__xa_cmpxchg`
* :c:func:`__xa_alloc`
* :c:func:`__xa_reserve`
* :c:func:`__xa_set_mark`
* :c:func:`__xa_clear_mark`
@ -234,7 +262,8 @@ Sharing the XArray with interrupt context is also possible, either
using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
in the interrupt handler. Some of the more common patterns have helper
functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
:c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
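(A small sketch, not from the patch, of the locking pattern described above: take the lock once and use the __xa_* variants underneath it; the names are placeholders.)::

    #include <linux/xarray.h>

    static DEFINE_XARRAY(example_xa);           /* hypothetical array */

    static void example_locked_update(void *item)
    {
            /* xa_lock() takes the array's internal spinlock explicitly ... */
            xa_lock(&example_xa);
            /* ... so the __xa_* variants, which assume the lock is held, may be used. */
            __xa_store(&example_xa, 1, item, GFP_ATOMIC);
            __xa_erase(&example_xa, 2);
            xa_unlock(&example_xa);
    }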
Sometimes you need to protect access to the XArray with a mutex because
that lock sits above another mutex in the locking hierarchy. That does
@ -322,7 +351,8 @@ to :c:func:`xas_retry`, and retry the operation if it returns ``true``.
- :c:func:`xa_is_zero`
- Zero entries appear as ``NULL`` through the Normal API, but occupy
an entry in the XArray which can be used to reserve the index for
future use.
future use. This is used by allocating XArrays for allocated entries
which are ``NULL``.
Other internal entries may be added in the future. As far as possible, they
will be handled by :c:func:`xas_retry`.


@ -17,7 +17,7 @@ Example:
reg = <1>;
clocks = <&clk32m>;
interrupt-parent = <&gpio4>;
interrupts = <13 IRQ_TYPE_EDGE_RISING>;
interrupts = <13 IRQ_TYPE_LEVEL_HIGH>;
vdd-supply = <&reg5v0>;
xceiver-supply = <&reg5v0>;
};


@ -5,6 +5,7 @@ Required properties:
- compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
"renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
"renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
"renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
"renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
"renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
"renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
@ -14,26 +15,32 @@ Required properties:
"renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
"renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
"renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
"renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
"renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
"renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
compatible device.
"renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
"renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
compatible device.
When compatible with the generic version, nodes must list the
SoC-specific version corresponding to the platform first
followed by the generic version.
- reg: physical base address and size of the R-Car CAN register map.
- interrupts: interrupt specifier for the sole interrupt.
- clocks: phandles and clock specifiers for 3 CAN clock inputs.
- clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
- clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
devices.
phandles and clock specifiers for 3 CAN clock inputs for every other
SoC.
- clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
3 clock input name strings for every other SoC: "clkp1", "clkp2",
"can_clk".
- pinctrl-0: pin control group to be used for this controller.
- pinctrl-names: must be "default".
Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
compatible:
In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
and can be used by both CAN and CAN FD controller at the same time. It needs to
be scaled to maximum frequency if any of these controllers use it. This is done
Required properties for R8A7795, R8A7796 and R8A77965:
For the denoted SoCs, "clkp2" can be CANFD clock. This is a div6 clock and can
be used by both CAN and CAN FD controller at the same time. It needs to be
scaled to maximum frequency if any of these controllers use it. This is done
using the below properties:
- assigned-clocks: phandle of clkp2(CANFD) clock.
@ -42,8 +49,9 @@ using the below properties:
Optional properties:
- renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
<0x0> (default) : Peripheral clock (clkp1)
<0x1> : Peripheral clock (clkp2)
<0x3> : Externally input clock
<0x1> : Peripheral clock (clkp2) (not supported by
RZ/G2 devices)
<0x3> : External input clock
Example
-------


@ -7,7 +7,7 @@ limitations.
Current Binding
---------------
Switches are true Linux devices and can be probes by any means. Once
Switches are true Linux devices and can be probed by any means. Once
probed, they register to the DSA framework, passing a node
pointer. This node is expected to fulfil the following binding, and
may contain additional properties as required by the device it is


@ -190,16 +190,7 @@ A few EV_REL codes have special meanings:
* REL_WHEEL, REL_HWHEEL:
- These codes are used for vertical and horizontal scroll wheels,
respectively. The value is the number of "notches" moved on the wheel, the
physical size of which varies by device. For high-resolution wheels (which
report multiple events for each notch of movement, or do not have notches)
this may be an approximation based on the high-resolution scroll events.
* REL_WHEEL_HI_RES:
- If a vertical scroll wheel supports high-resolution scrolling, this code
will be emitted in addition to REL_WHEEL. The value is the (approximate)
distance travelled by the user's finger, in microns.
respectively.
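(For illustration only, not part of this document: a userspace reader consuming these codes from an evdev node might look like the sketch below; the device path is a placeholder.)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/input.h>

    int main(void)
    {
            struct input_event ev;
            int fd = open("/dev/input/event0", O_RDONLY);   /* placeholder device */

            if (fd < 0)
                    return 1;
            /* Each REL_WHEEL event carries the number of notches moved. */
            while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
                    if (ev.type == EV_REL && ev.code == REL_WHEEL)
                            printf("wheel moved %d notch(es)\n", ev.value);
            }
            close(fd);
            return 0;
    }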
EV_ABS
------


@ -40,7 +40,7 @@ To use the :ref:`format` ioctls applications set the ``type`` field of the
the desired operation. Both drivers and applications must set the remainder of
the :c:type:`v4l2_format` structure to 0.
.. _v4l2-meta-format:
.. c:type:: v4l2_meta_format
.. tabularcolumns:: |p{1.4cm}|p{2.2cm}|p{13.9cm}|


@ -132,6 +132,11 @@ The format as returned by :ref:`VIDIOC_TRY_FMT <VIDIOC_G_FMT>` must be identical
- ``sdr``
- Definition of a data format, see :ref:`pixfmt`, used by SDR
capture and output devices.
* -
- struct :c:type:`v4l2_meta_format`
- ``meta``
- Definition of a metadata format, see :ref:`meta-formats`, used by
metadata capture devices.
* -
- __u8
- ``raw_data``\ [200]
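(Not part of the documented table above: a rough userspace sketch of querying a metadata capture format through the ``meta`` member; the device path is a placeholder.)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
            struct v4l2_format fmt;
            int fd = open("/dev/video0", O_RDONLY);         /* placeholder device */

            if (fd < 0)
                    return 1;
            memset(&fmt, 0, sizeof(fmt));
            fmt.type = V4L2_BUF_TYPE_META_CAPTURE;
            /* On success, fmt.fmt.meta describes the metadata format. */
            if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0)
                    printf("meta format %#x, buffer size %u\n",
                           fmt.fmt.meta.dataformat, fmt.fmt.meta.buffersize);
            close(fd);
            return 0;
    }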


@ -1056,18 +1056,23 @@ The kernel interface functions are as follows:
u32 rxrpc_kernel_check_life(struct socket *sock,
struct rxrpc_call *call);
void rxrpc_kernel_probe_life(struct socket *sock,
struct rxrpc_call *call);
This returns a number that is updated when ACKs are received from the peer
(notably including PING RESPONSE ACKs which we can elicit by sending PING
ACKs to see if the call still exists on the server). The caller should
compare the numbers of two calls to see if the call is still alive after
waiting for a suitable interval.
The first function returns a number that is updated when ACKs are received
from the peer (notably including PING RESPONSE ACKs which we can elicit by
sending PING ACKs to see if the call still exists on the server). The
caller should compare the numbers of two calls to see if the call is still
alive after waiting for a suitable interval.
This allows the caller to work out if the server is still contactable and
if the call is still alive on the server whilst waiting for the server to
process a client operation.
This function may transmit a PING ACK.
The second function causes a ping ACK to be transmitted to try to provoke
the peer into responding, which would then cause the value returned by the
first function to change. Note that this must be called in TASK_RUNNING
state.
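(A hedged sketch, not from this patch, of how a kernel-side caller might combine the two functions; everything except the two rxrpc calls and their documented signatures is a placeholder.)::

    #include <net/af_rxrpc.h>

    static bool example_call_seems_alive(struct socket *sock,
                                         struct rxrpc_call *call,
                                         u32 *last_life)
    {
            u32 life = rxrpc_kernel_check_life(sock, call);

            if (life != *last_life) {
                    /* ACKs have arrived since the last check: the call looks alive. */
                    *last_life = life;
                    return true;
            }

            /* No change seen: prod the peer (caller is in TASK_RUNNING) and re-check later. */
            rxrpc_kernel_probe_life(sock, call);
            return false;
    }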
(*) Get reply timestamp.


@ -180,6 +180,7 @@ F: drivers/net/hamradio/6pack.c
8169 10/100/1000 GIGABIT ETHERNET DRIVER
M: Realtek linux nic maintainers <nic_swsd@realtek.com>
M: Heiner Kallweit <hkallweit1@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/realtek/r8169.c
@ -717,7 +718,7 @@ F: include/linux/mfd/altera-a10sr.h
F: include/dt-bindings/reset/altr,rst-mgr-a10sr.h
ALTERA TRIPLE SPEED ETHERNET DRIVER
M: Vince Bridgers <vbridger@opensource.altera.com>
M: Thor Thayer <thor.thayer@linux.intel.com>
L: netdev@vger.kernel.org
L: nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
S: Maintained
@ -3276,6 +3277,12 @@ F: include/uapi/linux/caif/
F: include/net/caif/
F: net/caif/
CAKE QDISC
M: Toke Høiland-Jørgensen <toke@toke.dk>
L: cake@lists.bufferbloat.net (moderated for non-subscribers)
S: Maintained
F: net/sched/sch_cake.c
CALGARY x86-64 IOMMU
M: Muli Ben-Yehuda <mulix@mulix.org>
M: Jon Mason <jdmason@kudzu.us>
@ -5541,6 +5548,7 @@ F: net/bridge/
ETHERNET PHY LIBRARY
M: Andrew Lunn <andrew@lunn.ch>
M: Florian Fainelli <f.fainelli@gmail.com>
M: Heiner Kallweit <hkallweit1@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-bus-mdio
@ -6312,6 +6320,7 @@ F: tools/testing/selftests/gpio/
GPIO SUBSYSTEM
M: Linus Walleij <linus.walleij@linaro.org>
M: Bartosz Golaszewski <bgolaszewski@baylibre.com>
L: linux-gpio@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
S: Maintained
@ -7449,6 +7458,20 @@ S: Maintained
F: Documentation/fb/intelfb.txt
F: drivers/video/fbdev/intelfb/
INTEL GPIO DRIVERS
M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
L: linux-gpio@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
F: drivers/gpio/gpio-ich.c
F: drivers/gpio/gpio-intel-mid.c
F: drivers/gpio/gpio-lynxpoint.c
F: drivers/gpio/gpio-merrifield.c
F: drivers/gpio/gpio-ml-ioh.c
F: drivers/gpio/gpio-pch.c
F: drivers/gpio/gpio-sch.c
F: drivers/gpio/gpio-sodaville.c
INTEL GVT-g DRIVERS (Intel GPU Virtualization)
M: Zhenyu Wang <zhenyuw@linux.intel.com>
M: Zhi Wang <zhi.a.wang@intel.com>
@ -7459,12 +7482,6 @@ T: git https://github.com/intel/gvt-linux.git
S: Supported
F: drivers/gpu/drm/i915/gvt/
INTEL PMIC GPIO DRIVER
R: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
S: Maintained
F: drivers/gpio/gpio-*cove.c
F: drivers/gpio/gpio-msic.c
INTEL HID EVENT DRIVER
M: Alex Hung <alex.hung@canonical.com>
L: platform-driver-x86@vger.kernel.org
@ -7552,12 +7569,6 @@ W: https://01.org/linux-acpi
S: Supported
F: drivers/platform/x86/intel_menlow.c
INTEL MERRIFIELD GPIO DRIVER
M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
L: linux-gpio@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-merrifield.c
INTEL MIC DRIVERS (mic)
M: Sudeep Dutt <sudeep.dutt@intel.com>
M: Ashutosh Dixit <ashutosh.dixit@intel.com>
@ -7590,6 +7601,13 @@ F: drivers/platform/x86/intel_punit_ipc.c
F: arch/x86/include/asm/intel_pmc_ipc.h
F: arch/x86/include/asm/intel_punit_ipc.h
INTEL PMIC GPIO DRIVERS
M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
F: drivers/gpio/gpio-*cove.c
F: drivers/gpio/gpio-msic.c
INTEL MULTIFUNCTION PMIC DEVICE DRIVERS
R: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
S: Maintained
@ -13991,11 +14009,10 @@ F: drivers/tty/serial/sunzilog.h
F: drivers/tty/vcc.c
SPARSE CHECKER
M: "Christopher Li" <sparse@chrisli.org>
M: "Luc Van Oostenryck" <luc.vanoostenryck@gmail.com>
L: linux-sparse@vger.kernel.org
W: https://sparse.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/devel/sparse/sparse.git
T: git git://git.kernel.org/pub/scm/devel/sparse/chrisl/sparse.git
S: Maintained
F: include/linux/compiler.h
@ -14092,6 +14109,7 @@ F: Documentation/devicetree/bindings/iio/proximity/vl53l0x.txt
STABLE BRANCH
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
M: Sasha Levin <sashal@kernel.org>
L: stable@vger.kernel.org
S: Supported
F: Documentation/process/stable-kernel-rules.rst


@ -2,8 +2,8 @@
VERSION = 4
PATCHLEVEL = 20
SUBLEVEL = 0
EXTRAVERSION = -rc3
NAME = "People's Front"
EXTRAVERSION = -rc4
NAME = Shy Crocodile
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"


@ -468,7 +468,7 @@
SCTLR_ELx_SA | SCTLR_ELx_I | SCTLR_ELx_WXN | \
SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff
#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffffUL
#error "Inconsistent SCTLR_EL2 set/clear bits"
#endif
@ -509,7 +509,7 @@
SCTLR_EL1_UMA | SCTLR_ELx_WXN | ENDIAN_CLEAR_EL1 |\
SCTLR_ELx_DSSBS | SCTLR_EL1_NTWI | SCTLR_EL1_RES0)
#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffffUL
#error "Inconsistent SCTLR_EL1 set/clear bits"
#endif


@ -1333,7 +1333,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_enable_hw_dbm,
},
#endif
#ifdef CONFIG_ARM64_SSBD
{
.desc = "CRC32 instructions",
.capability = ARM64_HAS_CRC32,
@ -1343,6 +1342,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64ISAR0_CRC32_SHIFT,
.min_field_value = 1,
},
#ifdef CONFIG_ARM64_SSBD
{
.desc = "Speculative Store Bypassing Safe (SSBS)",
.capability = ARM64_SSBS,


@ -140,6 +140,7 @@ CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_DS1307=y
CONFIG_STAGING=y
CONFIG_OCTEON_ETHERNET=y
CONFIG_OCTEON_USB=y
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_RAS=y
CONFIG_EXT4_FS=y


@ -794,6 +794,7 @@ static void __init arch_mem_init(char **cmdline_p)
/* call board setup routine */
plat_mem_setup();
memblock_set_bottom_up(true);
/*
* Make sure all kernel memory is in the maps. The "UP" and


@ -2260,10 +2260,8 @@ void __init trap_init(void)
unsigned long size = 0x200 + VECTORSPACING*64;
phys_addr_t ebase_pa;
memblock_set_bottom_up(true);
ebase = (unsigned long)
memblock_alloc_from(size, 1 << fls(size), 0);
memblock_set_bottom_up(false);
/*
* Try to ensure ebase resides in KSeg0 if possible.
@ -2307,6 +2305,7 @@ void __init trap_init(void)
if (board_ebase_setup)
board_ebase_setup();
per_cpu_trap_init(true);
memblock_set_bottom_up(false);
/*
* Copy the generic exception handlers to their final destination.


@ -231,6 +231,8 @@ static __init void prom_meminit(void)
cpumask_clear(&__node_data[(node)]->cpumask);
}
}
max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {
node = cpu / loongson_sysconf.cores_per_node;
if (node >= num_online_nodes())
@ -248,19 +250,9 @@ static __init void prom_meminit(void)
void __init paging_init(void)
{
unsigned node;
unsigned long zones_size[MAX_NR_ZONES] = {0, };
pagetable_init();
for_each_online_node(node) {
unsigned long start_pfn, end_pfn;
get_pfn_range_for_nid(node, &start_pfn, &end_pfn);
if (end_pfn > max_low_pfn)
max_low_pfn = end_pfn;
}
#ifdef CONFIG_ZONE_DMA32
zones_size[ZONE_DMA32] = MAX_DMA32_PFN;
#endif


@ -435,6 +435,7 @@ void __init prom_meminit(void)
mlreset();
szmem();
max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
for (node = 0; node < MAX_COMPACT_NODES; node++) {
if (node_online(node)) {
@ -455,18 +456,8 @@ extern void setup_zero_pages(void);
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES] = {0, };
unsigned node;
pagetable_init();
for_each_online_node(node) {
unsigned long start_pfn, end_pfn;
get_pfn_range_for_nid(node, &start_pfn, &end_pfn);
if (end_pfn > max_low_pfn)
max_low_pfn = end_pfn;
}
zones_size[ZONE_NORMAL] = max_low_pfn;
free_area_init_nodes(zones_size);
}


@ -71,6 +71,10 @@ KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
# arch specific predefines for sparse
CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS)
# Default target when executing plain make
boot := arch/riscv/boot
KBUILD_IMAGE := $(boot)/Image.gz
head-y := arch/riscv/kernel/head.o
core-y += arch/riscv/kernel/ arch/riscv/mm/
@ -81,4 +85,13 @@ PHONY += vdso_install
vdso_install:
$(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso $@
all: vmlinux
all: Image.gz
Image: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
Image.%: Image
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
zinstall install:
$(Q)$(MAKE) $(build)=$(boot) $@

arch/riscv/boot/.gitignore vendored 100644

@ -0,0 +1,2 @@
Image
Image.gz


@ -0,0 +1,33 @@
#
# arch/riscv/boot/Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies.
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 2018, Anup Patel.
# Author: Anup Patel <anup@brainfault.org>
#
# Based on the ia64 and arm64 boot/Makefile.
#
OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S
targets := Image
$(obj)/Image: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/Image.gz: $(obj)/Image FORCE
$(call if_changed,gzip)
install:
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/Image System.map "$(INSTALL_PATH)"
zinstall:
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/Image.gz System.map "$(INSTALL_PATH)"


@ -0,0 +1,60 @@
#!/bin/sh
#
# arch/riscv/boot/install.sh
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1995 by Linus Torvalds
#
# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
# Adapted from code in arch/i386/boot/install.sh by Russell King
#
# "make install" script for the RISC-V Linux port
#
# Arguments:
# $1 - kernel version
# $2 - kernel image file
# $3 - kernel map file
# $4 - default install path (blank if root directory)
#
verify () {
if [ ! -f "$1" ]; then
echo "" 1>&2
echo " *** Missing file: $1" 1>&2
echo ' *** You need to run "make" before "make install".' 1>&2
echo "" 1>&2
exit 1
fi
}
# Make sure the files actually exist
verify "$2"
verify "$3"
# User may have a custom install script
if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
if [ "$(basename $2)" = "Image.gz" ]; then
# Compressed install
echo "Installing compressed kernel"
base=vmlinuz
else
# Normal install
echo "Installing normal kernel"
base=vmlinux
fi
if [ -f $4/$base-$1 ]; then
mv $4/$base-$1 $4/$base-$1.old
fi
cat $2 > $4/$base-$1
# Install system map file
if [ -f $4/System.map-$1 ]; then
mv $4/System.map-$1 $4/System.map-$1.old
fi
cp $3 $4/System.map-$1


@ -8,6 +8,7 @@
#define MODULE_ARCH_VERMAGIC "riscv"
struct module;
u64 module_emit_got_entry(struct module *mod, u64 val);
u64 module_emit_plt_entry(struct module *mod, u64 val);


@ -400,13 +400,13 @@ extern unsigned long __must_check __asm_copy_from_user(void *to,
static inline unsigned long
raw_copy_from_user(void *to, const void __user *from, unsigned long n)
{
return __asm_copy_to_user(to, from, n);
return __asm_copy_from_user(to, from, n);
}
static inline unsigned long
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
return __asm_copy_from_user(to, from, n);
return __asm_copy_to_user(to, from, n);
}
extern long strncpy_from_user(char *dest, const char __user *src, long count);


@ -13,10 +13,9 @@
/*
* There is explicitly no include guard here because this file is expected to
* be included multiple times. See uapi/asm/syscalls.h for more info.
* be included multiple times.
*/
#define __ARCH_WANT_NEW_STAT
#define __ARCH_WANT_SYS_CLONE
#include <uapi/asm/unistd.h>
#include <uapi/asm/syscalls.h>


@ -1,13 +1,25 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2017-2018 SiFive
* Copyright (C) 2018 David Abdurachmanov <david.abdurachmanov@gmail.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* There is explicitly no include guard here because this file is expected to
* be included multiple times in order to define the syscall macros via
* __SYSCALL.
*/
#ifdef __LP64__
#define __ARCH_WANT_NEW_STAT
#endif /* __LP64__ */
#include <asm-generic/unistd.h>
/*
* Allows the instruction cache to be flushed from userspace. Despite RISC-V


@ -64,7 +64,7 @@ int riscv_of_processor_hartid(struct device_node *node)
static void print_isa(struct seq_file *f, const char *orig_isa)
{
static const char *ext = "mafdc";
static const char *ext = "mafdcsu";
const char *isa = orig_isa;
const char *e;
@ -88,11 +88,14 @@ static void print_isa(struct seq_file *f, const char *orig_isa)
/*
* Check the rest of the ISA string for valid extensions, printing those
* we find. RISC-V ISA strings define an order, so we only print the
* extension bits when they're in order.
* extension bits when they're in order. Hide the supervisor (S)
* extension from userspace as it's not accessible from there.
*/
for (e = ext; *e != '\0'; ++e) {
if (isa[0] == e[0]) {
seq_write(f, isa, 1);
if (isa[0] != 's')
seq_write(f, isa, 1);
isa++;
}
}


@ -44,6 +44,16 @@ ENTRY(_start)
amoadd.w a3, a2, (a3)
bnez a3, .Lsecondary_start
/* Clear BSS for flat non-ELF images */
la a3, __bss_start
la a4, __bss_stop
ble a4, a3, clear_bss_done
clear_bss:
REG_S zero, (a3)
add a3, a3, RISCV_SZPTR
blt a3, a4, clear_bss
clear_bss_done:
/* Save hart ID and DTB physical address */
mv s0, a0
mv s1, a1


@ -74,7 +74,7 @@ SECTIONS
*(.sbss*)
}
BSS_SECTION(0, 0, 0)
BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0)
EXCEPTION_TABLE(0x10)
NOTES


@ -30,6 +30,7 @@ static const struct acpi_device_id forbidden_id_list[] = {
{"PNP0200", 0}, /* AT DMA Controller */
{"ACPI0009", 0}, /* IOxAPIC */
{"ACPI000A", 0}, /* IOAPIC */
{"SMB0001", 0}, /* ACPI SMBUS virtual device */
{"", 0},
};


@ -201,19 +201,28 @@ static const struct of_device_id ti_cpufreq_of_match[] = {
{},
};
static const struct of_device_id *ti_cpufreq_match_node(void)
{
struct device_node *np;
const struct of_device_id *match;
np = of_find_node_by_path("/");
match = of_match_node(ti_cpufreq_of_match, np);
of_node_put(np);
return match;
}
static int ti_cpufreq_probe(struct platform_device *pdev)
{
u32 version[VERSION_COUNT];
struct device_node *np;
const struct of_device_id *match;
struct opp_table *ti_opp_table;
struct ti_cpufreq_data *opp_data;
const char * const reg_names[] = {"vdd", "vbb"};
int ret;
np = of_find_node_by_path("/");
match = of_match_node(ti_cpufreq_of_match, np);
of_node_put(np);
match = dev_get_platdata(&pdev->dev);
if (!match)
return -ENODEV;
@ -290,7 +299,14 @@ fail_put_node:
static int ti_cpufreq_init(void)
{
platform_device_register_simple("ti-cpufreq", -1, NULL, 0);
const struct of_device_id *match;
/* Check to ensure we are on a compatible platform */
match = ti_cpufreq_match_node();
if (match)
platform_device_register_data(NULL, "ti-cpufreq", -1, match,
sizeof(*match));
return 0;
}
module_init(ti_cpufreq_init);


@ -184,6 +184,7 @@ static long udmabuf_create(const struct udmabuf_create_list *head,
exp_info.ops = &udmabuf_ops;
exp_info.size = ubuf->pagecount << PAGE_SHIFT;
exp_info.priv = ubuf;
exp_info.flags = O_RDWR;
buf = dma_buf_export(&exp_info);
if (IS_ERR(buf)) {


@ -13,6 +13,7 @@
#include <linux/of.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/sched.h>
#include <linux/serdev.h>
#include <linux/slab.h>
@ -63,7 +64,7 @@ static int gnss_serial_write_raw(struct gnss_device *gdev,
int ret;
/* write is only buffered synchronously */
ret = serdev_device_write(serdev, buf, count, 0);
ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT);
if (ret < 0)
return ret;


@ -16,6 +16,7 @@
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <linux/sched.h>
#include <linux/serdev.h>
#include <linux/slab.h>
#include <linux/wait.h>
@ -83,7 +84,7 @@ static int sirf_write_raw(struct gnss_device *gdev, const unsigned char *buf,
int ret;
/* write is only buffered synchronously */
ret = serdev_device_write(serdev, buf, count, 0);
ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT);
if (ret < 0)
return ret;


@ -35,8 +35,8 @@
#define gpio_mockup_err(...) pr_err(GPIO_MOCKUP_NAME ": " __VA_ARGS__)
enum {
GPIO_MOCKUP_DIR_OUT = 0,
GPIO_MOCKUP_DIR_IN = 1,
GPIO_MOCKUP_DIR_IN = 0,
GPIO_MOCKUP_DIR_OUT = 1,
};
/*
@ -131,7 +131,7 @@ static int gpio_mockup_get_direction(struct gpio_chip *gc, unsigned int offset)
{
struct gpio_mockup_chip *chip = gpiochip_get_data(gc);
return chip->lines[offset].dir;
return !chip->lines[offset].dir;
}
static int gpio_mockup_to_irq(struct gpio_chip *gc, unsigned int offset)


@ -268,8 +268,8 @@ static int pxa_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
if (pxa_gpio_has_pinctrl()) {
ret = pinctrl_gpio_direction_input(chip->base + offset);
if (!ret)
return 0;
if (ret)
return ret;
}
spin_lock_irqsave(&gpio_lock, flags);


@ -1295,7 +1295,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *chip, void *data,
gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);
if (!gdev->descs) {
status = -ENOMEM;
goto err_free_gdev;
goto err_free_ida;
}
if (chip->ngpio == 0) {
@ -1427,8 +1427,9 @@ err_free_label:
kfree_const(gdev->label);
err_free_descs:
kfree(gdev->descs);
err_free_gdev:
err_free_ida:
ida_simple_remove(&gpio_ida, gdev->id);
err_free_gdev:
/* failures here can mean systems won't boot... */
pr_err("%s: GPIOs %d..%d (%s) failed to register, %d\n", __func__,
gdev->base, gdev->base + gdev->ngpio - 1,


@ -501,8 +501,11 @@ void amdgpu_amdkfd_set_compute_idle(struct kgd_dev *kgd, bool idle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
amdgpu_dpm_switch_power_profile(adev,
PP_SMC_POWER_PROFILE_COMPUTE, !idle);
if (adev->powerplay.pp_funcs &&
adev->powerplay.pp_funcs->switch_power_profile)
amdgpu_dpm_switch_power_profile(adev,
PP_SMC_POWER_PROFILE_COMPUTE,
!idle);
}
bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid)


@ -626,6 +626,13 @@ int amdgpu_display_modeset_create_props(struct amdgpu_device *adev)
"dither",
amdgpu_dither_enum_list, sz);
if (amdgpu_device_has_dc_support(adev)) {
adev->mode_info.max_bpc_property =
drm_property_create_range(adev->ddev, 0, "max bpc", 8, 16);
if (!adev->mode_info.max_bpc_property)
return -ENOMEM;
}
return 0;
}


@ -338,6 +338,8 @@ struct amdgpu_mode_info {
struct drm_property *audio_property;
/* FMT dithering */
struct drm_property *dither_property;
/* maximum number of bits per channel for monitor color */
struct drm_property *max_bpc_property;
/* hardcoded DFP edid from BIOS */
struct edid *bios_hardcoded_edid;
int bios_hardcoded_edid_size;


@ -46,6 +46,7 @@ MODULE_FIRMWARE("amdgpu/tahiti_mc.bin");
MODULE_FIRMWARE("amdgpu/pitcairn_mc.bin");
MODULE_FIRMWARE("amdgpu/verde_mc.bin");
MODULE_FIRMWARE("amdgpu/oland_mc.bin");
MODULE_FIRMWARE("amdgpu/hainan_mc.bin");
MODULE_FIRMWARE("amdgpu/si58_mc.bin");
#define MC_SEQ_MISC0__MT__MASK 0xf0000000


@ -65,6 +65,13 @@
#define mmMP0_MISC_LIGHT_SLEEP_CTRL 0x01ba
#define mmMP0_MISC_LIGHT_SLEEP_CTRL_BASE_IDX 0
/* for Vega20 register name change */
#define mmHDP_MEM_POWER_CTRL 0x00d4
#define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK 0x00000001L
#define HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK 0x00000002L
#define HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK 0x00010000L
#define HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK 0x00020000L
#define mmHDP_MEM_POWER_CTRL_BASE_IDX 0
/*
* Indirect registers accessor
*/
@ -870,15 +877,33 @@ static void soc15_update_hdp_light_sleep(struct amdgpu_device *adev, bool enable
{
uint32_t def, data;
def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS));
if (adev->asic_type == CHIP_VEGA20) {
def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL));
if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK;
else
data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK;
if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
data |= HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK |
HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK |
HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK |
HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK;
else
data &= ~(HDP_MEM_POWER_CTRL__IPH_MEM_POWER_CTRL_EN_MASK |
HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK |
HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK |
HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK);
if (def != data)
WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data);
if (def != data)
WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_CTRL), data);
} else {
def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS));
if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK;
else
data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK;
if (def != data)
WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data);
}
}
static void soc15_update_drm_clock_gating(struct amdgpu_device *adev, bool enable)


@ -2422,8 +2422,15 @@ static void update_stream_scaling_settings(const struct drm_display_mode *mode,
static enum dc_color_depth
convert_color_depth_from_display_info(const struct drm_connector *connector)
{
struct dm_connector_state *dm_conn_state =
to_dm_connector_state(connector->state);
uint32_t bpc = connector->display_info.bpc;
/* TODO: Remove this when there's support for max_bpc in drm */
if (dm_conn_state && bpc > dm_conn_state->max_bpc)
/* Round down to nearest even number. */
bpc = dm_conn_state->max_bpc - (dm_conn_state->max_bpc & 1);
switch (bpc) {
case 0:
/*
@ -3007,6 +3014,9 @@ int amdgpu_dm_connector_atomic_set_property(struct drm_connector *connector,
} else if (property == adev->mode_info.underscan_property) {
dm_new_state->underscan_enable = val;
ret = 0;
} else if (property == adev->mode_info.max_bpc_property) {
dm_new_state->max_bpc = val;
ret = 0;
}
return ret;
@ -3049,6 +3059,9 @@ int amdgpu_dm_connector_atomic_get_property(struct drm_connector *connector,
} else if (property == adev->mode_info.underscan_property) {
*val = dm_state->underscan_enable;
ret = 0;
} else if (property == adev->mode_info.max_bpc_property) {
*val = dm_state->max_bpc;
ret = 0;
}
return ret;
}
@ -3859,6 +3872,9 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm,
drm_object_attach_property(&aconnector->base.base,
adev->mode_info.underscan_vborder_property,
0);
drm_object_attach_property(&aconnector->base.base,
adev->mode_info.max_bpc_property,
0);
}


@ -252,6 +252,7 @@ struct dm_connector_state {
enum amdgpu_rmx_type scaling;
uint8_t underscan_vborder;
uint8_t underscan_hborder;
uint8_t max_bpc;
bool underscan_enable;
bool freesync_enable;
bool freesync_capable;


@ -4525,12 +4525,12 @@ static int smu7_get_sclk_od(struct pp_hwmgr *hwmgr)
struct smu7_single_dpm_table *sclk_table = &(data->dpm_table.sclk_table);
struct smu7_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.sclk_table);
int value;
int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
int golden_value = golden_sclk_table->dpm_levels
[golden_sclk_table->count - 1].value;
value = (sclk_table->dpm_levels[sclk_table->count - 1].value -
golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) *
100 /
golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value;
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
@ -4567,12 +4567,12 @@ static int smu7_get_mclk_od(struct pp_hwmgr *hwmgr)
struct smu7_single_dpm_table *mclk_table = &(data->dpm_table.mclk_table);
struct smu7_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mclk_table);
int value;
int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
int golden_value = golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value;
value = (mclk_table->dpm_levels[mclk_table->count - 1].value -
golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) *
100 /
golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value;
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}


@ -4522,15 +4522,13 @@ static int vega10_get_sclk_od(struct pp_hwmgr *hwmgr)
struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
struct vega10_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.gfx_table);
int value;
value = (sclk_table->dpm_levels[sclk_table->count - 1].value -
golden_sclk_table->dpm_levels
[golden_sclk_table->count - 1].value) *
100 /
golden_sclk_table->dpm_levels
int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
int golden_value = golden_sclk_table->dpm_levels
[golden_sclk_table->count - 1].value;
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
@ -4575,16 +4573,13 @@ static int vega10_get_mclk_od(struct pp_hwmgr *hwmgr)
struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
struct vega10_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mem_table);
int value;
value = (mclk_table->dpm_levels
[mclk_table->count - 1].value -
golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value) *
100 /
golden_mclk_table->dpm_levels
int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
int golden_value = golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value;
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}


@ -2243,12 +2243,12 @@ static int vega12_get_sclk_od(struct pp_hwmgr *hwmgr)
struct vega12_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
struct vega12_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.gfx_table);
int value;
int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
int golden_value = golden_sclk_table->dpm_levels
[golden_sclk_table->count - 1].value;
value = (sclk_table->dpm_levels[sclk_table->count - 1].value -
golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) *
100 /
golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value;
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
@ -2264,16 +2264,13 @@ static int vega12_get_mclk_od(struct pp_hwmgr *hwmgr)
struct vega12_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
struct vega12_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mem_table);
int value;
value = (mclk_table->dpm_levels
[mclk_table->count - 1].value -
golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value) *
100 /
golden_mclk_table->dpm_levels
int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
int golden_value = golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value;
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}


@ -75,7 +75,17 @@ static void vega20_set_default_registry_data(struct pp_hwmgr *hwmgr)
data->phy_clk_quad_eqn_b = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT;
data->phy_clk_quad_eqn_c = PPREGKEY_VEGA20QUADRATICEQUATION_DFLT;
data->registry_data.disallowed_features = 0x0;
/*
* Disable the following features for now:
* GFXCLK DS
* SOCLK DS
* LCLK DS
* DCEFCLK DS
* FCLK DS
* MP1CLK DS
* MP0CLK DS
*/
data->registry_data.disallowed_features = 0xE0041C00;
data->registry_data.od_state_in_dc_support = 0;
data->registry_data.thermal_support = 1;
data->registry_data.skip_baco_hardware = 0;
@ -1313,12 +1323,13 @@ static int vega20_get_sclk_od(
&(data->dpm_table.gfx_table);
struct vega20_single_dpm_table *golden_sclk_table =
&(data->golden_dpm_table.gfx_table);
int value;
int value = sclk_table->dpm_levels[sclk_table->count - 1].value;
int golden_value = golden_sclk_table->dpm_levels
[golden_sclk_table->count - 1].value;
/* od percentage */
value = DIV_ROUND_UP((sclk_table->dpm_levels[sclk_table->count - 1].value -
golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) * 100,
golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value);
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}
@ -1358,12 +1369,13 @@ static int vega20_get_mclk_od(
&(data->dpm_table.mem_table);
struct vega20_single_dpm_table *golden_mclk_table =
&(data->golden_dpm_table.mem_table);
int value;
int value = mclk_table->dpm_levels[mclk_table->count - 1].value;
int golden_value = golden_mclk_table->dpm_levels
[golden_mclk_table->count - 1].value;
/* od percentage */
value = DIV_ROUND_UP((mclk_table->dpm_levels[mclk_table->count - 1].value -
golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value) * 100,
golden_mclk_table->dpm_levels[golden_mclk_table->count - 1].value);
value -= golden_value;
value = DIV_ROUND_UP(value * 100, golden_value);
return value;
}


@ -60,8 +60,29 @@ static const struct pci_device_id pciidlist[] = {
MODULE_DEVICE_TABLE(pci, pciidlist);
static void ast_kick_out_firmware_fb(struct pci_dev *pdev)
{
struct apertures_struct *ap;
bool primary = false;
ap = alloc_apertures(1);
if (!ap)
return;
ap->ranges[0].base = pci_resource_start(pdev, 0);
ap->ranges[0].size = pci_resource_len(pdev, 0);
#ifdef CONFIG_X86
primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
#endif
drm_fb_helper_remove_conflicting_framebuffers(ap, "astdrmfb", primary);
kfree(ap);
}
static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
ast_kick_out_firmware_fb(pdev);
return drm_get_pci_dev(pdev, ent, &driver);
}


@ -568,6 +568,7 @@ static int ast_crtc_do_set_base(struct drm_crtc *crtc,
}
ast_bo_unreserve(bo);
ast_set_offset_reg(crtc);
ast_set_start_address_crt1(crtc, (u32)gpu_addr);
return 0;
@ -1254,7 +1255,7 @@ static int ast_cursor_move(struct drm_crtc *crtc,
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, ((y >> 8) & 0x07));
/* dummy write to fire HWC */
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xCB, 0xFF, 0x00);
ast_show_cursor(crtc);
return 0;
}


@ -219,6 +219,9 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
mutex_lock(&fb_helper->lock);
drm_connector_list_iter_begin(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
continue;
ret = __drm_fb_helper_add_one_connector(fb_helper, connector);
if (ret)
goto fail;


@ -214,6 +214,12 @@ static int vc4_atomic_commit(struct drm_device *dev,
return 0;
}
/* We know for sure we don't want an async update here. Set
* state->legacy_cursor_update to false to prevent
* drm_atomic_helper_setup_commit() from auto-completing
* commit->flip_done.
*/
state->legacy_cursor_update = false;
ret = drm_atomic_helper_setup_commit(state, nonblock);
if (ret)
return ret;


@ -854,7 +854,7 @@ void vc4_plane_async_set_fb(struct drm_plane *plane, struct drm_framebuffer *fb)
static void vc4_plane_atomic_async_update(struct drm_plane *plane,
struct drm_plane_state *state)
{
struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);
struct vc4_plane_state *vc4_state, *new_vc4_state;
if (plane->state->fb != state->fb) {
vc4_plane_async_set_fb(plane, state->fb);
@ -875,7 +875,18 @@ static void vc4_plane_atomic_async_update(struct drm_plane *plane,
plane->state->src_y = state->src_y;
/* Update the display list based on the new crtc_x/y. */
vc4_plane_atomic_check(plane, plane->state);
vc4_plane_atomic_check(plane, state);
new_vc4_state = to_vc4_plane_state(state);
vc4_state = to_vc4_plane_state(plane->state);
/* Update the current vc4_state pos0, pos2 and ptr0 dlist entries. */
vc4_state->dlist[vc4_state->pos0_offset] =
new_vc4_state->dlist[vc4_state->pos0_offset];
vc4_state->dlist[vc4_state->pos2_offset] =
new_vc4_state->dlist[vc4_state->pos2_offset];
vc4_state->dlist[vc4_state->ptr0_offset] =
new_vc4_state->dlist[vc4_state->ptr0_offset];
/* Note that we can't just call vc4_plane_write_dlist()
* because that would smash the context data that the HVS is


@ -275,6 +275,9 @@
#define USB_VENDOR_ID_CIDC 0x1677
#define I2C_VENDOR_ID_CIRQUE 0x0488
#define I2C_PRODUCT_ID_CIRQUE_121F 0x121F
#define USB_VENDOR_ID_CJTOUCH 0x24b8
#define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020
#define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040
@ -707,6 +710,7 @@
#define USB_VENDOR_ID_LG 0x1fd2
#define USB_DEVICE_ID_LG_MULTITOUCH 0x0064
#define USB_DEVICE_ID_LG_MELFAS_MT 0x6007
#define I2C_DEVICE_ID_LG_8001 0x8001
#define USB_VENDOR_ID_LOGITECH 0x046d
#define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e
@ -805,6 +809,7 @@
#define USB_DEVICE_ID_MS_TYPE_COVER_2 0x07a9
#define USB_DEVICE_ID_MS_POWER_COVER 0x07da
#define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd
#define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb
#define USB_VENDOR_ID_MOJO 0x8282
#define USB_DEVICE_ID_RETRO_ADAPTER 0x3201
@ -1043,6 +1048,7 @@
#define USB_VENDOR_ID_SYMBOL 0x05e0
#define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800
#define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300
#define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200
#define USB_VENDOR_ID_SYNAPTICS 0x06cb
#define USB_DEVICE_ID_SYNAPTICS_TP 0x0001
@ -1204,6 +1210,8 @@
#define USB_DEVICE_ID_PRIMAX_MOUSE_4D22 0x4d22
#define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05
#define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72
#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f
#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22
#define USB_VENDOR_ID_RISO_KAGAKU 0x1294 /* Riso Kagaku Corp. */


@ -325,6 +325,9 @@ static const struct hid_device_id hid_battery_quirks[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM,
USB_DEVICE_ID_ELECOM_BM084),
HID_BATTERY_QUIRK_IGNORE },
{ HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL,
USB_DEVICE_ID_SYMBOL_SCANNER_3),
HID_BATTERY_QUIRK_IGNORE },
{}
};
@ -1838,47 +1841,3 @@ void hidinput_disconnect(struct hid_device *hid)
}
EXPORT_SYMBOL_GPL(hidinput_disconnect);
/**
* hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll
* events given a high-resolution wheel
* movement.
* @counter: a hid_scroll_counter struct describing the wheel.
* @hi_res_value: the movement of the wheel, in the mouse's high-resolution
* units.
*
* Given a high-resolution movement, this function converts the movement into
* microns and emits high-resolution scroll events for the input device. It also
* uses the multiplier from &struct hid_scroll_counter to emit low-resolution
* scroll events when appropriate for backwards-compatibility with userspace
* input libraries.
*/
void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
int hi_res_value)
{
int low_res_value, remainder, multiplier;
input_report_rel(counter->dev, REL_WHEEL_HI_RES,
hi_res_value * counter->microns_per_hi_res_unit);
/*
* Update the low-res remainder with the high-res value,
* but reset if the direction has changed.
*/
remainder = counter->remainder;
if ((remainder ^ hi_res_value) < 0)
remainder = 0;
remainder += hi_res_value;
/*
* Then just use the resolution multiplier to see if
* we should send a low-res (aka regular wheel) event.
*/
multiplier = counter->resolution_multiplier;
low_res_value = remainder / multiplier;
remainder -= low_res_value * multiplier;
counter->remainder = remainder;
if (low_res_value)
input_report_rel(counter->dev, REL_WHEEL, low_res_value);
}
EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);


@ -64,14 +64,6 @@ MODULE_PARM_DESC(disable_tap_to_click,
#define HIDPP_QUIRK_NO_HIDINPUT BIT(23)
#define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS BIT(24)
#define HIDPP_QUIRK_UNIFYING BIT(25)
#define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(26)
#define HIDPP_QUIRK_HI_RES_SCROLL_X2120 BIT(27)
#define HIDPP_QUIRK_HI_RES_SCROLL_X2121 BIT(28)
/* Convenience constant to check for any high-res support. */
#define HIDPP_QUIRK_HI_RES_SCROLL (HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \
HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \
HIDPP_QUIRK_HI_RES_SCROLL_X2121)
#define HIDPP_QUIRK_DELAYED_INIT HIDPP_QUIRK_NO_HIDINPUT
@ -157,7 +149,6 @@ struct hidpp_device {
unsigned long capabilities;
struct hidpp_battery battery;
struct hid_scroll_counter vertical_wheel_counter;
};
/* HID++ 1.0 error codes */
@ -409,53 +400,32 @@ static void hidpp_prefix_name(char **name, int name_length)
#define HIDPP_SET_LONG_REGISTER 0x82
#define HIDPP_GET_LONG_REGISTER 0x83
/**
* hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register.
* @hidpp_dev: the device to set the register on.
* @register_address: the address of the register to modify.
* @byte: the byte of the register to modify. Should be less than 3.
* Return: 0 if successful, otherwise a negative error code.
*/
static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev,
u8 register_address, u8 byte, u8 bit)
#define HIDPP_REG_GENERAL 0x00
static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
{
struct hidpp_report response;
int ret;
u8 params[3] = { 0 };
ret = hidpp_send_rap_command_sync(hidpp_dev,
REPORT_ID_HIDPP_SHORT,
HIDPP_GET_REGISTER,
register_address,
NULL, 0, &response);
REPORT_ID_HIDPP_SHORT,
HIDPP_GET_REGISTER,
HIDPP_REG_GENERAL,
NULL, 0, &response);
if (ret)
return ret;
memcpy(params, response.rap.params, 3);
params[byte] |= BIT(bit);
/* Set the battery bit */
params[0] |= BIT(4);
return hidpp_send_rap_command_sync(hidpp_dev,
REPORT_ID_HIDPP_SHORT,
HIDPP_SET_REGISTER,
register_address,
params, 3, &response);
}
#define HIDPP_REG_GENERAL 0x00
static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
{
return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4);
}
#define HIDPP_REG_FEATURES 0x01
/* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". */
static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev)
{
return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6);
REPORT_ID_HIDPP_SHORT,
HIDPP_SET_REGISTER,
HIDPP_REG_GENERAL,
params, 3, &response);
}
#define HIDPP_REG_BATTERY_STATUS 0x07
@ -1166,100 +1136,6 @@ static int hidpp_battery_get_property(struct power_supply *psy,
return ret;
}
/* -------------------------------------------------------------------------- */
/* 0x2120: Hi-resolution scrolling */
/* -------------------------------------------------------------------------- */
#define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x2120
#define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x10
static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp,
bool enabled, u8 *multiplier)
{
u8 feature_index;
u8 feature_type;
int ret;
u8 params[1];
struct hidpp_report response;
ret = hidpp_root_get_feature(hidpp,
HIDPP_PAGE_HI_RESOLUTION_SCROLLING,
&feature_index,
&feature_type);
if (ret)
return ret;
params[0] = enabled ? BIT(0) : 0;
ret = hidpp_send_fap_command_sync(hidpp, feature_index,
CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE,
params, sizeof(params), &response);
if (ret)
return ret;
*multiplier = response.fap.params[1];
return 0;
}
/* -------------------------------------------------------------------------- */
/* 0x2121: HiRes Wheel */
/* -------------------------------------------------------------------------- */
#define HIDPP_PAGE_HIRES_WHEEL 0x2121
#define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x00
#define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x20
static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp,
u8 *multiplier)
{
u8 feature_index;
u8 feature_type;
int ret;
struct hidpp_report response;
ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,
&feature_index, &feature_type);
if (ret)
goto return_default;
ret = hidpp_send_fap_command_sync(hidpp, feature_index,
CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY,
NULL, 0, &response);
if (ret)
goto return_default;
*multiplier = response.fap.params[0];
return 0;
return_default:
hid_warn(hidpp->hid_dev,
"Couldn't get wheel multiplier (error %d), assuming %d.\n",
ret, *multiplier);
return ret;
}
static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert,
bool high_resolution, bool use_hidpp)
{
u8 feature_index;
u8 feature_type;
int ret;
u8 params[1];
struct hidpp_report response;
ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,
&feature_index, &feature_type);
if (ret)
return ret;
params[0] = (invert ? BIT(2) : 0) |
(high_resolution ? BIT(1) : 0) |
(use_hidpp ? BIT(0) : 0);
return hidpp_send_fap_command_sync(hidpp, feature_index,
CMD_HIRES_WHEEL_SET_WHEEL_MODE,
params, sizeof(params), &response);
}
/* -------------------------------------------------------------------------- */
/* 0x4301: Solar Keyboard */
/* -------------------------------------------------------------------------- */
@ -2523,8 +2399,7 @@ static int m560_raw_event(struct hid_device *hdev, u8 *data, int size)
input_report_rel(mydata->input, REL_Y, v);
v = hid_snto32(data[6], 8);
hid_scroll_counter_handle_scroll(
&hidpp->vertical_wheel_counter, v);
input_report_rel(mydata->input, REL_WHEEL, v);
input_sync(mydata->input);
}
@ -2652,72 +2527,6 @@ static int g920_get_config(struct hidpp_device *hidpp)
return 0;
}
/* -------------------------------------------------------------------------- */
/* High-resolution scroll wheels */
/* -------------------------------------------------------------------------- */
/**
* struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel.
* @product_id: the HID product ID of the device being described.
* @microns_per_hi_res_unit: the distance moved by the user's finger for each
* high-resolution unit reported by the device, in
* 256ths of a millimetre.
*/
struct hi_res_scroll_info {
__u32 product_id;
int microns_per_hi_res_unit;
};
static struct hi_res_scroll_info hi_res_scroll_devices[] = {
{ /* Anywhere MX */
.product_id = 0x1017, .microns_per_hi_res_unit = 445 },
{ /* Performance MX */
.product_id = 0x101a, .microns_per_hi_res_unit = 406 },
{ /* M560 */
.product_id = 0x402d, .microns_per_hi_res_unit = 435 },
{ /* MX Master 2S */
.product_id = 0x4069, .microns_per_hi_res_unit = 406 },
};
static int hi_res_scroll_look_up_microns(__u32 product_id)
{
int i;
int num_devices = sizeof(hi_res_scroll_devices)
/ sizeof(hi_res_scroll_devices[0]);
for (i = 0; i < num_devices; i++) {
if (hi_res_scroll_devices[i].product_id == product_id)
return hi_res_scroll_devices[i].microns_per_hi_res_unit;
}
/* We don't have a value for this device, so use a sensible default. */
return 406;
}
static int hi_res_scroll_enable(struct hidpp_device *hidpp)
{
int ret;
u8 multiplier = 8;
if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) {
ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false);
hidpp_hrw_get_wheel_capability(hidpp, &multiplier);
} else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) {
ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true,
&multiplier);
} else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */
ret = hidpp10_enable_scrolling_acceleration(hidpp);
if (ret)
return ret;
hidpp->vertical_wheel_counter.resolution_multiplier = multiplier;
hidpp->vertical_wheel_counter.microns_per_hi_res_unit =
hi_res_scroll_look_up_microns(hidpp->hid_dev->product);
hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n",
multiplier,
hidpp->vertical_wheel_counter.microns_per_hi_res_unit);
return 0;
}
/* -------------------------------------------------------------------------- */
/* Generic HID++ devices */
/* -------------------------------------------------------------------------- */
@ -2763,11 +2572,6 @@ static void hidpp_populate_input(struct hidpp_device *hidpp,
wtp_populate_input(hidpp, input, origin_is_hid_core);
else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560)
m560_populate_input(hidpp, input, origin_is_hid_core);
if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) {
input_set_capability(input, EV_REL, REL_WHEEL_HI_RES);
hidpp->vertical_wheel_counter.dev = input;
}
}
static int hidpp_input_configured(struct hid_device *hdev,
@ -2886,27 +2690,6 @@ static int hidpp_raw_event(struct hid_device *hdev, struct hid_report *report,
return 0;
}
static int hidpp_event(struct hid_device *hdev, struct hid_field *field,
struct hid_usage *usage, __s32 value)
{
/* This function will only be called for scroll events, due to the
* restriction imposed in hidpp_usages.
*/
struct hidpp_device *hidpp = hid_get_drvdata(hdev);
struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter;
/* A scroll event may occur before the multiplier has been retrieved or
* the input device set, or high-res scroll enabling may fail. In such
* cases we must return early (falling back to default behaviour) to
* avoid a crash in hid_scroll_counter_handle_scroll.
*/
if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 0
|| counter->dev == NULL || counter->resolution_multiplier == 0)
return 0;
hid_scroll_counter_handle_scroll(counter, value);
return 1;
}
static int hidpp_initialize_battery(struct hidpp_device *hidpp)
{
static atomic_t battery_no = ATOMIC_INIT(0);
@ -3118,9 +2901,6 @@ static void hidpp_connect_event(struct hidpp_device *hidpp)
if (hidpp->battery.ps)
power_supply_changed(hidpp->battery.ps);
if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL)
hi_res_scroll_enable(hidpp);
if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input)
/* if the input nodes are already created, we can stop now */
return;
@ -3306,63 +3086,35 @@ static void hidpp_remove(struct hid_device *hdev)
mutex_destroy(&hidpp->send_mutex);
}
#define LDJ_DEVICE(product) \
HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \
USB_VENDOR_ID_LOGITECH, (product))
static const struct hid_device_id hidpp_devices[] = {
{ /* wireless touchpad */
LDJ_DEVICE(0x4011),
HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
USB_VENDOR_ID_LOGITECH, 0x4011),
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT |
HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS },
{ /* wireless touchpad T650 */
LDJ_DEVICE(0x4101),
HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
USB_VENDOR_ID_LOGITECH, 0x4101),
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },
{ /* wireless touchpad T651 */
HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
USB_DEVICE_ID_LOGITECH_T651),
.driver_data = HIDPP_QUIRK_CLASS_WTP },
{ /* Mouse Logitech Anywhere MX */
LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
{ /* Mouse Logitech Cube */
LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
{ /* Mouse Logitech M335 */
LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* Mouse Logitech M515 */
LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
{ /* Mouse logitech M560 */
LDJ_DEVICE(0x402d),
.driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560
| HIDPP_QUIRK_HI_RES_SCROLL_X2120 },
{ /* Mouse Logitech M705 (firmware RQM17) */
LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
{ /* Mouse Logitech M705 (firmware RQM67) */
LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* Mouse Logitech M720 */
LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* Mouse Logitech MX Anywhere 2 */
LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* Mouse Logitech MX Anywhere 2S */
LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* Mouse Logitech MX Master */
LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ LDJ_DEVICE(0x4071), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* Mouse Logitech MX Master 2S */
LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },
{ /* Mouse Logitech Performance MX */
LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
USB_VENDOR_ID_LOGITECH, 0x402d),
.driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 },
{ /* Keyboard logitech K400 */
LDJ_DEVICE(0x4024),
HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
USB_VENDOR_ID_LOGITECH, 0x4024),
.driver_data = HIDPP_QUIRK_CLASS_K400 },
{ /* Solar Keyboard Logitech K750 */
LDJ_DEVICE(0x4002),
HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
USB_VENDOR_ID_LOGITECH, 0x4002),
.driver_data = HIDPP_QUIRK_CLASS_K750 },
{ LDJ_DEVICE(HID_ANY_ID) },
{ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL),
.driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS},
@ -3371,19 +3123,12 @@ static const struct hid_device_id hidpp_devices[] = {
MODULE_DEVICE_TABLE(hid, hidpp_devices);
static const struct hid_usage_id hidpp_usages[] = {
{ HID_GD_WHEEL, EV_REL, REL_WHEEL },
{ HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1}
};
static struct hid_driver hidpp_driver = {
.name = "logitech-hidpp-device",
.id_table = hidpp_devices,
.probe = hidpp_probe,
.remove = hidpp_remove,
.raw_event = hidpp_raw_event,
.usage_table = hidpp_usages,
.event = hidpp_event,
.input_configured = hidpp_input_configured,
.input_mapping = hidpp_input_mapping,
.input_mapped = hidpp_input_mapped,

View File

@ -1814,6 +1814,12 @@ static const struct hid_device_id mt_devices[] = {
MT_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT,
USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) },
/* Cirque devices */
{ .driver_data = MT_CLS_WIN_8_DUAL,
HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
I2C_VENDOR_ID_CIRQUE,
I2C_PRODUCT_ID_CIRQUE_121F) },
/* CJTouch panels */
{ .driver_data = MT_CLS_NSMU,
MT_USB_DEVICE(USB_VENDOR_ID_CJTOUCH,

View File

@ -107,6 +107,7 @@ static const struct hid_device_id hid_quirks[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
@ -129,6 +130,8 @@ static const struct hid_device_id hid_quirks[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003), HID_QUIRK_NOGET },

View File

@ -23,8 +23,9 @@
* In order to avoid breaking them this driver creates a layered hidraw device,
* so it can detect when the client is running and then:
* - it will not send any command to the controller.
* - this input device will be disabled, to avoid double input of the same
* - this input device will be removed, to avoid double input of the same
* user action.
* When the client is closed, this input device will be created again.
*
* For additional functions, such as changing the right-pad margin or switching
* the led, you can use the user-space tool at:
@ -113,7 +114,7 @@ struct steam_device {
spinlock_t lock;
struct hid_device *hdev, *client_hdev;
struct mutex mutex;
bool client_opened, input_opened;
bool client_opened;
struct input_dev __rcu *input;
unsigned long quirks;
struct work_struct work_connect;
@ -279,18 +280,6 @@ static void steam_set_lizard_mode(struct steam_device *steam, bool enable)
}
}
static void steam_update_lizard_mode(struct steam_device *steam)
{
mutex_lock(&steam->mutex);
if (!steam->client_opened) {
if (steam->input_opened)
steam_set_lizard_mode(steam, false);
else
steam_set_lizard_mode(steam, lizard_mode);
}
mutex_unlock(&steam->mutex);
}
static int steam_input_open(struct input_dev *dev)
{
struct steam_device *steam = input_get_drvdata(dev);
@ -301,7 +290,6 @@ static int steam_input_open(struct input_dev *dev)
return ret;
mutex_lock(&steam->mutex);
steam->input_opened = true;
if (!steam->client_opened && lizard_mode)
steam_set_lizard_mode(steam, false);
mutex_unlock(&steam->mutex);
@ -313,7 +301,6 @@ static void steam_input_close(struct input_dev *dev)
struct steam_device *steam = input_get_drvdata(dev);
mutex_lock(&steam->mutex);
steam->input_opened = false;
if (!steam->client_opened && lizard_mode)
steam_set_lizard_mode(steam, true);
mutex_unlock(&steam->mutex);
@ -400,7 +387,7 @@ static int steam_battery_register(struct steam_device *steam)
return 0;
}
static int steam_register(struct steam_device *steam)
static int steam_input_register(struct steam_device *steam)
{
struct hid_device *hdev = steam->hdev;
struct input_dev *input;
@ -414,17 +401,6 @@ static int steam_register(struct steam_device *steam)
return 0;
}
/*
* Unlikely, but getting the serial could fail, and it is not so
* important, so make up a serial number and go on.
*/
if (steam_get_serial(steam) < 0)
strlcpy(steam->serial_no, "XXXXXXXXXX",
sizeof(steam->serial_no));
hid_info(hdev, "Steam Controller '%s' connected",
steam->serial_no);
input = input_allocate_device();
if (!input)
return -ENOMEM;
@ -492,11 +468,6 @@ static int steam_register(struct steam_device *steam)
goto input_register_fail;
rcu_assign_pointer(steam->input, input);
/* ignore battery errors, we can live without it */
if (steam->quirks & STEAM_QUIRK_WIRELESS)
steam_battery_register(steam);
return 0;
input_register_fail:
@ -504,27 +475,88 @@ input_register_fail:
return ret;
}
static void steam_unregister(struct steam_device *steam)
static void steam_input_unregister(struct steam_device *steam)
{
struct input_dev *input;
rcu_read_lock();
input = rcu_dereference(steam->input);
rcu_read_unlock();
if (!input)
return;
RCU_INIT_POINTER(steam->input, NULL);
synchronize_rcu();
input_unregister_device(input);
}
static void steam_battery_unregister(struct steam_device *steam)
{
struct power_supply *battery;
rcu_read_lock();
input = rcu_dereference(steam->input);
battery = rcu_dereference(steam->battery);
rcu_read_unlock();
if (battery) {
RCU_INIT_POINTER(steam->battery, NULL);
synchronize_rcu();
power_supply_unregister(battery);
if (!battery)
return;
RCU_INIT_POINTER(steam->battery, NULL);
synchronize_rcu();
power_supply_unregister(battery);
}
static int steam_register(struct steam_device *steam)
{
int ret;
/*
* This function can be called several times in a row with the
* wireless adaptor, without steam_unregister() between them, because
* another client sends a get_connection_status command, for example.
* The battery and serial number are set just once per device.
*/
if (!steam->serial_no[0]) {
/*
* Unlikely, but getting the serial could fail, and it is not so
* important, so make up a serial number and go on.
*/
if (steam_get_serial(steam) < 0)
strlcpy(steam->serial_no, "XXXXXXXXXX",
sizeof(steam->serial_no));
hid_info(steam->hdev, "Steam Controller '%s' connected",
steam->serial_no);
/* ignore battery errors, we can live without it */
if (steam->quirks & STEAM_QUIRK_WIRELESS)
steam_battery_register(steam);
mutex_lock(&steam_devices_lock);
list_add(&steam->list, &steam_devices);
mutex_unlock(&steam_devices_lock);
}
if (input) {
RCU_INIT_POINTER(steam->input, NULL);
synchronize_rcu();
mutex_lock(&steam->mutex);
if (!steam->client_opened) {
steam_set_lizard_mode(steam, lizard_mode);
ret = steam_input_register(steam);
} else {
ret = 0;
}
mutex_unlock(&steam->mutex);
return ret;
}
static void steam_unregister(struct steam_device *steam)
{
steam_battery_unregister(steam);
steam_input_unregister(steam);
if (steam->serial_no[0]) {
hid_info(steam->hdev, "Steam Controller '%s' disconnected",
steam->serial_no);
input_unregister_device(input);
mutex_lock(&steam_devices_lock);
list_del(&steam->list);
mutex_unlock(&steam_devices_lock);
steam->serial_no[0] = 0;
}
}
@ -600,6 +632,9 @@ static int steam_client_ll_open(struct hid_device *hdev)
mutex_lock(&steam->mutex);
steam->client_opened = true;
mutex_unlock(&steam->mutex);
steam_input_unregister(steam);
return ret;
}
@ -609,13 +644,13 @@ static void steam_client_ll_close(struct hid_device *hdev)
mutex_lock(&steam->mutex);
steam->client_opened = false;
if (steam->input_opened)
steam_set_lizard_mode(steam, false);
else
steam_set_lizard_mode(steam, lizard_mode);
mutex_unlock(&steam->mutex);
hid_hw_close(steam->hdev);
if (steam->connected) {
steam_set_lizard_mode(steam, lizard_mode);
steam_input_register(steam);
}
}
static int steam_client_ll_raw_request(struct hid_device *hdev,
@ -744,11 +779,6 @@ static int steam_probe(struct hid_device *hdev,
}
}
mutex_lock(&steam_devices_lock);
steam_update_lizard_mode(steam);
list_add(&steam->list, &steam_devices);
mutex_unlock(&steam_devices_lock);
return 0;
hid_hw_open_fail:
@ -774,10 +804,6 @@ static void steam_remove(struct hid_device *hdev)
return;
}
mutex_lock(&steam_devices_lock);
list_del(&steam->list);
mutex_unlock(&steam_devices_lock);
hid_destroy_device(steam->client_hdev);
steam->client_opened = false;
cancel_work_sync(&steam->work_connect);
@ -792,12 +818,14 @@ static void steam_remove(struct hid_device *hdev)
static void steam_do_connect_event(struct steam_device *steam, bool connected)
{
unsigned long flags;
bool changed;
spin_lock_irqsave(&steam->lock, flags);
changed = steam->connected != connected;
steam->connected = connected;
spin_unlock_irqrestore(&steam->lock, flags);
if (schedule_work(&steam->work_connect) == 0)
if (changed && schedule_work(&steam->work_connect) == 0)
dbg_hid("%s: connected=%d event already queued\n",
__func__, connected);
}
@ -1019,13 +1047,8 @@ static int steam_raw_event(struct hid_device *hdev,
return 0;
rcu_read_lock();
input = rcu_dereference(steam->input);
if (likely(input)) {
if (likely(input))
steam_do_input_event(steam, input, data);
} else {
dbg_hid("%s: input data without connect event\n",
__func__);
steam_do_connect_event(steam, true);
}
rcu_read_unlock();
break;
case STEAM_EV_CONNECT:
@ -1074,7 +1097,10 @@ static int steam_param_set_lizard_mode(const char *val,
mutex_lock(&steam_devices_lock);
list_for_each_entry(steam, &steam_devices, list) {
steam_update_lizard_mode(steam);
mutex_lock(&steam->mutex);
if (!steam->client_opened)
steam_set_lizard_mode(steam, lizard_mode);
mutex_unlock(&steam->mutex);
}
mutex_unlock(&steam_devices_lock);
return 0;

View File

@ -177,6 +177,8 @@ static const struct i2c_hid_quirks {
I2C_HID_QUIRK_NO_RUNTIME_PM },
{ I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_4B33,
I2C_HID_QUIRK_DELAY_AFTER_SLEEP },
{ USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_8001,
I2C_HID_QUIRK_NO_RUNTIME_PM },
{ 0, 0 }
};

View File

@ -12,6 +12,7 @@
#include <linux/atomic.h>
#include <linux/compat.h>
#include <linux/cred.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/hid.h>
@ -496,12 +497,13 @@ static int uhid_dev_create2(struct uhid_device *uhid,
goto err_free;
}
len = min(sizeof(hid->name), sizeof(ev->u.create2.name));
strlcpy(hid->name, ev->u.create2.name, len);
len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys));
strlcpy(hid->phys, ev->u.create2.phys, len);
len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq));
strlcpy(hid->uniq, ev->u.create2.uniq, len);
/* @hid is zero-initialized, strncpy() is correct, strlcpy() not */
len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1;
strncpy(hid->name, ev->u.create2.name, len);
len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1;
strncpy(hid->phys, ev->u.create2.phys, len);
len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1;
strncpy(hid->uniq, ev->u.create2.uniq, len);
hid->ll_driver = &uhid_hid_driver;
hid->bus = ev->u.create2.bus;
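A brief sketch (editorial, not from the patch) of why strncpy() into the zero-initialized fields is safe here: copying at most size - 1 bytes leaves the final byte zero, so the result is always NUL-terminated, while strlcpy() would take strlen() of a source buffer that userspace may not have terminated.

#include <string.h>

/* Hypothetical illustration: dst stands in for the zero-initialized
 * hid->name, src for a fixed-size, possibly unterminated buffer copied
 * in from userspace.
 */
static void copy_user_name(char dst[128], const char src[128])
{
	memset(dst, 0, 128);		/* like the freshly allocated @hid */
	strncpy(dst, src, 128 - 1);	/* reads at most 127 bytes of src;
					 * the last byte of dst stays NUL */
}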
@ -722,6 +724,17 @@ static ssize_t uhid_char_write(struct file *file, const char __user *buffer,
switch (uhid->input_buf.type) {
case UHID_CREATE:
/*
* 'struct uhid_create_req' contains a __user pointer which is
* copied from, so it's unsafe to allow this with elevated
* privileges (e.g. from a setuid binary) or via kernel_write().
*/
if (file->f_cred != current_cred() || uaccess_kernel()) {
pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n",
task_tgid_vnr(current), current->comm);
ret = -EACCES;
goto unlock;
}
ret = uhid_dev_create(uhid, &uhid->input_buf);
break;
case UHID_CREATE2:

View File

@ -353,6 +353,9 @@ static void process_ib_ipinfo(void *in_msg, void *out_msg, int op)
out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled;
/* fallthrough */
case KVP_OP_GET_IP_INFO:
utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id,
MAX_ADAPTER_ID_SIZE,
UTF16_LITTLE_ENDIAN,
@ -405,7 +408,11 @@ kvp_send_key(struct work_struct *dummy)
process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO);
break;
case KVP_OP_GET_IP_INFO:
/* We only need to pass on message->kvp_hdr.operation. */
/*
* We only need to pass on the info of operation, adapter_id
* and addr_family to the userland kvp daemon.
*/
process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO);
break;
case KVP_OP_SET:
switch (in_msg->body.kvp_set.data.value_type) {
@ -446,9 +453,9 @@ kvp_send_key(struct work_struct *dummy)
}
break;
case KVP_OP_GET:
/*
* The key is always a string - utf16 encoding.
*/
message->body.kvp_set.data.key_size =
utf16s_to_utf8s(
(wchar_t *)in_msg->body.kvp_set.data.key,
@ -456,6 +463,17 @@ kvp_send_key(struct work_struct *dummy)
UTF16_LITTLE_ENDIAN,
message->body.kvp_set.data.key,
HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
break;
case KVP_OP_GET:
message->body.kvp_get.data.key_size =
utf16s_to_utf8s(
(wchar_t *)in_msg->body.kvp_get.data.key,
in_msg->body.kvp_get.data.key_size,
UTF16_LITTLE_ENDIAN,
message->body.kvp_get.data.key,
HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
break;
case KVP_OP_DELETE:

View File

@ -797,7 +797,8 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
&entry, sizeof(entry));
entry = (iommu_virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL;
entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
(BIT_ULL(52)-1)) & ~7ULL;
memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
&entry, sizeof(entry));
writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);

View File

@ -3075,7 +3075,7 @@ static int copy_context_table(struct intel_iommu *iommu,
}
if (old_ce)
iounmap(old_ce);
memunmap(old_ce);
ret = 0;
if (devfn < 0x80)

View File

@ -595,7 +595,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
pr_err("%s: Page request without PASID: %08llx %08llx\n",
iommu->name, ((unsigned long long *)req)[0],
((unsigned long long *)req)[1]);
goto bad_req;
goto no_pasid;
}
if (!svm || svm->pasid != req->pasid) {

View File

@ -498,6 +498,9 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)
{
if (!domain->mmu)
return;
/*
* Disable the context. Flush the TLB as required when modifying the
* context registers.

View File

@ -807,7 +807,7 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
}
if (adap->transmit_queue_sz >= CEC_MAX_MSG_TX_QUEUE_SZ) {
dprintk(1, "%s: transmit queue full\n", __func__);
dprintk(2, "%s: transmit queue full\n", __func__);
return -EBUSY;
}
@ -1180,6 +1180,8 @@ static int cec_config_log_addr(struct cec_adapter *adap,
{
struct cec_log_addrs *las = &adap->log_addrs;
struct cec_msg msg = { };
const unsigned int max_retries = 2;
unsigned int i;
int err;
if (cec_has_log_addr(adap, log_addr))
@ -1188,19 +1190,44 @@ static int cec_config_log_addr(struct cec_adapter *adap,
/* Send poll message */
msg.len = 1;
msg.msg[0] = (log_addr << 4) | log_addr;
err = cec_transmit_msg_fh(adap, &msg, NULL, true);
for (i = 0; i < max_retries; i++) {
err = cec_transmit_msg_fh(adap, &msg, NULL, true);
/*
* While trying to poll the physical address was reset
* and the adapter was unconfigured, so bail out.
*/
if (!adap->is_configuring)
return -EINTR;
if (err)
return err;
/*
* The message was aborted due to a disconnect or
* unconfigure, just bail out.
*/
if (msg.tx_status & CEC_TX_STATUS_ABORTED)
return -EINTR;
if (msg.tx_status & CEC_TX_STATUS_OK)
return 0;
if (msg.tx_status & CEC_TX_STATUS_NACK)
break;
/*
* Retry up to max_retries times if the message was neither
* OKed or NACKed. This can happen due to e.g. a Lost
* Arbitration condition.
*/
}
/*
* While trying to poll the physical address was reset
* and the adapter was unconfigured, so bail out.
* If we are unable to get an OK or a NACK after max_retries attempts
* (and note that each attempt already consists of four polls), then
* we assume that something is really weird and that it is not a
* good idea to try and claim this logical address.
*/
if (!adap->is_configuring)
return -EINTR;
if (err)
return err;
if (msg.tx_status & CEC_TX_STATUS_OK)
if (i == max_retries)
return 0;
/*

View File

@ -1918,7 +1918,6 @@ static int tc358743_probe_of(struct tc358743_state *state)
ret = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(ep), &endpoint);
if (ret) {
dev_err(dev, "failed to parse endpoint\n");
ret = ret;
goto put_node;
}

View File

@ -1844,14 +1844,12 @@ fail_mutex_destroy:
static void cio2_pci_remove(struct pci_dev *pci_dev)
{
struct cio2_device *cio2 = pci_get_drvdata(pci_dev);
unsigned int i;
cio2_notifier_exit(cio2);
cio2_fbpt_exit_dummy(cio2);
for (i = 0; i < CIO2_QUEUES; i++)
cio2_queue_exit(cio2, &cio2->queue[i]);
v4l2_device_unregister(&cio2->v4l2_dev);
media_device_unregister(&cio2->media_dev);
cio2_notifier_exit(cio2);
cio2_queues_exit(cio2);
cio2_fbpt_exit_dummy(cio2);
v4l2_device_unregister(&cio2->v4l2_dev);
media_device_cleanup(&cio2->media_dev);
mutex_destroy(&cio2->lock);
}

View File

@ -1587,6 +1587,8 @@ static void isp_pm_complete(struct device *dev)
static void isp_unregister_entities(struct isp_device *isp)
{
media_device_unregister(&isp->media_dev);
omap3isp_csi2_unregister_entities(&isp->isp_csi2a);
omap3isp_ccp2_unregister_entities(&isp->isp_ccp2);
omap3isp_ccdc_unregister_entities(&isp->isp_ccdc);
@ -1597,7 +1599,6 @@ static void isp_unregister_entities(struct isp_device *isp)
omap3isp_stat_unregister_entities(&isp->isp_hist);
v4l2_device_unregister(&isp->v4l2_dev);
media_device_unregister(&isp->media_dev);
media_device_cleanup(&isp->media_dev);
}

View File

@ -42,7 +42,7 @@ MODULE_PARM_DESC(debug, " activates debug info");
#define MAX_WIDTH 4096U
#define MIN_WIDTH 640U
#define MAX_HEIGHT 2160U
#define MIN_HEIGHT 480U
#define MIN_HEIGHT 360U
#define dprintk(dev, fmt, arg...) \
v4l2_dbg(1, debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)

View File

@ -1009,7 +1009,7 @@ static const struct v4l2_m2m_ops m2m_ops = {
static const struct media_device_ops m2m_media_ops = {
.req_validate = vb2_request_validate,
.req_queue = vb2_m2m_request_queue,
.req_queue = v4l2_m2m_request_queue,
};
static int vim2m_probe(struct platform_device *pdev)

View File

@ -1664,6 +1664,11 @@ static int std_validate(const struct v4l2_ctrl *ctrl, u32 idx,
p_mpeg2_slice_params->forward_ref_index >= VIDEO_MAX_FRAME)
return -EINVAL;
if (p_mpeg2_slice_params->pad ||
p_mpeg2_slice_params->picture.pad ||
p_mpeg2_slice_params->sequence.pad)
return -EINVAL;
return 0;
case V4L2_CTRL_TYPE_MPEG2_QUANTIZATION:

View File

@ -193,6 +193,22 @@ int v4l2_event_pending(struct v4l2_fh *fh)
}
EXPORT_SYMBOL_GPL(v4l2_event_pending);
static void __v4l2_event_unsubscribe(struct v4l2_subscribed_event *sev)
{
struct v4l2_fh *fh = sev->fh;
unsigned int i;
lockdep_assert_held(&fh->subscribe_lock);
assert_spin_locked(&fh->vdev->fh_lock);
/* Remove any pending events for this subscription */
for (i = 0; i < sev->in_use; i++) {
list_del(&sev->events[sev_pos(sev, i)].list);
fh->navailable--;
}
list_del(&sev->list);
}
int v4l2_event_subscribe(struct v4l2_fh *fh,
const struct v4l2_event_subscription *sub, unsigned elems,
const struct v4l2_subscribed_event_ops *ops)
@ -224,27 +240,23 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
spin_lock_irqsave(&fh->vdev->fh_lock, flags);
found_ev = v4l2_event_subscribed(fh, sub->type, sub->id);
if (!found_ev)
list_add(&sev->list, &fh->subscribed);
spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
if (found_ev) {
/* Already listening */
kvfree(sev);
goto out_unlock;
}
if (sev->ops && sev->ops->add) {
} else if (sev->ops && sev->ops->add) {
ret = sev->ops->add(sev, elems);
if (ret) {
spin_lock_irqsave(&fh->vdev->fh_lock, flags);
__v4l2_event_unsubscribe(sev);
spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
kvfree(sev);
goto out_unlock;
}
}
spin_lock_irqsave(&fh->vdev->fh_lock, flags);
list_add(&sev->list, &fh->subscribed);
spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
out_unlock:
mutex_unlock(&fh->subscribe_lock);
return ret;
@ -279,7 +291,6 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
{
struct v4l2_subscribed_event *sev;
unsigned long flags;
int i;
if (sub->type == V4L2_EVENT_ALL) {
v4l2_event_unsubscribe_all(fh);
@ -291,14 +302,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
spin_lock_irqsave(&fh->vdev->fh_lock, flags);
sev = v4l2_event_subscribed(fh, sub->type, sub->id);
if (sev != NULL) {
/* Remove any pending events for this subscription */
for (i = 0; i < sev->in_use; i++) {
list_del(&sev->events[sev_pos(sev, i)].list);
fh->navailable--;
}
list_del(&sev->list);
}
if (sev != NULL)
__v4l2_event_unsubscribe(sev);
spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);

View File

@ -953,7 +953,7 @@ void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx,
}
EXPORT_SYMBOL_GPL(v4l2_m2m_buf_queue);
void vb2_m2m_request_queue(struct media_request *req)
void v4l2_m2m_request_queue(struct media_request *req)
{
struct media_request_object *obj, *obj_safe;
struct v4l2_m2m_ctx *m2m_ctx = NULL;
@ -997,7 +997,7 @@ void vb2_m2m_request_queue(struct media_request *req)
if (m2m_ctx)
v4l2_m2m_try_schedule(m2m_ctx);
}
EXPORT_SYMBOL_GPL(vb2_m2m_request_queue);
EXPORT_SYMBOL_GPL(v4l2_m2m_request_queue);
/* Videobuf2 ioctl helpers */

View File

@ -132,7 +132,7 @@ static const struct of_device_id atmel_ssc_dt_ids[] = {
MODULE_DEVICE_TABLE(of, atmel_ssc_dt_ids);
#endif
static inline const struct atmel_ssc_platform_data * __init
static inline const struct atmel_ssc_platform_data *
atmel_ssc_get_driver_data(struct platform_device *pdev)
{
if (pdev->dev.of_node) {

View File

@ -27,6 +27,9 @@
#include <linux/delay.h>
#include <linux/bitops.h>
#include <asm/uv/uv_hub.h>
#include <linux/nospec.h>
#include "gru.h"
#include "grutables.h"
#include "gruhandles.h"
@ -196,6 +199,7 @@ int gru_dump_chiplet_request(unsigned long arg)
/* Currently, only dump by gid is implemented */
if (req.gid >= gru_max_gids)
return -EINVAL;
req.gid = array_index_nospec(req.gid, gru_max_gids);
gru = GID_TO_GRU(req.gid);
ubuf = req.buf;
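The added array_index_nospec() call follows the usual Spectre v1 pattern: validate the index first, then clamp it so it cannot exceed the bound even under speculative execution. A generic sketch (the helper and table are illustrative):

#include <linux/errno.h>
#include <linux/nospec.h>
#include <linux/types.h>

/* Hypothetical example of the bounds-check-then-sanitize pattern used
 * above for req.gid.
 */
static int read_table_entry(const int *table, size_t nr_entries, size_t idx)
{
	if (idx >= nr_entries)
		return -EINVAL;
	/* keep the CPU from speculating table[idx] with idx >= nr_entries */
	idx = array_index_nospec(idx, nr_entries);
	return table[idx];
}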

View File

@ -12,6 +12,7 @@
* - JMicron (hardware and technical support)
*/
#include <linux/bitfield.h>
#include <linux/string.h>
#include <linux/delay.h>
#include <linux/highmem.h>
@ -462,6 +463,9 @@ struct intel_host {
u32 dsm_fns;
int drv_strength;
bool d3_retune;
bool rpm_retune_ok;
u32 glk_rx_ctrl1;
u32 glk_tun_val;
};
static const guid_t intel_dsm_guid =
@ -791,6 +795,77 @@ cleanup:
return ret;
}
#ifdef CONFIG_PM
#define GLK_RX_CTRL1 0x834
#define GLK_TUN_VAL 0x840
#define GLK_PATH_PLL GENMASK(13, 8)
#define GLK_DLY GENMASK(6, 0)
/* Workaround firmware failing to restore the tuning value */
static void glk_rpm_retune_wa(struct sdhci_pci_chip *chip, bool susp)
{
struct sdhci_pci_slot *slot = chip->slots[0];
struct intel_host *intel_host = sdhci_pci_priv(slot);
struct sdhci_host *host = slot->host;
u32 glk_rx_ctrl1;
u32 glk_tun_val;
u32 dly;
if (intel_host->rpm_retune_ok || !mmc_can_retune(host->mmc))
return;
glk_rx_ctrl1 = sdhci_readl(host, GLK_RX_CTRL1);
glk_tun_val = sdhci_readl(host, GLK_TUN_VAL);
if (susp) {
intel_host->glk_rx_ctrl1 = glk_rx_ctrl1;
intel_host->glk_tun_val = glk_tun_val;
return;
}
if (!intel_host->glk_tun_val)
return;
if (glk_rx_ctrl1 != intel_host->glk_rx_ctrl1) {
intel_host->rpm_retune_ok = true;
return;
}
dly = FIELD_PREP(GLK_DLY, FIELD_GET(GLK_PATH_PLL, glk_rx_ctrl1) +
(intel_host->glk_tun_val << 1));
if (dly == FIELD_GET(GLK_DLY, glk_rx_ctrl1))
return;
glk_rx_ctrl1 = (glk_rx_ctrl1 & ~GLK_DLY) | dly;
sdhci_writel(host, glk_rx_ctrl1, GLK_RX_CTRL1);
intel_host->rpm_retune_ok = true;
chip->rpm_retune = true;
mmc_retune_needed(host->mmc);
pr_info("%s: Requiring re-tune after rpm resume", mmc_hostname(host->mmc));
}
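To make the delay arithmetic above easier to follow, here is a sketch with made-up register values; the field positions mirror GLK_PATH_PLL = GENMASK(13, 8) and GLK_DLY = GENMASK(6, 0) from the code:

#include <stdint.h>

/* Hypothetical open-coded version of the FIELD_GET/FIELD_PREP arithmetic:
 * take the PLL path value from bits 13:8, add twice the saved tuning
 * value, and place the sum into the delay field in bits 6:0.
 */
static uint32_t glk_new_dly(uint32_t rx_ctrl1, uint32_t tun_val)
{
	uint32_t path_pll = (rx_ctrl1 >> 8) & 0x3f;	/* GENMASK(13, 8) */

	return (path_pll + (tun_val << 1)) & 0x7f;	/* GENMASK(6, 0) */
}

/* Example: path_pll = 16 and tun_val = 3 give a new delay field of 22;
 * if that differs from the delay currently in bits 6:0, the new value is
 * written back and a re-tune is requested, as in glk_rpm_retune_wa() above.
 */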
static void glk_rpm_retune_chk(struct sdhci_pci_chip *chip, bool susp)
{
if (chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC &&
!chip->rpm_retune)
glk_rpm_retune_wa(chip, susp);
}
static int glk_runtime_suspend(struct sdhci_pci_chip *chip)
{
glk_rpm_retune_chk(chip, true);
return sdhci_cqhci_runtime_suspend(chip);
}
static int glk_runtime_resume(struct sdhci_pci_chip *chip)
{
glk_rpm_retune_chk(chip, false);
return sdhci_cqhci_runtime_resume(chip);
}
#endif
#ifdef CONFIG_ACPI
static int ni_set_max_freq(struct sdhci_pci_slot *slot)
{
@ -879,8 +954,8 @@ static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = {
.resume = sdhci_cqhci_resume,
#endif
#ifdef CONFIG_PM
.runtime_suspend = sdhci_cqhci_runtime_suspend,
.runtime_resume = sdhci_cqhci_runtime_resume,
.runtime_suspend = glk_runtime_suspend,
.runtime_resume = glk_runtime_resume,
#endif
.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
@ -1762,8 +1837,13 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
device_init_wakeup(&pdev->dev, true);
if (slot->cd_idx >= 0) {
ret = mmc_gpiod_request_cd(host->mmc, NULL, slot->cd_idx,
ret = mmc_gpiod_request_cd(host->mmc, "cd", slot->cd_idx,
slot->cd_override_level, 0, NULL);
if (ret && ret != -EPROBE_DEFER)
ret = mmc_gpiod_request_cd(host->mmc, NULL,
slot->cd_idx,
slot->cd_override_level,
0, NULL);
if (ret == -EPROBE_DEFER)
goto remove;

View File

@ -2032,8 +2032,7 @@ atmel_hsmc_nand_controller_legacy_init(struct atmel_hsmc_nand_controller *nc)
int ret;
nand_np = dev->of_node;
nfc_np = of_find_compatible_node(dev->of_node, NULL,
"atmel,sama5d3-nfc");
nfc_np = of_get_compatible_child(dev->of_node, "atmel,sama5d3-nfc");
if (!nfc_np) {
dev_err(dev, "Could not find device node for sama5d3-nfc\n");
return -ENODEV;
@ -2447,15 +2446,19 @@ static int atmel_nand_controller_probe(struct platform_device *pdev)
}
if (caps->legacy_of_bindings) {
struct device_node *nfc_node;
u32 ale_offs = 21;
/*
* If we are parsing legacy DT props and the DT contains a
* valid NFC node, forward the request to the sama5 logic.
*/
if (of_find_compatible_node(pdev->dev.of_node, NULL,
"atmel,sama5d3-nfc"))
nfc_node = of_get_compatible_child(pdev->dev.of_node,
"atmel,sama5d3-nfc");
if (nfc_node) {
caps = &atmel_sama5_nand_caps;
of_node_put(nfc_node);
}
/*
* Even if the compatible says we are dealing with an

View File

@ -150,15 +150,15 @@
#define NAND_VERSION_MINOR_SHIFT 16
/* NAND OP_CMDs */
#define PAGE_READ 0x2
#define PAGE_READ_WITH_ECC 0x3
#define PAGE_READ_WITH_ECC_SPARE 0x4
#define PROGRAM_PAGE 0x6
#define PAGE_PROGRAM_WITH_ECC 0x7
#define PROGRAM_PAGE_SPARE 0x9
#define BLOCK_ERASE 0xa
#define FETCH_ID 0xb
#define RESET_DEVICE 0xd
#define OP_PAGE_READ 0x2
#define OP_PAGE_READ_WITH_ECC 0x3
#define OP_PAGE_READ_WITH_ECC_SPARE 0x4
#define OP_PROGRAM_PAGE 0x6
#define OP_PAGE_PROGRAM_WITH_ECC 0x7
#define OP_PROGRAM_PAGE_SPARE 0x9
#define OP_BLOCK_ERASE 0xa
#define OP_FETCH_ID 0xb
#define OP_RESET_DEVICE 0xd
/* Default Value for NAND_DEV_CMD_VLD */
#define NAND_DEV_CMD_VLD_VAL (READ_START_VLD | WRITE_START_VLD | \
@ -692,11 +692,11 @@ static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read)
if (read) {
if (host->use_ecc)
cmd = PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE;
cmd = OP_PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE;
else
cmd = PAGE_READ | PAGE_ACC | LAST_PAGE;
cmd = OP_PAGE_READ | PAGE_ACC | LAST_PAGE;
} else {
cmd = PROGRAM_PAGE | PAGE_ACC | LAST_PAGE;
cmd = OP_PROGRAM_PAGE | PAGE_ACC | LAST_PAGE;
}
if (host->use_ecc) {
@ -1170,7 +1170,7 @@ static int nandc_param(struct qcom_nand_host *host)
* in use. we configure the controller to perform a raw read of 512
* bytes to read onfi params
*/
nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_ADDR0, 0);
nandc_set_reg(nandc, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE
@ -1224,7 +1224,7 @@ static int erase_block(struct qcom_nand_host *host, int page_addr)
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
nandc_set_reg(nandc, NAND_FLASH_CMD,
BLOCK_ERASE | PAGE_ACC | LAST_PAGE);
OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_ADDR0, page_addr);
nandc_set_reg(nandc, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_DEV0_CFG0,
@ -1255,7 +1255,7 @@ static int read_id(struct qcom_nand_host *host, int column)
if (column == -1)
return 0;
nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID);
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID);
nandc_set_reg(nandc, NAND_ADDR0, column);
nandc_set_reg(nandc, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT,
@ -1276,7 +1276,7 @@ static int reset(struct qcom_nand_host *host)
struct nand_chip *chip = &host->chip;
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE);
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE);
nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);

View File

@ -644,9 +644,23 @@ static int cqspi_indirect_write_execute(struct spi_nor *nor, loff_t to_addr,
ndelay(cqspi->wr_delay);
while (remaining > 0) {
size_t write_words, mod_bytes;
write_bytes = remaining > page_size ? page_size : remaining;
iowrite32_rep(cqspi->ahb_base, txbuf,
DIV_ROUND_UP(write_bytes, 4));
write_words = write_bytes / 4;
mod_bytes = write_bytes % 4;
/* Write 4 bytes at a time then single bytes. */
if (write_words) {
iowrite32_rep(cqspi->ahb_base, txbuf, write_words);
txbuf += (write_words * 4);
}
if (mod_bytes) {
unsigned int temp = 0xFFFFFFFF;
memcpy(&temp, txbuf, mod_bytes);
iowrite32(temp, cqspi->ahb_base);
txbuf += mod_bytes;
}
if (!wait_for_completion_timeout(&cqspi->transfer_complete,
msecs_to_jiffies(CQSPI_TIMEOUT_MS))) {
@ -655,7 +669,6 @@ static int cqspi_indirect_write_execute(struct spi_nor *nor, loff_t to_addr,
goto failwr;
}
txbuf += write_bytes;
remaining -= write_bytes;
if (remaining > 0)
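The word/byte split introduced above can be checked with a small stand-alone sketch (hypothetical 7-byte write; only the arithmetic mirrors the patch):

#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t write_bytes = 7;
	size_t write_words = write_bytes / 4;	/* 1 full 32-bit word */
	size_t mod_bytes = write_bytes % 4;	/* 3 trailing bytes */
	unsigned char tail[3] = { 0xaa, 0xbb, 0xcc };
	unsigned int temp = 0xFFFFFFFF;		/* unused byte lanes stay 0xff */

	memcpy(&temp, tail, mod_bytes);		/* pack the partial last word */
	printf("words=%zu, trailing bytes=%zu, last word=0x%08x\n",
	       write_words, mod_bytes, temp);
	return 0;
}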

View File

@ -2156,7 +2156,7 @@ spi_nor_set_pp_settings(struct spi_nor_pp_command *pp,
* @nor: pointer to a 'struct spi_nor'
* @addr: offset in the serial flash memory
* @len: number of bytes to read
* @buf: buffer where the data is copied into
* @buf: buffer where the data is copied into (dma-safe memory)
*
* Return: 0 on success, -errno otherwise.
*/
@ -2521,6 +2521,34 @@ static int spi_nor_map_cmp_erase_type(const void *l, const void *r)
return left->size - right->size;
}
/**
* spi_nor_sort_erase_mask() - sort erase mask
* @map: the erase map of the SPI NOR
* @erase_mask: the erase type mask to be sorted
*
* Replicate the sort done for the map's erase types in BFPT: sort the erase
* mask in ascending order with the smallest erase type size starting from
* BIT(0) in the sorted erase mask.
*
* Return: sorted erase mask.
*/
static u8 spi_nor_sort_erase_mask(struct spi_nor_erase_map *map, u8 erase_mask)
{
struct spi_nor_erase_type *erase_type = map->erase_type;
int i;
u8 sorted_erase_mask = 0;
if (!erase_mask)
return 0;
/* Replicate the sort done for the map's erase types. */
for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
if (erase_type[i].size && erase_mask & BIT(erase_type[i].idx))
sorted_erase_mask |= BIT(i);
return sorted_erase_mask;
}
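A worked sketch of the remapping done by spi_nor_sort_erase_mask(); the erase sizes and BFPT indexes below are hypothetical:

#include <stdint.h>

struct example_erase_type {
	uint32_t size;
	uint8_t idx;	/* bit position in the unsorted (BFPT) mask */
};

/* Hypothetical stand-alone version of the loop above: walk the erase
 * types, which are assumed to be sorted by size, and move each selected
 * type onto its sorted bit position.
 */
static uint8_t sort_mask(const struct example_erase_type et[4], uint8_t mask)
{
	uint8_t sorted = 0;
	int i;

	for (i = 0; i < 4; i++)
		if (et[i].size && (mask & (1u << et[i].idx)))
			sorted |= 1u << i;
	return sorted;
}

/* With the erase types sorted by size as
 *   et[0] = { 4096, .idx = 2 }, et[1] = { 32768, .idx = 1 },
 *   et[2] = { 65536, .idx = 0 }, et[3] = { 0 },
 * an unsorted mask of BIT(0) (the 64 KiB type only) becomes BIT(2):
 * the same erase type, now expressed in the sorted bit positions.
 */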
/**
* spi_nor_regions_sort_erase_types() - sort erase types in each region
* @map: the erase map of the SPI NOR
@ -2536,19 +2564,13 @@ static int spi_nor_map_cmp_erase_type(const void *l, const void *r)
static void spi_nor_regions_sort_erase_types(struct spi_nor_erase_map *map)
{
struct spi_nor_erase_region *region = map->regions;
struct spi_nor_erase_type *erase_type = map->erase_type;
int i;
u8 region_erase_mask, sorted_erase_mask;
while (region) {
region_erase_mask = region->offset & SNOR_ERASE_TYPE_MASK;
/* Replicate the sort done for the map's erase types. */
sorted_erase_mask = 0;
for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
if (erase_type[i].size &&
region_erase_mask & BIT(erase_type[i].idx))
sorted_erase_mask |= BIT(i);
sorted_erase_mask = spi_nor_sort_erase_mask(map,
region_erase_mask);
/* Overwrite erase mask. */
region->offset = (region->offset & ~SNOR_ERASE_TYPE_MASK) |
@ -2855,52 +2877,84 @@ static u8 spi_nor_smpt_read_dummy(const struct spi_nor *nor, const u32 settings)
* spi_nor_get_map_in_use() - get the configuration map in use
* @nor: pointer to a 'struct spi_nor'
* @smpt: pointer to the sector map parameter table
* @smpt_len: sector map parameter table length
*
* Return: pointer to the map in use, ERR_PTR(-errno) otherwise.
*/
static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt)
static const u32 *spi_nor_get_map_in_use(struct spi_nor *nor, const u32 *smpt,
u8 smpt_len)
{
const u32 *ret = NULL;
u32 i, addr;
const u32 *ret;
u8 *buf;
u32 addr;
int err;
u8 i;
u8 addr_width, read_opcode, read_dummy;
u8 read_data_mask, data_byte, map_id;
u8 read_data_mask, map_id;
/* Use a kmalloc'ed bounce buffer to guarantee it is DMA-able. */
buf = kmalloc(sizeof(*buf), GFP_KERNEL);
if (!buf)
return ERR_PTR(-ENOMEM);
addr_width = nor->addr_width;
read_dummy = nor->read_dummy;
read_opcode = nor->read_opcode;
map_id = 0;
i = 0;
/* Determine if there are any optional Detection Command Descriptors */
while (!(smpt[i] & SMPT_DESC_TYPE_MAP)) {
for (i = 0; i < smpt_len; i += 2) {
if (smpt[i] & SMPT_DESC_TYPE_MAP)
break;
read_data_mask = SMPT_CMD_READ_DATA(smpt[i]);
nor->addr_width = spi_nor_smpt_addr_width(nor, smpt[i]);
nor->read_dummy = spi_nor_smpt_read_dummy(nor, smpt[i]);
nor->read_opcode = SMPT_CMD_OPCODE(smpt[i]);
addr = smpt[i + 1];
err = spi_nor_read_raw(nor, addr, 1, &data_byte);
if (err)
err = spi_nor_read_raw(nor, addr, 1, buf);
if (err) {
ret = ERR_PTR(err);
goto out;
}
/*
* Build an index value that is used to select the Sector Map
* Configuration that is currently in use.
*/
map_id = map_id << 1 | !!(data_byte & read_data_mask);
i = i + 2;
map_id = map_id << 1 | !!(*buf & read_data_mask);
}
/* Find the matching configuration map */
while (SMPT_MAP_ID(smpt[i]) != map_id) {
/*
* If command descriptors are provided, they always precede map
* descriptors in the table. There is no need to start the iteration
* over the smpt array all over again.
*
* Find the matching configuration map.
*/
ret = ERR_PTR(-EINVAL);
while (i < smpt_len) {
if (SMPT_MAP_ID(smpt[i]) == map_id) {
ret = smpt + i;
break;
}
/*
* If there are no more configuration map descriptors and no
* configuration ID matched the configuration identifier, the
* sector address map is unknown.
*/
if (smpt[i] & SMPT_DESC_END)
goto out;
break;
/* increment the table index to the next map */
i += SMPT_MAP_REGION_COUNT(smpt[i]) + 1;
}
ret = smpt + i;
/* fall through */
out:
kfree(buf);
nor->addr_width = addr_width;
nor->read_dummy = read_dummy;
nor->read_opcode = read_opcode;
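The detection loop above builds the configuration index one bit per optional detection command, most significant bit first; a small sketch with two hypothetical detection reads:

#include <stdint.h>

/* Hypothetical stand-alone version of the map_id construction. */
static uint8_t build_map_id(const uint8_t *data, const uint8_t *mask, int ncmds)
{
	uint8_t map_id = 0;
	int i;

	for (i = 0; i < ncmds; i++)
		map_id = map_id << 1 | !!(data[i] & mask[i]);
	return map_id;
}

/* Example: a first detection read returning 0x08 with read-data mask 0x08
 * and a second returning 0x00 with mask 0x04 give map_id = 0b10 = 2,
 * which is then compared against SMPT_MAP_ID() of each map descriptor.
 */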
@ -2946,7 +3000,7 @@ static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
u64 offset;
u32 region_count;
int i, j;
u8 erase_type;
u8 erase_type, uniform_erase_type;
region_count = SMPT_MAP_REGION_COUNT(*smpt);
/*
@ -2959,7 +3013,7 @@ static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
return -ENOMEM;
map->regions = region;
map->uniform_erase_type = 0xff;
uniform_erase_type = 0xff;
offset = 0;
/* Populate regions. */
for (i = 0; i < region_count; i++) {
@ -2974,12 +3028,15 @@ static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
* Save the erase types that are supported in all regions and
* can erase the entire flash memory.
*/
map->uniform_erase_type &= erase_type;
uniform_erase_type &= erase_type;
offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) +
region[i].size;
}
map->uniform_erase_type = spi_nor_sort_erase_mask(map,
uniform_erase_type);
spi_nor_region_mark_end(&region[i - 1]);
return 0;
@ -3020,9 +3077,9 @@ static int spi_nor_parse_smpt(struct spi_nor *nor,
for (i = 0; i < smpt_header->length; i++)
smpt[i] = le32_to_cpu(smpt[i]);
sector_map = spi_nor_get_map_in_use(nor, smpt);
if (!sector_map) {
ret = -EINVAL;
sector_map = spi_nor_get_map_in_use(nor, smpt, smpt_header->length);
if (IS_ERR(sector_map)) {
ret = PTR_ERR(sector_map);
goto out;
}
@ -3125,7 +3182,7 @@ static int spi_nor_parse_sfdp(struct spi_nor *nor,
if (err)
goto exit;
/* Parse other parameter headers. */
/* Parse optional parameter tables. */
for (i = 0; i < header.nph; i++) {
param_header = &param_headers[i];
@ -3138,8 +3195,17 @@ static int spi_nor_parse_sfdp(struct spi_nor *nor,
break;
}
if (err)
goto exit;
if (err) {
dev_warn(dev, "Failed to parse optional parameter table: %04x\n",
SFDP_PARAM_HEADER_ID(param_header));
/*
* Let's not drop all information we extracted so far
* if optional table parsers fail. In case of failing,
* each optional parser is responsible to roll back to
* the previously known spi_nor data.
*/
err = 0;
}
}
exit:

View File

@ -477,6 +477,34 @@ void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
}
EXPORT_SYMBOL_GPL(can_put_echo_skb);
struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)
{
struct can_priv *priv = netdev_priv(dev);
struct sk_buff *skb = priv->echo_skb[idx];
struct canfd_frame *cf;
if (idx >= priv->echo_skb_max) {
netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",
__func__, idx, priv->echo_skb_max);
return NULL;
}
if (!skb) {
netdev_err(dev, "%s: BUG! Trying to echo non existing skb: can_priv::echo_skb[%u]\n",
__func__, idx);
return NULL;
}
/* Using "struct canfd_frame::len" for the frame
* length is supported on both CAN and CANFD frames.
*/
cf = (struct canfd_frame *)skb->data;
*len_ptr = cf->len;
priv->echo_skb[idx] = NULL;
return skb;
}
/*
* Get the skb from the stack and loop it back locally
*
@ -486,22 +514,16 @@ EXPORT_SYMBOL_GPL(can_put_echo_skb);
*/
unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
{
struct can_priv *priv = netdev_priv(dev);
struct sk_buff *skb;
u8 len;
BUG_ON(idx >= priv->echo_skb_max);
skb = __can_get_echo_skb(dev, idx, &len);
if (!skb)
return 0;
if (priv->echo_skb[idx]) {
struct sk_buff *skb = priv->echo_skb[idx];
struct can_frame *cf = (struct can_frame *)skb->data;
u8 dlc = cf->can_dlc;
netif_rx(skb);
netif_rx(priv->echo_skb[idx]);
priv->echo_skb[idx] = NULL;
return dlc;
}
return 0;
return len;
}
EXPORT_SYMBOL_GPL(can_get_echo_skb);

View File

@ -135,13 +135,12 @@
/* FLEXCAN interrupt flag register (IFLAG) bits */
/* Errata ERR005829 step7: Reserve first valid MB */
#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8
#define FLEXCAN_TX_MB_OFF_FIFO 9
#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8
#define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP 0
#define FLEXCAN_TX_MB_OFF_TIMESTAMP 1
#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_OFF_TIMESTAMP + 1)
#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST 63
#define FLEXCAN_IFLAG_MB(x) BIT(x)
#define FLEXCAN_TX_MB 63
#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1)
#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST (FLEXCAN_TX_MB - 1)
#define FLEXCAN_IFLAG_MB(x) BIT(x & 0x1f)
#define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7)
#define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6)
#define FLEXCAN_IFLAG_RX_FIFO_AVAILABLE BIT(5)
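With the TX mailbox now fixed at 63, its interrupt flag lives in the second 32-bit IFLAG register; a quick sketch of the mapping done by the reworked FLEXCAN_IFLAG_MB() macro (mailbox numbers taken from the defines above):

#include <stdio.h>

#define IFLAG_MB(x)	(1u << ((x) & 0x1f))	/* same as FLEXCAN_IFLAG_MB() */

int main(void)
{
	/* Mailbox 63 (FLEXCAN_TX_MB) maps to bit 31 and is serviced via
	 * iflag2; mailbox 8 (the reserved FIFO mailbox) to bit 8 in iflag1.
	 */
	printf("mb 63 -> 0x%08x, mb 8 -> 0x%08x\n", IFLAG_MB(63), IFLAG_MB(8));
	return 0;
}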
@ -259,9 +258,7 @@ struct flexcan_priv {
struct can_rx_offload offload;
struct flexcan_regs __iomem *regs;
struct flexcan_mb __iomem *tx_mb;
struct flexcan_mb __iomem *tx_mb_reserved;
u8 tx_mb_idx;
u32 reg_ctrl_default;
u32 reg_imask1_default;
u32 reg_imask2_default;
@ -515,6 +512,7 @@ static int flexcan_get_berr_counter(const struct net_device *dev,
static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
const struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->regs;
struct can_frame *cf = (struct can_frame *)skb->data;
u32 can_id;
u32 data;
@ -537,17 +535,17 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
if (cf->can_dlc > 0) {
data = be32_to_cpup((__be32 *)&cf->data[0]);
priv->write(data, &priv->tx_mb->data[0]);
priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[0]);
}
if (cf->can_dlc > 4) {
data = be32_to_cpup((__be32 *)&cf->data[4]);
priv->write(data, &priv->tx_mb->data[1]);
priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[1]);
}
can_put_echo_skb(skb, dev, 0);
priv->write(can_id, &priv->tx_mb->can_id);
priv->write(ctrl, &priv->tx_mb->can_ctrl);
priv->write(can_id, &regs->mb[FLEXCAN_TX_MB].can_id);
priv->write(ctrl, &regs->mb[FLEXCAN_TX_MB].can_ctrl);
/* Errata ERR005829 step8:
* Write twice INACTIVE(0x8) code to first MB.
@ -563,9 +561,13 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
{
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->regs;
struct sk_buff *skb;
struct can_frame *cf;
bool rx_errors = false, tx_errors = false;
u32 timestamp;
timestamp = priv->read(&regs->timer) << 16;
skb = alloc_can_err_skb(dev, &cf);
if (unlikely(!skb))
@ -612,17 +614,21 @@ static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
if (tx_errors)
dev->stats.tx_errors++;
can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
}
static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
{
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->regs;
struct sk_buff *skb;
struct can_frame *cf;
enum can_state new_state, rx_state, tx_state;
int flt;
struct can_berr_counter bec;
u32 timestamp;
timestamp = priv->read(&regs->timer) << 16;
flt = reg_esr & FLEXCAN_ESR_FLT_CONF_MASK;
if (likely(flt == FLEXCAN_ESR_FLT_CONF_ACTIVE)) {
@ -652,7 +658,7 @@ static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
if (unlikely(new_state == CAN_STATE_BUS_OFF))
can_bus_off(dev);
can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
}
static inline struct flexcan_priv *rx_offload_to_priv(struct can_rx_offload *offload)
@ -720,9 +726,14 @@ static unsigned int flexcan_mailbox_read(struct can_rx_offload *offload,
priv->write(BIT(n - 32), &regs->iflag2);
} else {
priv->write(FLEXCAN_IFLAG_RX_FIFO_AVAILABLE, &regs->iflag1);
priv->read(&regs->timer);
}
/* Read the Free Running Timer. It is optional but recommended
* to unlock Mailbox as soon as possible and make it available
* for reception.
*/
priv->read(&regs->timer);
return 1;
}
@ -732,9 +743,9 @@ static inline u64 flexcan_read_reg_iflag_rx(struct flexcan_priv *priv)
struct flexcan_regs __iomem *regs = priv->regs;
u32 iflag1, iflag2;
iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default;
iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default &
~FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default &
~FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default;
return (u64)iflag2 << 32 | iflag1;
}
@ -746,11 +757,9 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->regs;
irqreturn_t handled = IRQ_NONE;
u32 reg_iflag1, reg_esr;
u32 reg_iflag2, reg_esr;
enum can_state last_state = priv->can.state;
reg_iflag1 = priv->read(&regs->iflag1);
/* reception interrupt */
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
u64 reg_iflag;
@ -764,6 +773,9 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
break;
}
} else {
u32 reg_iflag1;
reg_iflag1 = priv->read(&regs->iflag1);
if (reg_iflag1 & FLEXCAN_IFLAG_RX_FIFO_AVAILABLE) {
handled = IRQ_HANDLED;
can_rx_offload_irq_offload_fifo(&priv->offload);
@ -779,17 +791,22 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
}
}
reg_iflag2 = priv->read(&regs->iflag2);
/* transmission complete interrupt */
if (reg_iflag1 & FLEXCAN_IFLAG_MB(priv->tx_mb_idx)) {
if (reg_iflag2 & FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB)) {
u32 reg_ctrl = priv->read(&regs->mb[FLEXCAN_TX_MB].can_ctrl);
handled = IRQ_HANDLED;
stats->tx_bytes += can_get_echo_skb(dev, 0);
stats->tx_bytes += can_rx_offload_get_echo_skb(&priv->offload,
0, reg_ctrl << 16);
stats->tx_packets++;
can_led_event(dev, CAN_LED_EVENT_TX);
/* after sending a RTR frame MB is in RX mode */
priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
&priv->tx_mb->can_ctrl);
priv->write(FLEXCAN_IFLAG_MB(priv->tx_mb_idx), &regs->iflag1);
&regs->mb[FLEXCAN_TX_MB].can_ctrl);
priv->write(FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB), &regs->iflag2);
netif_wake_queue(dev);
}
@ -931,15 +948,13 @@ static int flexcan_chip_start(struct net_device *dev)
reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_SRX_DIS | FLEXCAN_MCR_IRMQ |
FLEXCAN_MCR_IDAM_C;
FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(FLEXCAN_TX_MB);
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
reg_mcr &= ~FLEXCAN_MCR_FEN;
reg_mcr |= FLEXCAN_MCR_MAXMB(priv->offload.mb_last);
} else {
reg_mcr |= FLEXCAN_MCR_FEN |
FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
}
else
reg_mcr |= FLEXCAN_MCR_FEN;
netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr);
priv->write(reg_mcr, &regs->mcr);
@ -982,16 +997,17 @@ static int flexcan_chip_start(struct net_device *dev)
priv->write(reg_ctrl2, &regs->ctrl2);
}
/* clear and invalidate all mailboxes first */
for (i = priv->tx_mb_idx; i < ARRAY_SIZE(regs->mb); i++) {
priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
&regs->mb[i].can_ctrl);
}
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++)
for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) {
priv->write(FLEXCAN_MB_CODE_RX_EMPTY,
&regs->mb[i].can_ctrl);
}
} else {
/* clear and invalidate unused mailboxes first */
for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i < ARRAY_SIZE(regs->mb); i++) {
priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
&regs->mb[i].can_ctrl);
}
}
/* Errata ERR005829: mark first TX mailbox as INACTIVE */
@ -1000,7 +1016,7 @@ static int flexcan_chip_start(struct net_device *dev)
/* mark TX mailbox as INACTIVE */
priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
&priv->tx_mb->can_ctrl);
&regs->mb[FLEXCAN_TX_MB].can_ctrl);
/* acceptance mask/acceptance code (accept everything) */
priv->write(0x0, &regs->rxgmask);
@ -1355,17 +1371,13 @@ static int flexcan_probe(struct platform_device *pdev)
priv->devtype_data = devtype_data;
priv->reg_xceiver = reg_xceiver;
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_TIMESTAMP;
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP];
} else {
priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_FIFO;
else
priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_FIFO];
}
priv->tx_mb = &regs->mb[priv->tx_mb_idx];
priv->reg_imask1_default = FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
priv->reg_imask2_default = 0;
priv->reg_imask1_default = 0;
priv->reg_imask2_default = FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
priv->offload.mailbox_read = flexcan_mailbox_read;

View File

@ -24,6 +24,9 @@
#define RCAR_CAN_DRV_NAME "rcar_can"
#define RCAR_SUPPORTED_CLOCKS (BIT(CLKR_CLKP1) | BIT(CLKR_CLKP2) | \
BIT(CLKR_CLKEXT))
/* Mailbox configuration:
* mailbox 60 - 63 - Rx FIFO mailboxes
* mailbox 56 - 59 - Tx FIFO mailboxes
@ -789,7 +792,7 @@ static int rcar_can_probe(struct platform_device *pdev)
goto fail_clk;
}
if (clock_select >= ARRAY_SIZE(clock_names)) {
if (!(BIT(clock_select) & RCAR_SUPPORTED_CLOCKS)) {
err = -EINVAL;
dev_err(&pdev->dev, "invalid CAN clock selected\n");
goto fail_clk;

View File

@ -211,7 +211,54 @@ int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload)
}
EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo);
int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb)
int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
struct sk_buff *skb, u32 timestamp)
{
struct can_rx_offload_cb *cb;
unsigned long flags;
if (skb_queue_len(&offload->skb_queue) >
offload->skb_queue_len_max)
return -ENOMEM;
cb = can_rx_offload_get_cb(skb);
cb->timestamp = timestamp;
spin_lock_irqsave(&offload->skb_queue.lock, flags);
__skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare);
spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
can_rx_offload_schedule(offload);
return 0;
}
EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted);
unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
unsigned int idx, u32 timestamp)
{
struct net_device *dev = offload->dev;
struct net_device_stats *stats = &dev->stats;
struct sk_buff *skb;
u8 len;
int err;
skb = __can_get_echo_skb(dev, idx, &len);
if (!skb)
return 0;
err = can_rx_offload_queue_sorted(offload, skb, timestamp);
if (err) {
stats->rx_errors++;
stats->tx_fifo_errors++;
}
return len;
}
EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb);
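For context, a minimal usage sketch for the new helper, mirroring the flexcan changes above; everything except the exported function and the kernel types is a placeholder:

#include <linux/can/rx-offload.h>
#include <linux/netdevice.h>

/* Hypothetical TX-complete handling built on can_rx_offload_get_echo_skb();
 * the offload struct and the hardware timestamp are provided by the driver.
 */
static void example_tx_complete(struct net_device *dev,
				struct can_rx_offload *offload, u32 timestamp)
{
	/* queues the echo skb sorted by timestamp and returns its length */
	dev->stats.tx_bytes += can_rx_offload_get_echo_skb(offload, 0, timestamp);
	dev->stats.tx_packets++;
}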
int can_rx_offload_queue_tail(struct can_rx_offload *offload,
struct sk_buff *skb)
{
if (skb_queue_len(&offload->skb_queue) >
offload->skb_queue_len_max)
@ -222,7 +269,7 @@ int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_b
return 0;
}
EXPORT_SYMBOL_GPL(can_rx_offload_irq_queue_err_skb);
EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail);
static int can_rx_offload_init_queue(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight)
{

View File

@ -760,7 +760,7 @@ static int hi3110_open(struct net_device *net)
{
struct hi3110_priv *priv = netdev_priv(net);
struct spi_device *spi = priv->spi;
unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_RISING;
unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_HIGH;
int ret;
ret = open_candev(net);

View File

@ -528,7 +528,6 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
context = &priv->tx_contexts[i];
context->echo_index = i;
can_put_echo_skb(skb, netdev, context->echo_index);
++priv->active_tx_contexts;
if (priv->active_tx_contexts >= (int)dev->max_tx_urbs)
netif_stop_queue(netdev);
@ -553,7 +552,6 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
dev_kfree_skb(skb);
spin_lock_irqsave(&priv->tx_contexts_lock, flags);
can_free_echo_skb(netdev, context->echo_index);
context->echo_index = dev->max_tx_urbs;
--priv->active_tx_contexts;
netif_wake_queue(netdev);
@ -564,6 +562,8 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
context->priv = priv;
can_put_echo_skb(skb, netdev, context->echo_index);
usb_fill_bulk_urb(urb, dev->udev,
usb_sndbulkpipe(dev->udev,
dev->bulk_out->bEndpointAddress),

@ -1019,6 +1019,11 @@ kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
new_state : CAN_STATE_ERROR_ACTIVE;
can_change_state(netdev, cf, tx_state, rx_state);
if (priv->can.restart_ms &&
old_state >= CAN_STATE_BUS_OFF &&
new_state < CAN_STATE_BUS_OFF)
cf->can_id |= CAN_ERR_RESTARTED;
}
if (new_state == CAN_STATE_BUS_OFF) {
@ -1028,11 +1033,6 @@ kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
can_bus_off(netdev);
}
if (priv->can.restart_ms &&
old_state >= CAN_STATE_BUS_OFF &&
new_state < CAN_STATE_BUS_OFF)
cf->can_id |= CAN_ERR_RESTARTED;
}
if (!skb) {

@ -35,10 +35,6 @@
#include <linux/slab.h>
#include <linux/usb.h>
#include <linux/can.h>
#include <linux/can/dev.h>
#include <linux/can/error.h>
#define UCAN_DRIVER_NAME "ucan"
#define UCAN_MAX_RX_URBS 8
/* the CAN controller needs a while to enable/disable the bus */
@ -1575,11 +1571,8 @@ err_firmware_needs_update:
/* disconnect the device */
static void ucan_disconnect(struct usb_interface *intf)
{
struct usb_device *udev;
struct ucan_priv *up = usb_get_intfdata(intf);
udev = interface_to_usbdev(intf);
usb_set_intfdata(intf, NULL);
if (up) {

@ -1848,6 +1848,8 @@ static void ena_down(struct ena_adapter *adapter)
rc = ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);
if (rc)
dev_err(&adapter->pdev->dev, "Device reset failed\n");
/* stop submitting admin commands on a device that was reset */
ena_com_set_admin_running_state(adapter->ena_dev, false);
}
ena_destroy_all_io_queues(adapter);
@ -1914,6 +1916,9 @@ static int ena_close(struct net_device *netdev)
netif_dbg(adapter, ifdown, netdev, "%s\n", __func__);
if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
return 0;
if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
ena_down(adapter);
@ -2613,9 +2618,7 @@ static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
ena_down(adapter);
/* Stop the device from sending AENQ events (in case reset flag is set
* and device is up, ena_close already reset the device
* In case the reset flag is set and the device is up, ena_down()
* already perform the reset, so it can be skipped.
* and device is up, ena_down() already reset the device.
*/
if (!(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags) && dev_up))
ena_com_dev_reset(adapter->ena_dev, adapter->reset_reason);
@ -2694,8 +2697,8 @@ err_device_destroy:
ena_com_abort_admin_commands(ena_dev);
ena_com_wait_for_abort_completion(ena_dev);
ena_com_admin_destroy(ena_dev);
ena_com_mmio_reg_read_request_destroy(ena_dev);
ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE);
ena_com_mmio_reg_read_request_destroy(ena_dev);
err:
clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
@ -3452,6 +3455,8 @@ err_rss:
ena_com_rss_destroy(ena_dev);
err_free_msix:
ena_com_dev_reset(ena_dev, ENA_REGS_RESET_INIT_ERR);
/* stop submitting admin commands on a device that was reset */
ena_com_set_admin_running_state(ena_dev, false);
ena_free_mgmnt_irq(adapter);
ena_disable_msix(adapter);
err_worker_destroy:
@ -3498,18 +3503,12 @@ static void ena_remove(struct pci_dev *pdev)
cancel_work_sync(&adapter->reset_task);
unregister_netdev(netdev);
/* If the device is running then we want to make sure the device will be
* reset to make sure no more events will be issued by the device.
*/
if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
rtnl_lock();
ena_destroy_device(adapter, true);
rtnl_unlock();
unregister_netdev(netdev);
free_netdev(netdev);
ena_com_rss_destroy(ena_dev);

@ -45,7 +45,7 @@
#define DRV_MODULE_VER_MAJOR 2
#define DRV_MODULE_VER_MINOR 0
#define DRV_MODULE_VER_SUBMINOR 1
#define DRV_MODULE_VER_SUBMINOR 2
#define DRV_MODULE_NAME "ena"
#ifndef DRV_MODULE_VERSION

@ -1419,7 +1419,7 @@ static int sparc_lance_probe_one(struct platform_device *op,
prop = of_get_property(nd, "tpe-link-test?", NULL);
if (!prop)
goto no_link_test;
goto node_put;
if (strcmp(prop, "true")) {
printk(KERN_NOTICE "SunLance: warning: overriding option "
@ -1428,6 +1428,8 @@ static int sparc_lance_probe_one(struct platform_device *op,
"to ecd@skynet.be\n");
auxio_set_lte(AUXIO_LTE_ON);
}
node_put:
of_node_put(nd);
no_link_test:
lp->auto_select = 1;
lp->tpe = 0;
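
The sparc_lance hunk redirects the early exit through a new node_put label so the OF node reference held at that point is dropped on that path too. A userspace sketch of the same goto-cleanup pattern, with get_node()/put_node() as hypothetical stand-ins for the OF refcounting helpers:

#include <stdlib.h>

struct node { int refcount; };

static struct node *get_node(void)
{
        struct node *n = calloc(1, sizeof(*n));

        if (n)
                n->refcount = 1;
        return n;
}

static void put_node(struct node *n)
{
        if (n && --n->refcount == 0)
                free(n);
}

static int probe(int have_property)
{
        struct node *nd = get_node();

        if (!nd)
                return -1;
        if (!have_property)
                goto node_put;  /* early exit still drops the reference */

        /* ... use the property ... */

node_put:
        put_node(nd);
        return 0;
}

int main(void)
{
        return probe(0) || probe(1);
}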

@ -2191,6 +2191,13 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id,
#define PMF_DMAE_C(bp) (BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \
E1HVN_MAX)
/* Following is the DMAE channel number allocation for the clients.
* MFW: OCBB/OCSD implementations use DMAE channels 14/15 respectively.
* Driver: 0-3 and 8-11 (for PF dmae operations)
* 4 and 12 (for stats requests)
*/
#define BNX2X_FW_DMAE_C 13 /* Channel for FW DMAE operations */
/* PCIE link and speed */
#define PCICFG_LINK_WIDTH 0x1f00000
#define PCICFG_LINK_WIDTH_SHIFT 20

@ -6149,6 +6149,7 @@ static inline int bnx2x_func_send_start(struct bnx2x *bp,
rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag);
rdata->path_id = BP_PATH(bp);
rdata->network_cos_mode = start_params->network_cos_mode;
rdata->dmae_cmd_id = BNX2X_FW_DMAE_C;
rdata->vxlan_dst_port = cpu_to_le16(start_params->vxlan_dst_port);
rdata->geneve_dst_port = cpu_to_le16(start_params->geneve_dst_port);

@ -1675,7 +1675,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
} else {
if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS) {
if (dev->features & NETIF_F_RXCSUM)
cpr->rx_l4_csum_errors++;
bnapi->cp_ring.rx_l4_csum_errors++;
}
}
@ -8714,6 +8714,26 @@ static int bnxt_set_features(struct net_device *dev, netdev_features_t features)
return rc;
}
static int bnxt_dbg_hwrm_ring_info_get(struct bnxt *bp, u8 ring_type,
u32 ring_id, u32 *prod, u32 *cons)
{
struct hwrm_dbg_ring_info_get_output *resp = bp->hwrm_cmd_resp_addr;
struct hwrm_dbg_ring_info_get_input req = {0};
int rc;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_RING_INFO_GET, -1, -1);
req.ring_type = ring_type;
req.fw_ring_id = cpu_to_le32(ring_id);
mutex_lock(&bp->hwrm_cmd_lock);
rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (!rc) {
*prod = le32_to_cpu(resp->producer_index);
*cons = le32_to_cpu(resp->consumer_index);
}
mutex_unlock(&bp->hwrm_cmd_lock);
return rc;
}
static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi)
{
struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
@ -8821,6 +8841,11 @@ static void bnxt_timer(struct timer_list *t)
bnxt_queue_sp_work(bp);
}
}
if ((bp->flags & BNXT_FLAG_CHIP_P5) && netif_carrier_ok(dev)) {
set_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event);
bnxt_queue_sp_work(bp);
}
bnxt_restart_timer:
mod_timer(&bp->timer, jiffies + bp->current_interval);
}
@ -8851,6 +8876,44 @@ static void bnxt_reset(struct bnxt *bp, bool silent)
bnxt_rtnl_unlock_sp(bp);
}
static void bnxt_chk_missed_irq(struct bnxt *bp)
{
int i;
if (!(bp->flags & BNXT_FLAG_CHIP_P5))
return;
for (i = 0; i < bp->cp_nr_rings; i++) {
struct bnxt_napi *bnapi = bp->bnapi[i];
struct bnxt_cp_ring_info *cpr;
u32 fw_ring_id;
int j;
if (!bnapi)
continue;
cpr = &bnapi->cp_ring;
for (j = 0; j < 2; j++) {
struct bnxt_cp_ring_info *cpr2 = cpr->cp_ring_arr[j];
u32 val[2];
if (!cpr2 || cpr2->has_more_work ||
!bnxt_has_work(bp, cpr2))
continue;
if (cpr2->cp_raw_cons != cpr2->last_cp_raw_cons) {
cpr2->last_cp_raw_cons = cpr2->cp_raw_cons;
continue;
}
fw_ring_id = cpr2->cp_ring_struct.fw_ring_id;
bnxt_dbg_hwrm_ring_info_get(bp,
DBG_RING_INFO_GET_REQ_RING_TYPE_L2_CMPL,
fw_ring_id, &val[0], &val[1]);
cpr->missed_irqs++;
}
}
}
static void bnxt_cfg_ntp_filters(struct bnxt *);
static void bnxt_sp_task(struct work_struct *work)
@ -8930,6 +8993,9 @@ static void bnxt_sp_task(struct work_struct *work)
if (test_and_clear_bit(BNXT_FLOW_STATS_SP_EVENT, &bp->sp_event))
bnxt_tc_flow_stats_work(bp);
if (test_and_clear_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event))
bnxt_chk_missed_irq(bp);
/* These functions below will clear BNXT_STATE_IN_SP_TASK. They
* must be the last functions to be called before exiting.
*/
@ -10087,6 +10153,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
}
bnxt_hwrm_func_qcfg(bp);
bnxt_hwrm_vnic_qcaps(bp);
bnxt_hwrm_port_led_qcaps(bp);
bnxt_ethtool_init(bp);
bnxt_dcb_init(bp);
@ -10120,7 +10187,6 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
VNIC_RSS_CFG_REQ_HASH_TYPE_UDP_IPV6;
}
bnxt_hwrm_vnic_qcaps(bp);
if (bnxt_rfs_supported(bp)) {
dev->hw_features |= NETIF_F_NTUPLE;
if (bnxt_rfs_capable(bp)) {

@ -798,6 +798,8 @@ struct bnxt_cp_ring_info {
u8 had_work_done:1;
u8 has_more_work:1;
u32 last_cp_raw_cons;
struct bnxt_coal rx_ring_coal;
u64 rx_packets;
u64 rx_bytes;
@ -816,6 +818,7 @@ struct bnxt_cp_ring_info {
dma_addr_t hw_stats_map;
u32 hw_stats_ctx_id;
u64 rx_l4_csum_errors;
u64 missed_irqs;
struct bnxt_ring_struct cp_ring_struct;
@ -1527,6 +1530,7 @@ struct bnxt {
#define BNXT_LINK_SPEED_CHNG_SP_EVENT 14
#define BNXT_FLOW_STATS_SP_EVENT 15
#define BNXT_UPDATE_PHY_SP_EVENT 16
#define BNXT_RING_COAL_NOW_SP_EVENT 17
struct bnxt_hw_resc hw_resc;
struct bnxt_pf_info pf;

@ -137,7 +137,7 @@ reset_coalesce:
return rc;
}
#define BNXT_NUM_STATS 21
#define BNXT_NUM_STATS 22
#define BNXT_RX_STATS_ENTRY(counter) \
{ BNXT_RX_STATS_OFFSET(counter), __stringify(counter) }
@ -384,6 +384,7 @@ static void bnxt_get_ethtool_stats(struct net_device *dev,
for (k = 0; k < stat_fields; j++, k++)
buf[j] = le64_to_cpu(hw_stats[k]);
buf[j++] = cpr->rx_l4_csum_errors;
buf[j++] = cpr->missed_irqs;
bnxt_sw_func_stats[RX_TOTAL_DISCARDS].counter +=
le64_to_cpu(cpr->hw_stats->rx_discard_pkts);
@ -468,6 +469,8 @@ static void bnxt_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
buf += ETH_GSTRING_LEN;
sprintf(buf, "[%d]: rx_l4_csum_errors", i);
buf += ETH_GSTRING_LEN;
sprintf(buf, "[%d]: missed_irqs", i);
buf += ETH_GSTRING_LEN;
}
for (i = 0; i < BNXT_NUM_SW_FUNC_STATS; i++) {
strcpy(buf, bnxt_sw_func_stats[i].string);
@ -2942,8 +2945,8 @@ bnxt_fill_coredump_record(struct bnxt *bp, struct bnxt_coredump_record *record,
record->asic_state = 0;
strlcpy(record->system_name, utsname()->nodename,
sizeof(record->system_name));
record->year = cpu_to_le16(tm.tm_year);
record->month = cpu_to_le16(tm.tm_mon);
record->year = cpu_to_le16(tm.tm_year + 1900);
record->month = cpu_to_le16(tm.tm_mon + 1);
record->day = cpu_to_le16(tm.tm_mday);
record->hour = cpu_to_le16(tm.tm_hour);
record->minute = cpu_to_le16(tm.tm_min);
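
The coredump timestamp fix follows the standard struct tm conventions: tm_year counts years since 1900 and tm_mon runs 0-11, so both need an offset before being recorded as a calendar year and month. A small userspace illustration:

#include <stdio.h>
#include <time.h>

int main(void)
{
        time_t now = time(NULL);
        struct tm tm;

        gmtime_r(&now, &tm);
        printf("raw:      tm_year=%d tm_mon=%d\n", tm.tm_year, tm.tm_mon);
        printf("calendar: year=%d    month=%d\n", tm.tm_year + 1900, tm.tm_mon + 1);
        return 0;
}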

Some files were not shown because too many files have changed in this diff.