
Merge tag 'drm-misc-next-2019-08-19' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.4:

UAPI Changes:

Cross-subsystem Changes:

Core Changes:
  - dma-buf: add reservation_object_fences helper, relax
             reservation_object_add_shared_fence, remove
             reservation_object seq number (and then
             restored)
  - dma-fence: Shrinkage of the dma_fence structure,
               Merge dma_fence_signal and dma_fence_signal_locked,
               Store the timestamp in struct dma_fence in a union with
               cb_list

Driver Changes:
  - More dt-bindings YAML conversions
  - More removal of drmP.h includes
  - dw-hdmi: Support get_eld and various i2s improvements
  - gm12u320: Few fixes
  - meson: Global cleanup
  - panfrost: Few refactors, Support for GPU heap allocations
  - sun4i: Support for DDC enable GPIO
  - New panels: TI nspire, NEC NL8048HL11, LG Philips LB035Q02,
                Sharp LS037V7DW01, Sony ACX565AKM, Toppoly TD028TTEC1
                Toppoly TD043MTEA1

Signed-off-by: Dave Airlie <airlied@redhat.com>
[airlied: fixup dma_resv rename fallout]

From: Maxime Ripard <maxime.ripard@bootlin.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190819141923.7l2adietcr2pioct@flea
alistair/sunxi64-5.4-dsi
Dave Airlie 2019-08-21 15:38:43 +10:00
commit 5f680625d9
223 changed files with 5070 additions and 3852 deletions
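The "Shrinkage of the dma_fence structure" and "Store the timestamp in struct dma_fence in a union with cb_list" items in the Core Changes above share one idea: the callback list is only meaningful before a fence signals, and the timestamp only after, so the two can share storage. A minimal userspace sketch of that idea (all names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's list head. */
struct list_head { struct list_head *next, *prev; };

struct toy_fence {
	unsigned long flags;              /* bit 0: signalled */
	union {
		struct list_head cb_list; /* valid while unsignalled */
		uint64_t timestamp;       /* valid once signalled */
	};
};

static void toy_fence_init(struct toy_fence *f)
{
	f->flags = 0;
	f->cb_list.next = f->cb_list.prev = &f->cb_list;
}

static void toy_fence_signal(struct toy_fence *f, uint64_t now)
{
	/* Real code stashes and runs the callbacks first; after that the
	 * union member switches meaning and records the signal time. */
	f->flags |= 1UL;
	f->timestamp = now;
}
```

The union means the struct carries no extra field for the timestamp at all, which is where the shrinkage comes from.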


@@ -1,119 +0,0 @@
Amlogic specific extensions to the Synopsys Designware HDMI Controller
======================================================================
The Amlogic Meson Synopsys Designware Integration is composed of :
- A Synopsys DesignWare HDMI Controller IP
- A TOP control block controlling the Clocks and PHY
- A custom HDMI PHY in order to convert video to TMDS signal
 ___________________________________
|            HDMI TOP               |<= HPD
|___________________________________|
|               |                   |
|  Synopsys HDMI|     HDMI PHY      |=> TMDS
|  Controller   |________________   |
|___________________________________|<=> DDC
The HDMI TOP block only supports HPD sensing.
The Synopsys HDMI Controller interrupt is routed through the
TOP Block interrupt.
Communication to the TOP Block and the Synopsys HDMI Controller is done
via a pair of dedicated addr+read/write registers.
The HDMI PHY is configured by registers in the HHI register block.
Pixel data arrives in 4:4:4 format from the VENC block and the VPU HDMI mux
selects either the ENCI encoder for the 576i or 480i formats or the ENCP
encoder for all the other formats including interlaced HD formats.
The VENC uses a DVI encoder on top of the ENCI or ENCP encoders to generate
DVI timings for the HDMI controller.
Amlogic Meson GXBB, GXL and GXM SoCs families embeds the Synopsys DesignWare
HDMI TX IP version 2.01a with HDCP and I2C & S/PDIF
audio source interfaces.
Required properties:
- compatible: value should be different for each SoC family as :
    - GXBB (S905) : "amlogic,meson-gxbb-dw-hdmi"
    - GXL (S905X, S905D) : "amlogic,meson-gxl-dw-hdmi"
    - GXM (S912) : "amlogic,meson-gxm-dw-hdmi"
    followed by the common "amlogic,meson-gx-dw-hdmi"
    - G12A (S905X2, S905Y2, S905D2) : "amlogic,meson-g12a-dw-hdmi"
- reg: Physical base address and length of the controller's registers.
- interrupts: The HDMI interrupt number
- clocks, clock-names: must have the phandles to the HDMI iahb and isfr clocks,
    and the Amlogic Meson venci clocks as described in
    Documentation/devicetree/bindings/clock/clock-bindings.txt;
    the clocks are SoC specific, the clock-names should be "iahb", "isfr", "venci"
- resets, reset-names: must have the phandles to the HDMI apb, glue and phy
    resets as described in
    Documentation/devicetree/bindings/reset/reset.txt;
    the reset-names should be "hdmitx_apb", "hdmitx", "hdmitx_phy"
Optional properties:
- hdmi-supply: Optional phandle to an external 5V regulator to power the HDMI
logic, as described in the file ../regulator/regulator.txt
Required nodes:
The connections to the HDMI ports are modeled using the OF graph
bindings specified in Documentation/devicetree/bindings/graph.txt.
The following table lists for each supported model the port number
corresponding to each HDMI output and input.
                Port 0          Port 1
-----------------------------------------
S905 (GXBB)     VENC Input      TMDS Output
S905X (GXL)     VENC Input      TMDS Output
S905D (GXL)     VENC Input      TMDS Output
S912 (GXM)      VENC Input      TMDS Output
S905X2 (G12A)   VENC Input      TMDS Output
S905Y2 (G12A)   VENC Input      TMDS Output
S905D2 (G12A)   VENC Input      TMDS Output
Example:

hdmi-connector {
    compatible = "hdmi-connector";
    type = "a";

    port {
        hdmi_connector_in: endpoint {
            remote-endpoint = <&hdmi_tx_tmds_out>;
        };
    };
};

hdmi_tx: hdmi-tx@c883a000 {
    compatible = "amlogic,meson-gxbb-dw-hdmi", "amlogic,meson-gx-dw-hdmi";
    reg = <0x0 0xc883a000 0x0 0x1c>;
    interrupts = <GIC_SPI 57 IRQ_TYPE_EDGE_RISING>;
    resets = <&reset RESET_HDMITX_CAPB3>,
             <&reset RESET_HDMI_SYSTEM_RESET>,
             <&reset RESET_HDMI_TX>;
    reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
    clocks = <&clkc CLKID_HDMI_PCLK>,
             <&clkc CLKID_CLK81>,
             <&clkc CLKID_GCLK_VENCI_INT0>;
    clock-names = "isfr", "iahb", "venci";
    #address-cells = <1>;
    #size-cells = <0>;

    /* VPU VENC Input */
    hdmi_tx_venc_port: port@0 {
        reg = <0>;
        hdmi_tx_in: endpoint {
            remote-endpoint = <&hdmi_tx_out>;
        };
    };

    /* TMDS Output */
    hdmi_tx_tmds_port: port@1 {
        reg = <1>;
        hdmi_tx_tmds_out: endpoint {
            remote-endpoint = <&hdmi_connector_in>;
        };
    };
};


@@ -0,0 +1,150 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
$id: "http://devicetree.org/schemas/display/amlogic,meson-dw-hdmi.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic specific extensions to the Synopsys Designware HDMI Controller

maintainers:
  - Neil Armstrong <narmstrong@baylibre.com>

description: |
  The Amlogic Meson Synopsys Designware Integration is composed of
  - A Synopsys DesignWare HDMI Controller IP
  - A TOP control block controlling the Clocks and PHY
  - A custom HDMI PHY in order to convert video to TMDS signal
   ___________________________________
  |            HDMI TOP               |<= HPD
  |___________________________________|
  |               |                   |
  |  Synopsys HDMI|     HDMI PHY      |=> TMDS
  |  Controller   |________________   |
  |___________________________________|<=> DDC

  The HDMI TOP block only supports HPD sensing.
  The Synopsys HDMI Controller interrupt is routed through the
  TOP Block interrupt.
  Communication to the TOP Block and the Synopsys HDMI Controller is done
  via a pair of dedicated addr+read/write registers.
  The HDMI PHY is configured by registers in the HHI register block.

  Pixel data arrives in "4:4:4" format from the VENC block and the VPU HDMI mux
  selects either the ENCI encoder for the 576i or 480i formats or the ENCP
  encoder for all the other formats including interlaced HD formats.

  The VENC uses a DVI encoder on top of the ENCI or ENCP encoders to generate
  DVI timings for the HDMI controller.

  Amlogic Meson GXBB, GXL and GXM SoCs families embeds the Synopsys DesignWare
  HDMI TX IP version 2.01a with HDCP and I2C & S/PDIF
  audio source interfaces.

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - amlogic,meson-gxbb-dw-hdmi # GXBB (S905)
              - amlogic,meson-gxl-dw-hdmi # GXL (S905X, S905D)
              - amlogic,meson-gxm-dw-hdmi # GXM (S912)
          - const: amlogic,meson-gx-dw-hdmi
      - enum:
          - amlogic,meson-g12a-dw-hdmi # G12A (S905X2, S905Y2, S905D2)

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  clocks:
    minItems: 3

  clock-names:
    items:
      - const: isfr
      - const: iahb
      - const: venci

  resets:
    minItems: 3

  reset-names:
    items:
      - const: hdmitx_apb
      - const: hdmitx
      - const: hdmitx_phy

  hdmi-supply:
    description: phandle to an external 5V regulator to power the HDMI logic
    allOf:
      - $ref: /schemas/types.yaml#/definitions/phandle

  port@0:
    type: object
    description:
      A port node pointing to the VENC Input port node.

  port@1:
    type: object
    description:
      A port node pointing to the TMDS Output port node.

  "#address-cells":
    const: 1

  "#size-cells":
    const: 0

  "#sound-dai-cells":
    const: 0

required:
  - compatible
  - reg
  - interrupts
  - clocks
  - clock-names
  - resets
  - reset-names
  - port@0
  - port@1
  - "#address-cells"
  - "#size-cells"

additionalProperties: false

examples:
  - |
    hdmi_tx: hdmi-tx@c883a000 {
        compatible = "amlogic,meson-gxbb-dw-hdmi", "amlogic,meson-gx-dw-hdmi";
        reg = <0xc883a000 0x1c>;
        interrupts = <57>;
        resets = <&reset_apb>, <&reset_hdmitx>, <&reset_hdmitx_phy>;
        reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
        clocks = <&clk_isfr>, <&clk_iahb>, <&clk_venci>;
        clock-names = "isfr", "iahb", "venci";
        #address-cells = <1>;
        #size-cells = <0>;

        /* VPU VENC Input */
        hdmi_tx_venc_port: port@0 {
            reg = <0>;
            hdmi_tx_in: endpoint {
                remote-endpoint = <&hdmi_tx_out>;
            };
        };

        /* TMDS Output */
        hdmi_tx_tmds_port: port@1 {
            reg = <1>;
            hdmi_tx_tmds_out: endpoint {
                remote-endpoint = <&hdmi_connector_in>;
            };
        };
    };


@@ -1,121 +0,0 @@
Amlogic Meson Display Controller
================================
The Amlogic Meson Display controller is composed of several components
that are going to be documented below:
DMC|---------------VPU (Video Processing Unit)----------------|------HHI------|
   | vd1   _______     _____________    _________________     |               |
D  |------|       |---|             |  |                 |    |   HDMI PLL    |
D  | vd2  |  VIU  |   | Video Post  |  | Video Encoders  |<---|-----VCLK      |
R  |------|       |---| Processing  |  |                 |    |               |
   | osd2 |       |   |             |--| Enci -----------|----|-----VDAC------|
R  |------|  CSC  |---|   Scalers   |  | Encp -----------|----|----HDMI-TX----|
A  | osd1 |       |   |  Blenders   |  | Encl -----------|----|---------------|
M  |------|_______|---|_____________|  |_________________|    |               |
___|__________________________________________________________|_______________|
VIU: Video Input Unit
---------------------
The Video Input Unit is in charge of the pixel scanout from the DDR memory.
It fetches the frames addresses, stride and parameters from the "Canvas" memory.
This part is also in charge of the CSC (Colorspace Conversion).
It can handle 2 OSD Planes and 2 Video Planes.
VPP: Video Post Processing
--------------------------
The Video Post Processing is in charge of the scaling and blending of the
various planes into a single pixel stream.
There is a special "pre-blending" used by the video planes with a dedicated
scaler and a "post-blending" to merge with the OSD Planes.
The OSD planes also have a dedicated scaler for one of the OSD.
VENC: Video Encoders
--------------------
The VENC is composed of the multiple pixel encoders :
- ENCI : Interlace Video encoder for CVBS and Interlace HDMI
- ENCP : Progressive Video Encoder for HDMI
- ENCL : LCD LVDS Encoder
The VENC Unit gets a Pixel Clocks (VCLK) from a dedicated HDMI PLL and clock
tree and provides the scanout clock to the VPP and VIU.
The ENCI is connected to a single VDAC for Composite Output.
The ENCI and ENCP are connected to an on-chip HDMI Transceiver.
Device Tree Bindings:
---------------------
VPU: Video Processing Unit
--------------------------
Required properties:
- compatible: value should be different for each SoC family as :
    - GXBB (S905) : "amlogic,meson-gxbb-vpu"
    - GXL (S905X, S905D) : "amlogic,meson-gxl-vpu"
    - GXM (S912) : "amlogic,meson-gxm-vpu"
    followed by the common "amlogic,meson-gx-vpu"
    - G12A (S905X2, S905Y2, S905D2) : "amlogic,meson-g12a-vpu"
- reg: base address and size of the following memory-mapped regions :
    - vpu
    - hhi
- reg-names: should contain the names of the previous memory regions
- interrupts: should contain the VENC Vsync interrupt number
- amlogic,canvas: phandle to canvas provider node as described in the file
    ../soc/amlogic/amlogic,canvas.txt
Optional properties:
- power-domains: Optional phandle to associated power domain as described in
the file ../power/power_domain.txt
Required nodes:
The connections to the VPU output video ports are modeled using the OF graph
bindings specified in Documentation/devicetree/bindings/graph.txt.
The following table lists for each supported model the port number
corresponding to each VPU output.
                Port 0          Port 1
-----------------------------------------
S905 (GXBB)     CVBS VDAC       HDMI-TX
S905X (GXL)     CVBS VDAC       HDMI-TX
S905D (GXL)     CVBS VDAC       HDMI-TX
S912 (GXM)      CVBS VDAC       HDMI-TX
S905X2 (G12A)   CVBS VDAC       HDMI-TX
S905Y2 (G12A)   CVBS VDAC       HDMI-TX
S905D2 (G12A)   CVBS VDAC       HDMI-TX
Example:

tv-connector {
    compatible = "composite-video-connector";

    port {
        tv_connector_in: endpoint {
            remote-endpoint = <&cvbs_vdac_out>;
        };
    };
};

vpu: vpu@d0100000 {
    compatible = "amlogic,meson-gxbb-vpu";
    reg = <0x0 0xd0100000 0x0 0x100000>,
          <0x0 0xc883c000 0x0 0x1000>,
          <0x0 0xc8838000 0x0 0x1000>;
    reg-names = "vpu", "hhi", "dmc";
    interrupts = <GIC_SPI 3 IRQ_TYPE_EDGE_RISING>;
    #address-cells = <1>;
    #size-cells = <0>;

    /* CVBS VDAC output port */
    port@0 {
        reg = <0>;
        cvbs_vdac_out: endpoint {
            remote-endpoint = <&tv_connector_in>;
        };
    };
};


@@ -0,0 +1,137 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
$id: "http://devicetree.org/schemas/display/amlogic,meson-vpu.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic Meson Display Controller

maintainers:
  - Neil Armstrong <narmstrong@baylibre.com>

description: |
  The Amlogic Meson Display controller is composed of several components
  that are going to be documented below

  DMC|---------------VPU (Video Processing Unit)----------------|------HHI------|
     | vd1   _______     _____________    _________________     |               |
  D  |------|       |---|             |  |                 |    |   HDMI PLL    |
  D  | vd2  |  VIU  |   | Video Post  |  | Video Encoders  |<---|-----VCLK      |
  R  |------|       |---| Processing  |  |                 |    |               |
     | osd2 |       |   |             |--| Enci -----------|----|-----VDAC------|
  R  |------|  CSC  |---|   Scalers   |  | Encp -----------|----|----HDMI-TX----|
  A  | osd1 |       |   |  Blenders   |  | Encl -----------|----|---------------|
  M  |------|_______|---|_____________|  |_________________|    |               |
  ___|__________________________________________________________|_______________|

  VIU: Video Input Unit
  ---------------------

  The Video Input Unit is in charge of the pixel scanout from the DDR memory.
  It fetches the frames addresses, stride and parameters from the "Canvas" memory.
  This part is also in charge of the CSC (Colorspace Conversion).
  It can handle 2 OSD Planes and 2 Video Planes.

  VPP: Video Post Processing
  --------------------------

  The Video Post Processing is in charge of the scaling and blending of the
  various planes into a single pixel stream.
  There is a special "pre-blending" used by the video planes with a dedicated
  scaler and a "post-blending" to merge with the OSD Planes.
  The OSD planes also have a dedicated scaler for one of the OSD.

  VENC: Video Encoders
  --------------------

  The VENC is composed of the multiple pixel encoders
  - ENCI : Interlace Video encoder for CVBS and Interlace HDMI
  - ENCP : Progressive Video Encoder for HDMI
  - ENCL : LCD LVDS Encoder
  The VENC Unit gets a Pixel Clocks (VCLK) from a dedicated HDMI PLL and clock
  tree and provides the scanout clock to the VPP and VIU.
  The ENCI is connected to a single VDAC for Composite Output.
  The ENCI and ENCP are connected to an on-chip HDMI Transceiver.

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - amlogic,meson-gxbb-vpu # GXBB (S905)
              - amlogic,meson-gxl-vpu # GXL (S905X, S905D)
              - amlogic,meson-gxm-vpu # GXM (S912)
          - const: amlogic,meson-gx-vpu
      - enum:
          - amlogic,meson-g12a-vpu # G12A (S905X2, S905Y2, S905D2)

  reg:
    maxItems: 2

  reg-names:
    items:
      - const: vpu
      - const: hhi

  interrupts:
    maxItems: 1

  power-domains:
    maxItems: 1
    description: phandle to the associated power domain

  port@0:
    type: object
    description:
      A port node pointing to the CVBS VDAC port node.

  port@1:
    type: object
    description:
      A port node pointing to the HDMI-TX port node.

  "#address-cells":
    const: 1

  "#size-cells":
    const: 0

required:
  - compatible
  - reg
  - interrupts
  - port@0
  - port@1
  - "#address-cells"
  - "#size-cells"

examples:
  - |
    vpu: vpu@d0100000 {
        compatible = "amlogic,meson-gxbb-vpu", "amlogic,meson-gx-vpu";
        reg = <0xd0100000 0x100000>, <0xc883c000 0x1000>;
        reg-names = "vpu", "hhi";
        interrupts = <3>;
        #address-cells = <1>;
        #size-cells = <0>;

        /* CVBS VDAC output port */
        port@0 {
            reg = <0>;
            cvbs_vdac_out: endpoint {
                remote-endpoint = <&tv_connector_in>;
            };
        };

        /* HDMI TX output port */
        port@1 {
            reg = <1>;
            hdmi_tx_out: endpoint {
                remote-endpoint = <&hdmi_tx_in>;
            };
        };
    };


@@ -9,6 +9,7 @@ Optional properties:
 - label: a symbolic name for the connector
 - hpd-gpios: HPD GPIO number
 - ddc-i2c-bus: phandle link to the I2C controller used for DDC EDID probing
+- ddc-en-gpios: signal to enable DDC bus

 Required nodes:
 - Video port for HDMI input


@@ -0,0 +1,62 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/nec,nl8048hl11.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: NEC NL8048HL11 4.1" WVGA TFT LCD panel

description:
  The NEC NL8048HL11 is a 4.1" WVGA TFT LCD panel with a 24-bit RGB parallel
  data interface and an SPI control interface.

maintainers:
  - Laurent Pinchart <laurent.pinchart@ideasonboard.com>

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    const: nec,nl8048hl11

  label: true
  port: true
  reg: true
  reset-gpios: true

  spi-max-frequency:
    maximum: 10000000

required:
  - compatible
  - reg
  - reset-gpios
  - port

additionalProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>

    spi0 {
        #address-cells = <1>;
        #size-cells = <0>;

        lcd_panel: panel@0 {
            compatible = "nec,nl8048hl11";
            reg = <0>;
            spi-max-frequency = <10000000>;
            reset-gpios = <&gpio7 7 GPIO_ACTIVE_LOW>;

            port {
                lcd_in: endpoint {
                    remote-endpoint = <&dpi_out>;
                };
            };
        };
    };

...


@@ -0,0 +1,36 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/ti,nspire.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Texas Instruments NSPIRE Display Panels

maintainers:
  - Linus Walleij <linus.walleij@linaro.org>

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    enum:
      - ti,nspire-cx-lcd-panel
      - ti,nspire-classic-lcd-panel
  port: true

required:
  - compatible

additionalProperties: false

examples:
  - |
    panel {
        compatible = "ti,nspire-cx-lcd-panel";
        port {
            panel_in: endpoint {
                remote-endpoint = <&pads>;
            };
        };
    };


@@ -511,6 +511,8 @@ patternProperties:
     description: Lenovo Group Ltd.
   "^lg,.*":
     description: LG Corporation
+  "^lgphilips,.*":
+    description: LG Display
   "^libretech,.*":
     description: Shenzhen Libre Technology Co., Ltd
   "^licheepi,.*":
@@ -933,6 +935,9 @@ patternProperties:
     description: Tecon Microprocessor Technologies, LLC.
   "^topeet,.*":
     description: Topeet
+  "^toppoly,.*":
+    description: TPO (deprecated, use tpo)
+    deprecated: true
   "^toradex,.*":
     description: Toradex AG
   "^toshiba,.*":


@@ -5334,8 +5334,8 @@ L:	linux-amlogic@lists.infradead.org
 W:	http://linux-meson.com/
 S:	Supported
 F:	drivers/gpu/drm/meson/
-F:	Documentation/devicetree/bindings/display/amlogic,meson-vpu.txt
-F:	Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.txt
+F:	Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+F:	Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
 F:	Documentation/gpu/meson.rst
 T:	git git://anongit.freedesktop.org/drm/drm-misc


@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
-	 reservation.o seqno-fence.o
+	 dma-resv.o seqno-fence.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)		+= udmabuf.o


@@ -21,7 +21,7 @@
 #include <linux/module.h>
 #include <linux/seq_file.h>
 #include <linux/poll.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/mm.h>
 #include <linux/mount.h>
 #include <linux/pseudo_fs.h>
@@ -104,8 +104,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	list_del(&dmabuf->list_node);
 	mutex_unlock(&db_list.lock);

-	if (dmabuf->resv == (struct reservation_object *)&dmabuf[1])
-		reservation_object_fini(dmabuf->resv);
+	if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
+		dma_resv_fini(dmabuf->resv);

 	module_put(dmabuf->owner);
 	kfree(dmabuf);
@@ -165,7 +165,7 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * To support cross-device and cross-driver synchronization of buffer access
  * implicit fences (represented internally in the kernel with &struct fence) can
  * be attached to a &dma_buf. The glue for that and a few related things are
- * provided in the &reservation_object structure.
+ * provided in the &dma_resv structure.
  *
  * Userspace can query the state of these implicitly tracked fences using poll()
  * and related system calls:
@@ -195,8 +195,8 @@ static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
 static __poll_t dma_buf_poll(struct file *file, poll_table *poll)
 {
 	struct dma_buf *dmabuf;
-	struct reservation_object *resv;
-	struct reservation_object_list *fobj;
+	struct dma_resv *resv;
+	struct dma_resv_list *fobj;
 	struct dma_fence *fence_excl;
 	__poll_t events;
 	unsigned shared_count, seq;
@@ -506,13 +506,13 @@ err_alloc_file:
 struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 {
 	struct dma_buf *dmabuf;
-	struct reservation_object *resv = exp_info->resv;
+	struct dma_resv *resv = exp_info->resv;
 	struct file *file;
 	size_t alloc_size = sizeof(struct dma_buf);
 	int ret;

 	if (!exp_info->resv)
-		alloc_size += sizeof(struct reservation_object);
+		alloc_size += sizeof(struct dma_resv);
 	else
 		/* prevent &dma_buf[1] == dma_buf->resv */
 		alloc_size += 1;
@@ -544,8 +544,8 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 	dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0;

 	if (!resv) {
-		resv = (struct reservation_object *)&dmabuf[1];
-		reservation_object_init(resv);
+		resv = (struct dma_resv *)&dmabuf[1];
+		dma_resv_init(resv);
 	}

 	dmabuf->resv = resv;
@@ -909,11 +909,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 {
 	bool write = (direction == DMA_BIDIRECTIONAL ||
 		      direction == DMA_TO_DEVICE);
-	struct reservation_object *resv = dmabuf->resv;
+	struct dma_resv *resv = dmabuf->resv;
 	long ret;

 	/* Wait on any implicit rendering fences */
-	ret = reservation_object_wait_timeout_rcu(resv, write, true,
-						  MAX_SCHEDULE_TIMEOUT);
+	ret = dma_resv_wait_timeout_rcu(resv, write, true,
+					MAX_SCHEDULE_TIMEOUT);
 	if (ret < 0)
 		return ret;
@@ -1154,8 +1154,8 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
 	int ret;
 	struct dma_buf *buf_obj;
 	struct dma_buf_attachment *attach_obj;
-	struct reservation_object *robj;
-	struct reservation_object_list *fobj;
+	struct dma_resv *robj;
+	struct dma_resv_list *fobj;
 	struct dma_fence *fence;
 	unsigned seq;
 	int count = 0, attach_count, shared_count, i;


@@ -13,6 +13,8 @@
 #include <linux/slab.h>
 #include <linux/dma-fence-array.h>

+#define PENDING_ERROR 1
+
 static const char *dma_fence_array_get_driver_name(struct dma_fence *fence)
 {
 	return "dma_fence_array";
@@ -23,10 +25,29 @@ static const char *dma_fence_array_get_timeline_name(struct dma_fence *fence)
 	return "unbound";
 }

+static void dma_fence_array_set_pending_error(struct dma_fence_array *array,
+					      int error)
+{
+	/*
+	 * Propagate the first error reported by any of our fences, but only
+	 * before we ourselves are signaled.
+	 */
+	if (error)
+		cmpxchg(&array->base.error, PENDING_ERROR, error);
+}
+
+static void dma_fence_array_clear_pending_error(struct dma_fence_array *array)
+{
+	/* Clear the error flag if not actually set. */
+	cmpxchg(&array->base.error, PENDING_ERROR, 0);
+}
+
 static void irq_dma_fence_array_work(struct irq_work *wrk)
 {
 	struct dma_fence_array *array = container_of(wrk, typeof(*array), work);

+	dma_fence_array_clear_pending_error(array);
+
 	dma_fence_signal(&array->base);
 	dma_fence_put(&array->base);
 }
@@ -38,6 +59,8 @@ static void dma_fence_array_cb_func(struct dma_fence *f,
 		container_of(cb, struct dma_fence_array_cb, cb);
 	struct dma_fence_array *array = array_cb->array;

+	dma_fence_array_set_pending_error(array, f->error);
+
 	if (atomic_dec_and_test(&array->num_pending))
 		irq_work_queue(&array->work);
 	else
@@ -63,9 +86,14 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
 		dma_fence_get(&array->base);
 		if (dma_fence_add_callback(array->fences[i], &cb[i].cb,
 					   dma_fence_array_cb_func)) {
+			int error = array->fences[i]->error;
+
+			dma_fence_array_set_pending_error(array, error);
 			dma_fence_put(&array->base);
-			if (atomic_dec_and_test(&array->num_pending))
+			if (atomic_dec_and_test(&array->num_pending)) {
+				dma_fence_array_clear_pending_error(array);
 				return false;
+			}
 		}
 	}
@@ -142,6 +170,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 	atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
 	array->fences = fences;

+	array->base.error = PENDING_ERROR;
+
 	return array;
 }
 EXPORT_SYMBOL(dma_fence_array_create);
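The error-propagation pattern added above can be sketched in userspace C11 atomics (the kernel uses `cmpxchg`; names and harness here are illustrative): the error slot starts at a sentinel, `PENDING_ERROR`, the first child fence to fail swaps in its real (negative) errno, later failures lose the compare-exchange and are ignored, and at signal time the sentinel is cleared if nothing was ever recorded.

```c
#include <stdatomic.h>

#define PENDING_ERROR 1	/* sentinel: "no real error recorded yet" */

/* First real error replaces the sentinel; later errors are ignored. */
static void set_pending_error(atomic_int *err, int error)
{
	int expected = PENDING_ERROR;

	if (error)
		atomic_compare_exchange_strong(err, &expected, error);
}

/* Drop the sentinel if no error was ever recorded. */
static void clear_pending_error(atomic_int *err)
{
	int expected = PENDING_ERROR;

	atomic_compare_exchange_strong(err, &expected, 0);
}
```

Note the implicit assumption, same as in the kernel patch: real errors are negative errno values, so they can never collide with the positive sentinel.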


@@ -60,7 +60,7 @@ static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(1);
  *
  * - Then there's also implicit fencing, where the synchronization points are
  *   implicitly passed around as part of shared &dma_buf instances. Such
- *   implicit fences are stored in &struct reservation_object through the
+ *   implicit fences are stored in &struct dma_resv through the
  *   &dma_buf.resv pointer.
  */
@@ -129,31 +129,27 @@ EXPORT_SYMBOL(dma_fence_context_alloc);
 int dma_fence_signal_locked(struct dma_fence *fence)
 {
 	struct dma_fence_cb *cur, *tmp;
-	int ret = 0;
+	struct list_head cb_list;

 	lockdep_assert_held(fence->lock);

-	if (WARN_ON(!fence))
+	if (unlikely(test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
+				      &fence->flags)))
 		return -EINVAL;

-	if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
-		ret = -EINVAL;
-
-		/*
-		 * we might have raced with the unlocked dma_fence_signal,
-		 * still run through all callbacks
-		 */
-	} else {
-		fence->timestamp = ktime_get();
-		set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
-		trace_dma_fence_signaled(fence);
-	}
+	/* Stash the cb_list before replacing it with the timestamp */
+	list_replace(&fence->cb_list, &cb_list);

-	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
-		list_del_init(&cur->node);
+	fence->timestamp = ktime_get();
+	set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
+	trace_dma_fence_signaled(fence);
+
+	list_for_each_entry_safe(cur, tmp, &cb_list, node) {
+		INIT_LIST_HEAD(&cur->node);
 		cur->func(fence, cur);
 	}

-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(dma_fence_signal_locked);
@@ -173,28 +169,16 @@ EXPORT_SYMBOL(dma_fence_signal_locked);
 int dma_fence_signal(struct dma_fence *fence)
 {
 	unsigned long flags;
+	int ret;

 	if (!fence)
 		return -EINVAL;

-	if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
-		return -EINVAL;
-
-	fence->timestamp = ktime_get();
-	set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
-	trace_dma_fence_signaled(fence);
-
-	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
-		struct dma_fence_cb *cur, *tmp;
-
-		spin_lock_irqsave(fence->lock, flags);
-		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
-			list_del_init(&cur->node);
-			cur->func(fence, cur);
-		}
-		spin_unlock_irqrestore(fence->lock, flags);
-	}
+	spin_lock_irqsave(fence->lock, flags);
+	ret = dma_fence_signal_locked(fence);
+	spin_unlock_irqrestore(fence->lock, flags);

-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL(dma_fence_signal);
@@ -248,7 +232,8 @@ void dma_fence_release(struct kref *kref)
 	trace_dma_fence_destroy(fence);

-	if (WARN(!list_empty(&fence->cb_list),
+	if (WARN(!list_empty(&fence->cb_list) &&
+		 !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags),
 		 "Fence %s:%s:%llx:%llx released with pending signals!\n",
 		 fence->ops->get_driver_name(fence),
 		 fence->ops->get_timeline_name(fence),

View File

@@ -32,7 +32,7 @@
  * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  */
 
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/export.h>
 
 /**
@@ -56,16 +56,15 @@ const char reservation_seqcount_string[] = "reservation_seqcount";
 EXPORT_SYMBOL(reservation_seqcount_string);
 
 /**
- * reservation_object_list_alloc - allocate fence list
+ * dma_resv_list_alloc - allocate fence list
  * @shared_max: number of fences we need space for
  *
- * Allocate a new reservation_object_list and make sure to correctly initialize
+ * Allocate a new dma_resv_list and make sure to correctly initialize
  * shared_max.
  */
-static struct reservation_object_list *
-reservation_object_list_alloc(unsigned int shared_max)
+static struct dma_resv_list *dma_resv_list_alloc(unsigned int shared_max)
 {
-	struct reservation_object_list *list;
+	struct dma_resv_list *list;
 
 	list = kmalloc(offsetof(typeof(*list), shared[shared_max]), GFP_KERNEL);
 	if (!list)
@@ -78,12 +77,12 @@ reservation_object_list_alloc(unsigned int shared_max)
 }
 
 /**
- * reservation_object_list_free - free fence list
+ * dma_resv_list_free - free fence list
  * @list: list to free
  *
- * Free a reservation_object_list and make sure to drop all references.
+ * Free a dma_resv_list and make sure to drop all references.
  */
-static void reservation_object_list_free(struct reservation_object_list *list)
+static void dma_resv_list_free(struct dma_resv_list *list)
 {
 	unsigned int i;
 
@@ -97,10 +96,10 @@ static void reservation_object_list_free(struct reservation_object_list *list)
 }
 
 /**
- * reservation_object_init - initialize a reservation object
+ * dma_resv_init - initialize a reservation object
  * @obj: the reservation object
  */
-void reservation_object_init(struct reservation_object *obj)
+void dma_resv_init(struct dma_resv *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
 
@@ -109,15 +108,15 @@ void reservation_object_init(struct reservation_object *obj)
 	RCU_INIT_POINTER(obj->fence, NULL);
 	RCU_INIT_POINTER(obj->fence_excl, NULL);
 }
-EXPORT_SYMBOL(reservation_object_init);
+EXPORT_SYMBOL(dma_resv_init);
 
 /**
- * reservation_object_fini - destroys a reservation object
+ * dma_resv_fini - destroys a reservation object
  * @obj: the reservation object
  */
-void reservation_object_fini(struct reservation_object *obj)
+void dma_resv_fini(struct dma_resv *obj)
 {
-	struct reservation_object_list *fobj;
+	struct dma_resv_list *fobj;
 	struct dma_fence *excl;
 
 	/*
@@ -129,32 +128,31 @@ void reservation_object_fini(struct reservation_object *obj)
 	dma_fence_put(excl);
 
 	fobj = rcu_dereference_protected(obj->fence, 1);
-	reservation_object_list_free(fobj);
+	dma_resv_list_free(fobj);
 	ww_mutex_destroy(&obj->lock);
 }
-EXPORT_SYMBOL(reservation_object_fini);
+EXPORT_SYMBOL(dma_resv_fini);
 
 /**
- * reservation_object_reserve_shared - Reserve space to add shared fences to
- * a reservation_object.
+ * dma_resv_reserve_shared - Reserve space to add shared fences to
+ * a dma_resv.
  * @obj: reservation object
  * @num_fences: number of fences we want to add
  *
- * Should be called before reservation_object_add_shared_fence(). Must
+ * Should be called before dma_resv_add_shared_fence(). Must
  * be called with obj->lock held.
  *
  * RETURNS
  * Zero for success, or -errno
  */
-int reservation_object_reserve_shared(struct reservation_object *obj,
-				      unsigned int num_fences)
+int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences)
 {
-	struct reservation_object_list *old, *new;
+	struct dma_resv_list *old, *new;
 	unsigned int i, j, k, max;
 
-	reservation_object_assert_held(obj);
+	dma_resv_assert_held(obj);
 
-	old = reservation_object_get_list(obj);
+	old = dma_resv_get_list(obj);
 
 	if (old && old->shared_max) {
 		if ((old->shared_count + num_fences) <= old->shared_max)
@@ -166,7 +164,7 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
 		max = 4;
 	}
 
-	new = reservation_object_list_alloc(max);
+	new = dma_resv_list_alloc(max);
 	if (!new)
 		return -ENOMEM;
 
@@ -180,7 +178,7 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
 		struct dma_fence *fence;
 
 		fence = rcu_dereference_protected(old->shared[i],
-						  reservation_object_held(obj));
+						  dma_resv_held(obj));
 		if (dma_fence_is_signaled(fence))
 			RCU_INIT_POINTER(new->shared[--k], fence);
 		else
@@ -206,35 +204,34 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
 		struct dma_fence *fence;
 
 		fence = rcu_dereference_protected(new->shared[i],
-						  reservation_object_held(obj));
+						  dma_resv_held(obj));
 		dma_fence_put(fence);
 	}
 	kfree_rcu(old, rcu);
 
 	return 0;
 }
-EXPORT_SYMBOL(reservation_object_reserve_shared);
+EXPORT_SYMBOL(dma_resv_reserve_shared);
 
 /**
- * reservation_object_add_shared_fence - Add a fence to a shared slot
+ * dma_resv_add_shared_fence - Add a fence to a shared slot
  * @obj: the reservation object
  * @fence: the shared fence to add
  *
  * Add a fence to a shared slot, obj->lock must be held, and
- * reservation_object_reserve_shared() has been called.
+ * dma_resv_reserve_shared() has been called.
  */
-void reservation_object_add_shared_fence(struct reservation_object *obj,
-					 struct dma_fence *fence)
+void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
-	struct reservation_object_list *fobj;
+	struct dma_resv_list *fobj;
 	struct dma_fence *old;
 	unsigned int i, count;
 
 	dma_fence_get(fence);
 
-	reservation_object_assert_held(obj);
+	dma_resv_assert_held(obj);
 
-	fobj = reservation_object_get_list(obj);
+	fobj = dma_resv_get_list(obj);
 	count = fobj->shared_count;
 
 	preempt_disable();
@@ -243,7 +240,7 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
 	for (i = 0; i < count; ++i) {
 
 		old = rcu_dereference_protected(fobj->shared[i],
-						reservation_object_held(obj));
+						dma_resv_held(obj));
 		if (old->context == fence->context ||
 		    dma_fence_is_signaled(old))
 			goto replace;
@@ -262,25 +259,24 @@ replace:
 	preempt_enable();
 	dma_fence_put(old);
 }
-EXPORT_SYMBOL(reservation_object_add_shared_fence);
+EXPORT_SYMBOL(dma_resv_add_shared_fence);
 
 /**
- * reservation_object_add_excl_fence - Add an exclusive fence.
+ * dma_resv_add_excl_fence - Add an exclusive fence.
  * @obj: the reservation object
  * @fence: the shared fence to add
  *
  * Add a fence to the exclusive slot. The obj->lock must be held.
  */
-void reservation_object_add_excl_fence(struct reservation_object *obj,
-				       struct dma_fence *fence)
+void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
-	struct dma_fence *old_fence = reservation_object_get_excl(obj);
-	struct reservation_object_list *old;
+	struct dma_fence *old_fence = dma_resv_get_excl(obj);
+	struct dma_resv_list *old;
 	u32 i = 0;
 
-	reservation_object_assert_held(obj);
+	dma_resv_assert_held(obj);
 
-	old = reservation_object_get_list(obj);
+	old = dma_resv_get_list(obj);
 	if (old)
 		i = old->shared_count;
 
@@ -299,27 +295,26 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
 	/* inplace update, no shared fences */
 	while (i--)
 		dma_fence_put(rcu_dereference_protected(old->shared[i],
-						reservation_object_held(obj)));
+							dma_resv_held(obj)));
 
 	dma_fence_put(old_fence);
 }
-EXPORT_SYMBOL(reservation_object_add_excl_fence);
+EXPORT_SYMBOL(dma_resv_add_excl_fence);
 
 /**
- * reservation_object_copy_fences - Copy all fences from src to dst.
+ * dma_resv_copy_fences - Copy all fences from src to dst.
  * @dst: the destination reservation object
  * @src: the source reservation object
  *
  * Copy all fences from src to dst. dst-lock must be held.
  */
-int reservation_object_copy_fences(struct reservation_object *dst,
-				   struct reservation_object *src)
+int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
 {
-	struct reservation_object_list *src_list, *dst_list;
+	struct dma_resv_list *src_list, *dst_list;
 	struct dma_fence *old, *new;
 	unsigned i;
 
-	reservation_object_assert_held(dst);
+	dma_resv_assert_held(dst);
 
 	rcu_read_lock();
 	src_list = rcu_dereference(src->fence);
@@ -330,7 +325,7 @@ retry:
 	rcu_read_unlock();
 
-	dst_list = reservation_object_list_alloc(shared_count);
+	dst_list = dma_resv_list_alloc(shared_count);
 	if (!dst_list)
 		return -ENOMEM;
 
@@ -351,7 +346,7 @@ retry:
 				continue;
 
 			if (!dma_fence_get_rcu(fence)) {
-				reservation_object_list_free(dst_list);
+				dma_resv_list_free(dst_list);
 				src_list = rcu_dereference(src->fence);
 				goto retry;
 			}
@@ -370,8 +365,8 @@ retry:
 	new = dma_fence_get_rcu_safe(&src->fence_excl);
 	rcu_read_unlock();
 
-	src_list = reservation_object_get_list(dst);
-	old = reservation_object_get_excl(dst);
+	src_list = dma_resv_get_list(dst);
+	old = dma_resv_get_excl(dst);
 
 	preempt_disable();
 	write_seqcount_begin(&dst->seq);
@@ -381,15 +376,15 @@ retry:
 	write_seqcount_end(&dst->seq);
 	preempt_enable();
 
-	reservation_object_list_free(src_list);
+	dma_resv_list_free(src_list);
 	dma_fence_put(old);
 
 	return 0;
 }
-EXPORT_SYMBOL(reservation_object_copy_fences);
+EXPORT_SYMBOL(dma_resv_copy_fences);
 
 /**
- * reservation_object_get_fences_rcu - Get an object's shared and exclusive
+ * dma_resv_get_fences_rcu - Get an object's shared and exclusive
  * fences without update side lock held
  * @obj: the reservation object
  * @pfence_excl: the returned exclusive fence (or NULL)
@@ -401,10 +396,10 @@ EXPORT_SYMBOL(reservation_object_copy_fences);
  * exclusive fence is not specified the fence is put into the array of the
  * shared fences as well. Returns either zero or -ENOMEM.
  */
-int reservation_object_get_fences_rcu(struct reservation_object *obj,
-				      struct dma_fence **pfence_excl,
-				      unsigned *pshared_count,
-				      struct dma_fence ***pshared)
+int dma_resv_get_fences_rcu(struct dma_resv *obj,
+			    struct dma_fence **pfence_excl,
+			    unsigned *pshared_count,
+			    struct dma_fence ***pshared)
 {
 	struct dma_fence **shared = NULL;
 	struct dma_fence *fence_excl;
@@ -412,7 +407,7 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj,
 	int ret = 1;
 
 	do {
-		struct reservation_object_list *fobj;
+		struct dma_resv_list *fobj;
 		unsigned int i, seq;
 		size_t sz = 0;
 
@@ -487,10 +482,10 @@ unlock:
 	*pshared = shared;
 	return ret;
 }
-EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
 
 /**
- * reservation_object_wait_timeout_rcu - Wait on reservation's objects
+ * dma_resv_wait_timeout_rcu - Wait on reservation's objects
  * shared and/or exclusive fences.
  * @obj: the reservation object
  * @wait_all: if true, wait on all fences, else wait on just exclusive fence
@@ -501,9 +496,9 @@ EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
  * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
  * greater than zero on success.
  */
-long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
-					 bool wait_all, bool intr,
-					 unsigned long timeout)
+long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
+			       bool wait_all, bool intr,
+			       unsigned long timeout)
 {
 	struct dma_fence *fence;
 	unsigned seq, shared_count;
@@ -531,8 +526,7 @@ retry:
 	}
 
 	if (wait_all) {
-		struct reservation_object_list *fobj =
-						rcu_dereference(obj->fence);
+		struct dma_resv_list *fobj = rcu_dereference(obj->fence);
 
 		if (fobj)
 			shared_count = fobj->shared_count;
@@ -575,11 +569,10 @@ unlock_retry:
 	rcu_read_unlock();
 	goto retry;
 }
-EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
 
-static inline int
-reservation_object_test_signaled_single(struct dma_fence *passed_fence)
+static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
 {
 	struct dma_fence *fence, *lfence = passed_fence;
 	int ret = 1;
 
@@ -596,7 +589,7 @@ reservation_object_test_signaled_single(struct dma_fence *passed_fence)
 }
 
 /**
- * reservation_object_test_signaled_rcu - Test if a reservation object's
+ * dma_resv_test_signaled_rcu - Test if a reservation object's
  * fences have been signaled.
  * @obj: the reservation object
  * @test_all: if true, test all fences, otherwise only test the exclusive
@@ -605,8 +598,7 @@ reservation_object_test_signaled_single(struct dma_fence *passed_fence)
  * RETURNS
  * true if all fences signaled, else false
  */
-bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
-					  bool test_all)
+bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
 {
 	unsigned seq, shared_count;
 	int ret;
 
@@ -620,8 +612,7 @@ retry:
 	if (test_all) {
 		unsigned i;
 
-		struct reservation_object_list *fobj =
-						rcu_dereference(obj->fence);
+		struct dma_resv_list *fobj = rcu_dereference(obj->fence);
 
 		if (fobj)
 			shared_count = fobj->shared_count;
@@ -629,7 +620,7 @@ retry:
 		for (i = 0; i < shared_count; ++i) {
 			struct dma_fence *fence = rcu_dereference(fobj->shared[i]);
 
-			ret = reservation_object_test_signaled_single(fence);
+			ret = dma_resv_test_signaled_single(fence);
 			if (ret < 0)
 				goto retry;
 			else if (!ret)
@@ -644,8 +635,7 @@ retry:
 		struct dma_fence *fence_excl = rcu_dereference(obj->fence_excl);
 
 		if (fence_excl) {
-			ret = reservation_object_test_signaled_single(
-								fence_excl);
+			ret = dma_resv_test_signaled_single(fence_excl);
 			if (ret < 0)
 				goto retry;
 
@@ -657,4 +647,4 @@ retry:
 	rcu_read_unlock();
 	return ret;
 }
-EXPORT_SYMBOL_GPL(reservation_object_test_signaled_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
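The bulk of this file's change is a mechanical identifier rename (reservation_object* to dma_resv*). A rename of that shape can be scripted; the sketch below is hypothetical and not the exact commands used for the kernel change, which also needed manual fixups for re-wrapped lines:

```shell
mkdir -p /tmp/resv-rename && cd /tmp/resv-rename
cat > demo.c <<'EOF'
struct reservation_object *resv;
reservation_object_lock(resv, NULL);
reservation_object_unlock(resv);
EOF

# Replace the longest identifiers first so shorter prefixes
# don't leave half-renamed symbols behind.
sed -i -e 's/reservation_object_list/dma_resv_list/g' \
       -e 's/reservation_object/dma_resv/g' demo.c

grep -c 'dma_resv' demo.c
```

Always run such a pass on a scratch copy and review the diff: blind search/replace happily renames strings, comments, and unrelated symbols.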

View File

@@ -132,17 +132,14 @@ static void timeline_fence_release(struct dma_fence *fence)
 {
 	struct sync_pt *pt = dma_fence_to_sync_pt(fence);
 	struct sync_timeline *parent = dma_fence_parent(fence);
+	unsigned long flags;
 
+	spin_lock_irqsave(fence->lock, flags);
 	if (!list_empty(&pt->link)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(fence->lock, flags);
-		if (!list_empty(&pt->link)) {
-			list_del(&pt->link);
-			rb_erase(&pt->node, &parent->pt_tree);
-		}
-		spin_unlock_irqrestore(fence->lock, flags);
+		list_del(&pt->link);
+		rb_erase(&pt->node, &parent->pt_tree);
 	}
+	spin_unlock_irqrestore(fence->lock, flags);
 
 	sync_timeline_put(parent);
 	dma_fence_free(fence);
@@ -265,7 +262,8 @@ static struct sync_pt *sync_pt_create(struct sync_timeline *obj,
 			p = &parent->rb_left;
 		} else {
 			if (dma_fence_get_rcu(&other->base)) {
-				dma_fence_put(&pt->base);
+				sync_timeline_put(obj);
+				kfree(pt);
 				pt = other;
 				goto unlock;
 			}

View File

@@ -419,7 +419,7 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
 	 * info->num_fences.
 	 */
 	if (!info.num_fences) {
-		info.status = dma_fence_is_signaled(sync_file->fence);
+		info.status = dma_fence_get_status(sync_file->fence);
 		goto no_fences;
 	} else {
 		info.status = 1;

View File

@@ -218,14 +218,14 @@ void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 					struct amdgpu_amdkfd_fence *ef)
 {
-	struct reservation_object *resv = bo->tbo.base.resv;
-	struct reservation_object_list *old, *new;
+	struct dma_resv *resv = bo->tbo.base.resv;
+	struct dma_resv_list *old, *new;
 	unsigned int i, j, k;
 
 	if (!ef)
 		return -EINVAL;
 
-	old = reservation_object_get_list(resv);
+	old = dma_resv_get_list(resv);
 	if (!old)
 		return 0;
 
@@ -241,7 +241,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 		struct dma_fence *f;
 
 		f = rcu_dereference_protected(old->shared[i],
-					      reservation_object_held(resv));
+					      dma_resv_held(resv));
 
 		if (f->context == ef->base.context)
 			RCU_INIT_POINTER(new->shared[--j], f);
@@ -263,7 +263,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 		struct dma_fence *f;
 
 		f = rcu_dereference_protected(new->shared[i],
-					      reservation_object_held(resv));
+					      dma_resv_held(resv));
 		dma_fence_put(f);
 	}
 	kfree_rcu(old, rcu);
@@ -887,7 +887,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
 				  AMDGPU_FENCE_OWNER_KFD, false);
 	if (ret)
 		goto wait_pd_fail;
-	ret = reservation_object_reserve_shared(vm->root.base.bo->tbo.base.resv, 1);
+	ret = dma_resv_reserve_shared(vm->root.base.bo->tbo.base.resv, 1);
 	if (ret)
 		goto reserve_shared_fail;
 	amdgpu_bo_fence(vm->root.base.bo,
@@ -2133,7 +2133,7 @@ int amdgpu_amdkfd_add_gws_to_process(void *info, void *gws, struct kgd_mem **mem
 	 * Add process eviction fence to bo so they can
 	 * evict each other.
 	 */
-	ret = reservation_object_reserve_shared(gws_bo->tbo.base.resv, 1);
+	ret = dma_resv_reserve_shared(gws_bo->tbo.base.resv, 1);
 	if (ret)
 		goto reserve_shared_fail;
 	amdgpu_bo_fence(gws_bo, &process_info->eviction_fence->base, true);

View File

@@ -730,7 +730,7 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 	list_for_each_entry(e, &p->validated, tv.head) {
 		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-		struct reservation_object *resv = bo->tbo.base.resv;
+		struct dma_resv *resv = bo->tbo.base.resv;
 
 		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, p->filp,
 				     amdgpu_bo_explicit_sync(bo));
@@ -1727,7 +1727,7 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
 	*map = mapping;
 
 	/* Double check that the BO is reserved by this CS */
-	if (reservation_object_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket)
+	if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket)
 		return -EINVAL;
 
 	if (!((*bo)->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) {

View File

@@ -205,7 +205,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
 		goto unpin;
 	}
 
-	r = reservation_object_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
-					      &work->shared_count,
-					      &work->shared);
+	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
+				    &work->shared_count,
+				    &work->shared);
 	if (unlikely(r != 0)) {

View File

@@ -137,23 +137,23 @@ int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 }
 
 static int
-__reservation_object_make_exclusive(struct reservation_object *obj)
+__dma_resv_make_exclusive(struct dma_resv *obj)
 {
 	struct dma_fence **fences;
 	unsigned int count;
 	int r;
 
-	if (!reservation_object_get_list(obj)) /* no shared fences to convert */
+	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
 		return 0;
 
-	r = reservation_object_get_fences_rcu(obj, NULL, &count, &fences);
+	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
 	if (r)
 		return r;
 
 	if (count == 0) {
 		/* Now that was unexpected. */
 	} else if (count == 1) {
-		reservation_object_add_excl_fence(obj, fences[0]);
+		dma_resv_add_excl_fence(obj, fences[0]);
 		dma_fence_put(fences[0]);
 		kfree(fences);
 	} else {
@@ -165,7 +165,7 @@ __reservation_object_make_exclusive(struct reservation_object *obj)
 		if (!array)
 			goto err_fences_put;
 
-		reservation_object_add_excl_fence(obj, &array->base);
+		dma_resv_add_excl_fence(obj, &array->base);
 		dma_fence_put(&array->base);
 	}
 
@@ -216,7 +216,7 @@ static int amdgpu_dma_buf_map_attach(struct dma_buf *dma_buf,
 		 * fences on the reservation object into a single exclusive
 		 * fence.
 		 */
-		r = __reservation_object_make_exclusive(bo->tbo.base.resv);
+		r = __dma_resv_make_exclusive(bo->tbo.base.resv);
 		if (r)
 			goto error_unreserve;
 	}
@@ -367,7 +367,7 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 				 struct dma_buf_attachment *attach,
 				 struct sg_table *sg)
 {
-	struct reservation_object *resv = attach->dmabuf->resv;
+	struct dma_resv *resv = attach->dmabuf->resv;
 	struct amdgpu_device *adev = dev->dev_private;
 	struct amdgpu_bo *bo;
 	struct amdgpu_bo_param bp;
@@ -380,7 +380,7 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 	bp.flags = 0;
 	bp.type = ttm_bo_type_sg;
 	bp.resv = resv;
-	reservation_object_lock(resv, NULL);
+	dma_resv_lock(resv, NULL);
 	ret = amdgpu_bo_create(adev, &bp, &bo);
 	if (ret)
 		goto error;
@@ -392,11 +392,11 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 	if (attach->dmabuf->ops != &amdgpu_dmabuf_ops)
 		bo->prime_shared_count = 1;
 
-	reservation_object_unlock(resv);
+	dma_resv_unlock(resv);
 	return &bo->tbo.base;
 
 error:
-	reservation_object_unlock(resv);
+	dma_resv_unlock(resv);
 	return ERR_PTR(ret);
 }

View File

@@ -50,7 +50,7 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
 			     int alignment, u32 initial_domain,
 			     u64 flags, enum ttm_bo_type type,
-			     struct reservation_object *resv,
+			     struct dma_resv *resv,
 			     struct drm_gem_object **obj)
 {
 	struct amdgpu_bo *bo;
@@ -215,7 +215,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
 	union drm_amdgpu_gem_create *args = data;
 	uint64_t flags = args->in.domain_flags;
 	uint64_t size = args->in.bo_size;
-	struct reservation_object *resv = NULL;
+	struct dma_resv *resv = NULL;
 	struct drm_gem_object *gobj;
 	uint32_t handle;
 	int r;
@@ -433,7 +433,7 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 		return -ENOENT;
 	}
 	robj = gem_to_amdgpu_bo(gobj);
-	ret = reservation_object_wait_timeout_rcu(robj->tbo.base.resv, true, true,
-						  timeout);
+	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
+					timeout);
 
 	/* ret == 0 means not signaled,

View File

@@ -47,7 +47,7 @@ void amdgpu_gem_force_release(struct amdgpu_device *adev);
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
 			     int alignment, u32 initial_domain,
 			     u64 flags, enum ttm_bo_type type,
-			     struct reservation_object *resv,
+			     struct dma_resv *resv,
 			     struct drm_gem_object **obj);
 int amdgpu_mode_dumb_create(struct drm_file *file_priv,

View File

@@ -104,7 +104,7 @@ static void amdgpu_pasid_free_cb(struct dma_fence *fence,
  *
  * Free the pasid only after all the fences in resv are signaled.
  */
-void amdgpu_pasid_free_delayed(struct reservation_object *resv,
+void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 			       unsigned int pasid)
 {
 	struct dma_fence *fence, **fences;
@@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct reservation_object *resv,
 	unsigned count;
 	int r;
 
-	r = reservation_object_get_fences_rcu(resv, NULL, &count, &fences);
+	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
 	if (r)
 		goto fallback;
 
@@ -156,7 +156,7 @@ fallback:
 	/* Not enough memory for the delayed delete, as last resort
 	 * block for all the fences to complete.
 	 */
-	reservation_object_wait_timeout_rcu(resv, true, false,
-					    MAX_SCHEDULE_TIMEOUT);
+	dma_resv_wait_timeout_rcu(resv, true, false,
+				  MAX_SCHEDULE_TIMEOUT);
 	amdgpu_pasid_free(pasid);
 }

View File

@@ -72,7 +72,7 @@ struct amdgpu_vmid_mgr {
 int amdgpu_pasid_alloc(unsigned int bits);
 void amdgpu_pasid_free(unsigned int pasid);
-void amdgpu_pasid_free_delayed(struct reservation_object *resv,
+void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 			       unsigned int pasid);
 
 bool amdgpu_vmid_had_gpu_reset(struct amdgpu_device *adev,

View File

@@ -179,7 +179,7 @@ static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node,
 		if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start, end))
 			continue;
 
-		r = reservation_object_wait_timeout_rcu(bo->tbo.base.resv,
-			true, false, MAX_SCHEDULE_TIMEOUT);
+		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
+			true, false, MAX_SCHEDULE_TIMEOUT);
 		if (r <= 0)
 			DRM_ERROR("(%ld) failed to wait for user bo\n", r);

View File

@@ -550,7 +550,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 
 fail_unreserve:
 	if (!bp->resv)
-		reservation_object_unlock(bo->tbo.base.resv);
+		dma_resv_unlock(bo->tbo.base.resv);
 	amdgpu_bo_unref(&bo);
 	return r;
 }
@@ -612,13 +612,13 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
 
 	if ((flags & AMDGPU_GEM_CREATE_SHADOW) && !(adev->flags & AMD_IS_APU)) {
 		if (!bp->resv)
-			WARN_ON(reservation_object_lock((*bo_ptr)->tbo.base.resv,
-							NULL));
+			WARN_ON(dma_resv_lock((*bo_ptr)->tbo.base.resv,
+					      NULL));
 
 		r = amdgpu_bo_create_shadow(adev, bp->size, *bo_ptr);
 
 		if (!bp->resv)
-			reservation_object_unlock((*bo_ptr)->tbo.base.resv);
+			dma_resv_unlock((*bo_ptr)->tbo.base.resv);
 
 		if (r)
 			amdgpu_bo_unref(bo_ptr);
@@ -715,7 +715,7 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
 		return 0;
 	}
 
-	r = reservation_object_wait_timeout_rcu(bo->tbo.base.resv, false, false,
-						MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
+				      MAX_SCHEDULE_TIMEOUT);
 	if (r < 0)
 		return r;
@@ -1093,7 +1093,7 @@ int amdgpu_bo_set_tiling_flags(struct amdgpu_bo *bo, u64 tiling_flags)
  */
 void amdgpu_bo_get_tiling_flags(struct amdgpu_bo *bo, u64 *tiling_flags)
 {
-	reservation_object_assert_held(bo->tbo.base.resv);
+	dma_resv_assert_held(bo->tbo.base.resv);
 
 	if (tiling_flags)
 		*tiling_flags = bo->tiling_flags;
@@ -1242,7 +1242,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
 		return;
 
-	reservation_object_lock(bo->base.resv, NULL);
+	dma_resv_lock(bo->base.resv, NULL);
 
 	r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence);
 	if (!WARN_ON(r)) {
@@ -1250,7 +1250,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 		dma_fence_put(fence);
 	}
 
-	reservation_object_unlock(bo->base.resv);
+	dma_resv_unlock(bo->base.resv);
 }
 
 /**
@@ -1325,12 +1325,12 @@ int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,
 		     bool shared)
 {
-	struct reservation_object *resv = bo->tbo.base.resv;
+	struct dma_resv *resv = bo->tbo.base.resv;
 
 	if (shared)
-		reservation_object_add_shared_fence(resv, fence);
+		dma_resv_add_shared_fence(resv, fence);
 	else
-		reservation_object_add_excl_fence(resv, fence);
+		dma_resv_add_excl_fence(resv, fence);
 }
 
 /**
@@ -1370,7 +1370,7 @@ int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr)
 u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo)
 {
 	WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_SYSTEM);
-	WARN_ON_ONCE(!reservation_object_is_locked(bo->tbo.base.resv) &&
+	WARN_ON_ONCE(!dma_resv_is_locked(bo->tbo.base.resv) &&
 		     !bo->pin_count && bo->tbo.type != ttm_bo_type_kernel);
 	WARN_ON_ONCE(bo->tbo.mem.start == AMDGPU_BO_INVALID_OFFSET);
 	WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_VRAM &&

View File

@@ -41,7 +41,7 @@ struct amdgpu_bo_param {
 	u32				preferred_domain;
 	u64				flags;
 	enum ttm_bo_type		type;
-	struct reservation_object	*resv;
+	struct dma_resv			*resv;
 };
 /* bo virtual addresses in a vm */

View File

@@ -190,10 +190,10 @@ int amdgpu_sync_fence(struct amdgpu_device *adev, struct amdgpu_sync *sync,
 */
 int amdgpu_sync_resv(struct amdgpu_device *adev,
 		     struct amdgpu_sync *sync,
-		     struct reservation_object *resv,
+		     struct dma_resv *resv,
 		     void *owner, bool explicit_sync)
 {
-	struct reservation_object_list *flist;
+	struct dma_resv_list *flist;
 	struct dma_fence *f;
 	void *fence_owner;
 	unsigned i;
@@ -203,16 +203,16 @@ int amdgpu_sync_resv(struct amdgpu_device *adev,
 		return -EINVAL;
 	/* always sync to the exclusive fence */
-	f = reservation_object_get_excl(resv);
+	f = dma_resv_get_excl(resv);
 	r = amdgpu_sync_fence(adev, sync, f, false);
-	flist = reservation_object_get_list(resv);
+	flist = dma_resv_get_list(resv);
 	if (!flist || r)
 		return r;
 	for (i = 0; i < flist->shared_count; ++i) {
 		f = rcu_dereference_protected(flist->shared[i],
-					      reservation_object_held(resv));
+					      dma_resv_held(resv));
 		/* We only want to trigger KFD eviction fences on
 		 * evict or move jobs. Skip KFD fences otherwise.
 		 */

View File

@@ -27,7 +27,7 @@
 #include <linux/hashtable.h>
 struct dma_fence;
-struct reservation_object;
+struct dma_resv;
 struct amdgpu_device;
 struct amdgpu_ring;
@@ -44,7 +44,7 @@ int amdgpu_sync_fence(struct amdgpu_device *adev, struct amdgpu_sync *sync,
 		      struct dma_fence *f, bool explicit);
 int amdgpu_sync_resv(struct amdgpu_device *adev,
 		     struct amdgpu_sync *sync,
-		     struct reservation_object *resv,
+		     struct dma_resv *resv,
 		     void *owner,
 		     bool explicit_sync);
 struct dma_fence *amdgpu_sync_peek_fence(struct amdgpu_sync *sync,

View File

@@ -303,7 +303,7 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 			       struct amdgpu_copy_mem *src,
 			       struct amdgpu_copy_mem *dst,
 			       uint64_t size,
-			       struct reservation_object *resv,
+			       struct dma_resv *resv,
 			       struct dma_fence **f)
 {
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
@@ -1486,7 +1486,7 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 {
 	unsigned long num_pages = bo->mem.num_pages;
 	struct drm_mm_node *node = bo->mem.mm_node;
-	struct reservation_object_list *flist;
+	struct dma_resv_list *flist;
 	struct dma_fence *f;
 	int i;
@@ -1494,18 +1494,18 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 	 * cleanly handle page faults.
 	 */
 	if (bo->type == ttm_bo_type_kernel &&
-	    !reservation_object_test_signaled_rcu(bo->base.resv, true))
+	    !dma_resv_test_signaled_rcu(bo->base.resv, true))
 		return false;
 	/* If bo is a KFD BO, check if the bo belongs to the current process.
 	 * If true, then return false as any KFD process needs all its BOs to
 	 * be resident to run successfully
 	 */
-	flist = reservation_object_get_list(bo->base.resv);
+	flist = dma_resv_get_list(bo->base.resv);
 	if (flist) {
 		for (i = 0; i < flist->shared_count; ++i) {
 			f = rcu_dereference_protected(flist->shared[i],
-						      reservation_object_held(bo->base.resv));
+						      dma_resv_held(bo->base.resv));
 			if (amdkfd_fence_check_mm(f, current->mm))
 				return false;
 		}
@@ -2009,7 +2009,7 @@ error_free:
 int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 		       uint64_t dst_offset, uint32_t byte_count,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
 		       struct dma_fence **fence, bool direct_submit,
 		       bool vm_needs_flush)
 {
@@ -2083,7 +2083,7 @@ error_free:
 int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 		       uint32_t src_data,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
 		       struct dma_fence **fence)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);

View File

@@ -85,18 +85,18 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
 int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 		       uint64_t dst_offset, uint32_t byte_count,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
 		       struct dma_fence **fence, bool direct_submit,
 		       bool vm_needs_flush);
 int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 			       struct amdgpu_copy_mem *src,
 			       struct amdgpu_copy_mem *dst,
 			       uint64_t size,
-			       struct reservation_object *resv,
+			       struct dma_resv *resv,
 			       struct dma_fence **f);
 int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 		       uint32_t src_data,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
 		       struct dma_fence **fence);
 int amdgpu_mmap(struct file *filp, struct vm_area_struct *vma);

View File

@@ -1073,7 +1073,7 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
 	ib->length_dw = 16;
 	if (direct) {
-		r = reservation_object_wait_timeout_rcu(bo->tbo.base.resv,
-							true, false,
-							msecs_to_jiffies(10));
+		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
+					      true, false,
+					      msecs_to_jiffies(10));
 		if (r == 0)

View File

@@ -1702,7 +1702,7 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 			ttm = container_of(bo->tbo.ttm, struct ttm_dma_tt, ttm);
 			pages_addr = ttm->dma_address;
 		}
-		exclusive = reservation_object_get_excl(bo->tbo.base.resv);
+		exclusive = dma_resv_get_excl(bo->tbo.base.resv);
 	}
 	if (bo) {
@@ -1879,18 +1879,18 @@ static void amdgpu_vm_free_mapping(struct amdgpu_device *adev,
 */
 static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 {
-	struct reservation_object *resv = vm->root.base.bo->tbo.base.resv;
+	struct dma_resv *resv = vm->root.base.bo->tbo.base.resv;
 	struct dma_fence *excl, **shared;
 	unsigned i, shared_count;
 	int r;
-	r = reservation_object_get_fences_rcu(resv, &excl,
-					      &shared_count, &shared);
+	r = dma_resv_get_fences_rcu(resv, &excl,
+				    &shared_count, &shared);
 	if (r) {
 		/* Not enough memory to grab the fence list, as last resort
 		 * block for all the fences to complete.
 		 */
-		reservation_object_wait_timeout_rcu(resv, true, false,
-						    MAX_SCHEDULE_TIMEOUT);
+		dma_resv_wait_timeout_rcu(resv, true, false,
+					  MAX_SCHEDULE_TIMEOUT);
 		return;
 	}
@@ -1978,7 +1978,7 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
 			   struct amdgpu_vm *vm)
 {
 	struct amdgpu_bo_va *bo_va, *tmp;
-	struct reservation_object *resv;
+	struct dma_resv *resv;
 	bool clear;
 	int r;
@@ -1997,7 +1997,7 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
 		spin_unlock(&vm->invalidated_lock);
 		/* Try to reserve the BO to avoid clearing its ptes */
-		if (!amdgpu_vm_debug && reservation_object_trylock(resv))
+		if (!amdgpu_vm_debug && dma_resv_trylock(resv))
 			clear = false;
 		/* Somebody else is using the BO right now */
 		else
@@ -2008,7 +2008,7 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
 			return r;
 		if (!clear)
-			reservation_object_unlock(resv);
+			dma_resv_unlock(resv);
 		spin_lock(&vm->invalidated_lock);
 	}
 	spin_unlock(&vm->invalidated_lock);
@@ -2416,7 +2416,7 @@ void amdgpu_vm_bo_trace_cs(struct amdgpu_vm *vm, struct ww_acquire_ctx *ticket)
 			struct amdgpu_bo *bo;
 			bo = mapping->bo_va->base.bo;
-			if (reservation_object_locking_ctx(bo->tbo.base.resv) !=
+			if (dma_resv_locking_ctx(bo->tbo.base.resv) !=
 			    ticket)
 				continue;
 		}
@@ -2649,7 +2649,7 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
 */
 long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
 {
-	return reservation_object_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
-						   true, true, timeout);
+	return dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
+					 true, true, timeout);
 }
@@ -2724,7 +2724,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	if (r)
 		goto error_free_root;
-	r = reservation_object_reserve_shared(root->tbo.base.resv, 1);
+	r = dma_resv_reserve_shared(root->tbo.base.resv, 1);
 	if (r)
 		goto error_unreserve;

View File

@@ -5695,7 +5695,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 		 * deadlock during GPU reset when this fence will not signal
 		 * but we hold reservation lock for the BO.
 		 */
-		r = reservation_object_wait_timeout_rcu(abo->tbo.base.resv, true,
-							false,
-							msecs_to_jiffies(5000));
+		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
+					      false,
+					      msecs_to_jiffies(5000));
 		if (unlikely(r <= 0))

View File

@@ -27,7 +27,7 @@ static void komeda_crtc_update_clock_ratio(struct komeda_crtc_state *kcrtc_st)
 		return;
 	}
-	pxlclk = kcrtc_st->base.adjusted_mode.crtc_clock * 1000;
+	pxlclk = kcrtc_st->base.adjusted_mode.crtc_clock * 1000ULL;
 	aclk = komeda_crtc_get_aclk(kcrtc_st);
 	kcrtc_st->clock_ratio = div64_u64(aclk << 32, pxlclk);

View File

@@ -9,7 +9,12 @@
 * Implementation of a CRTC class for the HDLCD driver.
 */
-#include <drm/drmP.h>
+#include <linux/clk.h>
+#include <linux/of_graph.h>
+#include <linux/platform_data/simplefb.h>
+
+#include <video/videomode.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
@@ -19,10 +24,7 @@
 #include <drm/drm_of.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
-#include <linux/clk.h>
-#include <linux/of_graph.h>
-#include <linux/platform_data/simplefb.h>
-#include <video/videomode.h>
+#include <drm/drm_vblank.h>
 #include "hdlcd_drv.h"
 #include "hdlcd_regs.h"

View File

@@ -14,21 +14,26 @@
 #include <linux/clk.h>
 #include <linux/component.h>
 #include <linux/console.h>
+#include <linux/dma-mapping.h>
 #include <linux/list.h>
 #include <linux/of_graph.h>
 #include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
-#include <drm/drmP.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
+#include <drm/drm_debugfs.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_irq.h>
 #include <drm/drm_modeset_helper.h>
 #include <drm/drm_of.h>
 #include <drm/drm_probe_helper.h>
+#include <drm/drm_vblank.h>
 #include "hdlcd_drv.h"
 #include "hdlcd_regs.h"

View File

@@ -6,14 +6,17 @@
 * ARM Mali DP500/DP550/DP650 driver (crtc operations)
 */
-#include <drm/drmP.h>
+#include <linux/clk.h>
+#include <linux/pm_runtime.h>
+
+#include <video/videomode.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
+#include <drm/drm_print.h>
 #include <drm/drm_probe_helper.h>
-#include <linux/clk.h>
-#include <linux/pm_runtime.h>
-#include <video/videomode.h>
+#include <drm/drm_vblank.h>
 #include "malidp_drv.h"
 #include "malidp_hw.h"

View File

@@ -15,17 +15,19 @@
 #include <linux/pm_runtime.h>
 #include <linux/debugfs.h>
-#include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
-#include <drm/drm_probe_helper.h>
-#include <drm/drm_fb_helper.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_modeset_helper.h>
 #include <drm/drm_of.h>
+#include <drm/drm_probe_helper.h>
+#include <drm/drm_vblank.h>
 #include "malidp_drv.h"
 #include "malidp_mw.h"

View File

@@ -9,12 +9,13 @@
 #ifndef __MALIDP_DRV_H__
 #define __MALIDP_DRV_H__
-#include <drm/drm_writeback.h>
-#include <drm/drm_encoder.h>
 #include <linux/mutex.h>
 #include <linux/wait.h>
 #include <linux/spinlock.h>
-#include <drm/drmP.h>
+
+#include <drm/drm_writeback.h>
+#include <drm/drm_encoder.h>
 #include "malidp_hw.h"
 #define MALIDP_CONFIG_VALID_INIT	0

View File

@@ -9,12 +9,17 @@
 */
 #include <linux/clk.h>
+#include <linux/delay.h>
 #include <linux/types.h>
 #include <linux/io.h>
-#include <drm/drmP.h>
 #include <video/videomode.h>
 #include <video/display_timing.h>
+
+#include <drm/drm_fourcc.h>
+#include <drm/drm_vblank.h>
+#include <drm/drm_print.h>
+
 #include "malidp_drv.h"
 #include "malidp_hw.h"
 #include "malidp_mw.h"

View File

@@ -5,13 +5,14 @@
 *
 * ARM Mali DP Writeback connector implementation
 */
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
-#include <drm/drm_probe_helper.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drmP.h>
+#include <drm/drm_probe_helper.h>
 #include <drm/drm_writeback.h>
 #include "malidp_drv.h"

View File

@@ -7,11 +7,13 @@
 */
 #include <linux/iommu.h>
+#include <linux/platform_device.h>
-#include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>

View File

@@ -3,15 +3,19 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */
+
 #include <linux/clk.h>
 #include <linux/component.h>
+#include <linux/module.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
-#include <drm/drmP.h>
+
 #include <drm/drm_atomic.h>
-#include <drm/drm_probe_helper.h>
-#include <drm/drm_plane_helper.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_plane_helper.h>
+#include <drm/drm_probe_helper.h>
+#include <drm/drm_vblank.h>
+
 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"

View File

@@ -3,11 +3,15 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */
+
 #include <linux/ctype.h>
+#include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
-#include <drm/drmP.h>
+#include <linux/uaccess.h>
+
+#include <drm/drm_debugfs.h>
+#include <drm/drm_file.h>
+
 #include "armada_crtc.h"
 #include "armada_drm.h"

View File

@@ -8,11 +8,14 @@
 #include <linux/kfifo.h>
 #include <linux/io.h>
 #include <linux/workqueue.h>
-#include <drm/drmP.h>
+
+#include <drm/drm_device.h>
+#include <drm/drm_mm.h>
 struct armada_crtc;
 struct armada_gem_object;
 struct clk;
+struct drm_display_mode;
 struct drm_fb_helper;
 static inline void

View File

@@ -2,14 +2,22 @@
 /*
 * Copyright (C) 2012 Russell King
 */
+
 #include <linux/clk.h>
 #include <linux/component.h>
 #include <linux/module.h>
 #include <linux/of_graph.h>
+#include <linux/platform_device.h>
+
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_ioctl.h>
+#include <drm/drm_prime.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_of.h>
+#include <drm/drm_vblank.h>
+
 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_gem.h"

View File

@@ -2,9 +2,12 @@
 /*
 * Copyright (C) 2012 Russell King
 */
+
 #include <drm/drm_modeset_helper.h>
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_framebuffer_helper.h>
+
 #include "armada_drm.h"
 #include "armada_fb.h"
 #include "armada_gem.h"

View File

@@ -3,11 +3,14 @@
 * Copyright (C) 2012 Russell King
 * Written from the i915 driver.
 */
+
 #include <linux/errno.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+
 #include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>
 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"

View File

@@ -2,12 +2,17 @@
 /*
 * Copyright (C) 2012 Russell King
 */
+
 #include <linux/dma-buf.h>
 #include <linux/dma-mapping.h>
+#include <linux/mman.h>
 #include <linux/shmem_fs.h>
+
+#include <drm/armada_drm.h>
+#include <drm/drm_prime.h>
+
 #include "armada_drm.h"
 #include "armada_gem.h"
-#include <drm/armada_drm.h>
 #include "armada_ioctlP.h"
 static vm_fault_t armada_gem_vm_fault(struct vm_fault *vmf)

View File

@@ -3,12 +3,14 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */
-#include <drm/drmP.h>
-#include <drm/drm_atomic.h>
-#include <drm/drm_atomic_uapi.h>
-#include <drm/drm_atomic_helper.h>
-#include <drm/drm_plane_helper.h>
+
 #include <drm/armada_drm.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_atomic_uapi.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_plane_helper.h>
+
 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"

View File

@@ -3,10 +3,12 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */
-#include <drm/drmP.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_plane_helper.h>
+
 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"

View File

@@ -3,7 +3,10 @@
 #define ARMADA_TRACE_H
 #include <linux/tracepoint.h>
-#include <drm/drmP.h>
+
+struct drm_crtc;
+struct drm_framebuffer;
+struct drm_plane;
 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM armada

View File

@@ -215,7 +215,7 @@ static void aspeed_gfx_disable_vblank(struct drm_simple_display_pipe *pipe)
 	writel(reg | CRT_CTRL_VERTICAL_INTR_STS, priv->base + CRT_CTRL1);
 }
-static struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
+static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
 	.enable		= aspeed_gfx_pipe_enable,
 	.disable	= aspeed_gfx_pipe_disable,
 	.update		= aspeed_gfx_pipe_update,

View File

@@ -1780,8 +1780,7 @@ void analogix_dp_unbind(struct analogix_dp_device *dp)
 	if (dp->plat_data->panel) {
 		if (drm_panel_unprepare(dp->plat_data->panel))
 			DRM_ERROR("failed to turnoff the panel\n");
-		if (drm_panel_detach(dp->plat_data->panel))
-			DRM_ERROR("failed to detach the panel\n");
+		drm_panel_detach(dp->plat_data->panel);
 	}
 	drm_dp_aux_unregister(&dp->aux);

View File

@@ -42,7 +42,7 @@ static int dumb_vga_get_modes(struct drm_connector *connector)
 	struct edid *edid;
 	int ret;
-	if (IS_ERR(vga->ddc))
+	if (!vga->ddc)
 		goto fallback;
 	edid = drm_get_edid(connector, vga->ddc);
@@ -84,7 +84,7 @@ dumb_vga_connector_detect(struct drm_connector *connector, bool force)
 	 * wire the DDC pins, or the I2C bus might not be working at
 	 * all.
 	 */
-	if (!IS_ERR(vga->ddc) && drm_probe_ddc(vga->ddc))
+	if (vga->ddc && drm_probe_ddc(vga->ddc))
 		return connector_status_connected;
 	return connector_status_unknown;
@@ -197,6 +197,7 @@ static int dumb_vga_probe(struct platform_device *pdev)
 		if (PTR_ERR(vga->ddc) == -ENODEV) {
 			dev_dbg(&pdev->dev,
 				"No i2c bus specified. Disabling EDID readout\n");
+			vga->ddc = NULL;
 		} else {
 			dev_err(&pdev->dev, "Couldn't retrieve i2c bus\n");
 			return PTR_ERR(vga->ddc);
@@ -218,7 +219,7 @@ static int dumb_vga_remove(struct platform_device *pdev)
 	drm_bridge_remove(&vga->bridge);
-	if (!IS_ERR(vga->ddc))
+	if (vga->ddc)
 		i2c_put_adapter(vga->ddc);
 	return 0;

View File

@@ -63,10 +63,6 @@ enum {
 	HDMI_REVISION_ID = 0x0001,
 	HDMI_IH_AHBDMAAUD_STAT0 = 0x0109,
 	HDMI_IH_MUTE_AHBDMAAUD_STAT0 = 0x0189,
-	HDMI_FC_AUDICONF2 = 0x1027,
-	HDMI_FC_AUDSCONF = 0x1063,
-	HDMI_FC_AUDSCONF_LAYOUT1 = 1 << 0,
-	HDMI_FC_AUDSCONF_LAYOUT0 = 0 << 0,
 	HDMI_AHB_DMA_CONF0 = 0x3600,
 	HDMI_AHB_DMA_START = 0x3601,
 	HDMI_AHB_DMA_STOP = 0x3602,
@@ -403,7 +399,7 @@ static int dw_hdmi_prepare(struct snd_pcm_substream *substream)
 {
 	struct snd_pcm_runtime *runtime = substream->runtime;
 	struct snd_dw_hdmi *dw = substream->private_data;
-	u8 threshold, conf0, conf1, layout, ca;
+	u8 threshold, conf0, conf1, ca;
 	/* Setup as per 3.0.5 FSL 4.1.0 BSP */
 	switch (dw->revision) {
@@ -434,20 +430,12 @@ static int dw_hdmi_prepare(struct snd_pcm_substream *substream)
 	conf1 = default_hdmi_channel_config[runtime->channels - 2].conf1;
 	ca = default_hdmi_channel_config[runtime->channels - 2].ca;
-	/*
-	 * For >2 channel PCM audio, we need to select layout 1
-	 * and set an appropriate channel map.
-	 */
-	if (runtime->channels > 2)
-		layout = HDMI_FC_AUDSCONF_LAYOUT1;
-	else
-		layout = HDMI_FC_AUDSCONF_LAYOUT0;
 	writeb_relaxed(threshold, dw->data.base + HDMI_AHB_DMA_THRSLD);
 	writeb_relaxed(conf0, dw->data.base + HDMI_AHB_DMA_CONF0);
 	writeb_relaxed(conf1, dw->data.base + HDMI_AHB_DMA_CONF1);
-	writeb_relaxed(layout, dw->data.base + HDMI_FC_AUDSCONF);
-	writeb_relaxed(ca, dw->data.base + HDMI_FC_AUDICONF2);
+	dw_hdmi_set_channel_count(dw->data.hdmi, runtime->channels);
+	dw_hdmi_set_channel_allocation(dw->data.hdmi, ca);
 	switch (runtime->format) {
 	case SNDRV_PCM_FORMAT_IEC958_SUBFRAME_LE:

View File

@@ -14,6 +14,7 @@ struct dw_hdmi_audio_data {
 struct dw_hdmi_i2s_audio_data {
 	struct dw_hdmi *hdmi;
+	u8 *eld;
 	void (*write)(struct dw_hdmi *hdmi, u8 val, int offset);
 	u8 (*read)(struct dw_hdmi *hdmi, int offset);

View File

@@ -10,6 +10,7 @@
 #include <linux/module.h>

 #include <drm/bridge/dw_hdmi.h>
+#include <drm/drm_crtc.h>

 #include <sound/hdmi-codec.h>

@@ -44,14 +45,30 @@ static int dw_hdmi_i2s_hw_params(struct device *dev, void *data,
 	u8 inputclkfs = 0;

-	/* it cares I2S only */
-	if ((fmt->fmt != HDMI_I2S) ||
-	    (fmt->bit_clk_master | fmt->frame_clk_master)) {
-		dev_err(dev, "unsupported format/settings\n");
+	if (fmt->bit_clk_master | fmt->frame_clk_master) {
+		dev_err(dev, "unsupported clock settings\n");
 		return -EINVAL;
 	}

+	/* Reset the FIFOs before applying new params */
+	hdmi_write(audio, HDMI_AUD_CONF0_SW_RESET, HDMI_AUD_CONF0);
+	hdmi_write(audio, (u8)~HDMI_MC_SWRSTZ_I2SSWRST_REQ, HDMI_MC_SWRSTZ);
+
 	inputclkfs = HDMI_AUD_INPUTCLKFS_64FS;
-	conf0 = HDMI_AUD_CONF0_I2S_ALL_ENABLE;
+	conf0 = (HDMI_AUD_CONF0_I2S_SELECT | HDMI_AUD_CONF0_I2S_EN0);
+
+	/* Enable the required i2s lanes */
+	switch (hparms->channels) {
+	case 7 ... 8:
+		conf0 |= HDMI_AUD_CONF0_I2S_EN3;
+		/* Fall-thru */
+	case 5 ... 6:
+		conf0 |= HDMI_AUD_CONF0_I2S_EN2;
+		/* Fall-thru */
+	case 3 ... 4:
+		conf0 |= HDMI_AUD_CONF0_I2S_EN1;
+		/* Fall-thru */
+	}

 	switch (hparms->sample_width) {
 	case 16:

@@ -63,7 +80,30 @@ static int dw_hdmi_i2s_hw_params(struct device *dev, void *data,
 		break;
 	}

+	switch (fmt->fmt) {
+	case HDMI_I2S:
+		conf1 |= HDMI_AUD_CONF1_MODE_I2S;
+		break;
+	case HDMI_RIGHT_J:
+		conf1 |= HDMI_AUD_CONF1_MODE_RIGHT_J;
+		break;
+	case HDMI_LEFT_J:
+		conf1 |= HDMI_AUD_CONF1_MODE_LEFT_J;
+		break;
+	case HDMI_DSP_A:
+		conf1 |= HDMI_AUD_CONF1_MODE_BURST_1;
+		break;
+	case HDMI_DSP_B:
+		conf1 |= HDMI_AUD_CONF1_MODE_BURST_2;
+		break;
+	default:
+		dev_err(dev, "unsupported format\n");
+		return -EINVAL;
+	}
+
 	dw_hdmi_set_sample_rate(hdmi, hparms->sample_rate);
+	dw_hdmi_set_channel_count(hdmi, hparms->channels);
+	dw_hdmi_set_channel_allocation(hdmi, hparms->cea.channel_allocation);

 	hdmi_write(audio, inputclkfs, HDMI_AUD_INPUTCLKFS);
 	hdmi_write(audio, conf0, HDMI_AUD_CONF0);

@@ -80,8 +120,15 @@ static void dw_hdmi_i2s_audio_shutdown(struct device *dev, void *data)
 	struct dw_hdmi *hdmi = audio->hdmi;

 	dw_hdmi_audio_disable(hdmi);
+}

-	hdmi_write(audio, HDMI_AUD_CONF0_SW_RESET, HDMI_AUD_CONF0);
+static int dw_hdmi_i2s_get_eld(struct device *dev, void *data, uint8_t *buf,
+			       size_t len)
+{
+	struct dw_hdmi_i2s_audio_data *audio = data;
+
+	memcpy(buf, audio->eld, min_t(size_t, MAX_ELD_BYTES, len));
+	return 0;
 }

@@ -107,6 +154,7 @@ static int dw_hdmi_i2s_get_dai_id(struct snd_soc_component *component,
 static struct hdmi_codec_ops dw_hdmi_i2s_ops = {
 	.hw_params	= dw_hdmi_i2s_hw_params,
 	.audio_shutdown	= dw_hdmi_i2s_audio_shutdown,
+	.get_eld	= dw_hdmi_i2s_get_eld,
 	.get_dai_id	= dw_hdmi_i2s_get_dai_id,
 };

@@ -119,7 +167,7 @@ static int snd_dw_hdmi_probe(struct platform_device *pdev)
 	pdata.ops		= &dw_hdmi_i2s_ops;
 	pdata.i2s		= 1;
-	pdata.max_i2s_channels	= 6;
+	pdata.max_i2s_channels	= 8;
 	pdata.data		= audio;

 	memset(&pdevinfo, 0, sizeof(pdevinfo));


@@ -645,6 +645,42 @@ void dw_hdmi_set_sample_rate(struct dw_hdmi *hdmi, unsigned int rate)
 }
 EXPORT_SYMBOL_GPL(dw_hdmi_set_sample_rate);

+void dw_hdmi_set_channel_count(struct dw_hdmi *hdmi, unsigned int cnt)
+{
+	u8 layout;
+
+	mutex_lock(&hdmi->audio_mutex);
+
+	/*
+	 * For >2 channel PCM audio, we need to select layout 1
+	 * and set an appropriate channel map.
+	 */
+	if (cnt > 2)
+		layout = HDMI_FC_AUDSCONF_AUD_PACKET_LAYOUT_LAYOUT1;
+	else
+		layout = HDMI_FC_AUDSCONF_AUD_PACKET_LAYOUT_LAYOUT0;
+
+	hdmi_modb(hdmi, layout, HDMI_FC_AUDSCONF_AUD_PACKET_LAYOUT_MASK,
+		  HDMI_FC_AUDSCONF);
+
+	/* Set the audio infoframes channel count */
+	hdmi_modb(hdmi, (cnt - 1) << HDMI_FC_AUDICONF0_CC_OFFSET,
+		  HDMI_FC_AUDICONF0_CC_MASK, HDMI_FC_AUDICONF0);
+
+	mutex_unlock(&hdmi->audio_mutex);
+}
+EXPORT_SYMBOL_GPL(dw_hdmi_set_channel_count);
+
+void dw_hdmi_set_channel_allocation(struct dw_hdmi *hdmi, unsigned int ca)
+{
+	mutex_lock(&hdmi->audio_mutex);
+
+	hdmi_writeb(hdmi, ca, HDMI_FC_AUDICONF2);
+
+	mutex_unlock(&hdmi->audio_mutex);
+}
+EXPORT_SYMBOL_GPL(dw_hdmi_set_channel_allocation);
+
 static void hdmi_enable_audio_clk(struct dw_hdmi *hdmi, bool enable)
 {
 	if (enable)

@@ -2763,6 +2799,7 @@ __dw_hdmi_probe(struct platform_device *pdev,
 		struct dw_hdmi_i2s_audio_data audio;

 		audio.hdmi	= hdmi;
+		audio.eld	= hdmi->connector.eld;
 		audio.write	= hdmi_writeb;
 		audio.read	= hdmi_readb;
 		hdmi->enable_audio = dw_hdmi_i2s_audio_enable;


@@ -865,12 +865,18 @@ enum {

 /* AUD_CONF0 field values */
 	HDMI_AUD_CONF0_SW_RESET = 0x80,
-	HDMI_AUD_CONF0_I2S_ALL_ENABLE = 0x2F,
+	HDMI_AUD_CONF0_I2S_SELECT = 0x20,
+	HDMI_AUD_CONF0_I2S_EN3 = 0x08,
+	HDMI_AUD_CONF0_I2S_EN2 = 0x04,
+	HDMI_AUD_CONF0_I2S_EN1 = 0x02,
+	HDMI_AUD_CONF0_I2S_EN0 = 0x01,

 /* AUD_CONF1 field values */
 	HDMI_AUD_CONF1_MODE_I2S = 0x00,
-	HDMI_AUD_CONF1_MODE_RIGHT_J = 0x02,
-	HDMI_AUD_CONF1_MODE_LEFT_J = 0x04,
+	HDMI_AUD_CONF1_MODE_RIGHT_J = 0x20,
+	HDMI_AUD_CONF1_MODE_LEFT_J = 0x40,
+	HDMI_AUD_CONF1_MODE_BURST_1 = 0x60,
+	HDMI_AUD_CONF1_MODE_BURST_2 = 0x80,
 	HDMI_AUD_CONF1_WIDTH_16 = 0x10,
 	HDMI_AUD_CONF1_WIDTH_24 = 0x18,

@@ -938,6 +944,7 @@ enum {
 	HDMI_MC_CLKDIS_PIXELCLK_DISABLE = 0x1,

 /* MC_SWRSTZ field values */
+	HDMI_MC_SWRSTZ_I2SSWRST_REQ = 0x08,
 	HDMI_MC_SWRSTZ_TMDSSWRST_REQ = 0x02,

 /* MC_FLOWCTRL field values */


@@ -1312,7 +1312,7 @@ static int tc_connector_get_modes(struct drm_connector *connector)
 {
 	struct tc_data *tc = connector_to_tc(connector);
 	struct edid *edid;
-	unsigned int count;
+	int count;
 	int ret;

 	ret = tc_get_display_props(tc);

@@ -1321,11 +1321,9 @@ static int tc_connector_get_modes(struct drm_connector *connector)
 		return 0;
 	}

-	if (tc->panel && tc->panel->funcs && tc->panel->funcs->get_modes) {
-		count = tc->panel->funcs->get_modes(tc->panel);
-		if (count > 0)
-			return count;
-	}
+	count = drm_panel_get_modes(tc->panel);
+	if (count > 0)
+		return count;

 	edid = drm_get_edid(connector, &tc->aux.ddc);


@@ -1037,7 +1037,7 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
 * As a contrast, with implicit fencing the kernel keeps track of any
 * ongoing rendering, and automatically ensures that the atomic update waits
 * for any pending rendering to complete. For shared buffers represented with
- * a &struct dma_buf this is tracked in &struct reservation_object.
+ * a &struct dma_buf this is tracked in &struct dma_resv.
 * Implicit syncing is how Linux traditionally worked (e.g. DRI2/3 on X.org),
 * whereas explicit fencing is what Android wants.
 *


@@ -986,12 +986,14 @@ static const struct drm_prop_enum_list hdmi_colorspaces[] = {
  * - Kernel sends uevent with the connector id and property id through
  *   @drm_hdcp_update_content_protection, upon below kernel triggered
  *   scenarios:
- *	DESIRED -> ENABLED (authentication success)
- *	ENABLED -> DESIRED (termination of authentication)
+ *
+ *   - DESIRED -> ENABLED (authentication success)
+ *   - ENABLED -> DESIRED (termination of authentication)
  * - Please note no uevents for userspace triggered property state changes,
  *   which can't fail such as
- *	DESIRED/ENABLED -> UNDESIRED
- *	UNDESIRED -> DESIRED
+ *
+ *   - DESIRED/ENABLED -> UNDESIRED
+ *   - UNDESIRED -> DESIRED
  * - Userspace is responsible for polling the property or listen to uevents
  *   to determine when the value transitions from ENABLED to DESIRED.
  *   This signifies the link is no longer protected and userspace should


@@ -159,7 +159,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	kref_init(&obj->refcount);
 	obj->handle_count = 0;
 	obj->size = size;
-	reservation_object_init(&obj->_resv);
+	dma_resv_init(&obj->_resv);
 	if (!obj->resv)
 		obj->resv = &obj->_resv;

@@ -633,6 +633,9 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 	pagevec_init(&pvec);
 	for (i = 0; i < npages; i++) {
+		if (!pages[i])
+			continue;
+
 		if (dirty)
 			set_page_dirty(pages[i]);

@@ -752,7 +755,7 @@ drm_gem_object_lookup(struct drm_file *filp, u32 handle)
 EXPORT_SYMBOL(drm_gem_object_lookup);

 /**
- * drm_gem_reservation_object_wait - Wait on GEM object's reservation's objects
+ * drm_gem_dma_resv_wait - Wait on GEM object's reservation's objects
  * shared and/or exclusive fences.
  * @filep: DRM file private date
  * @handle: userspace handle

@@ -764,7 +767,7 @@ EXPORT_SYMBOL(drm_gem_object_lookup);
  * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
  * greater than 0 on success.
  */
-long drm_gem_reservation_object_wait(struct drm_file *filep, u32 handle,
-				     bool wait_all, unsigned long timeout)
+long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
+			   bool wait_all, unsigned long timeout)
 {
 	long ret;

@@ -776,7 +779,7 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
 		return -EINVAL;
 	}

-	ret = reservation_object_wait_timeout_rcu(obj->resv, wait_all,
-						  true, timeout);
+	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
+					true, timeout);
 	if (ret == 0)
 		ret = -ETIME;

@@ -787,7 +790,7 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_reservation_object_wait);
+EXPORT_SYMBOL(drm_gem_dma_resv_wait);

 /**
  * drm_gem_close_ioctl - implementation of the GEM_CLOSE ioctl

@@ -953,7 +956,7 @@ drm_gem_object_release(struct drm_gem_object *obj)
 	if (obj->filp)
 		fput(obj->filp);

-	reservation_object_fini(&obj->_resv);
+	dma_resv_fini(&obj->_resv);
 	drm_gem_free_mmap_offset(obj);
 }
 EXPORT_SYMBOL(drm_gem_object_release);

@@ -1288,7 +1291,7 @@ retry:
 	if (contended != -1) {
 		struct drm_gem_object *obj = objs[contended];

-		ret = reservation_object_lock_slow_interruptible(obj->resv,
-								 acquire_ctx);
+		ret = dma_resv_lock_slow_interruptible(obj->resv,
+						       acquire_ctx);
 		if (ret) {
 			ww_acquire_done(acquire_ctx);

@@ -1300,16 +1303,16 @@ retry:
 		if (i == contended)
 			continue;

-		ret = reservation_object_lock_interruptible(objs[i]->resv,
-							    acquire_ctx);
+		ret = dma_resv_lock_interruptible(objs[i]->resv,
+						  acquire_ctx);
 		if (ret) {
 			int j;

 			for (j = 0; j < i; j++)
-				reservation_object_unlock(objs[j]->resv);
+				dma_resv_unlock(objs[j]->resv);

 			if (contended != -1 && contended >= i)
-				reservation_object_unlock(objs[contended]->resv);
+				dma_resv_unlock(objs[contended]->resv);

 			if (ret == -EDEADLK) {
 				contended = i;

@@ -1334,7 +1337,7 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
 	int i;

 	for (i = 0; i < count; i++)
-		reservation_object_unlock(objs[i]->resv);
+		dma_resv_unlock(objs[i]->resv);

 	ww_acquire_fini(acquire_ctx);
 }

@@ -1410,12 +1413,12 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
 	if (!write) {
 		struct dma_fence *fence =
-			reservation_object_get_excl_rcu(obj->resv);
+			dma_resv_get_excl_rcu(obj->resv);

 		return drm_gem_fence_array_add(fence_array, fence);
 	}

-	ret = reservation_object_get_fences_rcu(obj->resv, NULL,
-						&fence_count, &fences);
+	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
+				      &fence_count, &fences);
 	if (ret || !fence_count)
 		return ret;


@ -7,7 +7,7 @@
#include <linux/dma-buf.h> #include <linux/dma-buf.h>
#include <linux/dma-fence.h> #include <linux/dma-fence.h>
#include <linux/reservation.h> #include <linux/dma-resv.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <drm/drm_atomic.h> #include <drm/drm_atomic.h>
@ -294,7 +294,7 @@ int drm_gem_fb_prepare_fb(struct drm_plane *plane,
return 0; return 0;
obj = drm_gem_fb_get_obj(state->fb, 0); obj = drm_gem_fb_get_obj(state->fb, 0);
fence = reservation_object_get_excl_rcu(obj->resv); fence = dma_resv_get_excl_rcu(obj->resv);
drm_atomic_set_fence_for_plane(state, fence); drm_atomic_set_fence_for_plane(state, fence);
return 0; return 0;


@@ -75,6 +75,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	shmem = to_drm_gem_shmem_obj(obj);
 	mutex_init(&shmem->pages_lock);
 	mutex_init(&shmem->vmap_lock);
+	INIT_LIST_HEAD(&shmem->madv_list);

 	/*
 	 * Our buffers are kept pinned, so allocating them

@@ -118,11 +119,11 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
 				     shmem->sgt->nents, DMA_BIDIRECTIONAL);
-
-			drm_gem_shmem_put_pages(shmem);
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
+		if (shmem->pages)
+			drm_gem_shmem_put_pages(shmem);
 	}

 	WARN_ON(shmem->pages_use_count);

@@ -362,6 +363,62 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 }
 EXPORT_SYMBOL(drm_gem_shmem_create_with_handle);

+/* Update madvise status, returns true if not purged, else
+ * false or -errno.
+ */
+int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	mutex_lock(&shmem->pages_lock);
+
+	if (shmem->madv >= 0)
+		shmem->madv = madv;
+
+	madv = shmem->madv;
+
+	mutex_unlock(&shmem->pages_lock);
+
+	return (madv >= 0);
+}
+EXPORT_SYMBOL(drm_gem_shmem_madvise);
+
+void drm_gem_shmem_purge_locked(struct drm_gem_object *obj)
+{
+	struct drm_device *dev = obj->dev;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+
+	drm_gem_shmem_put_pages_locked(shmem);
+
+	shmem->madv = -1;
+
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+	drm_gem_free_mmap_offset(obj);
+
+	/* Our goal here is to return as much of the memory as
+	 * is possible back to the system as we are called from OOM.
+	 * To do this we must instruct the shmfs to drop all of its
+	 * backing pages, *now*.
+	 */
+	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+
+	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping,
+				 0, (loff_t)-1);
+}
+EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
+
+void drm_gem_shmem_purge(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	mutex_lock(&shmem->pages_lock);
+	drm_gem_shmem_purge_locked(obj);
+	mutex_unlock(&shmem->pages_lock);
+}
+EXPORT_SYMBOL(drm_gem_shmem_purge);
+
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
  * @file: DRM file structure to create the dumb buffer for


@@ -123,18 +123,110 @@ EXPORT_SYMBOL(drm_panel_attach);
  *
  * This function should not be called by the panel device itself. It
  * is only for the drm device that called drm_panel_attach().
- *
- * Return: 0 on success or a negative error code on failure.
  */
-int drm_panel_detach(struct drm_panel *panel)
+void drm_panel_detach(struct drm_panel *panel)
 {
 	panel->connector = NULL;
 	panel->drm = NULL;
-
-	return 0;
 }
 EXPORT_SYMBOL(drm_panel_detach);

+/**
+ * drm_panel_prepare - power on a panel
+ * @panel: DRM panel
+ *
+ * Calling this function will enable power and deassert any reset signals to
+ * the panel. After this has completed it is possible to communicate with any
+ * integrated circuitry via a command bus.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_prepare(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->prepare)
+		return panel->funcs->prepare(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_prepare);
+
+/**
+ * drm_panel_unprepare - power off a panel
+ * @panel: DRM panel
+ *
+ * Calling this function will completely power off a panel (assert the panel's
+ * reset, turn off power supplies, ...). After this function has completed, it
+ * is usually no longer possible to communicate with the panel until another
+ * call to drm_panel_prepare().
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_unprepare(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->unprepare)
+		return panel->funcs->unprepare(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_unprepare);
+
+/**
+ * drm_panel_enable - enable a panel
+ * @panel: DRM panel
+ *
+ * Calling this function will cause the panel display drivers to be turned on
+ * and the backlight to be enabled. Content will be visible on screen after
+ * this call completes.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_enable(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->enable)
+		return panel->funcs->enable(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_enable);
+
+/**
+ * drm_panel_disable - disable a panel
+ * @panel: DRM panel
+ *
+ * This will typically turn off the panel's backlight or disable the display
+ * drivers. For smart panels it should still be possible to communicate with
+ * the integrated circuitry via any command bus after this call.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_disable(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->disable)
+		return panel->funcs->disable(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_disable);
+
+/**
+ * drm_panel_get_modes - probe the available display modes of a panel
+ * @panel: DRM panel
+ *
+ * The modes probed from the panel are automatically added to the connector
+ * that the panel is attached to.
+ *
+ * Return: The number of modes available from the panel on success or a
+ * negative error code on failure.
+ */
+int drm_panel_get_modes(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->get_modes)
+		return panel->funcs->get_modes(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_get_modes);
+
 #ifdef CONFIG_OF
 /**
  * of_drm_find_panel - look up a panel using a device tree node


@@ -29,21 +29,97 @@
 /**
  * DOC: Overview
  *
- * DRM synchronisation objects (syncobj, see struct &drm_syncobj) are
- * persistent objects that contain an optional fence. The fence can be updated
- * with a new fence, or be NULL.
- *
- * syncobj's can be waited upon, where it will wait for the underlying
- * fence.
- *
- * syncobj's can be export to fd's and back, these fd's are opaque and
- * have no other use case, except passing the syncobj between processes.
- *
+ * DRM synchronisation objects (syncobj, see struct &drm_syncobj) provide a
+ * container for a synchronization primitive which can be used by userspace
+ * to explicitly synchronize GPU commands, can be shared between userspace
+ * processes, and can be shared between different DRM drivers.
  * Their primary use-case is to implement Vulkan fences and semaphores.
+ * The syncobj userspace API provides ioctls for several operations:
  *
- * syncobj have a kref reference count, but also have an optional file.
- * The file is only created once the syncobj is exported.
- * The file takes a reference on the kref.
+ * - Creation and destruction of syncobjs
+ * - Import and export of syncobjs to/from a syncobj file descriptor
+ * - Import and export a syncobj's underlying fence to/from a sync file
+ * - Reset a syncobj (set its fence to NULL)
+ * - Signal a syncobj (set a trivially signaled fence)
+ * - Wait for a syncobj's fence to appear and be signaled
+ *
+ * At its core, a syncobj is simply a wrapper around a pointer to a struct
+ * &dma_fence which may be NULL.
+ * When a syncobj is first created, its pointer is either NULL or a pointer
+ * to an already signaled fence depending on whether the
+ * &DRM_SYNCOBJ_CREATE_SIGNALED flag is passed to
+ * &DRM_IOCTL_SYNCOBJ_CREATE.
+ * When GPU work which signals a syncobj is enqueued in a DRM driver,
+ * the syncobj fence is replaced with a fence which will be signaled by the
+ * completion of that work.
+ * When GPU work which waits on a syncobj is enqueued in a DRM driver, the
+ * driver retrieves the syncobj's current fence at the time the work is
+ * enqueued and waits on that fence before submitting the work to hardware.
+ * If the syncobj's fence is NULL, the enqueue operation is expected to fail.
+ * All manipulation of the syncobj's fence happens in terms of the current
+ * fence at the time the ioctl is called by userspace regardless of whether
+ * that operation is an immediate host-side operation (signal or reset) or
+ * an operation which is enqueued in some driver queue.
+ * &DRM_IOCTL_SYNCOBJ_RESET and &DRM_IOCTL_SYNCOBJ_SIGNAL can be used to
+ * manipulate a syncobj from the host by resetting its pointer to NULL or
+ * setting its pointer to a fence which is already signaled.
+ *
+ *
+ * Host-side wait on syncobjs
+ * --------------------------
+ *
+ * &DRM_IOCTL_SYNCOBJ_WAIT takes an array of syncobj handles and does a
+ * host-side wait on all of the syncobj fences simultaneously.
+ * If &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL is set, the wait ioctl will wait on
+ * all of the syncobj fences to be signaled before it returns.
+ * Otherwise, it returns once at least one syncobj fence has been signaled
+ * and the index of a signaled fence is written back to the client.
+ *
+ * Unlike the enqueued GPU work dependencies which fail if they see a NULL
+ * fence in a syncobj, if &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT is set,
+ * the host-side wait will first wait for the syncobj to receive a non-NULL
+ * fence and then wait on that fence.
+ * If &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT is not set and any one of the
+ * syncobjs in the array has a NULL fence, -EINVAL will be returned.
+ * Assuming the syncobj starts off with a NULL fence, this allows a client
+ * to do a host wait in one thread (or process) which waits on GPU work
+ * submitted in another thread (or process) without having to manually
+ * synchronize between the two.
+ * This requirement is inherited from the Vulkan fence API.
+ *
+ *
+ * Import/export of syncobjs
+ * -------------------------
+ *
+ * &DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE and &DRM_IOCTL_SYNCOBJ_HANDLE_TO_FD
+ * provide two mechanisms for import/export of syncobjs.
+ *
+ * The first lets the client import or export an entire syncobj to a file
+ * descriptor.
+ * These fd's are opaque and have no other use case, except passing the
+ * syncobj between processes.
+ * All exported file descriptors and any syncobj handles created as a
+ * result of importing those file descriptors own a reference to the
+ * same underlying struct &drm_syncobj and the syncobj can be used
+ * persistently across all the processes with which it is shared.
+ * The syncobj is freed only once the last reference is dropped.
+ * Unlike dma-buf, importing a syncobj creates a new handle (with its own
+ * reference) for every import instead of de-duplicating.
+ * The primary use-case of this persistent import/export is for shared
+ * Vulkan fences and semaphores.
+ *
+ * The second import/export mechanism, which is indicated by
+ * &DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE or
+ * &DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE lets the client
+ * import/export the syncobj's current fence from/to a &sync_file.
+ * When a syncobj is exported to a sync file, that sync file wraps the
+ * syncobj's fence at the time of export and any later signal or reset
+ * operations on the syncobj will not affect the exported sync file.
+ * When a sync file is imported into a syncobj, the syncobj's fence is set
+ * to the fence wrapped by that sync file.
+ * Because sync files are immutable, resetting or signaling the syncobj
+ * will not affect any sync files whose fences have been imported into the
+ * syncobj.
  */

 #include <linux/anon_inodes.h>


@@ -397,13 +397,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
 	}

 	if (op & ETNA_PREP_NOSYNC) {
-		if (!reservation_object_test_signaled_rcu(obj->resv,
-							  write))
+		if (!dma_resv_test_signaled_rcu(obj->resv,
+						write))
 			return -EBUSY;
 	} else {
 		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);

-		ret = reservation_object_wait_timeout_rcu(obj->resv,
-							  write, true, remain);
+		ret = dma_resv_wait_timeout_rcu(obj->resv,
+						write, true, remain);
 		if (ret <= 0)
 			return ret == 0 ? -ETIMEDOUT : ret;

@@ -459,8 +459,8 @@ static void etnaviv_gem_describe_fence(struct dma_fence *fence,
 static void etnaviv_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 {
 	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
-	struct reservation_object *robj = obj->resv;
-	struct reservation_object_list *fobj;
+	struct dma_resv *robj = obj->resv;
+	struct dma_resv_list *fobj;
 	struct dma_fence *fence;
 	unsigned long off = drm_vma_node_start(&obj->vma_node);


@@ -6,7 +6,7 @@
 #ifndef __ETNAVIV_GEM_H__
 #define __ETNAVIV_GEM_H__

-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include "etnaviv_cmdbuf.h"
 #include "etnaviv_drv.h"


@@ -4,7 +4,7 @@
  */

 #include <linux/dma-fence-array.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/sync_file.h>
 #include "etnaviv_cmdbuf.h"
 #include "etnaviv_drv.h"

@@ -165,10 +165,10 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)

 	for (i = 0; i < submit->nr_bos; i++) {
 		struct etnaviv_gem_submit_bo *bo = &submit->bos[i];
-		struct reservation_object *robj = bo->obj->base.resv;
+		struct dma_resv *robj = bo->obj->base.resv;

 		if (!(bo->flags & ETNA_SUBMIT_BO_WRITE)) {
-			ret = reservation_object_reserve_shared(robj, 1);
+			ret = dma_resv_reserve_shared(robj, 1);
 			if (ret)
 				return ret;
 		}

@@ -177,13 +177,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
 			continue;

 		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
-			ret = reservation_object_get_fences_rcu(robj, &bo->excl,
-								&bo->nr_shared,
-								&bo->shared);
+			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
+						      &bo->nr_shared,
+						      &bo->shared);
 			if (ret)
 				return ret;
 		} else {
-			bo->excl = reservation_object_get_excl_rcu(robj);
+			bo->excl = dma_resv_get_excl_rcu(robj);
 		}
 	}

@@ -199,10 +199,10 @@ static void submit_attach_object_fences(struct etnaviv_gem_submit *submit)
 		struct drm_gem_object *obj = &submit->bos[i].obj->base;

 		if (submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE)
-			reservation_object_add_excl_fence(obj->resv,
-							  submit->out_fence);
+			dma_resv_add_excl_fence(obj->resv,
+						submit->out_fence);
 		else
-			reservation_object_add_shared_fence(obj->resv,
-							    submit->out_fence);
+			dma_resv_add_shared_fence(obj->resv,
+						  submit->out_fence);

 		submit_unlock_object(submit, i);


@@ -65,17 +65,9 @@ static const struct drm_connector_funcs fsl_dcu_drm_connector_funcs = {
 static int fsl_dcu_drm_connector_get_modes(struct drm_connector *connector)
 {
 	struct fsl_dcu_drm_connector *fsl_connector;
-	int (*get_modes)(struct drm_panel *panel);
-	int num_modes = 0;

 	fsl_connector = to_fsl_dcu_connector(connector);
-	if (fsl_connector->panel && fsl_connector->panel->funcs &&
-	    fsl_connector->panel->funcs->get_modes) {
-		get_modes = fsl_connector->panel->funcs->get_modes;
-		num_modes = get_modes(fsl_connector->panel);
-	}

-	return num_modes;
+	return drm_panel_get_modes(fsl_connector->panel);
 }

 static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector, static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,


@@ -13,10 +13,10 @@
 #include <sound/asoundef.h>
 #include <sound/hdmi-codec.h>

-#include <drm/drmP.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_of.h>
+#include <drm/drm_print.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/i2c/tda998x.h>


@@ -29,7 +29,7 @@
 #include <linux/intel-iommu.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/slab.h>
 #include <linux/vgaarb.h>

@@ -14431,7 +14431,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
 		if (ret < 0)
 			return ret;

-		fence = reservation_object_get_excl_rcu(obj->base.resv);
+		fence = dma_resv_get_excl_rcu(obj->base.resv);
 		if (fence) {
 			add_rps_boost_after_vblank(new_state->crtc, fence);
 			dma_fence_put(fence);


@@ -82,7 +82,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_i915_gem_busy *args = data;
 	struct drm_i915_gem_object *obj;
-	struct reservation_object_list *list;
+	struct dma_resv_list *list;
 	unsigned int seq;
 	int err;
@@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 	 * Alternatively, we can trade that extra information on read/write
 	 * activity with
 	 *	args->busy =
-	 *		!reservation_object_test_signaled_rcu(obj->resv, true);
+	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
 	 * to report the overall busyness. This is what the wait-ioctl does.
 	 *
 	 */


@@ -147,7 +147,7 @@ bool i915_gem_clflush_object(struct drm_i915_gem_object *obj,
 					true, I915_FENCE_TIMEOUT,
 					I915_FENCE_GFP);
 
-	reservation_object_add_excl_fence(obj->base.resv,
+	dma_resv_add_excl_fence(obj->base.resv,
 				&clflush->dma);
 	i915_sw_fence_commit(&clflush->wait);


@@ -287,7 +287,7 @@ int i915_gem_schedule_fill_pages_blt(struct drm_i915_gem_object *obj,
 	if (err < 0) {
 		dma_fence_set_error(&work->dma, err);
 	} else {
-		reservation_object_add_excl_fence(obj->base.resv, &work->dma);
+		dma_resv_add_excl_fence(obj->base.resv, &work->dma);
 		err = 0;
 	}
 	i915_gem_object_unlock(obj);


@@ -6,7 +6,7 @@
 #include <linux/dma-buf.h>
 #include <linux/highmem.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 
 #include "i915_drv.h"
 #include "i915_gem_object.h"


@@ -5,7 +5,7 @@
  */
 
 #include <linux/intel-iommu.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/sync_file.h>
 #include <linux/uaccess.h>
@@ -1242,7 +1242,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 		goto skip_request;
 
 	i915_vma_lock(batch);
-	GEM_BUG_ON(!reservation_object_test_signaled_rcu(batch->resv, true));
+	GEM_BUG_ON(!dma_resv_test_signaled_rcu(batch->resv, true));
 	err = i915_vma_move_to_active(batch, rq, 0);
 	i915_vma_unlock(batch);
 	if (err)
@@ -1313,7 +1313,7 @@ relocate_entry(struct i915_vma *vma,
 	if (!eb->reloc_cache.vaddr &&
 	    (DBG_FORCE_RELOC == FORCE_GPU_RELOC ||
-	     !reservation_object_test_signaled_rcu(vma->resv, true))) {
+	     !dma_resv_test_signaled_rcu(vma->resv, true))) {
 		const unsigned int gen = eb->reloc_cache.gen;
 		unsigned int len;
 		u32 *batch;


@@ -78,7 +78,7 @@ i915_gem_object_lock_fence(struct drm_i915_gem_object *obj)
 		       I915_FENCE_GFP) < 0)
 		goto err;
 
-	reservation_object_add_excl_fence(obj->base.resv, &stub->dma);
+	dma_resv_add_excl_fence(obj->base.resv, &stub->dma);
 
 	return &stub->dma;


@@ -152,7 +152,7 @@ static void __i915_gem_free_object_rcu(struct rcu_head *head)
 		container_of(head, typeof(*obj), rcu);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
-	reservation_object_fini(&obj->base._resv);
+	dma_resv_fini(&obj->base._resv);
 	i915_gem_object_free(obj);
 
 	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));


@@ -99,22 +99,22 @@ i915_gem_object_put(struct drm_i915_gem_object *obj)
 	__drm_gem_object_put(&obj->base);
 }
 
-#define assert_object_held(obj) reservation_object_assert_held((obj)->base.resv)
+#define assert_object_held(obj) dma_resv_assert_held((obj)->base.resv)
 
 static inline void i915_gem_object_lock(struct drm_i915_gem_object *obj)
 {
-	reservation_object_lock(obj->base.resv, NULL);
+	dma_resv_lock(obj->base.resv, NULL);
 }
 
 static inline int
 i915_gem_object_lock_interruptible(struct drm_i915_gem_object *obj)
 {
-	return reservation_object_lock_interruptible(obj->base.resv, NULL);
+	return dma_resv_lock_interruptible(obj->base.resv, NULL);
 }
 
 static inline void i915_gem_object_unlock(struct drm_i915_gem_object *obj)
 {
-	reservation_object_unlock(obj->base.resv);
+	dma_resv_unlock(obj->base.resv);
 }
 
 struct dma_fence *
@@ -367,7 +367,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
 	struct dma_fence *fence;
 
 	rcu_read_lock();
-	fence = reservation_object_get_excl_rcu(obj->base.resv);
+	fence = dma_resv_get_excl_rcu(obj->base.resv);
 	rcu_read_unlock();
 
 	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))


@@ -31,7 +31,7 @@ i915_gem_object_wait_fence(struct dma_fence *fence,
 }
 
 static long
-i915_gem_object_wait_reservation(struct reservation_object *resv,
+i915_gem_object_wait_reservation(struct dma_resv *resv,
 				 unsigned int flags,
 				 long timeout)
 {
@@ -43,7 +43,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 		unsigned int count, i;
 		int ret;
 
-		ret = reservation_object_get_fences_rcu(resv,
+		ret = dma_resv_get_fences_rcu(resv,
 					&excl, &count, &shared);
 		if (ret)
 			return ret;
@@ -72,7 +72,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 		 */
 		prune_fences = count && timeout >= 0;
 	} else {
-		excl = reservation_object_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_rcu(resv);
 	}
 
 	if (excl && timeout >= 0)
@@ -84,10 +84,10 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 	 * Opportunistically prune the fences iff we know they have *all* been
 	 * signaled.
 	 */
-	if (prune_fences && reservation_object_trylock(resv)) {
-		if (reservation_object_test_signaled_rcu(resv, true))
-			reservation_object_add_excl_fence(resv, NULL);
-		reservation_object_unlock(resv);
+	if (prune_fences && dma_resv_trylock(resv)) {
+		if (dma_resv_test_signaled_rcu(resv, true))
+			dma_resv_add_excl_fence(resv, NULL);
+		dma_resv_unlock(resv);
 	}
 
 	return timeout;
@@ -140,7 +140,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 		unsigned int count, i;
 		int ret;
 
-		ret = reservation_object_get_fences_rcu(obj->base.resv,
+		ret = dma_resv_get_fences_rcu(obj->base.resv,
 					&excl, &count, &shared);
 		if (ret)
 			return ret;
@@ -152,7 +152,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 		kfree(shared);
 	} else {
-		excl = reservation_object_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_rcu(obj->base.resv);
 	}
 
 	if (excl) {


@@ -112,18 +112,18 @@ __dma_fence_signal__timestamp(struct dma_fence *fence, ktime_t timestamp)
 }
 
 static void
-__dma_fence_signal__notify(struct dma_fence *fence)
+__dma_fence_signal__notify(struct dma_fence *fence,
+			   const struct list_head *list)
 {
 	struct dma_fence_cb *cur, *tmp;
 
 	lockdep_assert_held(fence->lock);
 	lockdep_assert_irqs_disabled();
 
-	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+	list_for_each_entry_safe(cur, tmp, list, node) {
 		INIT_LIST_HEAD(&cur->node);
 		cur->func(fence, cur);
 	}
-	INIT_LIST_HEAD(&fence->cb_list);
 }
 
 void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
@@ -185,11 +185,12 @@ void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
 	list_for_each_safe(pos, next, &signal) {
 		struct i915_request *rq =
 			list_entry(pos, typeof(*rq), signal_link);
+		struct list_head cb_list;
 
-		__dma_fence_signal__timestamp(&rq->fence, timestamp);
 		spin_lock(&rq->lock);
-		__dma_fence_signal__notify(&rq->fence);
+		list_replace(&rq->fence.cb_list, &cb_list);
+		__dma_fence_signal__timestamp(&rq->fence, timestamp);
+		__dma_fence_signal__notify(&rq->fence, &cb_list);
 		spin_unlock(&rq->lock);
 
 		i915_request_put(rq);


@@ -43,7 +43,7 @@
 #include <linux/mm_types.h>
 #include <linux/perf_event.h>
 #include <linux/pm_qos.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/shmem_fs.h>
 #include <linux/stackdepot.h>


@@ -29,7 +29,7 @@
 #include <drm/i915_drm.h>
 #include <linux/dma-fence-array.h>
 #include <linux/kthread.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/stop_machine.h>


@@ -94,10 +94,10 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
 	list = &pool->cache_list[n];
 	list_for_each_entry(obj, list, batch_pool_link) {
-		struct reservation_object *resv = obj->base.resv;
+		struct dma_resv *resv = obj->base.resv;
 
 		/* The batches are strictly LRU ordered */
-		if (!reservation_object_test_signaled_rcu(resv, true))
+		if (!dma_resv_test_signaled_rcu(resv, true))
 			break;
 
 		/*
@@ -109,9 +109,9 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
 		 * than replace the existing fence.
 		 */
 		if (rcu_access_pointer(resv->fence)) {
-			reservation_object_lock(resv, NULL);
-			reservation_object_add_excl_fence(resv, NULL);
-			reservation_object_unlock(resv);
+			dma_resv_lock(resv, NULL);
+			dma_resv_add_excl_fence(resv, NULL);
+			dma_resv_unlock(resv);
 		}
 
 		if (obj->base.size >= size)


@@ -1038,7 +1038,7 @@ i915_request_await_object(struct i915_request *to,
 		struct dma_fence **shared;
 		unsigned int count, i;
 
-		ret = reservation_object_get_fences_rcu(obj->base.resv,
+		ret = dma_resv_get_fences_rcu(obj->base.resv,
 					&excl, &count, &shared);
 		if (ret)
 			return ret;
@@ -1055,7 +1055,7 @@ i915_request_await_object(struct i915_request *to,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = reservation_object_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_rcu(obj->base.resv);
 	}
 
 	if (excl) {


@@ -7,7 +7,7 @@
 #include <linux/slab.h>
 #include <linux/dma-fence.h>
 #include <linux/irq_work.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 
 #include "i915_sw_fence.h"
 #include "i915_selftest.h"
@@ -510,7 +510,7 @@ int __i915_sw_fence_await_dma_fence(struct i915_sw_fence *fence,
 }
 
 int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
-				    struct reservation_object *resv,
+				    struct dma_resv *resv,
 				    const struct dma_fence_ops *exclude,
 				    bool write,
 				    unsigned long timeout,
@@ -526,7 +526,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 		struct dma_fence **shared;
 		unsigned int count, i;
 
-		ret = reservation_object_get_fences_rcu(resv,
+		ret = dma_resv_get_fences_rcu(resv,
 					&excl, &count, &shared);
 		if (ret)
 			return ret;
@@ -551,7 +551,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = reservation_object_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_rcu(resv);
 	}
 
 	if (ret >= 0 && excl && excl->ops != exclude) {


@@ -16,7 +16,7 @@
 #include <linux/wait.h>
 
 struct completion;
-struct reservation_object;
+struct dma_resv;
 
 struct i915_sw_fence {
 	wait_queue_head_t wait;
@@ -82,7 +82,7 @@ int i915_sw_fence_await_dma_fence(struct i915_sw_fence *fence,
 				  gfp_t gfp);
 
 int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
-				    struct reservation_object *resv,
+				    struct dma_resv *resv,
 				    const struct dma_fence_ops *exclude,
 				    bool write,
 				    unsigned long timeout,


@@ -890,7 +890,7 @@ static void export_fence(struct i915_vma *vma,
 			 struct i915_request *rq,
 			 unsigned int flags)
 {
-	struct reservation_object *resv = vma->resv;
+	struct dma_resv *resv = vma->resv;
 
 	/*
 	 * Ignore errors from failing to allocate the new fence, we can't
@@ -898,9 +898,9 @@ static void export_fence(struct i915_vma *vma,
 	 * synchronisation leading to rendering corruption.
 	 */
 	if (flags & EXEC_OBJECT_WRITE)
-		reservation_object_add_excl_fence(resv, &rq->fence);
-	else if (reservation_object_reserve_shared(resv, 1) == 0)
-		reservation_object_add_shared_fence(resv, &rq->fence);
+		dma_resv_add_excl_fence(resv, &rq->fence);
+	else if (dma_resv_reserve_shared(resv, 1) == 0)
+		dma_resv_add_shared_fence(resv, &rq->fence);
 }
 
 int i915_vma_move_to_active(struct i915_vma *vma,


@@ -55,7 +55,7 @@ struct i915_vma {
 	struct i915_address_space *vm;
 	const struct i915_vma_ops *ops;
 	struct i915_fence_reg *fence;
-	struct reservation_object *resv; /** Alias of obj->resv */
+	struct dma_resv *resv; /** Alias of obj->resv */
 	struct sg_table *pages;
 	void __iomem *iomap;
 	void *private; /* owned by creator */
@@ -299,16 +299,16 @@ void i915_vma_close(struct i915_vma *vma);
 void i915_vma_reopen(struct i915_vma *vma);
 void i915_vma_destroy(struct i915_vma *vma);
 
-#define assert_vma_held(vma) reservation_object_assert_held((vma)->resv)
+#define assert_vma_held(vma) dma_resv_assert_held((vma)->resv)
 
 static inline void i915_vma_lock(struct i915_vma *vma)
 {
-	reservation_object_lock(vma->resv, NULL);
+	dma_resv_lock(vma->resv, NULL);
 }
 
 static inline void i915_vma_unlock(struct i915_vma *vma)
 {
-	reservation_object_unlock(vma->resv);
+	dma_resv_unlock(vma->resv);
 }
 
 int __i915_vma_do_pin(struct i915_vma *vma,


@@ -124,14 +124,11 @@ static void imx_ldb_ch_set_bus_format(struct imx_ldb_channel *imx_ldb_ch,
 static int imx_ldb_connector_get_modes(struct drm_connector *connector)
 {
 	struct imx_ldb_channel *imx_ldb_ch = con_to_imx_ldb_ch(connector);
-	int num_modes = 0;
+	int num_modes;
 
-	if (imx_ldb_ch->panel && imx_ldb_ch->panel->funcs &&
-	    imx_ldb_ch->panel->funcs->get_modes) {
-		num_modes = imx_ldb_ch->panel->funcs->get_modes(imx_ldb_ch->panel);
-		if (num_modes > 0)
-			return num_modes;
-	}
+	num_modes = drm_panel_get_modes(imx_ldb_ch->panel);
+	if (num_modes > 0)
+		return num_modes;
 
 	if (!imx_ldb_ch->edid && imx_ldb_ch->ddc)
 		imx_ldb_ch->edid = drm_get_edid(connector, imx_ldb_ch->ddc);


@@ -47,14 +47,11 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
 {
 	struct imx_parallel_display *imxpd = con_to_imxpd(connector);
 	struct device_node *np = imxpd->dev->of_node;
-	int num_modes = 0;
+	int num_modes;
 
-	if (imxpd->panel && imxpd->panel->funcs &&
-	    imxpd->panel->funcs->get_modes) {
-		num_modes = imxpd->panel->funcs->get_modes(imxpd->panel);
-		if (num_modes > 0)
-			return num_modes;
-	}
+	num_modes = drm_panel_get_modes(imxpd->panel);
+	if (num_modes > 0)
+		return num_modes;
 
 	if (imxpd->edid) {
 		drm_connector_update_edid_property(connector, imxpd->edid);


@@ -136,7 +136,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo,
 	int err = 0;
 
 	if (!write) {
-		err = reservation_object_reserve_shared(bo->gem.resv, 1);
+		err = dma_resv_reserve_shared(bo->gem.resv, 1);
 		if (err)
 			return err;
 	}
@@ -296,9 +296,9 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 	for (i = 0; i < submit->nr_bos; i++) {
 		if (submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE)
-			reservation_object_add_excl_fence(bos[i]->gem.resv, fence);
+			dma_resv_add_excl_fence(bos[i]->gem.resv, fence);
 		else
-			reservation_object_add_shared_fence(bos[i]->gem.resv, fence);
+			dma_resv_add_shared_fence(bos[i]->gem.resv, fence);
 	}
 
 	lima_gem_unlock_bos(bos, submit->nr_bos, &ctx);
@@ -341,7 +341,7 @@ int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, s64 timeout_ns)
 	timeout = drm_timeout_abs_to_jiffies(timeout_ns);
 
-	ret = drm_gem_reservation_object_wait(file, handle, write, timeout);
+	ret = drm_gem_dma_resv_wait(file, handle, write, timeout);
 	if (ret == 0)
 		ret = timeout ? -ETIMEDOUT : -EBUSY;
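The lima submit loop above follows the usual reservation-object rule: a write installs the exclusive fence, a read adds a shared fence after reserving a slot. A toy userspace model of that bookkeeping, with invented types (this is deliberately not the kernel dma_resv API; the "new writer drops old readers" behavior is a simplifying assumption for illustration):

```c
#include <assert.h>

#define MAX_SHARED 8

/* Toy reservation object: one exclusive fence id plus shared fence ids.
 * Fence id 0 means "no fence". Illustrative only. */
struct toy_resv {
	int excl;
	int shared[MAX_SHARED];
	int nshared;
};

static void toy_add_excl_fence(struct toy_resv *r, int fence)
{
	/* Assumption for the sketch: a new writer supersedes all readers. */
	r->excl = fence;
	r->nshared = 0;
}

static int toy_add_shared_fence(struct toy_resv *r, int fence)
{
	if (r->nshared >= MAX_SHARED)
		return -1; /* the kernel reserves a slot beforehand instead */
	r->shared[r->nshared++] = fence;
	return 0;
}
```

This mirrors the branch in the hunk: `LIMA_SUBMIT_BO_WRITE` takes the exclusive path, everything else the shared path.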


@@ -4,7 +4,7 @@
 */
 
 #include <linux/dma-buf.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 
 #include <drm/drm_modeset_helper.h>
 #include <drm/drm_fb_helper.h>


@@ -265,11 +265,11 @@ static void meson_crtc_enable_vd1(struct meson_drm *priv)
 static void meson_g12a_crtc_enable_vd1(struct meson_drm *priv)
 {
-	writel_relaxed(((1 << 16) | /* post bld premult*/
-			(1 << 8) | /* post src */
-			(1 << 4) | /* pre bld premult*/
-			(1 << 0)),
+	writel_relaxed(VD_BLEND_PREBLD_SRC_VD1 |
+		       VD_BLEND_PREBLD_PREMULT_EN |
+		       VD_BLEND_POSTBLD_SRC_VD1 |
+		       VD_BLEND_POSTBLD_PREMULT_EN,
 		       priv->io_base + _REG(VD1_BLEND_SRC_CTRL));
 }
 
 void meson_crtc_irq(struct meson_drm *priv)
@@ -487,7 +487,12 @@ void meson_crtc_irq(struct meson_drm *priv)
 		writel_relaxed(priv->viu.vd1_range_map_cr,
 			       priv->io_base + meson_crtc->viu_offset +
 			       _REG(VD1_IF0_RANGE_MAP_CR));
-		writel_relaxed(0x78404,
+		writel_relaxed(VPP_VSC_BANK_LENGTH(4) |
+			       VPP_HSC_BANK_LENGTH(4) |
+			       VPP_SC_VD_EN_ENABLE |
+			       VPP_SC_TOP_EN_ENABLE |
+			       VPP_SC_HSC_EN_ENABLE |
+			       VPP_SC_VSC_EN_ENABLE,
 			       priv->io_base + _REG(VPP_SC_MISC));
 		writel_relaxed(priv->viu.vpp_pic_in_height,
 			       priv->io_base + _REG(VPP_PIC_IN_HEIGHT));
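The two meson hunks above replace magic register words, the old `(1 << 16) | (1 << 8) | (1 << 4) | (1 << 0)` blend value and the `0x78404` scaler value, with named bit macros. A sketch checking that plausible macro definitions reproduce the old literals; the bit positions are inferred from the old constants and the in-tree code comments, not copied from `meson_registers.h`:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1U << (n))

/* Hypothetical reconstructions of the named blend bits (positions
 * inferred from the removed comments: pre src, pre premult,
 * post src, post premult). */
#define VD_BLEND_PREBLD_SRC_VD1     BIT(0)
#define VD_BLEND_PREBLD_PREMULT_EN  BIT(4)
#define VD_BLEND_POSTBLD_SRC_VD1    BIT(8)
#define VD_BLEND_POSTBLD_PREMULT_EN BIT(16)

/* Hypothetical reconstructions of the scaler bits; chosen so the
 * combination below equals the removed literal 0x78404. */
#define VPP_VSC_BANK_LENGTH(x)  ((x) & 0x7)        /* bits 2:0 */
#define VPP_HSC_BANK_LENGTH(x)  (((x) & 0x7) << 8) /* bits 10:8 */
#define VPP_SC_VD_EN_ENABLE     BIT(15)
#define VPP_SC_TOP_EN_ENABLE    BIT(16)
#define VPP_SC_HSC_EN_ENABLE    BIT(17)
#define VPP_SC_VSC_EN_ENABLE    BIT(18)
```

The point of the cleanup is exactly this equivalence: the new named expressions must OR together to the old magic numbers, so the register writes are unchanged.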


@@ -140,10 +140,28 @@ static struct regmap_config meson_regmap_config = {
 static void meson_vpu_init(struct meson_drm *priv)
 {
-	writel_relaxed(0x210000, priv->io_base + _REG(VPU_RDARB_MODE_L1C1));
-	writel_relaxed(0x10000, priv->io_base + _REG(VPU_RDARB_MODE_L1C2));
-	writel_relaxed(0x900000, priv->io_base + _REG(VPU_RDARB_MODE_L2C1));
-	writel_relaxed(0x20000, priv->io_base + _REG(VPU_WRARB_MODE_L2C1));
+	u32 value;
+
+	/*
+	 * Slave dc0 and dc5 connected to master port 1.
+	 * By default other slaves are connected to master port 0.
+	 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(0, 1) |
+		VPU_RDARB_SLAVE_TO_MASTER_PORT(5, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_RDARB_MODE_L1C1));
+
+	/* Slave dc0 connected to master port 1 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(0, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_RDARB_MODE_L1C2));
+
+	/* Slave dc4 and dc7 connected to master port 1 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(4, 1) |
+		VPU_RDARB_SLAVE_TO_MASTER_PORT(7, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_RDARB_MODE_L2C1));
+
+	/* Slave dc1 connected to master port 1 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(1, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_WRARB_MODE_L2C1));
 }
 
 static void meson_remove_framebuffers(void)
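The rewritten `meson_vpu_init()` replaces raw arbiter words with `VPU_RDARB_SLAVE_TO_MASTER_PORT()`. The removed literals constrain what the macro must compute: `0x210000` is dc0|dc5 on port 1, `0x10000` is dc0, `0x900000` is dc4|dc7, `0x20000` is dc1, which all work out if slave dc N's port-select bit sits at bit 16 + N. A hedged reconstruction (shift inferred from the old values, not copied from the driver headers):

```c
#include <assert.h>
#include <stdint.h>

/* Inferred: slave dc N's master-port select lives at bit (16 + N),
 * so port 1 sets that bit and port 0 (the default) leaves it clear. */
#define VPU_RDARB_SLAVE_TO_MASTER_PORT(dc, port) \
	((uint32_t)(port) << (16 + (dc)))
```

Each old magic number can be recovered by OR-ing the per-slave terms, which is what the new code does.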


@@ -429,6 +429,8 @@ static int dw_hdmi_phy_init(struct dw_hdmi *hdmi, void *data,
 	/* Enable internal pixclk, tmds_clk, spdif_clk, i2s_clk, cecclk */
 	dw_hdmi_top_write_bits(dw_hdmi, HDMITX_TOP_CLK_CNTL,
 			       0x3, 0x3);
+
+	/* Enable cec_clk and hdcp22_tmdsclk_en */
 	dw_hdmi_top_write_bits(dw_hdmi, HDMITX_TOP_CLK_CNTL,
 			       0x3 << 4, 0x3 << 4);
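`dw_hdmi_top_write_bits()` in the hunk above is a masked read-modify-write: only the bits selected by the mask change, which is why the two calls can enable their clock bits independently in the same register. A userspace model of such a helper, assuming a plain array as a stand-in for MMIO (the name and register index are illustrative, not the driver's real layout):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t fake_regs[64]; /* stand-in for the HDMI TOP register file */

/* Masked read-modify-write, modelled on dw_hdmi_top_write_bits():
 * bits outside `mask` keep their current value. */
static void top_write_bits(unsigned int reg, uint32_t mask, uint32_t val)
{
	fake_regs[reg] = (fake_regs[reg] & ~mask) | (val & mask);
}
```

Applied back to back with masks `0x3` and `0x3 << 4`, the second call leaves the bits set by the first untouched.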

Some files were not shown because too many files have changed in this diff.