
Merge tag 'drm-misc-next-2019-12-16' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v5.6:

UAPI Changes:
- Add support for DMA-BUF HEAPS.

Cross-subsystem Changes:
- MIPI DSI definition updates, pulled into drm-intel as well.
- Add lockdep annotations for dma_resv vs mmap_sem and fs_reclaim.
- Remove support for dma-buf kmap/kunmap.
- Constify fb_ops in all fbdev drivers, including drm drivers and drm-core, and media as well.

Core Changes:
- Small cleanups to ttm.
- Fix SCDC definition.
- Assorted cleanups to core.
- Add todo to remove load/unload hooks, and use generic fbdev emulation.
- Assorted documentation updates.
- Use blocking ww lock in ttm fault handler.
- Remove drm_fb_helper_fbdev_setup/teardown.
- Warning fixes with W=1 for atomic.
- Use drm_debug_enabled() instead of drm_debug flag testing in various drivers.
- Fall back to non-tiled mode in fbdev emulation when not all tiles are present. (Later reverted.)
- Various kconfig indentation fixes in core and drivers.
- Fix freeing transactions in dp-mst correctly.
- Sean Paul is stepping down as core maintainer. :-(
- Add lockdep annotations for atomic locks vs dma-resv.
- Prevent use-after-free for a bad job in drm_scheduler.
- Fill out all block sizes in the P01x and P210 definitions.
- Avoid division by zero in drm/rect, and fix bounds.
- Add drm/rect selftests.
- Add aspect ratio and alternate clocks for HDMI 4k modes.
- Add todo for drm_framebuffer_funcs and fb_create cleanup.
- Drop DRM_AUTH for prime import/export ioctls.
- Clear DP-MST payload id tables downstream when initializing.
- Fix for DSC throughput definition.
- Add extra FEC definitions.
- Fix fake offset in drm_gem_object_funcs.mmap.
- Stop using encoder->bridge in core directly.
- Handle bridge chaining slightly better.
- Add backlight support to drm/panel, and use it in many panel drivers.
- Increase max number of y420 modes from 128 to 256, as preparation to add the new modes.

Driver Changes:
- Small fixes all over.
- Fix documentation in vkms.
- Fix mmap_sem vs dma_resv in nouveau.
- Small cleanup in komeda.
- Add page flip support in gma500 for psb/cdv.
- Add ddc symlink in the connector sysfs directory for many drivers.
- Add support for the Analogix ANX6345 bridge, and fix small bugs in it.
- Add atomic modesetting support to ast.
- Fix radeon fault handler VMA race.
- Switch udl to use generic shmem helpers.
- Unconditional vblank handling for mcde.
- Miscellaneous fixes to mcde.
- Tweak debug output from komeda using debugfs.
- Add gamma and color transform support to komeda for DOU-IPS.
- Add support for sony acx424AKP panel.
- Various small cleanups to gma500.
- Use generic fbdev emulation in udl, and replace udl_framebuffer with generic implementation.
- Add support for Logic PD Type 28 panel.
- Use drm_panel_* wrapper functions in exynos/tegra/msm.
- Add devicetree bindings for generic DSI panels.
- Don't include drm_pci.h directly in many drivers.
- Add support for begin/end_cpu_access in udmabuf.
- Stop using drm_get_pci_dev in gma500 and mgag200.
- Fixes to UDL damage handling, and use dma_buf_begin/end_cpu_access.
- Add devfreq thermal support to panfrost.
- Fix hotplug with daisy chained monitors by removing VCPI when disabling topology manager.
- meson: Add support for OSD1 plane AFBC commit.
- Stop displaying garbage when toggling ast primary plane on/off.
- More cleanups and fixes to UDL.
- Add D32 support to komeda.
- Remove global copy of drm_dev in gma500.
- Add support for Boe Himax8279d MIPI-DSI LCD panel.
- Add support for ingenic JZ4770 panel.
- Small NULL pointer dereference fix in ingenic.
- Remove support for the special tfp410 driver, as there is a generic way to do it.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ba73535a-9334-5302-2e1f-5208bd7390bd@linux.intel.com
Daniel Vetter 2019-12-17 13:57:54 +01:00
commit 6c56e8adc0
506 changed files with 9862 additions and 6114 deletions


@ -0,0 +1,91 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/dsi-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Common Properties for DSI Display Panels
maintainers:
- Linus Walleij <linus.walleij@linaro.org>
description: |
This document defines device tree properties common to DSI, Display
Serial Interface controllers and attached panels. It doesn't constitute
a device tree binding specification by itself but is meant to be referenced
by device tree bindings.
When referenced from panel device tree bindings the properties defined in
this document are defined as follows. The panel device tree bindings are
responsible for defining whether each property is required or optional.
Notice: this binding concerns DSI panels connected directly to a master
without any intermediate port graph to the panel. Each DSI master
can control one to four virtual channels to one panel. Each virtual
channel should have a node "panel" for their virtual channel with their
reg-property set to the virtual channel number, usually there is just
one virtual channel, number 0.
properties:
$nodename:
pattern: "^dsi-controller(@.*)?$"
"#address-cells":
const: 1
"#size-cells":
const: 0
patternProperties:
"^panel@[0-3]$":
description: Panels connected to the DSI link
type: object
properties:
reg:
minimum: 0
maximum: 3
description:
The virtual channel number of a DSI peripheral. Must be in the range
from 0 to 3, as DSI uses a 2-bit addressing scheme. Some DSI
peripherals respond to more than a single virtual channel. In that
case the reg property can take multiple entries, one for each virtual
channel that the peripheral responds to.
clock-master:
type: boolean
description:
Should be enabled if the host is being used in conjunction with
another DSI host to drive the same peripheral. Hardware supporting
such a configuration generally requires the data on both the busses
to be driven by the same clock. Only the DSI host instance
controlling this clock should contain this property.
enforce-video-mode:
type: boolean
description:
The best option is usually to run a panel in command mode, as this
gives better control over the panel hardware. However for different
reasons like broken hardware, missing features or testing, it may be
useful to be able to force a command mode-capable panel into video
mode.
required:
- reg
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi-controller@a0351000 {
reg = <0xa0351000 0x1000>;
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "sony,acx424akp";
reg = <0>;
vddi-supply = <&ab8500_ldo_aux1_reg>;
reset-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
};
};
...


@ -4,6 +4,7 @@ Required properties:
- compatible: one of:
* ingenic,jz4740-lcd
* ingenic,jz4725b-lcd
* ingenic,jz4770-lcd
- reg: LCD registers location and length
- clocks: LCD pixclock and device clock specifiers.
The device clock is only required on the JZ4740.


@ -0,0 +1,42 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/logicpd,type28.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Logic PD Type 28 4.3" WQVGA TFT LCD panel
maintainers:
- Adam Ford <aford173@gmail.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: logicpd,type28
power-supply: true
enable-gpios: true
backlight: true
port: true
required:
- compatible
additionalProperties: false
examples:
- |
lcd0: display {
compatible = "logicpd,type28";
enable-gpios = <&gpio5 27 0>;
backlight = <&backlight>;
port {
lcd_in: endpoint {
remote-endpoint = <&dpi_out>;
};
};
};
...


@ -0,0 +1,49 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/sony,acx424akp.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Sony ACX424AKP 4" 480x864 AMOLED panel
maintainers:
- Linus Walleij <linus.walleij@linaro.org>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: sony,acx424akp
reg: true
reset-gpios: true
vddi-supply:
description: regulator that supplies the vddi voltage
enforce-video-mode: true
required:
- compatible
- reg
- reset-gpios
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi-controller@a0351000 {
compatible = "ste,mcde-dsi";
reg = <0xa0351000 0x1000>;
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "sony,acx424akp";
reg = <0>;
vddi-supply = <&foo>;
reset-gpios = <&foo_gpio 0 GPIO_ACTIVE_LOW>;
};
};
...


@ -1,21 +0,0 @@
Device-Tree bindings for tilcdc DRM TFP410 output driver
Required properties:
- compatible: value should be "ti,tilcdc,tfp410".
- i2c: the phandle for the i2c device to use for DDC
Recommended properties:
- pinctrl-names, pinctrl-0: the pincontrol settings to configure
muxing properly for pins that connect to TFP410 device
- powerdn-gpio: the powerdown GPIO, pulled low to power down the
TFP410 device (for DPMS_OFF)
Example:
dvicape {
compatible = "ti,tilcdc,tfp410";
i2c = <&i2c2>;
pinctrl-names = "default";
pinctrl-0 = <&bone_dvi_cape_dvi_00A1_pins>;
powerdn-gpio = <&gpio2 31 0>;
};


@ -24,9 +24,9 @@ Driver Initialization
At the core of every DRM driver is a :c:type:`struct drm_driver
<drm_driver>` structure. Drivers typically statically initialize
a drm_driver structure, and then pass it to
:c:func:`drm_dev_alloc()` to allocate a device instance. After the
drm_dev_alloc() to allocate a device instance. After the
device instance is fully initialized it can be registered (which makes
it accessible from userspace) using :c:func:`drm_dev_register()`.
it accessible from userspace) using drm_dev_register().
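
A minimal, hedged sketch of that sequence (the "foo" driver name, its feature flags and its probe hook are purely illustrative)::

    #include <linux/platform_device.h>
    #include <drm/drm_drv.h>

    static struct drm_driver foo_drm_driver = {
            .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
            .name  = "foo",
            .desc  = "hypothetical example driver",
            .date  = "20191216",
            .major = 1,
            .minor = 0,
    };

    static int foo_probe(struct platform_device *pdev)
    {
            struct drm_device *drm;
            int ret;

            /* Allocate a device instance from the statically
             * initialized drm_driver structure. */
            drm = drm_dev_alloc(&foo_drm_driver, &pdev->dev);
            if (IS_ERR(drm))
                    return PTR_ERR(drm);

            /* ... hardware and modesetting setup would go here ... */

            /* Make the device visible to userspace. */
            ret = drm_dev_register(drm, 0);
            if (ret)
                    drm_dev_put(drm);
            return ret;
    }
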
The :c:type:`struct drm_driver <drm_driver>` structure
contains static information that describes the driver and features it


@ -3,7 +3,7 @@ Kernel Mode Setting (KMS)
=========================
Drivers must initialize the mode setting core by calling
:c:func:`drm_mode_config_init()` on the DRM device. The function
drm_mode_config_init() on the DRM device. The function
initializes the :c:type:`struct drm_device <drm_device>`
mode_config field and never fails. Once done, mode configuration must
be setup by initializing the following fields.
@ -181,8 +181,7 @@ Setting`_). The somewhat surprising part here is that properties are not
directly instantiated on each object, but free-standing mode objects themselves,
represented by :c:type:`struct drm_property <drm_property>`, which only specify
the type and value range of a property. Any given property can be attached
multiple times to different objects using :c:func:`drm_object_attach_property()
<drm_object_attach_property>`.
multiple times to different objects using drm_object_attach_property().
.. kernel-doc:: include/drm/drm_mode_object.h
:internal:
@ -260,7 +259,8 @@ Taken all together there's two consequences for the atomic design:
drm_connector_state <drm_connector_state>` for connectors. These are the only
objects with userspace-visible and settable state. For internal state drivers
can subclass these structures through embeddeding, or add entirely new state
structures for their globally shared hardware functions.
structures for their globally shared hardware functions, see :c:type:`struct
drm_private_state<drm_private_state>`.
- An atomic update is assembled and validated as an entirely free-standing pile
of structures within the :c:type:`drm_atomic_state <drm_atomic_state>`
@ -269,6 +269,14 @@ Taken all together there's two consequences for the atomic design:
to the driver and modeset objects. This way rolling back an update boils down
to releasing memory and unreferencing objects like framebuffers.
Locking of atomic state structures is internally using :c:type:`struct
drm_modeset_lock <drm_modeset_lock>`. As a general rule the locking shouldn't be
exposed to drivers, instead the right locks should be automatically acquired by
any function that duplicates or peeks into a state, like e.g.
drm_atomic_get_crtc_state(). Locking only protects the software data
structure, ordering of committing state changes to hardware is sequenced using
:c:type:`struct drm_crtc_commit <drm_crtc_commit>`.
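
As a hedged illustration of that rule (driver names are hypothetical), a plane ->atomic_check() hook can pull in the CRTC state like this, with the helper taking the required drm_modeset_lock internally::

    #include <drm/drm_atomic.h>
    #include <drm/drm_plane.h>

    static int foo_plane_atomic_check(struct drm_plane *plane,
                                      struct drm_plane_state *state)
    {
            struct drm_crtc_state *crtc_state;

            if (!state->crtc)
                    return 0;

            /* Duplicates the CRTC state into the atomic update, or
             * returns an ERR_PTR (e.g. -EDEADLK for ww-mutex backoff). */
            crtc_state = drm_atomic_get_crtc_state(state->state,
                                                   state->crtc);
            if (IS_ERR(crtc_state))
                    return PTR_ERR(crtc_state);

            if (!crtc_state->enable)
                    return -EINVAL;
            return 0;
    }
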
Read on in this chapter, and also in :ref:`drm_atomic_helper` for more detailed
coverage of specific topics.
@ -479,6 +487,9 @@ Color Management Properties
.. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
:export:
.. kernel-doc:: include/drm/drm_color_mgmt.h
:internal:
Tile Group Property
-------------------


@ -149,19 +149,19 @@ struct :c:type:`struct drm_gem_object <drm_gem_object>`.
To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded struct
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to :c:func:`drm_gem_object_init()`. The function takes a pointer
to drm_gem_object_init(). The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
GEM uses shmem to allocate anonymous pageable memory.
:c:func:`drm_gem_object_init()` will create an shmfs file of the
drm_gem_object_init() will create an shmfs file of the
requested size and store it into the struct :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.
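
A hedged sketch of that allocation path (the foo wrapper object is hypothetical)::

    #include <linux/err.h>
    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <drm/drm_gem.h>

    struct foo_gem_object {
            struct drm_gem_object base;
            /* driver-specific bookkeeping ... */
    };

    static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
                                                 size_t size)
    {
            struct foo_gem_object *obj;
            int ret;

            obj = kzalloc(sizeof(*obj), GFP_KERNEL);
            if (!obj)
                    return ERR_PTR(-ENOMEM);

            /* Creates the shmfs backing file of the requested size
             * and stores it in obj->base.filp. */
            ret = drm_gem_object_init(dev, &obj->base, PAGE_ALIGN(size));
            if (ret) {
                    kfree(obj);
                    return ERR_PTR(ret);
            }
            return obj;
    }
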
Drivers are responsible for the actual physical pages allocation by
calling :c:func:`shmem_read_mapping_page_gfp()` for each page.
calling shmem_read_mapping_page_gfp() for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
@ -170,20 +170,18 @@ when the driver needs to start a DMA transfer involving the memory).
Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to :c:func:`drm_gem_private_object_init()` instead of
:c:func:`drm_gem_object_init()`. Storage for private GEM objects
must be managed by drivers.
no shmfs backing (called private GEM objects) by initializing them with a call
to drm_gem_private_object_init() instead of drm_gem_object_init(). Storage for
private GEM objects must be managed by drivers.
GEM Objects Lifetime
--------------------
All GEM objects are reference-counted by the GEM core. References can be
acquired and release by :c:func:`calling drm_gem_object_get()` and
:c:func:`drm_gem_object_put()` respectively. The caller must hold the
:c:type:`struct drm_device <drm_device>` struct_mutex lock when calling
:c:func:`drm_gem_object_get()`. As a convenience, GEM provides
:c:func:`drm_gem_object_put_unlocked()` functions that can be called without
acquired and release by calling drm_gem_object_get() and drm_gem_object_put()
respectively. The caller must hold the :c:type:`struct drm_device <drm_device>`
struct_mutex lock when calling drm_gem_object_get(). As a convenience, GEM
provides drm_gem_object_put_unlocked() functions that can be called without
holding the lock.
When the last reference to a GEM object is released the GEM core calls
@ -194,7 +192,7 @@ free the GEM object and all associated resources.
void (\*gem_free_object) (struct drm_gem_object \*obj); Drivers are
responsible for freeing all GEM object resources. This includes the
resources created by the GEM core, which need to be released with
:c:func:`drm_gem_object_release()`.
drm_gem_object_release().
GEM Objects Naming
------------------
@ -210,13 +208,11 @@ to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.
To create a handle for a GEM object drivers call
:c:func:`drm_gem_handle_create()`. The function takes a pointer
to the DRM file and the GEM object and returns a locally unique handle.
When the handle is no longer needed drivers delete it with a call to
:c:func:`drm_gem_handle_delete()`. Finally the GEM object
associated with a handle can be retrieved by a call to
:c:func:`drm_gem_object_lookup()`.
To create a handle for a GEM object drivers call drm_gem_handle_create(). The
function takes a pointer to the DRM file and the GEM object and returns a
locally unique handle. When the handle is no longer needed drivers delete it
with a call to drm_gem_handle_delete(). Finally the GEM object associated with a
handle can be retrieved by a call to drm_gem_object_lookup().
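
A hedged sketch of a driver-specific create ioctl built on these calls, reusing the hypothetical foo_gem_create() helper sketched earlier (the ioctl argument struct is also made up)::

    #include <linux/types.h>
    #include <drm/drm_file.h>
    #include <drm/drm_gem.h>

    struct drm_foo_create {
            __u64 size;
            __u32 handle;
            __u32 pad;
    };

    static int foo_gem_create_ioctl(struct drm_device *dev, void *data,
                                    struct drm_file *file_priv)
    {
            struct drm_foo_create *args = data;
            struct foo_gem_object *obj;
            int ret;

            obj = foo_gem_create(dev, args->size);  /* hypothetical helper */
            if (IS_ERR(obj))
                    return PTR_ERR(obj);

            /* Publish a per-file, locally unique handle. */
            ret = drm_gem_handle_create(file_priv, &obj->base, &args->handle);

            /* The handle now holds its own reference, so the creation
             * reference can be dropped unconditionally. */
            drm_gem_object_put_unlocked(&obj->base);
            return ret;
    }

Resolving a handle back to an object with drm_gem_object_lookup() returns a new reference, which must likewise be dropped with drm_gem_object_put_unlocked() once the caller is done with the object.
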
Handles don't take ownership of GEM objects, they only take a reference
to the object that will be dropped when the handle is destroyed. To
@ -258,7 +254,7 @@ The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
:c:func:`do_mmap()` under the hood. This is often considered
do_mmap() under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.
@ -267,23 +263,22 @@ The second method uses the mmap system call on the DRM file handle. void
offset); DRM identifies the GEM object to be mapped by a fake offset
passed through the mmap offset argument. Prior to being mapped, a GEM
object must thus be associated with a fake offset. To do so, drivers
must call :c:func:`drm_gem_create_mmap_offset()` on the object.
must call drm_gem_create_mmap_offset() on the object.
Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
The GEM core provides a helper method :c:func:`drm_gem_mmap()` to
The GEM core provides a helper method drm_gem_mmap() to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that
:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but
relies on the driver-provided fault handler to map pages individually.
<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map memory to
userspace, but relies on the driver-provided fault handler to map pages
individually.
To use :c:func:`drm_gem_mmap()`, drivers must fill the struct
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field
with a pointer to VM operations.
To use drm_gem_mmap(), drivers must fill the struct :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field with a pointer to VM operations.
The VM operations is a :c:type:`struct vm_operations_struct <vm_operations_struct>`
made up of several fields, the more interesting ones being:
@ -298,9 +293,8 @@ made up of several fields, the more interesting ones being:
The open and close operations must update the GEM object reference
count. Drivers can use the :c:func:`drm_gem_vm_open()` and
:c:func:`drm_gem_vm_close()` helper functions directly as open
and close handlers.
count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
functions directly as open and close handlers.
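
A hedged sketch of how these pieces fit together (the foo fault handler is a stub standing in for a driver's real page-fault code)::

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/module.h>
    #include <drm/drm_drv.h>
    #include <drm/drm_file.h>
    #include <drm/drm_gem.h>
    #include <drm/drm_ioctl.h>

    static vm_fault_t foo_gem_fault(struct vm_fault *vmf)
    {
            /* Map the page backing vmf->pgoff here; stubbed out. */
            return VM_FAULT_SIGBUS;
    }

    static const struct vm_operations_struct foo_gem_vm_ops = {
            .fault = foo_gem_fault,
            .open  = drm_gem_vm_open,
            .close = drm_gem_vm_close,
    };

    static const struct file_operations foo_fops = {
            .owner          = THIS_MODULE,
            .open           = drm_open,
            .release        = drm_release,
            .unlocked_ioctl = drm_ioctl,
            /* Looks up the object by its fake offset and installs
             * drm_driver.gem_vm_ops on the VMA. */
            .mmap           = drm_gem_mmap,
    };

    static struct drm_driver foo_driver = {
            /* ... */
            .gem_vm_ops = &foo_gem_vm_ops,
            .fops       = &foo_fops,
    };
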
The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
@ -312,12 +306,12 @@ Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.
For platforms without MMU the GEM core provides a helper method
:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
this to get a proposed address for the mapping.
drm_gem_cma_get_unmapped_area(). The mmap() routines will call this to get a
proposed address for the mapping.
To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
struct :c:type:`struct file_operations <file_operations>` get_unmapped_area
field with a pointer on :c:func:`drm_gem_cma_get_unmapped_area`.
To use drm_gem_cma_get_unmapped_area(), drivers must fill the struct
:c:type:`struct file_operations <file_operations>` get_unmapped_area field with
a pointer on drm_gem_cma_get_unmapped_area().
More detailed information about get_unmapped_area can be found in
Documentation/nommu-mmap.txt


@ -254,36 +254,45 @@ Validating changes with IGT
There's a collection of tests that aims to cover the whole functionality of
DRM drivers and that can be used to check that changes to DRM drivers or the
core don't regress existing functionality. This test suite is called IGT and
its code can be found in https://cgit.freedesktop.org/drm/igt-gpu-tools/.
its code and instructions to build and run can be found in
https://gitlab.freedesktop.org/drm/igt-gpu-tools/.
To build IGT, start by installing its build dependencies. In Debian-based
systems::
Using VKMS to test DRM API
--------------------------
# apt-get build-dep intel-gpu-tools
VKMS is a software-only model of a KMS driver that is useful for testing
and for running compositors. VKMS aims to enable a virtual display without
the need for a hardware display capability. These characteristics made VKMS
a perfect tool for validating the DRM core behavior and also support the
compositor developer. VKMS makes it possible to test DRM functions in a
virtual machine without display, simplifying the validation of some of the
core changes.
And in Fedora-based systems::
To Validate changes in DRM API with VKMS, start setting the kernel: make
sure to enable VKMS module; compile the kernel with the VKMS enabled and
install it in the target machine. VKMS can be run in a Virtual Machine
(QEMU, virtme or similar). It's recommended the use of KVM with the minimum
of 1GB of RAM and four cores.
# dnf builddep intel-gpu-tools
It's possible to run the IGT-tests in a VM in two ways:
Then clone the repository::
1. Use IGT inside a VM
2. Use IGT from the host machine and write the results in a shared directory.
$ git clone git://anongit.freedesktop.org/drm/igt-gpu-tools
As follow, there is an example of using a VM with a shared directory with
the host machine to run igt-tests. As an example it's used virtme::
Configure the build system and start the build::
$ virtme-run --rwdir /path/for/shared_dir --kdir=path/for/kernel/directory --mods=auto
$ cd igt-gpu-tools && ./autogen.sh && make -j6
Run the igt-tests in the guest machine, as example it's ran the 'kms_flip'
tests::
Download the piglit dependency::
$ /path/for/igt-gpu-tools/scripts/run-tests.sh -p -s -t "kms_flip.*" -v
$ ./scripts/run-tests.sh -d
And run the tests::
$ ./scripts/run-tests.sh -t kms -t core -s
run-tests.sh is a wrapper around piglit that will execute the tests matching
the -t options. A report in HTML format will be available in
./results/html/index.html. Results can be compared with piglit.
In this example, instead of build the igt_runner, Piglit is used
(-p option); it's created html summary of the tests results and it's saved
in the folder "igt-gpu-tools/results"; it's executed only the igt-tests
matching the -t option.
Display CRC Support
-------------------


@ -171,26 +171,43 @@ Contact: Maintainer of the driver you plan to convert
Level: Intermediate
Convert drivers to use drm_fb_helper_fbdev_setup/teardown()
-----------------------------------------------------------
Convert drivers to use drm_fbdev_generic_setup()
------------------------------------------------
Most drivers can use drm_fb_helper_fbdev_setup() except maybe:
- amdgpu which has special logic to decide whether to call
drm_helper_disable_unused_functions()
- armada which isn't atomic and doesn't call
drm_helper_disable_unused_functions()
- i915 which calls drm_fb_helper_initial_config() in a worker
Drivers that use drm_framebuffer_remove() to clean up the fbdev framebuffer can
probably use drm_fb_helper_fbdev_teardown().
Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
atomic modesetting and GEM vmap support. Current generic fbdev emulation
expects the framebuffer in system memory (or system-like memory).
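
A hedged sketch of what the converted registration path then looks like (probe details and driver names are illustrative)::

    #include <drm/drm_drv.h>
    #include <drm/drm_fb_helper.h>

    static int foo_register(struct drm_device *drm)
    {
            int ret;

            ret = drm_dev_register(drm, 0);
            if (ret)
                    return ret;

            /* Generic fbdev emulation; needs atomic modesetting and GEM
             * vmap support. 32 is the preferred bpp, 0 picks a default. */
            drm_fbdev_generic_setup(drm, 32);
            return 0;
    }
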
Contact: Maintainer of the driver you plan to convert
Level: Intermediate
drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
-----------------------------------------------------------------
A lot more drivers could be switched over to the drm_gem_framebuffer helpers.
Various hold-ups:
- Need to switch over to the generic dirty tracking code using
drm_atomic_helper_dirtyfb first (e.g. qxl).
- Need to switch to drm_fbdev_generic_setup(), otherwise a lot of the custom fb
setup code can't be deleted.
- Many drivers wrap drm_gem_fb_create() only to check for valid formats. For
atomic drivers we could check for valid formats by calling
drm_plane_check_pixel_format() against all planes, and pass if any plane
supports the format. For non-atomic that's not possible since like the format
list for the primary plane is fake and we'd therefor reject valid formats.
- Many drivers subclass drm_framebuffer, we'd need a embedding compatible
version of the varios drm_gem_fb_create functions. Maybe called
drm_gem_fb_create/_with_dirty/_with_funcs as needed.
Contact: Daniel Vetter
Level: Intermediate
Clean up mmap forwarding
------------------------
@ -328,8 +345,8 @@ drm_fb_helper tasks
these igt tests need to be fixed: kms_fbcon_fbt@psr and
kms_fbcon_fbt@psr-suspend.
- The max connector argument for drm_fb_helper_init() and
drm_fb_helper_fbdev_setup() isn't used anymore and can be removed.
- The max connector argument for drm_fb_helper_init() isn't used anymore and
can be removed.
- The helper doesn't keep an array of connectors anymore so these can be
removed: drm_fb_helper_single_add_all_connectors(),
@ -351,6 +368,23 @@ connector register/unregister fixes
Level: Intermediate
Remove load/unload callbacks from all non-DRIVER_LEGACY drivers
---------------------------------------------------------------
The load/unload callbacks in struct &drm_driver are very much midlayers, plus
for historical reasons they get the ordering wrong (and we can't fix that)
between setting up the &drm_driver structure and calling drm_dev_register().
- Rework drivers to no longer use the load/unload callbacks, directly coding the
load/unload sequence into the driver's probe function.
- Once all non-DRIVER_LEGACY drivers are converted, disallow the load/unload
callbacks for all modern drivers.
Contact: Daniel Vetter
Level: Intermediate
Core refactorings
=================


@ -4973,6 +4973,24 @@ F: Documentation/driver-api/dma-buf.rst
K: dma_(buf|fence|resv)
T: git git://anongit.freedesktop.org/drm/drm-misc
DMA-BUF HEAPS FRAMEWORK
M: Sumit Semwal <sumit.semwal@linaro.org>
R: Andrew F. Davis <afd@ti.com>
R: Benjamin Gaignard <benjamin.gaignard@linaro.org>
R: Liam Mark <lmark@codeaurora.org>
R: Laura Abbott <labbott@redhat.com>
R: Brian Starkey <Brian.Starkey@arm.com>
R: John Stultz <john.stultz@linaro.org>
S: Maintained
L: linux-media@vger.kernel.org
L: dri-devel@lists.freedesktop.org
L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
F: include/uapi/linux/dma-heap.h
F: include/linux/dma-heap.h
F: drivers/dma-buf/dma-heap.c
F: drivers/dma-buf/heaps/*
T: git git://anongit.freedesktop.org/drm/drm-misc
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
M: Vinod Koul <vkoul@kernel.org>
L: dmaengine@vger.kernel.org
@ -5178,6 +5196,12 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
S: Maintained
F: drivers/gpu/drm/bochs/
DRM DRIVER FOR BOE HIMAX8279D PANELS
M: Jerry Han <hanxu5@huaqin.corp-partner.google.com>
S: Maintained
F: drivers/gpu/drm/panel/panel-boe-himax8279d.c
F: Documentation/devicetree/bindings/display/panel/boe,himax8279d.txt
DRM DRIVER FOR FARADAY TVE200 TV ENCODER
M: Linus Walleij <linus.walleij@linaro.org>
T: git git://anongit.freedesktop.org/drm/drm-misc
@ -5405,7 +5429,6 @@ F: include/linux/vga*
DRM DRIVERS AND MISC GPU PATCHES
M: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
M: Maxime Ripard <mripard@kernel.org>
M: Sean Paul <sean@poorly.run>
W: https://01.org/linuxgraphics/gfx-docs/maintainer-tools/drm-misc.html
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc


@ -6,21 +6,31 @@
* Author: Sathyanarayanan Kuppuswamy <sathyanarayanan.kuppuswamy@intel.com>
*/
#include <linux/gpio.h>
#include <linux/platform_data/tc35876x.h>
#include <linux/gpio/machine.h>
#include <asm/intel-mid.h>
static struct gpiod_lookup_table tc35876x_gpio_table = {
.dev_id = "i2c_disp_brig",
.table = {
GPIO_LOOKUP("0000:00:0c.0", -1, "bridge-reset", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("0000:00:0c.0", -1, "bl-en", GPIO_ACTIVE_HIGH),
GPIO_LOOKUP("0000:00:0c.0", -1, "vadd", GPIO_ACTIVE_HIGH),
{ },
},
};
/*tc35876x DSI_LVDS bridge chip and panel platform data*/
static void *tc35876x_platform_data(void *data)
{
static struct tc35876x_platform_data pdata;
struct gpiod_lookup_table *table = &tc35876x_gpio_table;
struct gpiod_lookup *lookup = table->table;
/* gpio pins set to -1 will not be used by the driver */
pdata.gpio_bridge_reset = get_gpio_by_name("LCMB_RXEN");
pdata.gpio_panel_bl_en = get_gpio_by_name("6S6P_BL_EN");
pdata.gpio_panel_vadd = get_gpio_by_name("EN_VREG_LCD_V3P3");
lookup[0].chip_hwnum = get_gpio_by_name("LCMB_RXEN");
lookup[1].chip_hwnum = get_gpio_by_name("6S6P_BL_EN");
lookup[2].chip_hwnum = get_gpio_by_name("EN_VREG_LCD_V3P3");
gpiod_add_lookup_table(table);
return &pdata;
return NULL;
}
static const struct devs_id tc35876x_dev_id __initconst = {


@ -57,7 +57,7 @@ static int cfag12864bfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
return vm_map_pages_zero(vma, &pages, 1);
}
static struct fb_ops cfag12864bfb_ops = {
static const struct fb_ops cfag12864bfb_ops = {
.owner = THIS_MODULE,
.fb_read = fb_sys_read,
.fb_write = fb_sys_write,


@ -228,7 +228,7 @@ static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
return vm_map_pages_zero(vma, &pages, 1);
}
static struct fb_ops ht16k33_fb_ops = {
static const struct fb_ops ht16k33_fb_ops = {
.owner = THIS_MODULE,
.fb_read = fb_sys_read,
.fb_write = fb_sys_write,


@ -44,4 +44,15 @@ config DMABUF_SELFTESTS
default n
depends on DMA_SHARED_BUFFER
menuconfig DMABUF_HEAPS
bool "DMA-BUF Userland Memory Heaps"
select DMA_SHARED_BUFFER
help
Choose this option to enable the DMA-BUF userland memory heaps.
This options creates per heap chardevs in /dev/dma_heap/ which
allows userspace to allocate dma-bufs that can be shared
between drivers.
source "drivers/dma-buf/heaps/Kconfig"
endmenu
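
The help text above describes per-heap character devices under /dev/dma_heap/. A hedged userspace sketch of allocating a dma-buf from one of them, using the DMA_HEAP_IOC_ALLOC ioctl and struct dma_heap_allocation_data added by this series (heap name and error handling are illustrative)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/dma-heap.h>

    static int alloc_dmabuf_fd(const char *heap, size_t len)
    {
            struct dma_heap_allocation_data data;
            char path[128];
            int heap_fd, ret;

            snprintf(path, sizeof(path), "/dev/dma_heap/%s", heap);
            heap_fd = open(path, O_RDONLY | O_CLOEXEC);
            if (heap_fd < 0)
                    return -1;

            memset(&data, 0, sizeof(data));  /* data.fd must be 0 on input */
            data.len = len;
            data.fd_flags = O_RDWR | O_CLOEXEC;

            ret = ioctl(heap_fd, DMA_HEAP_IOC_ALLOC, &data);
            close(heap_fd);
            if (ret < 0)
                    return -1;

            return data.fd;  /* the freshly exported dma-buf */
    }

For example, alloc_dmabuf_fd("system_heap", 4096) would allocate one page from the system heap registered further down in this series.
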


@ -1,6 +1,8 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
dma-resv.o seqno-fence.o
obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
obj-$(CONFIG_DMABUF_HEAPS) += heaps/
obj-$(CONFIG_SYNC_FILE) += sync_file.o
obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
obj-$(CONFIG_UDMABUF) += udmabuf.o


@ -878,29 +878,9 @@ EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
* with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access()
* access.
*
* To support dma_buf objects residing in highmem cpu access is page-based
* using an api similar to kmap. Accessing a dma_buf is done in aligned chunks
* of PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which
* returns a pointer in kernel virtual address space. Afterwards the chunk
* needs to be unmapped again. There is no limit on how often a given chunk
* can be mapped and unmapped, i.e. the importer does not need to call
* begin_cpu_access again before mapping the same chunk again.
*
* Interfaces::
* void \*dma_buf_kmap(struct dma_buf \*, unsigned long);
* void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*);
*
* Implementing the functions is optional for exporters and for importers all
* the restrictions of using kmap apply.
*
* dma_buf kmap calls outside of the range specified in begin_cpu_access are
* undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
* the partial chunks at the beginning and end but may return stale or bogus
* data outside of the range (in these partial chunks).
*
* For some cases the overhead of kmap can be too high, a vmap interface
* is introduced. This interface should be used very carefully, as vmalloc
* space is a limited resources on many architectures.
* Since for most kernel internal dma-buf accesses need the entire buffer, a
* vmap interface is introduced. Note that on very old 32-bit architectures
* vmalloc space might be limited and result in vmap calls failing.
*
* Interfaces::
* void \*dma_buf_vmap(struct dma_buf \*dmabuf)
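
A hedged sketch of a kernel importer bracketing its CPU access as described above (the fill helper is illustrative, not part of this patch)::

    #include <linux/dma-buf.h>
    #include <linux/dma-direction.h>
    #include <linux/string.h>
    #include <linux/types.h>

    static int foo_cpu_fill(struct dma_buf *dmabuf, u8 pattern)
    {
            void *vaddr;
            int ret;

            /* Lets the exporter flush/invalidate caches as needed. */
            ret = dma_buf_begin_cpu_access(dmabuf, DMA_BIDIRECTIONAL);
            if (ret)
                    return ret;

            vaddr = dma_buf_vmap(dmabuf);
            if (!vaddr) {
                    ret = -ENOMEM;
                    goto out;
            }

            memset(vaddr, pattern, dmabuf->size);
            dma_buf_vunmap(dmabuf, vaddr);
    out:
            dma_buf_end_cpu_access(dmabuf, DMA_BIDIRECTIONAL);
            return ret;
    }
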
@ -1050,43 +1030,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
}
EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
/**
* dma_buf_kmap - Map a page of the buffer object into kernel address space. The
* same restrictions as for kmap and friends apply.
* @dmabuf: [in] buffer to map page from.
* @page_num: [in] page in PAGE_SIZE units to map.
*
* This call must always succeed, any necessary preparations that might fail
* need to be done in begin_cpu_access.
*/
void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num)
{
WARN_ON(!dmabuf);
if (!dmabuf->ops->map)
return NULL;
return dmabuf->ops->map(dmabuf, page_num);
}
EXPORT_SYMBOL_GPL(dma_buf_kmap);
/**
* dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap.
* @dmabuf: [in] buffer to unmap page from.
* @page_num: [in] page in PAGE_SIZE units to unmap.
* @vaddr: [in] kernel space pointer obtained from dma_buf_kmap.
*
* This call must always succeed.
*/
void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
void *vaddr)
{
WARN_ON(!dmabuf);
if (dmabuf->ops->unmap)
dmabuf->ops->unmap(dmabuf, page_num, vaddr);
}
EXPORT_SYMBOL_GPL(dma_buf_kunmap);
/**
* dma_buf_mmap - Setup up a userspace mmap with the given vma


@ -0,0 +1,297 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Framework for userspace DMA-BUF allocations
*
* Copyright (C) 2011 Google, Inc.
* Copyright (C) 2019 Linaro Ltd.
*/
#include <linux/cdev.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/xarray.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/syscalls.h>
#include <linux/dma-heap.h>
#include <uapi/linux/dma-heap.h>
#define DEVNAME "dma_heap"
#define NUM_HEAP_MINORS 128
/**
* struct dma_heap - represents a dmabuf heap in the system
* @name: used for debugging/device-node name
* @ops: ops struct for this heap
* @heap_devt heap device node
* @list list head connecting to list of heaps
* @heap_cdev heap char device
*
* Represents a heap of memory from which buffers can be made.
*/
struct dma_heap {
const char *name;
const struct dma_heap_ops *ops;
void *priv;
dev_t heap_devt;
struct list_head list;
struct cdev heap_cdev;
};
static LIST_HEAD(heap_list);
static DEFINE_MUTEX(heap_list_lock);
static dev_t dma_heap_devt;
static struct class *dma_heap_class;
static DEFINE_XARRAY_ALLOC(dma_heap_minors);
static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
unsigned int fd_flags,
unsigned int heap_flags)
{
/*
* Allocations from all heaps have to begin
* and end on page boundaries.
*/
len = PAGE_ALIGN(len);
if (!len)
return -EINVAL;
return heap->ops->allocate(heap, len, fd_flags, heap_flags);
}
static int dma_heap_open(struct inode *inode, struct file *file)
{
struct dma_heap *heap;
heap = xa_load(&dma_heap_minors, iminor(inode));
if (!heap) {
pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
return -ENODEV;
}
/* instance data as context */
file->private_data = heap;
nonseekable_open(inode, file);
return 0;
}
static long dma_heap_ioctl_allocate(struct file *file, void *data)
{
struct dma_heap_allocation_data *heap_allocation = data;
struct dma_heap *heap = file->private_data;
int fd;
if (heap_allocation->fd)
return -EINVAL;
if (heap_allocation->fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
return -EINVAL;
if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
return -EINVAL;
fd = dma_heap_buffer_alloc(heap, heap_allocation->len,
heap_allocation->fd_flags,
heap_allocation->heap_flags);
if (fd < 0)
return fd;
heap_allocation->fd = fd;
return 0;
}
unsigned int dma_heap_ioctl_cmds[] = {
DMA_HEAP_IOC_ALLOC,
};
static long dma_heap_ioctl(struct file *file, unsigned int ucmd,
unsigned long arg)
{
char stack_kdata[128];
char *kdata = stack_kdata;
unsigned int kcmd;
unsigned int in_size, out_size, drv_size, ksize;
int nr = _IOC_NR(ucmd);
int ret = 0;
if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds))
return -EINVAL;
/* Get the kernel ioctl cmd that matches */
kcmd = dma_heap_ioctl_cmds[nr];
/* Figure out the delta between user cmd size and kernel cmd size */
drv_size = _IOC_SIZE(kcmd);
out_size = _IOC_SIZE(ucmd);
in_size = out_size;
if ((ucmd & kcmd & IOC_IN) == 0)
in_size = 0;
if ((ucmd & kcmd & IOC_OUT) == 0)
out_size = 0;
ksize = max(max(in_size, out_size), drv_size);
/* If necessary, allocate buffer for ioctl argument */
if (ksize > sizeof(stack_kdata)) {
kdata = kmalloc(ksize, GFP_KERNEL);
if (!kdata)
return -ENOMEM;
}
if (copy_from_user(kdata, (void __user *)arg, in_size) != 0) {
ret = -EFAULT;
goto err;
}
/* zero out any difference between the kernel/user structure size */
if (ksize > in_size)
memset(kdata + in_size, 0, ksize - in_size);
switch (kcmd) {
case DMA_HEAP_IOC_ALLOC:
ret = dma_heap_ioctl_allocate(file, kdata);
break;
default:
return -ENOTTY;
}
if (copy_to_user((void __user *)arg, kdata, out_size) != 0)
ret = -EFAULT;
err:
if (kdata != stack_kdata)
kfree(kdata);
return ret;
}
static const struct file_operations dma_heap_fops = {
.owner = THIS_MODULE,
.open = dma_heap_open,
.unlocked_ioctl = dma_heap_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = dma_heap_ioctl,
#endif
};
/**
* dma_heap_get_drvdata() - get per-subdriver data for the heap
* @heap: DMA-Heap to retrieve private data for
*
* Returns:
* The per-subdriver data for the heap.
*/
void *dma_heap_get_drvdata(struct dma_heap *heap)
{
return heap->priv;
}
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
{
struct dma_heap *heap, *h, *err_ret;
struct device *dev_ret;
unsigned int minor;
int ret;
if (!exp_info->name || !strcmp(exp_info->name, "")) {
pr_err("dma_heap: Cannot add heap without a name\n");
return ERR_PTR(-EINVAL);
}
if (!exp_info->ops || !exp_info->ops->allocate) {
pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
return ERR_PTR(-EINVAL);
}
/* check the name is unique */
mutex_lock(&heap_list_lock);
list_for_each_entry(h, &heap_list, list) {
if (!strcmp(h->name, exp_info->name)) {
mutex_unlock(&heap_list_lock);
pr_err("dma_heap: Already registered heap named %s\n",
exp_info->name);
return ERR_PTR(-EINVAL);
}
}
mutex_unlock(&heap_list_lock);
heap = kzalloc(sizeof(*heap), GFP_KERNEL);
if (!heap)
return ERR_PTR(-ENOMEM);
heap->name = exp_info->name;
heap->ops = exp_info->ops;
heap->priv = exp_info->priv;
/* Find unused minor number */
ret = xa_alloc(&dma_heap_minors, &minor, heap,
XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
if (ret < 0) {
pr_err("dma_heap: Unable to get minor number for heap\n");
err_ret = ERR_PTR(ret);
goto err0;
}
/* Create device */
heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), minor);
cdev_init(&heap->heap_cdev, &dma_heap_fops);
ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
if (ret < 0) {
pr_err("dma_heap: Unable to add char device\n");
err_ret = ERR_PTR(ret);
goto err1;
}
dev_ret = device_create(dma_heap_class,
NULL,
heap->heap_devt,
NULL,
heap->name);
if (IS_ERR(dev_ret)) {
pr_err("dma_heap: Unable to create device\n");
err_ret = ERR_CAST(dev_ret);
goto err2;
}
/* Add heap to the list */
mutex_lock(&heap_list_lock);
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
return heap;
err2:
cdev_del(&heap->heap_cdev);
err1:
xa_erase(&dma_heap_minors, minor);
err0:
kfree(heap);
return err_ret;
}
static char *dma_heap_devnode(struct device *dev, umode_t *mode)
{
return kasprintf(GFP_KERNEL, "dma_heap/%s", dev_name(dev));
}
static int dma_heap_init(void)
{
int ret;
ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
if (ret)
return ret;
dma_heap_class = class_create(THIS_MODULE, DEVNAME);
if (IS_ERR(dma_heap_class)) {
unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
return PTR_ERR(dma_heap_class);
}
dma_heap_class->devnode = dma_heap_devnode;
return 0;
}
subsys_initcall(dma_heap_init);


@ -34,6 +34,7 @@
#include <linux/dma-resv.h>
#include <linux/export.h>
#include <linux/sched/mm.h>
/**
* DOC: Reservation Object Overview
@ -95,6 +96,37 @@ static void dma_resv_list_free(struct dma_resv_list *list)
kfree_rcu(list, rcu);
}
#if IS_ENABLED(CONFIG_LOCKDEP)
static int __init dma_resv_lockdep(void)
{
struct mm_struct *mm = mm_alloc();
struct ww_acquire_ctx ctx;
struct dma_resv obj;
int ret;
if (!mm)
return -ENOMEM;
dma_resv_init(&obj);
down_read(&mm->mmap_sem);
ww_acquire_init(&ctx, &reservation_ww_class);
ret = dma_resv_lock(&obj, &ctx);
if (ret == -EDEADLK)
dma_resv_lock_slow(&obj, &ctx);
fs_reclaim_acquire(GFP_KERNEL);
fs_reclaim_release(GFP_KERNEL);
ww_mutex_unlock(&obj.lock);
ww_acquire_fini(&ctx);
up_read(&mm->mmap_sem);
mmput(mm);
return 0;
}
subsys_initcall(dma_resv_lockdep);
#endif
/**
* dma_resv_init - initialize a reservation object
* @obj: the reservation object


@ -0,0 +1,14 @@
config DMABUF_HEAPS_SYSTEM
bool "DMA-BUF System Heap"
depends on DMABUF_HEAPS
help
Choose this option to enable the system dmabuf heap. The system heap
is backed by pages from the buddy allocator. If in doubt, say Y.
config DMABUF_HEAPS_CMA
bool "DMA-BUF CMA Heap"
depends on DMABUF_HEAPS && DMA_CMA
help
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.


@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-y += heap-helpers.o
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o


@ -0,0 +1,177 @@
// SPDX-License-Identifier: GPL-2.0
/*
* DMABUF CMA heap exporter
*
* Copyright (C) 2012, 2019 Linaro Ltd.
* Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
*/
#include <linux/cma.h>
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/dma-contiguous.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>
#include <linux/sched/signal.h>
#include "heap-helpers.h"
struct cma_heap {
struct dma_heap *heap;
struct cma *cma;
};
static void cma_heap_free(struct heap_helper_buffer *buffer)
{
struct cma_heap *cma_heap = dma_heap_get_drvdata(buffer->heap);
unsigned long nr_pages = buffer->pagecount;
struct page *cma_pages = buffer->priv_virt;
/* free page list */
kfree(buffer->pages);
/* release memory */
cma_release(cma_heap->cma, cma_pages, nr_pages);
kfree(buffer);
}
/* dmabuf heap CMA operations functions */
static int cma_heap_allocate(struct dma_heap *heap,
unsigned long len,
unsigned long fd_flags,
unsigned long heap_flags)
{
struct cma_heap *cma_heap = dma_heap_get_drvdata(heap);
struct heap_helper_buffer *helper_buffer;
struct page *cma_pages;
size_t size = PAGE_ALIGN(len);
unsigned long nr_pages = size >> PAGE_SHIFT;
unsigned long align = get_order(size);
struct dma_buf *dmabuf;
int ret = -ENOMEM;
pgoff_t pg;
if (align > CONFIG_CMA_ALIGNMENT)
align = CONFIG_CMA_ALIGNMENT;
helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
if (!helper_buffer)
return -ENOMEM;
init_heap_helper_buffer(helper_buffer, cma_heap_free);
helper_buffer->heap = heap;
helper_buffer->size = len;
cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
if (!cma_pages)
goto free_buf;
if (PageHighMem(cma_pages)) {
unsigned long nr_clear_pages = nr_pages;
struct page *page = cma_pages;
while (nr_clear_pages > 0) {
void *vaddr = kmap_atomic(page);
memset(vaddr, 0, PAGE_SIZE);
kunmap_atomic(vaddr);
/*
* Avoid wasting time zeroing memory if the process
* has been killed by SIGKILL
*/
if (fatal_signal_pending(current))
goto free_cma;
page++;
nr_clear_pages--;
}
} else {
memset(page_address(cma_pages), 0, size);
}
helper_buffer->pagecount = nr_pages;
helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
sizeof(*helper_buffer->pages),
GFP_KERNEL);
if (!helper_buffer->pages) {
ret = -ENOMEM;
goto free_cma;
}
for (pg = 0; pg < helper_buffer->pagecount; pg++)
helper_buffer->pages[pg] = &cma_pages[pg];
/* create the dmabuf */
dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
if (IS_ERR(dmabuf)) {
ret = PTR_ERR(dmabuf);
goto free_pages;
}
helper_buffer->dmabuf = dmabuf;
helper_buffer->priv_virt = cma_pages;
ret = dma_buf_fd(dmabuf, fd_flags);
if (ret < 0) {
dma_buf_put(dmabuf);
/* just return, as put will call release and that will free */
return ret;
}
return ret;
free_pages:
kfree(helper_buffer->pages);
free_cma:
cma_release(cma_heap->cma, cma_pages, nr_pages);
free_buf:
kfree(helper_buffer);
return ret;
}
static const struct dma_heap_ops cma_heap_ops = {
.allocate = cma_heap_allocate,
};
static int __add_cma_heap(struct cma *cma, void *data)
{
struct cma_heap *cma_heap;
struct dma_heap_export_info exp_info;
cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
if (!cma_heap)
return -ENOMEM;
cma_heap->cma = cma;
exp_info.name = cma_get_name(cma);
exp_info.ops = &cma_heap_ops;
exp_info.priv = cma_heap;
cma_heap->heap = dma_heap_add(&exp_info);
if (IS_ERR(cma_heap->heap)) {
int ret = PTR_ERR(cma_heap->heap);
kfree(cma_heap);
return ret;
}
return 0;
}
static int add_default_cma_heap(void)
{
struct cma *default_cma = dev_get_cma_area(NULL);
int ret = 0;
if (default_cma)
ret = __add_cma_heap(default_cma, NULL);
return ret;
}
module_init(add_default_cma_heap);
MODULE_DESCRIPTION("DMA-BUF CMA Heap");
MODULE_LICENSE("GPL v2");


@ -0,0 +1,271 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/idr.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/vmalloc.h>
#include <uapi/linux/dma-heap.h>
#include "heap-helpers.h"
void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
void (*free)(struct heap_helper_buffer *))
{
buffer->priv_virt = NULL;
mutex_init(&buffer->lock);
buffer->vmap_cnt = 0;
buffer->vaddr = NULL;
buffer->pagecount = 0;
buffer->pages = NULL;
INIT_LIST_HEAD(&buffer->attachments);
buffer->free = free;
}
struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
int fd_flags)
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
exp_info.ops = &heap_helper_ops;
exp_info.size = buffer->size;
exp_info.flags = fd_flags;
exp_info.priv = buffer;
return dma_buf_export(&exp_info);
}
static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
{
void *vaddr;
vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
if (!vaddr)
return ERR_PTR(-ENOMEM);
return vaddr;
}
static void dma_heap_buffer_destroy(struct heap_helper_buffer *buffer)
{
if (buffer->vmap_cnt > 0) {
WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
vunmap(buffer->vaddr);
}
buffer->free(buffer);
}
static void *dma_heap_buffer_vmap_get(struct heap_helper_buffer *buffer)
{
void *vaddr;
if (buffer->vmap_cnt) {
buffer->vmap_cnt++;
return buffer->vaddr;
}
vaddr = dma_heap_map_kernel(buffer);
if (IS_ERR(vaddr))
return vaddr;
buffer->vaddr = vaddr;
buffer->vmap_cnt++;
return vaddr;
}
static void dma_heap_buffer_vmap_put(struct heap_helper_buffer *buffer)
{
if (!--buffer->vmap_cnt) {
vunmap(buffer->vaddr);
buffer->vaddr = NULL;
}
}
struct dma_heaps_attachment {
struct device *dev;
struct sg_table table;
struct list_head list;
};
static int dma_heap_attach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
{
struct dma_heaps_attachment *a;
struct heap_helper_buffer *buffer = dmabuf->priv;
int ret;
a = kzalloc(sizeof(*a), GFP_KERNEL);
if (!a)
return -ENOMEM;
ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
buffer->pagecount, 0,
buffer->pagecount << PAGE_SHIFT,
GFP_KERNEL);
if (ret) {
kfree(a);
return ret;
}
a->dev = attachment->dev;
INIT_LIST_HEAD(&a->list);
attachment->priv = a;
mutex_lock(&buffer->lock);
list_add(&a->list, &buffer->attachments);
mutex_unlock(&buffer->lock);
return 0;
}
static void dma_heap_detach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
{
struct dma_heaps_attachment *a = attachment->priv;
struct heap_helper_buffer *buffer = dmabuf->priv;
mutex_lock(&buffer->lock);
list_del(&a->list);
mutex_unlock(&buffer->lock);
sg_free_table(&a->table);
kfree(a);
}
static
struct sg_table *dma_heap_map_dma_buf(struct dma_buf_attachment *attachment,
enum dma_data_direction direction)
{
struct dma_heaps_attachment *a = attachment->priv;
struct sg_table *table;
table = &a->table;
if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
direction))
table = ERR_PTR(-ENOMEM);
return table;
}
static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *table,
enum dma_data_direction direction)
{
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
}
static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct heap_helper_buffer *buffer = vma->vm_private_data;
if (vmf->pgoff > buffer->pagecount)
return VM_FAULT_SIGBUS;
vmf->page = buffer->pages[vmf->pgoff];
get_page(vmf->page);
return 0;
}
static const struct vm_operations_struct dma_heap_vm_ops = {
.fault = dma_heap_vm_fault,
};
static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
return -EINVAL;
vma->vm_ops = &dma_heap_vm_ops;
vma->vm_private_data = buffer;
return 0;
}
static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
dma_heap_buffer_destroy(buffer);
}
static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
struct dma_heaps_attachment *a;
int ret = 0;
mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
invalidate_kernel_vmap_range(buffer->vaddr, buffer->size);
list_for_each_entry(a, &buffer->attachments, list) {
dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
direction);
}
mutex_unlock(&buffer->lock);
return ret;
}
static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
struct dma_heaps_attachment *a;
mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
flush_kernel_vmap_range(buffer->vaddr, buffer->size);
list_for_each_entry(a, &buffer->attachments, list) {
dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents,
direction);
}
mutex_unlock(&buffer->lock);
return 0;
}
static void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
void *vaddr;
mutex_lock(&buffer->lock);
vaddr = dma_heap_buffer_vmap_get(buffer);
mutex_unlock(&buffer->lock);
return vaddr;
}
static void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
mutex_lock(&buffer->lock);
dma_heap_buffer_vmap_put(buffer);
mutex_unlock(&buffer->lock);
}
const struct dma_buf_ops heap_helper_ops = {
.map_dma_buf = dma_heap_map_dma_buf,
.unmap_dma_buf = dma_heap_unmap_dma_buf,
.mmap = dma_heap_mmap,
.release = dma_heap_dma_buf_release,
.attach = dma_heap_attach,
.detach = dma_heap_detach,
.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
.vmap = dma_heap_dma_buf_vmap,
.vunmap = dma_heap_dma_buf_vunmap,
};


@ -0,0 +1,53 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* DMABUF Heaps helper code
*
* Copyright (C) 2011 Google, Inc.
* Copyright (C) 2019 Linaro Ltd.
*/
#ifndef _HEAP_HELPERS_H
#define _HEAP_HELPERS_H
#include <linux/dma-heap.h>
#include <linux/list.h>
/**
* struct heap_helper_buffer - helper buffer metadata
* @heap: back pointer to the heap the buffer came from
* @dmabuf: backing dma-buf for this buffer
* @size: size of the buffer
* @priv_virt: pointer to heap-specific private value
* @lock: mutex to protect the data in this structure
* @vmap_cnt: count of vmap references on the buffer
* @vaddr: vmap'ed virtual address
* @pagecount: number of pages in the buffer
* @pages: list of page pointers
* @attachments: list of device attachments
*
* @free: heap callback to free the buffer
*/
struct heap_helper_buffer {
struct dma_heap *heap;
struct dma_buf *dmabuf;
size_t size;
void *priv_virt;
struct mutex lock;
int vmap_cnt;
void *vaddr;
pgoff_t pagecount;
struct page **pages;
struct list_head attachments;
void (*free)(struct heap_helper_buffer *buffer);
};
void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
void (*free)(struct heap_helper_buffer *));
struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
int fd_flags);
extern const struct dma_buf_ops heap_helper_ops;
#endif /* _HEAP_HELPERS_H */

View File

@ -0,0 +1,123 @@
// SPDX-License-Identifier: GPL-2.0
/*
* DMABUF System heap exporter
*
* Copyright (C) 2011 Google, Inc.
* Copyright (C) 2019 Linaro Ltd.
*/
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/sched/signal.h>
#include <asm/page.h>
#include "heap-helpers.h"
struct dma_heap *sys_heap;
static void system_heap_free(struct heap_helper_buffer *buffer)
{
pgoff_t pg;
for (pg = 0; pg < buffer->pagecount; pg++)
__free_page(buffer->pages[pg]);
kfree(buffer->pages);
kfree(buffer);
}
static int system_heap_allocate(struct dma_heap *heap,
unsigned long len,
unsigned long fd_flags,
unsigned long heap_flags)
{
struct heap_helper_buffer *helper_buffer;
struct dma_buf *dmabuf;
int ret = -ENOMEM;
pgoff_t pg;
helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
if (!helper_buffer)
return -ENOMEM;
init_heap_helper_buffer(helper_buffer, system_heap_free);
helper_buffer->heap = heap;
helper_buffer->size = len;
helper_buffer->pagecount = len / PAGE_SIZE;
helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
sizeof(*helper_buffer->pages),
GFP_KERNEL);
if (!helper_buffer->pages) {
ret = -ENOMEM;
goto err0;
}
for (pg = 0; pg < helper_buffer->pagecount; pg++) {
/*
* Avoid trying to allocate memory if the process
* has been killed by SIGKILL
*/
if (fatal_signal_pending(current))
goto err1;
helper_buffer->pages[pg] = alloc_page(GFP_KERNEL | __GFP_ZERO);
if (!helper_buffer->pages[pg])
goto err1;
}
/* create the dmabuf */
dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
if (IS_ERR(dmabuf)) {
ret = PTR_ERR(dmabuf);
goto err1;
}
helper_buffer->dmabuf = dmabuf;
ret = dma_buf_fd(dmabuf, fd_flags);
if (ret < 0) {
dma_buf_put(dmabuf);
/* just return, as put will call release and that will free */
return ret;
}
return ret;
err1:
while (pg > 0)
__free_page(helper_buffer->pages[--pg]);
kfree(helper_buffer->pages);
err0:
kfree(helper_buffer);
return ret;
}
static const struct dma_heap_ops system_heap_ops = {
.allocate = system_heap_allocate,
};
static int system_heap_create(void)
{
struct dma_heap_export_info exp_info;
int ret = 0;
exp_info.name = "system_heap";
exp_info.ops = &system_heap_ops;
exp_info.priv = NULL;
sys_heap = dma_heap_add(&exp_info);
if (IS_ERR(sys_heap))
ret = PTR_ERR(sys_heap);
return ret;
}
module_init(system_heap_create);
MODULE_LICENSE("GPL v2");
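
For reference, user space reaches this exporter through the character device that dma_heap_add() registers (typically /dev/dma_heap/system_heap) and the DMA_HEAP_IOCTL_ALLOC ioctl from <linux/dma-heap.h>. The minimal sketch below is illustrative only and not part of the series; error handling is trimmed and the 4 KiB size is arbitrary.

/* Illustrative only: allocate from the system heap and map the dma-buf. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
        struct dma_heap_allocation_data alloc = {
                .len = 4096,
                .fd_flags = O_RDWR | O_CLOEXEC,
        };
        int heap_fd = open("/dev/dma_heap/system_heap", O_RDONLY | O_CLOEXEC);

        if (heap_fd < 0 || ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0)
                return 1;

        /* alloc.fd is now a dma-buf fd backed by system_heap_allocate() */
        void *p = mmap(NULL, alloc.len, PROT_READ | PROT_WRITE, MAP_SHARED,
                       alloc.fd, 0);
        if (p != MAP_FAILED) {
                memset(p, 0, alloc.len);        /* faults through dma_heap_vm_fault() */
                munmap(p, alloc.len);
        }
        close(alloc.fd);
        close(heap_fd);
        return 0;
}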

View File

@ -18,6 +18,8 @@ static const size_t size_limit_mb = 64; /* total dmabuf size, in megabytes */
struct udmabuf {
pgoff_t pagecount;
struct page **pages;
struct sg_table *sg;
struct miscdevice *device;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@ -46,10 +48,10 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
return 0;
}
static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
enum dma_data_direction direction)
static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
enum dma_data_direction direction)
{
struct udmabuf *ubuf = at->dmabuf->priv;
struct udmabuf *ubuf = buf->priv;
struct sg_table *sg;
int ret;
@ -61,7 +63,7 @@ static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
GFP_KERNEL);
if (ret < 0)
goto err;
if (!dma_map_sg(at->dev, sg->sgl, sg->nents, direction)) {
if (!dma_map_sg(dev, sg->sgl, sg->nents, direction)) {
ret = -EINVAL;
goto err;
}
@ -73,54 +75,90 @@ err:
return ERR_PTR(ret);
}
static void put_sg_table(struct device *dev, struct sg_table *sg,
enum dma_data_direction direction)
{
dma_unmap_sg(dev, sg->sgl, sg->nents, direction);
sg_free_table(sg);
kfree(sg);
}
static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
enum dma_data_direction direction)
{
return get_sg_table(at->dev, at->dmabuf, direction);
}
static void unmap_udmabuf(struct dma_buf_attachment *at,
struct sg_table *sg,
enum dma_data_direction direction)
{
dma_unmap_sg(at->dev, sg->sgl, sg->nents, direction);
sg_free_table(sg);
kfree(sg);
return put_sg_table(at->dev, sg, direction);
}
static void release_udmabuf(struct dma_buf *buf)
{
struct udmabuf *ubuf = buf->priv;
struct device *dev = ubuf->device->this_device;
pgoff_t pg;
if (ubuf->sg)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
for (pg = 0; pg < ubuf->pagecount; pg++)
put_page(ubuf->pages[pg]);
kfree(ubuf->pages);
kfree(ubuf);
}
static void *kmap_udmabuf(struct dma_buf *buf, unsigned long page_num)
static int begin_cpu_udmabuf(struct dma_buf *buf,
enum dma_data_direction direction)
{
struct udmabuf *ubuf = buf->priv;
struct page *page = ubuf->pages[page_num];
struct device *dev = ubuf->device->this_device;
return kmap(page);
if (!ubuf->sg) {
ubuf->sg = get_sg_table(dev, buf, direction);
if (IS_ERR(ubuf->sg))
return PTR_ERR(ubuf->sg);
} else {
dma_sync_sg_for_device(dev, ubuf->sg->sgl,
ubuf->sg->nents,
direction);
}
return 0;
}
static void kunmap_udmabuf(struct dma_buf *buf, unsigned long page_num,
void *vaddr)
static int end_cpu_udmabuf(struct dma_buf *buf,
enum dma_data_direction direction)
{
kunmap(vaddr);
struct udmabuf *ubuf = buf->priv;
struct device *dev = ubuf->device->this_device;
if (!ubuf->sg)
return -EINVAL;
dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents, direction);
return 0;
}
static const struct dma_buf_ops udmabuf_ops = {
.map_dma_buf = map_udmabuf,
.unmap_dma_buf = unmap_udmabuf,
.release = release_udmabuf,
.map = kmap_udmabuf,
.unmap = kunmap_udmabuf,
.mmap = mmap_udmabuf,
.cache_sgt_mapping = true,
.map_dma_buf = map_udmabuf,
.unmap_dma_buf = unmap_udmabuf,
.release = release_udmabuf,
.mmap = mmap_udmabuf,
.begin_cpu_access = begin_cpu_udmabuf,
.end_cpu_access = end_cpu_udmabuf,
};
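
With the kmap/unmap hooks gone, a CPU user of a udmabuf is expected to bracket its accesses with the generic DMA_BUF_IOCTL_SYNC ioctl from <linux/dma-buf.h>, which lands in the begin/end_cpu_access hooks above. A hedged user-space sketch; the buffer fd and length are assumed to come from an earlier UDMABUF_CREATE call and are not shown.

/* Illustrative only: bracket CPU writes to an existing udmabuf fd. */
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

static int touch_udmabuf(int buf_fd, size_t len)
{
        struct dma_buf_sync sync = {
                .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
        };
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, buf_fd, 0);

        if (p == MAP_FAILED)
                return -1;

        ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);       /* -> begin_cpu_udmabuf() */
        memset(p, 0xaa, len);                           /* CPU access */
        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
        ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);       /* -> end_cpu_udmabuf() */

        munmap(p, len);
        return 0;
}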
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
static long udmabuf_create(const struct udmabuf_create_list *head,
const struct udmabuf_create_item *list)
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct file *memfd = NULL;
@ -187,6 +225,7 @@ static long udmabuf_create(const struct udmabuf_create_list *head,
exp_info.priv = ubuf;
exp_info.flags = O_RDWR;
ubuf->device = device;
buf = dma_buf_export(&exp_info);
if (IS_ERR(buf)) {
ret = PTR_ERR(buf);
@ -224,7 +263,7 @@ static long udmabuf_ioctl_create(struct file *filp, unsigned long arg)
list.offset = create.offset;
list.size = create.size;
return udmabuf_create(&head, &list);
return udmabuf_create(filp->private_data, &head, &list);
}
static long udmabuf_ioctl_create_list(struct file *filp, unsigned long arg)
@ -243,7 +282,7 @@ static long udmabuf_ioctl_create_list(struct file *filp, unsigned long arg)
if (IS_ERR(list))
return PTR_ERR(list);
ret = udmabuf_create(&head, list);
ret = udmabuf_create(filp->private_data, &head, list);
kfree(list);
return ret;
}

View File

@ -294,9 +294,6 @@ config DRM_VKMS
If M is selected the module will be called vkms.
config DRM_ATI_PCIGART
bool
source "drivers/gpu/drm/exynos/Kconfig"
source "drivers/gpu/drm/rockchip/Kconfig"
@ -393,7 +390,6 @@ menuconfig DRM_LEGACY
bool "Enable legacy drivers (DANGEROUS)"
depends on DRM && MMU
select DRM_VM
select DRM_ATI_PCIGART if PCI
help
Enable legacy DRI1 drivers. Those drivers expose unsafe and dangerous
APIs to user-space, which can be used to circumvent access

View File

@ -5,7 +5,7 @@
drm-y := drm_auth.o drm_cache.o \
drm_file.o drm_gem.o drm_ioctl.o drm_irq.o \
drm_memory.o drm_drv.o drm_pci.o \
drm_memory.o drm_drv.o \
drm_sysfs.o drm_hashtab.o drm_mm.o \
drm_crtc.o drm_fourcc.o drm_modes.o drm_edid.o \
drm_encoder_slave.o \
@ -25,10 +25,10 @@ drm-$(CONFIG_DRM_VM) += drm_vm.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
drm-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_gem_shmem_helper.o
drm-$(CONFIG_DRM_ATI_PCIGART) += ati_pcigart.o
drm-$(CONFIG_DRM_PANEL) += drm_panel.o
drm-$(CONFIG_OF) += drm_of.o
drm-$(CONFIG_AGP) += drm_agpsupport.o
drm-$(CONFIG_PCI) += drm_pci.o
drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o

View File

@ -360,10 +360,8 @@ struct dma_buf *amdgpu_gem_prime_export(struct drm_gem_object *gobj,
return ERR_PTR(-EPERM);
buf = drm_gem_prime_export(gobj, flags);
if (!IS_ERR(buf)) {
buf->file->f_mapping = gobj->dev->anon_inode->i_mapping;
if (!IS_ERR(buf))
buf->ops = &amdgpu_dmabuf_ops;
}
return buf;
}

View File

@ -69,7 +69,7 @@ amdgpufb_release(struct fb_info *info, int user)
return 0;
}
static struct fb_ops amdgpufb_ops = {
static const struct fb_ops amdgpufb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
.fb_open = amdgpufb_open,

View File

@ -234,7 +234,7 @@ static uint32_t smu_v11_0_i2c_transmit(struct i2c_adapter *control,
DRM_DEBUG_DRIVER("I2C_Transmit(), address = %x, bytes = %d , data: ",
(uint16_t)address, numbytes);
if (drm_debug & DRM_UT_DRIVER) {
if (drm_debug_enabled(DRM_UT_DRIVER)) {
print_hex_dump(KERN_INFO, "data: ", DUMP_PREFIX_NONE,
16, 1, data, numbytes, false);
}
@ -388,7 +388,7 @@ static uint32_t smu_v11_0_i2c_receive(struct i2c_adapter *control,
DRM_DEBUG_DRIVER("I2C_Receive(), address = %x, bytes = %d, data :",
(uint16_t)address, bytes_received);
if (drm_debug & DRM_UT_DRIVER) {
if (drm_debug_enabled(DRM_UT_DRIVER)) {
print_hex_dump(KERN_INFO, "data: ", DUMP_PREFIX_NONE,
16, 1, data, bytes_received, false);
}

View File

@ -5324,11 +5324,12 @@ static int amdgpu_dm_connector_init(struct amdgpu_display_manager *dm,
connector_type = to_drm_connector_type(link->connector_signal);
res = drm_connector_init(
res = drm_connector_init_with_ddc(
dm->ddev,
&aconnector->base,
&amdgpu_dm_connector_funcs,
connector_type);
connector_type,
&i2c->base);
if (res) {
DRM_ERROR("connector_init failed\n");

View File

@ -12,9 +12,3 @@ config DRM_KOMEDA
Processor driver. It supports the D71 variants of the hardware.
If compiled as a module it will be called komeda.
config DRM_KOMEDA_ERROR_PRINT
bool "Enable komeda error print"
depends on DRM_KOMEDA
help
Choose this option to enable error printing.

View File

@ -18,7 +18,8 @@
#define MALIDP_CORE_ID_STATUS(__core_id) (((__u32)(__core_id)) & 0xFF)
/* Mali-display product IDs */
#define MALIDP_D71_PRODUCT_ID 0x0071
#define MALIDP_D71_PRODUCT_ID 0x0071
#define MALIDP_D32_PRODUCT_ID 0x0032
union komeda_config_id {
struct {

View File

@ -16,12 +16,11 @@ komeda-y := \
komeda_crtc.o \
komeda_plane.o \
komeda_wb_connector.o \
komeda_private_obj.o
komeda_private_obj.o \
komeda_event.o
komeda-y += \
d71/d71_dev.o \
d71/d71_component.o
komeda-$(CONFIG_DRM_KOMEDA_ERROR_PRINT) += komeda_event.o
obj-$(CONFIG_DRM_KOMEDA) += komeda.o

View File

@ -1044,7 +1044,9 @@ static int d71_merger_init(struct d71_dev *d71,
static void d71_improc_update(struct komeda_component *c,
struct komeda_component_state *state)
{
struct drm_crtc_state *crtc_st = state->crtc->state;
struct komeda_improc_state *st = to_improc_st(state);
struct d71_pipeline *pipe = to_d71_pipeline(c->pipeline);
u32 __iomem *reg = c->reg;
u32 index, mask = 0, ctrl = 0;
@ -1055,6 +1057,24 @@ static void d71_improc_update(struct komeda_component *c,
malidp_write32(reg, BLK_SIZE, HV_SIZE(st->hsize, st->vsize));
malidp_write32(reg, IPS_DEPTH, st->color_depth);
if (crtc_st->color_mgmt_changed) {
mask |= IPS_CTRL_FT | IPS_CTRL_RGB;
if (crtc_st->gamma_lut) {
malidp_write_group(pipe->dou_ft_coeff_addr, FT_COEFF0,
KOMEDA_N_GAMMA_COEFFS,
st->fgamma_coeffs);
ctrl |= IPS_CTRL_FT; /* enable gamma */
}
if (crtc_st->ctm) {
malidp_write_group(reg, IPS_RGB_RGB_COEFF0,
KOMEDA_N_CTM_COEFFS,
st->ctm_coeffs);
ctrl |= IPS_CTRL_RGB; /* enable gamut */
}
}
mask |= IPS_CTRL_YUV | IPS_CTRL_CHD422 | IPS_CTRL_CHD420;
/* config color format */
@ -1250,7 +1270,7 @@ static int d71_timing_ctrlr_init(struct d71_dev *d71,
ctrlr = to_ctrlr(c);
ctrlr->supports_dual_link = true;
ctrlr->supports_dual_link = d71->supports_dual_link;
return 0;
}

View File

@ -371,23 +371,33 @@ static int d71_enum_resources(struct komeda_dev *mdev)
goto err_cleanup;
}
/* probe PERIPH */
/* Only the legacy HW has the periph block; newer HW merges the periph
* block into the GCU.
*/
value = malidp_read32(d71->periph_addr, BLK_BLOCK_INFO);
if (BLOCK_INFO_BLK_TYPE(value) != D71_BLK_TYPE_PERIPH) {
DRM_ERROR("access blk periph but got blk: %d.\n",
BLOCK_INFO_BLK_TYPE(value));
err = -EINVAL;
goto err_cleanup;
if (BLOCK_INFO_BLK_TYPE(value) != D71_BLK_TYPE_PERIPH)
d71->periph_addr = NULL;
if (d71->periph_addr) {
/* probe PERIPHERAL in legacy HW */
value = malidp_read32(d71->periph_addr, PERIPH_CONFIGURATION_ID);
d71->max_line_size = value & PERIPH_MAX_LINE_SIZE ? 4096 : 2048;
d71->max_vsize = 4096;
d71->num_rich_layers = value & PERIPH_NUM_RICH_LAYERS ? 2 : 1;
d71->supports_dual_link = !!(value & PERIPH_SPLIT_EN);
d71->integrates_tbu = !!(value & PERIPH_TBU_EN);
} else {
value = malidp_read32(d71->gcu_addr, GCU_CONFIGURATION_ID0);
d71->max_line_size = GCU_MAX_LINE_SIZE(value);
d71->max_vsize = GCU_MAX_NUM_LINES(value);
value = malidp_read32(d71->gcu_addr, GCU_CONFIGURATION_ID1);
d71->num_rich_layers = GCU_NUM_RICH_LAYERS(value);
d71->supports_dual_link = GCU_DISPLAY_SPLIT_EN(value);
d71->integrates_tbu = GCU_DISPLAY_TBU_EN(value);
}
value = malidp_read32(d71->periph_addr, PERIPH_CONFIGURATION_ID);
d71->max_line_size = value & PERIPH_MAX_LINE_SIZE ? 4096 : 2048;
d71->max_vsize = 4096;
d71->num_rich_layers = value & PERIPH_NUM_RICH_LAYERS ? 2 : 1;
d71->supports_dual_link = value & PERIPH_SPLIT_EN ? true : false;
d71->integrates_tbu = value & PERIPH_TBU_EN ? true : false;
for (i = 0; i < d71->num_pipelines; i++) {
pipe = komeda_pipeline_add(mdev, sizeof(struct d71_pipeline),
&d71_pipeline_funcs);
@ -414,8 +424,11 @@ static int d71_enum_resources(struct komeda_dev *mdev)
d71->pipes[i] = to_d71_pipeline(pipe);
}
/* loop the register blks and probe */
i = 2; /* exclude GCU and PERIPH */
/* Loop over the register blocks and probe them.
* NOTE: d71->num_blocks includes reserved blocks.
* d71->num_blocks = GCU + valid blocks + reserved blocks
*/
i = 1; /* exclude GCU */
offset = D71_BLOCK_SIZE; /* skip GCU */
while (i < d71->num_blocks) {
blk_base = mdev->reg_base + (offset >> 2);
@ -425,9 +438,9 @@ static int d71_enum_resources(struct komeda_dev *mdev)
err = d71_probe_block(d71, &blk, blk_base);
if (err)
goto err_cleanup;
i++;
}
i++;
offset += D71_BLOCK_SIZE;
}
@ -594,10 +607,26 @@ static const struct komeda_dev_funcs d71_chip_funcs = {
const struct komeda_dev_funcs *
d71_identify(u32 __iomem *reg_base, struct komeda_chip_info *chip)
{
const struct komeda_dev_funcs *funcs;
u32 product_id;
chip->core_id = malidp_read32(reg_base, GLB_CORE_ID);
product_id = MALIDP_CORE_ID_PRODUCT_ID(chip->core_id);
switch (product_id) {
case MALIDP_D71_PRODUCT_ID:
case MALIDP_D32_PRODUCT_ID:
funcs = &d71_chip_funcs;
break;
default:
DRM_ERROR("Unsupported product: 0x%x\n", product_id);
return NULL;
}
chip->arch_id = malidp_read32(reg_base, GLB_ARCH_ID);
chip->core_id = malidp_read32(reg_base, GLB_CORE_ID);
chip->core_info = malidp_read32(reg_base, GLB_CORE_INFO);
chip->bus_width = D71_BUS_WIDTH_16_BYTES;
return &d71_chip_funcs;
return funcs;
}

View File

@ -72,6 +72,19 @@
#define GCU_CONTROL_MODE(x) ((x) & 0x7)
#define GCU_CONTROL_SRST BIT(16)
/* GCU_CONFIGURATION registers */
#define GCU_CONFIGURATION_ID0 0x100
#define GCU_CONFIGURATION_ID1 0x104
/* GCU configuration */
#define GCU_MAX_LINE_SIZE(x) ((x) & 0xFFFF)
#define GCU_MAX_NUM_LINES(x) ((x) >> 16)
#define GCU_NUM_RICH_LAYERS(x) ((x) & 0x7)
#define GCU_NUM_PIPELINES(x) (((x) >> 3) & 0x7)
#define GCU_NUM_SCALERS(x) (((x) >> 6) & 0x7)
#define GCU_DISPLAY_SPLIT_EN(x) (((x) >> 16) & 0x1)
#define GCU_DISPLAY_TBU_EN(x) (((x) >> 17) & 0x1)
/* GCU opmode */
#define INACTIVE_MODE 0
#define TBU_CONNECT_MODE 1

View File

@ -65,3 +65,69 @@ const s32 *komeda_select_yuv2rgb_coeffs(u32 color_encoding, u32 color_range)
return coeffs;
}
struct gamma_curve_sector {
u32 boundary_start;
u32 num_of_segments;
u32 segment_width;
};
struct gamma_curve_segment {
u32 start;
u32 end;
};
static struct gamma_curve_sector sector_tbl[] = {
{ 0, 4, 4 },
{ 16, 4, 4 },
{ 32, 4, 8 },
{ 64, 4, 16 },
{ 128, 4, 32 },
{ 256, 4, 64 },
{ 512, 16, 32 },
{ 1024, 24, 128 },
};
static void
drm_lut_to_coeffs(struct drm_property_blob *lut_blob, u32 *coeffs,
struct gamma_curve_sector *sector_tbl, u32 num_sectors)
{
struct drm_color_lut *lut;
u32 i, j, in, num = 0;
if (!lut_blob)
return;
lut = lut_blob->data;
for (i = 0; i < num_sectors; i++) {
for (j = 0; j < sector_tbl[i].num_of_segments; j++) {
in = sector_tbl[i].boundary_start +
j * sector_tbl[i].segment_width;
coeffs[num++] = drm_color_lut_extract(lut[in].red,
KOMEDA_COLOR_PRECISION);
}
}
coeffs[num] = BIT(KOMEDA_COLOR_PRECISION);
}
void drm_lut_to_fgamma_coeffs(struct drm_property_blob *lut_blob, u32 *coeffs)
{
drm_lut_to_coeffs(lut_blob, coeffs, sector_tbl, ARRAY_SIZE(sector_tbl));
}
void drm_ctm_to_coeffs(struct drm_property_blob *ctm_blob, u32 *coeffs)
{
struct drm_color_ctm *ctm;
u32 i;
if (!ctm_blob)
return;
ctm = ctm_blob->data;
for (i = 0; i < KOMEDA_N_CTM_COEFFS; i++)
coeffs[i] = drm_color_ctm_s31_32_to_qm_n(ctm->matrix[i], 3, 12);
}
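
As a sanity check on the sector table above: it samples the 4096-entry DRM gamma LUT at 4*6 + 16 + 24 = 64 non-uniform positions (denser near black), and drm_lut_to_coeffs() appends one fixed end point, which matches the 65 values declared by KOMEDA_N_GAMMA_COEFFS. The standalone sketch below only restates that arithmetic and is not driver code.

/* Not driver code: verify the sector table yields 64 samples + 1 end point. */
#include <assert.h>

int main(void)
{
        const struct { unsigned int start, nseg, width; } tbl[] = {
                { 0, 4, 4 }, { 16, 4, 4 }, { 32, 4, 8 }, { 64, 4, 16 },
                { 128, 4, 32 }, { 256, 4, 64 }, { 512, 16, 32 }, { 1024, 24, 128 },
        };
        unsigned int i, samples = 0;

        for (i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++)
                samples += tbl[i].nseg;

        /* 64 sampled points + 1 appended end point == KOMEDA_N_GAMMA_COEFFS */
        assert(samples + 1 == 65);
        /* highest sampled LUT index: 1024 + 23 * 128 = 3968, within 4096 entries */
        return 0;
}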

View File

@ -11,7 +11,15 @@
#include <drm/drm_color_mgmt.h>
#define KOMEDA_N_YUV2RGB_COEFFS 12
#define KOMEDA_N_RGB2YUV_COEFFS 12
#define KOMEDA_COLOR_PRECISION 12
#define KOMEDA_N_GAMMA_COEFFS 65
#define KOMEDA_COLOR_LUT_SIZE BIT(KOMEDA_COLOR_PRECISION)
#define KOMEDA_N_CTM_COEFFS 9
void drm_lut_to_fgamma_coeffs(struct drm_property_blob *lut_blob, u32 *coeffs);
void drm_ctm_to_coeffs(struct drm_property_blob *ctm_blob, u32 *coeffs);
const s32 *komeda_select_yuv2rgb_coeffs(u32 color_encoding, u32 color_range);
#endif
#endif /*_KOMEDA_COLOR_MGMT_H_*/

View File

@ -617,6 +617,8 @@ static int komeda_crtc_add(struct komeda_kms_dev *kms,
crtc->port = kcrtc->master->of_output_port;
drm_crtc_enable_color_mgmt(crtc, 0, true, KOMEDA_COLOR_LUT_SIZE);
return err;
}

View File

@ -58,6 +58,8 @@ static void komeda_debugfs_init(struct komeda_dev *mdev)
mdev->debugfs_root = debugfs_create_dir("komeda", NULL);
debugfs_create_file("register", 0444, mdev->debugfs_root,
mdev, &komeda_register_fops);
debugfs_create_x16("err_verbosity", 0664, mdev->debugfs_root,
&mdev->err_verbosity);
}
#endif
@ -113,22 +115,14 @@ static struct attribute_group komeda_sysfs_attr_group = {
.attrs = komeda_sysfs_entries,
};
static int komeda_parse_pipe_dt(struct komeda_dev *mdev, struct device_node *np)
static int komeda_parse_pipe_dt(struct komeda_pipeline *pipe)
{
struct komeda_pipeline *pipe;
struct device_node *np = pipe->of_node;
struct clk *clk;
u32 pipe_id;
int ret = 0;
ret = of_property_read_u32(np, "reg", &pipe_id);
if (ret != 0 || pipe_id >= mdev->n_pipelines)
return -EINVAL;
pipe = mdev->pipelines[pipe_id];
clk = of_clk_get_by_name(np, "pxclk");
if (IS_ERR(clk)) {
DRM_ERROR("get pxclk for pipeline %d failed!\n", pipe_id);
DRM_ERROR("get pxclk for pipeline %d failed!\n", pipe->id);
return PTR_ERR(clk);
}
pipe->pxlclk = clk;
@ -142,7 +136,6 @@ static int komeda_parse_pipe_dt(struct komeda_dev *mdev, struct device_node *np)
of_graph_get_port_by_id(np, KOMEDA_OF_PORT_OUTPUT);
pipe->dual_link = pipe->of_output_links[0] && pipe->of_output_links[1];
pipe->of_node = of_node_get(np);
return 0;
}
@ -151,7 +144,9 @@ static int komeda_parse_dt(struct device *dev, struct komeda_dev *mdev)
{
struct platform_device *pdev = to_platform_device(dev);
struct device_node *child, *np = dev->of_node;
int ret;
struct komeda_pipeline *pipe;
u32 pipe_id = U32_MAX;
int ret = -1;
mdev->irq = platform_get_irq(pdev, 0);
if (mdev->irq < 0) {
@ -166,37 +161,44 @@ static int komeda_parse_dt(struct device *dev, struct komeda_dev *mdev)
ret = 0;
for_each_available_child_of_node(np, child) {
if (of_node_cmp(child->name, "pipeline") == 0) {
ret = komeda_parse_pipe_dt(mdev, child);
if (ret) {
DRM_ERROR("parse pipeline dt error!\n");
of_node_put(child);
break;
if (of_node_name_eq(child, "pipeline")) {
of_property_read_u32(child, "reg", &pipe_id);
if (pipe_id >= mdev->n_pipelines) {
DRM_WARN("Skip the redundant DT node: pipeline-%u.\n",
pipe_id);
continue;
}
mdev->pipelines[pipe_id]->of_node = of_node_get(child);
}
}
return ret;
for (pipe_id = 0; pipe_id < mdev->n_pipelines; pipe_id++) {
pipe = mdev->pipelines[pipe_id];
if (!pipe->of_node) {
DRM_ERROR("Pipeline-%d doesn't have a DT node.\n",
pipe->id);
return -EINVAL;
}
ret = komeda_parse_pipe_dt(pipe);
if (ret)
return ret;
}
return 0;
}
struct komeda_dev *komeda_dev_create(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
const struct komeda_product_data *product;
komeda_identify_func komeda_identify;
struct komeda_dev *mdev;
struct resource *io_res;
int err = 0;
product = of_device_get_match_data(dev);
if (!product)
komeda_identify = of_device_get_match_data(dev);
if (!komeda_identify)
return ERR_PTR(-ENODEV);
io_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!io_res) {
DRM_ERROR("No registers defined.\n");
return ERR_PTR(-ENODEV);
}
mdev = devm_kzalloc(dev, sizeof(*mdev), GFP_KERNEL);
if (!mdev)
return ERR_PTR(-ENOMEM);
@ -204,7 +206,7 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
mutex_init(&mdev->lock);
mdev->dev = dev;
mdev->reg_base = devm_ioremap_resource(dev, io_res);
mdev->reg_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(mdev->reg_base)) {
DRM_ERROR("Map register space failed.\n");
err = PTR_ERR(mdev->reg_base);
@ -222,11 +224,9 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
clk_prepare_enable(mdev->aclk);
mdev->funcs = product->identify(mdev->reg_base, &mdev->chip);
if (!komeda_product_match(mdev, product->product_id)) {
DRM_ERROR("DT configured %x mismatch with real HW %x.\n",
product->product_id,
MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id));
mdev->funcs = komeda_identify(mdev->reg_base, &mdev->chip);
if (!mdev->funcs) {
DRM_ERROR("Failed to identify the HW.\n");
err = -ENODEV;
goto disable_clk;
}
@ -280,6 +280,8 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
goto err_cleanup;
}
mdev->err_verbosity = KOMEDA_DEV_PRINT_ERR_EVENTS;
#ifdef CONFIG_DEBUG_FS
komeda_debugfs_init(mdev);
#endif

View File

@ -51,10 +51,12 @@
#define KOMEDA_WARN_EVENTS KOMEDA_ERR_CSCE
/* malidp device id */
enum {
MALI_D71 = 0,
};
#define KOMEDA_INFO_EVENTS (0 \
| KOMEDA_EVENT_VSYNC \
| KOMEDA_EVENT_FLIP \
| KOMEDA_EVENT_EOW \
| KOMEDA_EVENT_MODE \
)
/* pipeline DT ports */
enum {
@ -69,12 +71,6 @@ struct komeda_chip_info {
u32 bus_width;
};
struct komeda_product_data {
u32 product_id;
const struct komeda_dev_funcs *(*identify)(u32 __iomem *reg,
struct komeda_chip_info *info);
};
struct komeda_dev;
struct komeda_events {
@ -202,6 +198,23 @@ struct komeda_dev {
/** @debugfs_root: root directory of komeda debugfs */
struct dentry *debugfs_root;
/**
* @err_verbosity: bitmask for how much extra info to print on error
*
* See KOMEDA_DEV_* macros for details. Low byte contains the debug
* level categories, the high byte contains extra debug options.
*/
u16 err_verbosity;
/* Print a single line per error per frame with error events. */
#define KOMEDA_DEV_PRINT_ERR_EVENTS BIT(0)
/* Print a single line per warning per frame with error events. */
#define KOMEDA_DEV_PRINT_WARN_EVENTS BIT(1)
/* Print a single line per info event per frame with error events. */
#define KOMEDA_DEV_PRINT_INFO_EVENTS BIT(2)
/* Dump DRM state on an error or warning event. */
#define KOMEDA_DEV_PRINT_DUMP_STATE_ON_EVENT BIT(8)
/* Disable rate limiting of event prints (normally one per commit) */
#define KOMEDA_DEV_PRINT_DISABLE_RATELIMIT BIT(12)
};
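
Since err_verbosity is a plain bitmask, a value simply ORs the low-byte category bits with the high-byte options, and komeda_print_events() filters on it. A hedged example (the combination 0x103 is chosen purely for illustration and assumes komeda_dev.h is in scope):

/* Illustrative only: error + warning lines plus a DRM state dump on such
 * events, rate limiting left on.  Equivalent to writing 0x103 to the new
 * err_verbosity debugfs file added above.
 */
#include "komeda_dev.h"

static void komeda_example_verbosity(struct komeda_dev *mdev)
{
        mdev->err_verbosity = KOMEDA_DEV_PRINT_ERR_EVENTS |
                              KOMEDA_DEV_PRINT_WARN_EVENTS |
                              KOMEDA_DEV_PRINT_DUMP_STATE_ON_EVENT; /* 0x103 */
}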
static inline bool
@ -210,6 +223,9 @@ komeda_product_match(struct komeda_dev *mdev, u32 target)
return MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id) == target;
}
typedef const struct komeda_dev_funcs *
(*komeda_identify_func)(u32 __iomem *reg, struct komeda_chip_info *chip);
const struct komeda_dev_funcs *
d71_identify(u32 __iomem *reg, struct komeda_chip_info *chip);
@ -218,11 +234,7 @@ void komeda_dev_destroy(struct komeda_dev *mdev);
struct komeda_dev *dev_to_mdev(struct device *dev);
#ifdef CONFIG_DRM_KOMEDA_ERROR_PRINT
void komeda_print_events(struct komeda_events *evts);
#else
static inline void komeda_print_events(struct komeda_events *evts) {}
#endif
void komeda_print_events(struct komeda_events *evts, struct drm_device *dev);
int komeda_dev_resume(struct komeda_dev *mdev);
int komeda_dev_suspend(struct komeda_dev *mdev);

View File

@ -123,15 +123,9 @@ static int komeda_platform_remove(struct platform_device *pdev)
return 0;
}
static const struct komeda_product_data komeda_products[] = {
[MALI_D71] = {
.product_id = MALIDP_D71_PRODUCT_ID,
.identify = d71_identify,
},
};
static const struct of_device_id komeda_of_match[] = {
{ .compatible = "arm,mali-d71", .data = &komeda_products[MALI_D71], },
{ .compatible = "arm,mali-d71", .data = d71_identify, },
{ .compatible = "arm,mali-d32", .data = d71_identify, },
{},
};

View File

@ -4,6 +4,7 @@
* Author: James.Qian.Wang <james.qian.wang@arm.com>
*
*/
#include <drm/drm_atomic.h>
#include <drm/drm_print.h>
#include "komeda_dev.h"
@ -16,6 +17,7 @@ struct komeda_str {
/* return 0 on success, < 0 on no space.
*/
__printf(2, 3)
static int komeda_sprintf(struct komeda_str *str, const char *fmt, ...)
{
va_list args;
@ -107,20 +109,31 @@ static bool is_new_frame(struct komeda_events *a)
(KOMEDA_EVENT_FLIP | KOMEDA_EVENT_EOW);
}
void komeda_print_events(struct komeda_events *evts)
void komeda_print_events(struct komeda_events *evts, struct drm_device *dev)
{
u64 print_evts = KOMEDA_ERR_EVENTS;
u64 print_evts = 0;
static bool en_print = true;
struct komeda_dev *mdev = dev->dev_private;
u16 const err_verbosity = mdev->err_verbosity;
u64 evts_mask = evts->global | evts->pipes[0] | evts->pipes[1];
/* Reduce duplicate message printing: only print the first event for one frame */
if (evts->global || is_new_frame(evts))
en_print = true;
if (!en_print)
if (!(err_verbosity & KOMEDA_DEV_PRINT_DISABLE_RATELIMIT) && !en_print)
return;
if ((evts->global | evts->pipes[0] | evts->pipes[1]) & print_evts) {
if (err_verbosity & KOMEDA_DEV_PRINT_ERR_EVENTS)
print_evts |= KOMEDA_ERR_EVENTS;
if (err_verbosity & KOMEDA_DEV_PRINT_WARN_EVENTS)
print_evts |= KOMEDA_WARN_EVENTS;
if (err_verbosity & KOMEDA_DEV_PRINT_INFO_EVENTS)
print_evts |= KOMEDA_INFO_EVENTS;
if (evts_mask & print_evts) {
char msg[256];
struct komeda_str str;
struct drm_printer p = drm_info_printer(dev->dev);
str.str = msg;
str.sz = sizeof(msg);
@ -134,6 +147,9 @@ void komeda_print_events(struct komeda_events *evts)
evt_str(&str, evts->pipes[1]);
DRM_ERROR("err detect: %s\n", msg);
if ((err_verbosity & KOMEDA_DEV_PRINT_DUMP_STATE_ON_EVENT) &&
(evts_mask & (KOMEDA_ERR_EVENTS | KOMEDA_WARN_EVENTS)))
drm_state_dump(dev, &p);
en_print = false;
}

View File

@ -48,7 +48,7 @@ static irqreturn_t komeda_kms_irq_handler(int irq, void *data)
memset(&evts, 0, sizeof(evts));
status = mdev->funcs->irq_handler(mdev, &evts);
komeda_print_events(&evts);
komeda_print_events(&evts, drm);
/* Notify the crtc to handle the events */
for (i = 0; i < kms->n_crtcs; i++)

View File

@ -11,6 +11,7 @@
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include "malidp_utils.h"
#include "komeda_color_mgmt.h"
#define KOMEDA_MAX_PIPELINES 2
#define KOMEDA_PIPELINE_MAX_LAYERS 4
@ -327,6 +328,8 @@ struct komeda_improc_state {
struct komeda_component_state base;
u8 color_format, color_depth;
u16 hsize, vsize;
u32 fgamma_coeffs[KOMEDA_N_GAMMA_COEFFS];
u32 ctm_coeffs[KOMEDA_N_CTM_COEFFS];
};
/* display timing controller */

View File

@ -802,6 +802,12 @@ komeda_improc_validate(struct komeda_improc *improc,
st->color_format = BIT(__ffs(avail_formats));
}
if (kcrtc_st->base.color_mgmt_changed) {
drm_lut_to_fgamma_coeffs(kcrtc_st->base.gamma_lut,
st->fgamma_coeffs);
drm_ctm_to_coeffs(kcrtc_st->base.ctm, st->ctm_coeffs);
}
komeda_component_add_input(&st->base, &dflow->input, 0);
komeda_component_set_output(&dflow->input, &improc->base, 0);

View File

@ -16,7 +16,7 @@
#include "armada_fb.h"
#include "armada_gem.h"
static /*const*/ struct fb_ops armada_fb_ops = {
static const struct fb_ops armada_fb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
.fb_fillrect = drm_fb_helper_cfb_fillrect,

View File

@ -461,16 +461,6 @@ static void armada_gem_prime_unmap_dma_buf(struct dma_buf_attachment *attach,
kfree(sgt);
}
static void *armada_gem_dmabuf_no_kmap(struct dma_buf *buf, unsigned long n)
{
return NULL;
}
static void
armada_gem_dmabuf_no_kunmap(struct dma_buf *buf, unsigned long n, void *addr)
{
}
static int
armada_gem_dmabuf_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
{
@ -481,8 +471,6 @@ static const struct dma_buf_ops armada_gem_prime_dmabuf_ops = {
.map_dma_buf = armada_gem_prime_map_dma_buf,
.unmap_dma_buf = armada_gem_prime_unmap_dma_buf,
.release = drm_gem_dmabuf_release,
.map = armada_gem_dmabuf_no_kmap,
.unmap = armada_gem_dmabuf_no_kunmap,
.mmap = armada_gem_dmabuf_mmap,
};

View File

@ -33,7 +33,6 @@
#include <drm/drm_crtc_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_pci.h>
#include <drm/drm_probe_helper.h>
#include "ast_drv.h"
@ -86,9 +85,42 @@ static void ast_kick_out_firmware_fb(struct pci_dev *pdev)
static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct drm_device *dev;
int ret;
ast_kick_out_firmware_fb(pdev);
return drm_get_pci_dev(pdev, ent, &driver);
ret = pci_enable_device(pdev);
if (ret)
return ret;
dev = drm_dev_alloc(&driver, &pdev->dev);
if (IS_ERR(dev)) {
ret = PTR_ERR(dev);
goto err_pci_disable_device;
}
dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
ret = ast_driver_load(dev, ent->driver_data);
if (ret)
goto err_drm_dev_put;
ret = drm_dev_register(dev, ent->driver_data);
if (ret)
goto err_ast_driver_unload;
return 0;
err_ast_driver_unload:
ast_driver_unload(dev);
err_drm_dev_put:
drm_dev_put(dev);
err_pci_disable_device:
pci_disable_device(pdev);
return ret;
}
static void
@ -96,17 +128,19 @@ ast_pci_remove(struct pci_dev *pdev)
{
struct drm_device *dev = pci_get_drvdata(pdev);
drm_put_dev(dev);
drm_dev_unregister(dev);
ast_driver_unload(dev);
drm_dev_put(dev);
}
static int ast_drm_freeze(struct drm_device *dev)
{
drm_kms_helper_poll_disable(dev);
pci_save_state(dev->pdev);
drm_fb_helper_set_suspend_unlocked(dev->fb_helper, true);
int error;
error = drm_mode_config_helper_suspend(dev);
if (error)
return error;
pci_save_state(dev->pdev);
return 0;
}
@ -114,11 +148,7 @@ static int ast_drm_thaw(struct drm_device *dev)
{
ast_post_gpu(dev);
drm_mode_config_reset(dev);
drm_helper_resume_force_mode(dev);
drm_fb_helper_set_suspend_unlocked(dev->fb_helper, false);
return 0;
return drm_mode_config_helper_resume(dev);
}
static int ast_drm_resume(struct drm_device *dev)
@ -131,8 +161,6 @@ static int ast_drm_resume(struct drm_device *dev)
ret = ast_drm_thaw(dev);
if (ret)
return ret;
drm_kms_helper_poll_enable(dev);
return 0;
}
@ -150,6 +178,7 @@ static int ast_pm_suspend(struct device *dev)
pci_set_power_state(pdev, PCI_D3hot);
return 0;
}
static int ast_pm_resume(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
@ -165,7 +194,6 @@ static int ast_pm_freeze(struct device *dev)
if (!ddev || !ddev->dev_private)
return -ENODEV;
return ast_drm_freeze(ddev);
}
static int ast_pm_thaw(struct device *dev)
@ -203,10 +231,9 @@ static struct pci_driver ast_pci_driver = {
DEFINE_DRM_GEM_FOPS(ast_fops);
static struct drm_driver driver = {
.driver_features = DRIVER_MODESET | DRIVER_GEM,
.load = ast_driver_load,
.unload = ast_driver_unload,
.driver_features = DRIVER_ATOMIC |
DRIVER_GEM |
DRIVER_MODESET,
.fops = &ast_fops,
.name = DRIVER_NAME,

View File

@ -121,6 +121,9 @@ struct ast_private {
unsigned int next_index;
} cursor;
struct drm_plane primary_plane;
struct drm_plane cursor_plane;
bool support_wide_screen;
enum {
ast_use_p2a,
@ -137,8 +140,6 @@ struct ast_private {
int ast_driver_load(struct drm_device *dev, unsigned long flags);
void ast_driver_unload(struct drm_device *dev);
struct ast_gem_object;
#define AST_IO_AR_PORT_WRITE (0x40)
#define AST_IO_MISC_PORT_WRITE (0x42)
#define AST_IO_VGA_ENABLE_PORT (0x43)
@ -280,6 +281,17 @@ struct ast_vbios_mode_info {
const struct ast_vbios_enhtable *enh_table;
};
struct ast_crtc_state {
struct drm_crtc_state base;
/* Last known format of primary plane */
const struct drm_format_info *format;
struct ast_vbios_mode_info vbios_mode_info;
};
#define to_ast_crtc_state(state) container_of(state, struct ast_crtc_state, base)
extern int ast_mode_init(struct drm_device *dev);
extern void ast_mode_fini(struct drm_device *dev);
@ -289,10 +301,6 @@ extern void ast_mode_fini(struct drm_device *dev);
int ast_mm_init(struct ast_private *ast);
void ast_mm_fini(struct ast_private *ast);
int ast_gem_create(struct drm_device *dev,
u32 size, bool iskernel,
struct drm_gem_object **obj);
/* ast post */
void ast_enable_vga(struct drm_device *dev);
void ast_enable_mmio(struct drm_device *dev);

View File

@ -28,6 +28,7 @@
#include <linux/pci.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem.h>
@ -387,8 +388,33 @@ static int ast_get_dram_info(struct drm_device *dev)
return 0;
}
enum drm_mode_status ast_mode_config_mode_valid(struct drm_device *dev,
const struct drm_display_mode *mode)
{
static const unsigned long max_bpp = 4; /* DRM_FORMAT_XRGB8888 */
struct ast_private *ast = dev->dev_private;
unsigned long fbsize, fbpages, max_fbpages;
/* To support double buffering, a framebuffer may not
* consume more than half of the available VRAM.
*/
max_fbpages = (ast->vram_size / 2) >> PAGE_SHIFT;
fbsize = mode->hdisplay * mode->vdisplay * max_bpp;
fbpages = DIV_ROUND_UP(fbsize, PAGE_SIZE);
if (fbpages > max_fbpages)
return MODE_MEM;
return MODE_OK;
}
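
To put numbers on the check above: with max_bpp = 4 and the half-of-VRAM rule, a hypothetical 16 MiB board allows at most 8 MiB per framebuffer, so 1920x1080 (about 7.9 MiB) passes while 2560x1440 (about 14.1 MiB) returns MODE_MEM. The user-space sketch below only mirrors that arithmetic; the VRAM size is an assumption for illustration.

/* Not driver code: mirrors ast_mode_config_mode_valid() for 16 MiB of VRAM. */
#include <stdbool.h>
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE 4096UL

static bool ast_mode_fits(unsigned long vram_size,
                          unsigned long hdisplay, unsigned long vdisplay)
{
        const unsigned long max_bpp = 4;        /* XRGB8888 */
        unsigned long max_fbpages = (vram_size / 2) / EXAMPLE_PAGE_SIZE;
        unsigned long fbsize = hdisplay * vdisplay * max_bpp;
        unsigned long fbpages = (fbsize + EXAMPLE_PAGE_SIZE - 1) / EXAMPLE_PAGE_SIZE;

        return fbpages <= max_fbpages;
}

int main(void)
{
        unsigned long vram = 16UL << 20;

        printf("1920x1080 fits: %d\n", ast_mode_fits(vram, 1920, 1080)); /* 1 */
        printf("2560x1440 fits: %d\n", ast_mode_fits(vram, 2560, 1440)); /* 0 */
        return 0;
}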
static const struct drm_mode_config_funcs ast_mode_funcs = {
.fb_create = drm_gem_fb_create
.fb_create = drm_gem_fb_create,
.mode_valid = ast_mode_config_mode_valid,
.atomic_check = drm_atomic_helper_check,
.atomic_commit = drm_atomic_helper_commit,
};
static u32 ast_get_vram_info(struct drm_device *dev)
@ -506,6 +532,8 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
if (ret)
goto out_free;
drm_mode_config_reset(dev);
ret = drm_fbdev_generic_setup(dev, 32);
if (ret)
goto out_free;
@ -535,27 +563,3 @@ void ast_driver_unload(struct drm_device *dev)
pci_iounmap(dev->pdev, ast->regs);
kfree(ast);
}
int ast_gem_create(struct drm_device *dev,
u32 size, bool iskernel,
struct drm_gem_object **obj)
{
struct drm_gem_vram_object *gbo;
int ret;
*obj = NULL;
size = roundup(size, PAGE_SIZE);
if (size == 0)
return -EINVAL;
gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
if (IS_ERR(gbo)) {
ret = PTR_ERR(gbo);
if (ret != -ERESTARTSYS)
DRM_ERROR("failed to allocate GEM object\n");
return ret;
}
*obj = &gbo->bo.base;
return 0;
}

File diff suppressed because it is too large.

View File

@ -557,12 +557,6 @@ static irqreturn_t atmel_hlcdc_dc_irq_handler(int irq, void *data)
return IRQ_HANDLED;
}
static struct drm_framebuffer *atmel_hlcdc_fb_create(struct drm_device *dev,
struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd)
{
return drm_gem_fb_create(dev, file_priv, mode_cmd);
}
struct atmel_hlcdc_dc_commit {
struct work_struct work;
struct drm_device *dev;
@ -657,7 +651,7 @@ error:
}
static const struct drm_mode_config_funcs mode_config_funcs = {
.fb_create = atmel_hlcdc_fb_create,
.fb_create = drm_gem_fb_create,
.atomic_check = drm_atomic_helper_check,
.atomic_commit = atmel_hlcdc_dc_atomic_commit,
};

View File

@ -604,7 +604,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
int ret;
int i;
if (!state->base.crtc || !fb)
if (!state->base.crtc || WARN_ON(!fb))
return 0;
crtc_state = drm_atomic_get_existing_crtc_state(s->state, s->crtc);

View File

@ -16,16 +16,6 @@ config DRM_PANEL_BRIDGE
menu "Display Interface Bridges"
depends on DRM && DRM_BRIDGE
config DRM_ANALOGIX_ANX78XX
tristate "Analogix ANX78XX bridge"
select DRM_KMS_HELPER
select REGMAP_I2C
---help---
ANX78XX is an ultra-low power Full-HD SlimPort transmitter
designed for portable devices. The ANX78XX transforms
the HDMI output of an application processor to MyDP
or DisplayPort.
config DRM_CDNS_DSI
tristate "Cadence DPI/DSI bridge"
select DRM_KMS_HELPER
@ -60,10 +50,10 @@ config DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW
select DRM_KMS_HELPER
select DRM_PANEL
---help---
This is a driver for the display bridges of
GE B850v3 that convert dual channel LVDS
to DP++. This is used with the i.MX6 imx-ldb
driver. You are likely to say N here.
This is a driver for the display bridges of
GE B850v3 that convert dual channel LVDS
to DP++. This is used with the i.MX6 imx-ldb
driver. You are likely to say N here.
config DRM_NXP_PTN3460
tristate "NXP PTN3460 DP/LVDS bridge"

View File

@ -1,5 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DRM_ANALOGIX_ANX78XX) += analogix-anx78xx.o
obj-$(CONFIG_DRM_CDNS_DSI) += cdns-dsi.o
obj-$(CONFIG_DRM_DUMB_VGA_DAC) += dumb-vga-dac.o
obj-$(CONFIG_DRM_LVDS_ENCODER) += lvds-encoder.o
@ -12,8 +11,9 @@ obj-$(CONFIG_DRM_SII9234) += sii9234.o
obj-$(CONFIG_DRM_THINE_THC63LVD1024) += thc63lvd1024.o
obj-$(CONFIG_DRM_TOSHIBA_TC358764) += tc358764.o
obj-$(CONFIG_DRM_TOSHIBA_TC358767) += tc358767.o
obj-$(CONFIG_DRM_ANALOGIX_DP) += analogix/
obj-$(CONFIG_DRM_I2C_ADV7511) += adv7511/
obj-$(CONFIG_DRM_TI_SN65DSI86) += ti-sn65dsi86.o
obj-$(CONFIG_DRM_TI_TFP410) += ti-tfp410.o
obj-y += analogix/
obj-y += synopsys/

View File

@ -1,703 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright(c) 2016, Analogix Semiconductor. All rights reserved.
*/
#ifndef __ANX78xx_H
#define __ANX78xx_H
/***************************************************************/
/* Register definitions for RX_PO */
/***************************************************************/
/*
* System Control and Status
*/
/* Software Reset Register 1 */
#define SP_SOFTWARE_RESET1_REG 0x11
#define SP_VIDEO_RST BIT(4)
#define SP_HDCP_MAN_RST BIT(2)
#define SP_TMDS_RST BIT(1)
#define SP_SW_MAN_RST BIT(0)
/* System Status Register */
#define SP_SYSTEM_STATUS_REG 0x14
#define SP_TMDS_CLOCK_DET BIT(1)
#define SP_TMDS_DE_DET BIT(0)
/* HDMI Status Register */
#define SP_HDMI_STATUS_REG 0x15
#define SP_HDMI_AUD_LAYOUT BIT(3)
#define SP_HDMI_DET BIT(0)
# define SP_DVI_MODE 0
# define SP_HDMI_MODE 1
/* HDMI Mute Control Register */
#define SP_HDMI_MUTE_CTRL_REG 0x16
#define SP_AUD_MUTE BIT(1)
#define SP_VID_MUTE BIT(0)
/* System Power Down Register 1 */
#define SP_SYSTEM_POWER_DOWN1_REG 0x18
#define SP_PWDN_CTRL BIT(0)
/*
* Audio and Video Auto Control
*/
/* Auto Audio and Video Control register */
#define SP_AUDVID_CTRL_REG 0x20
#define SP_AVC_OE BIT(7)
#define SP_AAC_OE BIT(6)
#define SP_AVC_EN BIT(1)
#define SP_AAC_EN BIT(0)
/* Audio Exception Enable Registers */
#define SP_AUD_EXCEPTION_ENABLE_BASE (0x24 - 1)
/* Bits for Audio Exception Enable Register 3 */
#define SP_AEC_EN21 BIT(5)
/*
* Interrupt
*/
/* Interrupt Status Register 1 */
#define SP_INT_STATUS1_REG 0x31
/* Bits for Interrupt Status Register 1 */
#define SP_HDMI_DVI BIT(7)
#define SP_CKDT_CHG BIT(6)
#define SP_SCDT_CHG BIT(5)
#define SP_PCLK_CHG BIT(4)
#define SP_PLL_UNLOCK BIT(3)
#define SP_CABLE_PLUG_CHG BIT(2)
#define SP_SET_MUTE BIT(1)
#define SP_SW_INTR BIT(0)
/* Bits for Interrupt Status Register 2 */
#define SP_HDCP_ERR BIT(5)
#define SP_AUDIO_SAMPLE_CHG BIT(0) /* undocumented */
/* Bits for Interrupt Status Register 3 */
#define SP_AUD_MODE_CHG BIT(0)
/* Bits for Interrupt Status Register 5 */
#define SP_AUDIO_RCV BIT(0)
/* Bits for Interrupt Status Register 6 */
#define SP_INT_STATUS6_REG 0x36
#define SP_CTS_RCV BIT(7)
#define SP_NEW_AUD_PKT BIT(4)
#define SP_NEW_AVI_PKT BIT(1)
#define SP_NEW_CP_PKT BIT(0)
/* Bits for Interrupt Status Register 7 */
#define SP_NO_VSI BIT(7)
#define SP_NEW_VS BIT(4)
/* Interrupt Mask 1 Status Registers */
#define SP_INT_MASK1_REG 0x41
/* HDMI US TIMER Control Register */
#define SP_HDMI_US_TIMER_CTRL_REG 0x49
#define SP_MS_TIMER_MARGIN_10_8_MASK 0x07
/*
* TMDS Control
*/
/* TMDS Control Registers */
#define SP_TMDS_CTRL_BASE (0x50 - 1)
/* Bits for TMDS Control Register 7 */
#define SP_PD_RT BIT(0)
/*
* Video Control
*/
/* Video Status Register */
#define SP_VIDEO_STATUS_REG 0x70
#define SP_COLOR_DEPTH_MASK 0xf0
#define SP_COLOR_DEPTH_SHIFT 4
# define SP_COLOR_DEPTH_MODE_LEGACY 0x00
# define SP_COLOR_DEPTH_MODE_24BIT 0x04
# define SP_COLOR_DEPTH_MODE_30BIT 0x05
# define SP_COLOR_DEPTH_MODE_36BIT 0x06
# define SP_COLOR_DEPTH_MODE_48BIT 0x07
/* Video Data Range Control Register */
#define SP_VID_DATA_RANGE_CTRL_REG 0x83
#define SP_R2Y_INPUT_LIMIT BIT(1)
/* Pixel Clock High Resolution Counter Registers */
#define SP_PCLK_HIGHRES_CNT_BASE (0x8c - 1)
/*
* Audio Control
*/
/* Number of Audio Channels Status Registers */
#define SP_AUD_CH_STATUS_REG_NUM 6
/* Audio IN S/PDIF Channel Status Registers */
#define SP_AUD_SPDIF_CH_STATUS_BASE 0xc7
/* Audio IN S/PDIF Channel Status Register 4 */
#define SP_FS_FREQ_MASK 0x0f
# define SP_FS_FREQ_44100HZ 0x00
# define SP_FS_FREQ_48000HZ 0x02
# define SP_FS_FREQ_32000HZ 0x03
# define SP_FS_FREQ_88200HZ 0x08
# define SP_FS_FREQ_96000HZ 0x0a
# define SP_FS_FREQ_176400HZ 0x0c
# define SP_FS_FREQ_192000HZ 0x0e
/*
* Micellaneous Control Block
*/
/* CHIP Control Register */
#define SP_CHIP_CTRL_REG 0xe3
#define SP_MAN_HDMI5V_DET BIT(3)
#define SP_PLLLOCK_CKDT_EN BIT(2)
#define SP_ANALOG_CKDT_EN BIT(1)
#define SP_DIGITAL_CKDT_EN BIT(0)
/* Packet Receiving Status Register */
#define SP_PACKET_RECEIVING_STATUS_REG 0xf3
#define SP_AVI_RCVD BIT(5)
#define SP_VSI_RCVD BIT(1)
/***************************************************************/
/* Register definitions for RX_P1 */
/***************************************************************/
/* HDCP BCAPS Shadow Register */
#define SP_HDCP_BCAPS_SHADOW_REG 0x2a
#define SP_BCAPS_REPEATER BIT(5)
/* HDCP Status Register */
#define SP_RX_HDCP_STATUS_REG 0x3f
#define SP_AUTH_EN BIT(4)
/*
* InfoFrame and Control Packet Registers
*/
/* AVI InfoFrame packet checksum */
#define SP_AVI_INFOFRAME_CHECKSUM 0xa3
/* AVI InfoFrame Registers */
#define SP_AVI_INFOFRAME_DATA_BASE 0xa4
#define SP_AVI_COLOR_F_MASK 0x60
#define SP_AVI_COLOR_F_SHIFT 5
/* Audio InfoFrame Registers */
#define SP_AUD_INFOFRAME_DATA_BASE 0xc4
#define SP_AUD_INFOFRAME_LAYOUT_MASK 0x0f
/* MPEG/HDMI Vendor Specific InfoFrame Packet type code */
#define SP_MPEG_VS_INFOFRAME_TYPE_REG 0xe0
/* MPEG/HDMI Vendor Specific InfoFrame Packet length */
#define SP_MPEG_VS_INFOFRAME_LEN_REG 0xe2
/* MPEG/HDMI Vendor Specific InfoFrame Packet version number */
#define SP_MPEG_VS_INFOFRAME_VER_REG 0xe1
/* MPEG/HDMI Vendor Specific InfoFrame Packet content */
#define SP_MPEG_VS_INFOFRAME_DATA_BASE 0xe4
/* General Control Packet Register */
#define SP_GENERAL_CTRL_PACKET_REG 0x9f
#define SP_CLEAR_AVMUTE BIT(4)
#define SP_SET_AVMUTE BIT(0)
/***************************************************************/
/* Register definitions for TX_P0 */
/***************************************************************/
/* HDCP Status Register */
#define SP_TX_HDCP_STATUS_REG 0x00
#define SP_AUTH_FAIL BIT(5)
#define SP_AUTHEN_PASS BIT(1)
/* HDCP Control Register 0 */
#define SP_HDCP_CTRL0_REG 0x01
#define SP_RX_REPEATER BIT(6)
#define SP_RE_AUTH BIT(5)
#define SP_SW_AUTH_OK BIT(4)
#define SP_HARD_AUTH_EN BIT(3)
#define SP_HDCP_ENC_EN BIT(2)
#define SP_BKSV_SRM_PASS BIT(1)
#define SP_KSVLIST_VLD BIT(0)
/* HDCP Function Enabled */
#define SP_HDCP_FUNCTION_ENABLED (BIT(0) | BIT(1) | BIT(2) | BIT(3))
/* HDCP Receiver BSTATUS Register 0 */
#define SP_HDCP_RX_BSTATUS0_REG 0x1b
/* HDCP Receiver BSTATUS Register 1 */
#define SP_HDCP_RX_BSTATUS1_REG 0x1c
/* HDCP Embedded "Blue Screen" Content Registers */
#define SP_HDCP_VID0_BLUE_SCREEN_REG 0x2c
#define SP_HDCP_VID1_BLUE_SCREEN_REG 0x2d
#define SP_HDCP_VID2_BLUE_SCREEN_REG 0x2e
/* HDCP Wait R0 Timing Register */
#define SP_HDCP_WAIT_R0_TIME_REG 0x40
/* HDCP Link Integrity Check Timer Register */
#define SP_HDCP_LINK_CHECK_TIMER_REG 0x41
/* HDCP Repeater Ready Wait Timer Register */
#define SP_HDCP_RPTR_RDY_WAIT_TIME_REG 0x42
/* HDCP Auto Timer Register */
#define SP_HDCP_AUTO_TIMER_REG 0x51
/* HDCP Key Status Register */
#define SP_HDCP_KEY_STATUS_REG 0x5e
/* HDCP Key Command Register */
#define SP_HDCP_KEY_COMMAND_REG 0x5f
#define SP_DISABLE_SYNC_HDCP BIT(2)
/* OTP Memory Key Protection Registers */
#define SP_OTP_KEY_PROTECT1_REG 0x60
#define SP_OTP_KEY_PROTECT2_REG 0x61
#define SP_OTP_KEY_PROTECT3_REG 0x62
#define SP_OTP_PSW1 0xa2
#define SP_OTP_PSW2 0x7e
#define SP_OTP_PSW3 0xc6
/* DP System Control Registers */
#define SP_DP_SYSTEM_CTRL_BASE (0x80 - 1)
/* Bits for DP System Control Register 2 */
#define SP_CHA_STA BIT(2)
/* Bits for DP System Control Register 3 */
#define SP_HPD_STATUS BIT(6)
#define SP_STRM_VALID BIT(2)
/* Bits for DP System Control Register 4 */
#define SP_ENHANCED_MODE BIT(3)
/* DP Video Control Register */
#define SP_DP_VIDEO_CTRL_REG 0x84
#define SP_COLOR_F_MASK 0x06
#define SP_COLOR_F_SHIFT 1
#define SP_BPC_MASK 0xe0
#define SP_BPC_SHIFT 5
# define SP_BPC_6BITS 0x00
# define SP_BPC_8BITS 0x01
# define SP_BPC_10BITS 0x02
# define SP_BPC_12BITS 0x03
/* DP Audio Control Register */
#define SP_DP_AUDIO_CTRL_REG 0x87
#define SP_AUD_EN BIT(0)
/* 10us Pulse Generate Timer Registers */
#define SP_I2C_GEN_10US_TIMER0_REG 0x88
#define SP_I2C_GEN_10US_TIMER1_REG 0x89
/* Packet Send Control Register */
#define SP_PACKET_SEND_CTRL_REG 0x90
#define SP_AUD_IF_UP BIT(7)
#define SP_AVI_IF_UD BIT(6)
#define SP_MPEG_IF_UD BIT(5)
#define SP_SPD_IF_UD BIT(4)
#define SP_AUD_IF_EN BIT(3)
#define SP_AVI_IF_EN BIT(2)
#define SP_MPEG_IF_EN BIT(1)
#define SP_SPD_IF_EN BIT(0)
/* DP HDCP Control Register */
#define SP_DP_HDCP_CTRL_REG 0x92
#define SP_AUTO_EN BIT(7)
#define SP_AUTO_START BIT(5)
#define SP_LINK_POLLING BIT(1)
/* DP Main Link Bandwidth Setting Register */
#define SP_DP_MAIN_LINK_BW_SET_REG 0xa0
#define SP_LINK_BW_SET_MASK 0x1f
#define SP_INITIAL_SLIM_M_AUD_SEL BIT(5)
/* DP Training Pattern Set Register */
#define SP_DP_TRAINING_PATTERN_SET_REG 0xa2
/* DP Lane 0 Link Training Control Register */
#define SP_DP_LANE0_LT_CTRL_REG 0xa3
#define SP_TX_SW_SET_MASK 0x1b
#define SP_MAX_PRE_REACH BIT(5)
#define SP_MAX_DRIVE_REACH BIT(4)
#define SP_PRE_EMP_LEVEL1 BIT(3)
#define SP_DRVIE_CURRENT_LEVEL1 BIT(0)
/* DP Link Training Control Register */
#define SP_DP_LT_CTRL_REG 0xa8
#define SP_LT_ERROR_TYPE_MASK 0x70
# define SP_LT_NO_ERROR 0x00
# define SP_LT_AUX_WRITE_ERROR 0x01
# define SP_LT_MAX_DRIVE_REACHED 0x02
# define SP_LT_WRONG_LANE_COUNT_SET 0x03
# define SP_LT_LOOP_SAME_5_TIME 0x04
# define SP_LT_CR_FAIL_IN_EQ 0x05
# define SP_LT_EQ_LOOP_5_TIME 0x06
#define SP_LT_EN BIT(0)
/* DP CEP Training Control Registers */
#define SP_DP_CEP_TRAINING_CTRL0_REG 0xa9
#define SP_DP_CEP_TRAINING_CTRL1_REG 0xaa
/* DP Debug Register 1 */
#define SP_DP_DEBUG1_REG 0xb0
#define SP_DEBUG_PLL_LOCK BIT(4)
#define SP_POLLING_EN BIT(1)
/* DP Polling Control Register */
#define SP_DP_POLLING_CTRL_REG 0xb4
#define SP_AUTO_POLLING_DISABLE BIT(0)
/* DP Link Debug Control Register */
#define SP_DP_LINK_DEBUG_CTRL_REG 0xb8
#define SP_M_VID_DEBUG BIT(5)
#define SP_NEW_PRBS7 BIT(4)
#define SP_INSERT_ER BIT(1)
#define SP_PRBS31_EN BIT(0)
/* AUX Misc control Register */
#define SP_AUX_MISC_CTRL_REG 0xbf
/* DP PLL control Register */
#define SP_DP_PLL_CTRL_REG 0xc7
#define SP_PLL_RST BIT(6)
/* DP Analog Power Down Register */
#define SP_DP_ANALOG_POWER_DOWN_REG 0xc8
#define SP_CH0_PD BIT(0)
/* DP Misc Control Register */
#define SP_DP_MISC_CTRL_REG 0xcd
#define SP_EQ_TRAINING_LOOP BIT(6)
/* DP Extra I2C Device Address Register */
#define SP_DP_EXTRA_I2C_DEV_ADDR_REG 0xce
#define SP_I2C_STRETCH_DISABLE BIT(7)
#define SP_I2C_EXTRA_ADDR 0x50
/* DP Downspread Control Register 1 */
#define SP_DP_DOWNSPREAD_CTRL1_REG 0xd0
/* DP M Value Calculation Control Register */
#define SP_DP_M_CALCULATION_CTRL_REG 0xd9
#define SP_M_GEN_CLK_SEL BIT(0)
/* AUX Channel Access Status Register */
#define SP_AUX_CH_STATUS_REG 0xe0
#define SP_AUX_STATUS 0x0f
/* AUX Channel DEFER Control Register */
#define SP_AUX_DEFER_CTRL_REG 0xe2
#define SP_DEFER_CTRL_EN BIT(7)
/* DP Buffer Data Count Register */
#define SP_BUF_DATA_COUNT_REG 0xe4
#define SP_BUF_DATA_COUNT_MASK 0x1f
#define SP_BUF_CLR BIT(7)
/* DP AUX Channel Control Register 1 */
#define SP_DP_AUX_CH_CTRL1_REG 0xe5
#define SP_AUX_TX_COMM_MASK 0x0f
#define SP_AUX_LENGTH_MASK 0xf0
#define SP_AUX_LENGTH_SHIFT 4
/* DP AUX CH Address Register 0 */
#define SP_AUX_ADDR_7_0_REG 0xe6
/* DP AUX CH Address Register 1 */
#define SP_AUX_ADDR_15_8_REG 0xe7
/* DP AUX CH Address Register 2 */
#define SP_AUX_ADDR_19_16_REG 0xe8
#define SP_AUX_ADDR_19_16_MASK 0x0f
/* DP AUX Channel Control Register 2 */
#define SP_DP_AUX_CH_CTRL2_REG 0xe9
#define SP_AUX_SEL_RXCM BIT(6)
#define SP_AUX_CHSEL BIT(3)
#define SP_AUX_PN_INV BIT(2)
#define SP_ADDR_ONLY BIT(1)
#define SP_AUX_EN BIT(0)
/* DP Video Stream Control InfoFrame Register */
#define SP_DP_3D_VSC_CTRL_REG 0xea
#define SP_INFO_FRAME_VSC_EN BIT(0)
/* DP Video Stream Data Byte 1 Register */
#define SP_DP_VSC_DB1_REG 0xeb
/* DP AUX Channel Control Register 3 */
#define SP_DP_AUX_CH_CTRL3_REG 0xec
#define SP_WAIT_COUNTER_7_0_MASK 0xff
/* DP AUX Channel Control Register 4 */
#define SP_DP_AUX_CH_CTRL4_REG 0xed
/* DP AUX Buffer Data Registers */
#define SP_DP_BUF_DATA0_REG 0xf0
/***************************************************************/
/* Register definitions for TX_P2 */
/***************************************************************/
/*
* Core Register Definitions
*/
/* Device ID Low Byte Register */
#define SP_DEVICE_IDL_REG 0x02
/* Device ID High Byte Register */
#define SP_DEVICE_IDH_REG 0x03
/* Device version register */
#define SP_DEVICE_VERSION_REG 0x04
/* Power Down Control Register */
#define SP_POWERDOWN_CTRL_REG 0x05
#define SP_REGISTER_PD BIT(7)
#define SP_HDCP_PD BIT(5)
#define SP_AUDIO_PD BIT(4)
#define SP_VIDEO_PD BIT(3)
#define SP_LINK_PD BIT(2)
#define SP_TOTAL_PD BIT(1)
/* Reset Control Register 1 */
#define SP_RESET_CTRL1_REG 0x06
#define SP_MISC_RST BIT(7)
#define SP_VIDCAP_RST BIT(6)
#define SP_VIDFIF_RST BIT(5)
#define SP_AUDFIF_RST BIT(4)
#define SP_AUDCAP_RST BIT(3)
#define SP_HDCP_RST BIT(2)
#define SP_SW_RST BIT(1)
#define SP_HW_RST BIT(0)
/* Reset Control Register 2 */
#define SP_RESET_CTRL2_REG 0x07
#define SP_AUX_RST BIT(2)
#define SP_SERDES_FIFO_RST BIT(1)
#define SP_I2C_REG_RST BIT(0)
/* Video Control Register 1 */
#define SP_VID_CTRL1_REG 0x08
#define SP_VIDEO_EN BIT(7)
#define SP_VIDEO_MUTE BIT(2)
#define SP_DE_GEN BIT(1)
#define SP_DEMUX BIT(0)
/* Video Control Register 2 */
#define SP_VID_CTRL2_REG 0x09
#define SP_IN_COLOR_F_MASK 0x03
#define SP_IN_YC_BIT_SEL BIT(2)
#define SP_IN_BPC_MASK 0x70
#define SP_IN_BPC_SHIFT 4
# define SP_IN_BPC_12BIT 0x03
# define SP_IN_BPC_10BIT 0x02
# define SP_IN_BPC_8BIT 0x01
# define SP_IN_BPC_6BIT 0x00
#define SP_IN_D_RANGE BIT(7)
/* Video Control Register 3 */
#define SP_VID_CTRL3_REG 0x0a
#define SP_HPD_OUT BIT(6)
/* Video Control Register 5 */
#define SP_VID_CTRL5_REG 0x0c
#define SP_CSC_STD_SEL BIT(7)
#define SP_XVYCC_RNG_LMT BIT(6)
#define SP_RANGE_Y2R BIT(5)
#define SP_CSPACE_Y2R BIT(4)
#define SP_RGB_RNG_LMT BIT(3)
#define SP_Y_RNG_LMT BIT(2)
#define SP_RANGE_R2Y BIT(1)
#define SP_CSPACE_R2Y BIT(0)
/* Video Control Register 6 */
#define SP_VID_CTRL6_REG 0x0d
#define SP_TEST_PATTERN_EN BIT(7)
#define SP_VIDEO_PROCESS_EN BIT(6)
#define SP_VID_US_MODE BIT(3)
#define SP_VID_DS_MODE BIT(2)
#define SP_UP_SAMPLE BIT(1)
#define SP_DOWN_SAMPLE BIT(0)
/* Video Control Register 8 */
#define SP_VID_CTRL8_REG 0x0f
#define SP_VID_VRES_TH BIT(0)
/* Total Line Status Low Byte Register */
#define SP_TOTAL_LINE_STAL_REG 0x24
/* Total Line Status High Byte Register */
#define SP_TOTAL_LINE_STAH_REG 0x25
/* Active Line Status Low Byte Register */
#define SP_ACT_LINE_STAL_REG 0x26
/* Active Line Status High Byte Register */
#define SP_ACT_LINE_STAH_REG 0x27
/* Vertical Front Porch Status Register */
#define SP_V_F_PORCH_STA_REG 0x28
/* Vertical SYNC Width Status Register */
#define SP_V_SYNC_STA_REG 0x29
/* Vertical Back Porch Status Register */
#define SP_V_B_PORCH_STA_REG 0x2a
/* Total Pixel Status Low Byte Register */
#define SP_TOTAL_PIXEL_STAL_REG 0x2b
/* Total Pixel Status High Byte Register */
#define SP_TOTAL_PIXEL_STAH_REG 0x2c
/* Active Pixel Status Low Byte Register */
#define SP_ACT_PIXEL_STAL_REG 0x2d
/* Active Pixel Status High Byte Register */
#define SP_ACT_PIXEL_STAH_REG 0x2e
/* Horizontal Front Porch Status Low Byte Register */
#define SP_H_F_PORCH_STAL_REG 0x2f
/* Horizontal Front Porch Statys High Byte Register */
#define SP_H_F_PORCH_STAH_REG 0x30
/* Horizontal SYNC Width Status Low Byte Register */
#define SP_H_SYNC_STAL_REG 0x31
/* Horizontal SYNC Width Status High Byte Register */
#define SP_H_SYNC_STAH_REG 0x32
/* Horizontal Back Porch Status Low Byte Register */
#define SP_H_B_PORCH_STAL_REG 0x33
/* Horizontal Back Porch Status High Byte Register */
#define SP_H_B_PORCH_STAH_REG 0x34
/* InfoFrame AVI Packet DB1 Register */
#define SP_INFOFRAME_AVI_DB1_REG 0x70
/* Bit Control Specific Register */
#define SP_BIT_CTRL_SPECIFIC_REG 0x80
#define SP_BIT_CTRL_SELECT_SHIFT 1
#define SP_ENABLE_BIT_CTRL BIT(0)
/* InfoFrame Audio Packet DB1 Register */
#define SP_INFOFRAME_AUD_DB1_REG 0x83
/* InfoFrame MPEG Packet DB1 Register */
#define SP_INFOFRAME_MPEG_DB1_REG 0xb0
/* Audio Channel Status Registers */
#define SP_AUD_CH_STATUS_BASE 0xd0
/* Audio Channel Num Register 5 */
#define SP_I2S_CHANNEL_NUM_MASK 0xe0
# define SP_I2S_CH_NUM_1 (0x00 << 5)
# define SP_I2S_CH_NUM_2 (0x01 << 5)
# define SP_I2S_CH_NUM_3 (0x02 << 5)
# define SP_I2S_CH_NUM_4 (0x03 << 5)
# define SP_I2S_CH_NUM_5 (0x04 << 5)
# define SP_I2S_CH_NUM_6 (0x05 << 5)
# define SP_I2S_CH_NUM_7 (0x06 << 5)
# define SP_I2S_CH_NUM_8 (0x07 << 5)
#define SP_EXT_VUCP BIT(2)
#define SP_VBIT BIT(1)
#define SP_AUDIO_LAYOUT BIT(0)
/* Analog Debug Register 2 */
#define SP_ANALOG_DEBUG2_REG 0xdd
#define SP_FORCE_SW_OFF_BYPASS 0x20
#define SP_XTAL_FRQ 0x1c
# define SP_XTAL_FRQ_19M2 (0x00 << 2)
# define SP_XTAL_FRQ_24M (0x01 << 2)
# define SP_XTAL_FRQ_25M (0x02 << 2)
# define SP_XTAL_FRQ_26M (0x03 << 2)
# define SP_XTAL_FRQ_27M (0x04 << 2)
# define SP_XTAL_FRQ_38M4 (0x05 << 2)
# define SP_XTAL_FRQ_52M (0x06 << 2)
#define SP_POWERON_TIME_1P5MS 0x03
/* Analog Control 0 Register */
#define SP_ANALOG_CTRL0_REG 0xe1
/* Common Interrupt Status Register 1 */
#define SP_COMMON_INT_STATUS_BASE (0xf1 - 1)
#define SP_PLL_LOCK_CHG 0x40
/* Common Interrupt Status Register 2 */
#define SP_COMMON_INT_STATUS2 0xf2
#define SP_HDCP_AUTH_CHG BIT(1)
#define SP_HDCP_AUTH_DONE BIT(0)
#define SP_HDCP_LINK_CHECK_FAIL BIT(0)
/* Common Interrupt Status Register 4 */
#define SP_COMMON_INT_STATUS4_REG 0xf4
#define SP_HPD_IRQ BIT(6)
#define SP_HPD_ESYNC_ERR BIT(4)
#define SP_HPD_CHG BIT(2)
#define SP_HPD_LOST BIT(1)
#define SP_HPD_PLUG BIT(0)
/* DP Interrupt Status Register */
#define SP_DP_INT_STATUS1_REG 0xf7
#define SP_TRAINING_FINISH BIT(5)
#define SP_POLLING_ERR BIT(4)
/* Common Interrupt Mask Register */
#define SP_COMMON_INT_MASK_BASE (0xf8 - 1)
#define SP_COMMON_INT_MASK4_REG 0xfb
/* DP Interrupts Mask Register */
#define SP_DP_INT_MASK1_REG 0xfe
/* Interrupt Control Register */
#define SP_INT_CTRL_REG 0xff
/***************************************************************/
/* Register definitions for TX_P1 */
/***************************************************************/
/* DP TX Link Training Control Register */
#define SP_DP_TX_LT_CTRL0_REG 0x30
/* PD 1.2 Lint Training 80bit Pattern Register */
#define SP_DP_LT_80BIT_PATTERN0_REG 0x80
#define SP_DP_LT_80BIT_PATTERN_REG_NUM 10
/* Audio Interface Control Register 0 */
#define SP_AUD_INTERFACE_CTRL0_REG 0x5f
#define SP_AUD_INTERFACE_DISABLE 0x80
/* Audio Interface Control Register 2 */
#define SP_AUD_INTERFACE_CTRL2_REG 0x60
#define SP_M_AUD_ADJUST_ST 0x04
/* Audio Interface Control Register 3 */
#define SP_AUD_INTERFACE_CTRL3_REG 0x62
/* Audio Interface Control Register 4 */
#define SP_AUD_INTERFACE_CTRL4_REG 0x67
/* Audio Interface Control Register 5 */
#define SP_AUD_INTERFACE_CTRL5_REG 0x68
/* Audio Interface Control Register 6 */
#define SP_AUD_INTERFACE_CTRL6_REG 0x69
/* Firmware Version Register */
#define SP_FW_VER_REG 0xb7
#endif


@ -1,4 +1,27 @@
# SPDX-License-Identifier: GPL-2.0-only
config DRM_ANALOGIX_ANX6345
tristate "Analogix ANX6345 bridge"
depends on OF
select DRM_ANALOGIX_DP
select DRM_KMS_HELPER
select REGMAP_I2C
help
ANX6345 is an ultra-low power Full-HD DisplayPort/eDP
transmitter designed for portable devices. The
ANX6345 transforms the LVTTL RGB output of an
application processor to eDP or DisplayPort.
config DRM_ANALOGIX_ANX78XX
tristate "Analogix ANX78XX bridge"
select DRM_ANALOGIX_DP
select DRM_KMS_HELPER
select REGMAP_I2C
help
ANX78XX is an ultra-low power Full-HD SlimPort transmitter
designed for portable devices. The ANX78XX transforms
the HDMI output of an application processor to MyDP
or DisplayPort.
config DRM_ANALOGIX_DP
tristate
depends on DRM


@ -1,3 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
analogix_dp-objs := analogix_dp_core.o analogix_dp_reg.o
analogix_dp-objs := analogix_dp_core.o analogix_dp_reg.o analogix-i2c-dptx.o
obj-$(CONFIG_DRM_ANALOGIX_ANX6345) += analogix-anx6345.o
obj-$(CONFIG_DRM_ANALOGIX_ANX78XX) += analogix-anx78xx.o
obj-$(CONFIG_DRM_ANALOGIX_DP) += analogix_dp.o


@ -0,0 +1,817 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright(c) 2016, Analogix Semiconductor.
* Copyright(c) 2017, Icenowy Zheng <icenowy@aosc.io>
*
* Based on anx7808 driver obtained from chromeos with copyright:
* Copyright(c) 2013, Google Inc.
*/
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/regmap.h>
#include <linux/regulator/consumer.h>
#include <linux/types.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_edid.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include "analogix-i2c-dptx.h"
#include "analogix-i2c-txcommon.h"
#define POLL_DELAY 50000 /* us */
#define POLL_TIMEOUT 5000000 /* us */
#define I2C_IDX_DPTX 0
#define I2C_IDX_TXCOM 1
static const u8 anx6345_i2c_addresses[] = {
[I2C_IDX_DPTX] = 0x70,
[I2C_IDX_TXCOM] = 0x72,
};
#define I2C_NUM_ADDRESSES ARRAY_SIZE(anx6345_i2c_addresses)
struct anx6345 {
struct drm_dp_aux aux;
struct drm_bridge bridge;
struct i2c_client *client;
struct edid *edid;
struct drm_connector connector;
struct drm_panel *panel;
struct regulator *dvdd12;
struct regulator *dvdd25;
struct gpio_desc *gpiod_reset;
struct mutex lock; /* protect EDID access */
/* I2C Slave addresses of ANX6345 are mapped as DPTX and SYS */
struct i2c_client *i2c_clients[I2C_NUM_ADDRESSES];
struct regmap *map[I2C_NUM_ADDRESSES];
u16 chipid;
u8 dpcd[DP_RECEIVER_CAP_SIZE];
bool powered;
};
static inline struct anx6345 *connector_to_anx6345(struct drm_connector *c)
{
return container_of(c, struct anx6345, connector);
}
static inline struct anx6345 *bridge_to_anx6345(struct drm_bridge *bridge)
{
return container_of(bridge, struct anx6345, bridge);
}
static int anx6345_set_bits(struct regmap *map, u8 reg, u8 mask)
{
return regmap_update_bits(map, reg, mask, mask);
}
static int anx6345_clear_bits(struct regmap *map, u8 reg, u8 mask)
{
return regmap_update_bits(map, reg, mask, 0);
}
static ssize_t anx6345_aux_transfer(struct drm_dp_aux *aux,
struct drm_dp_aux_msg *msg)
{
struct anx6345 *anx6345 = container_of(aux, struct anx6345, aux);
return anx_dp_aux_transfer(anx6345->map[I2C_IDX_DPTX], msg);
}
static int anx6345_dp_link_training(struct anx6345 *anx6345)
{
unsigned int value;
u8 dp_bw, dpcd[2];
int err;
err = anx6345_clear_bits(anx6345->map[I2C_IDX_TXCOM],
SP_POWERDOWN_CTRL_REG,
SP_TOTAL_PD);
if (err)
return err;
err = drm_dp_dpcd_readb(&anx6345->aux, DP_MAX_LINK_RATE, &dp_bw);
if (err < 0)
return err;
switch (dp_bw) {
case DP_LINK_BW_1_62:
case DP_LINK_BW_2_7:
break;
default:
DRM_DEBUG_KMS("DP bandwidth (%#02x) not supported\n", dp_bw);
return -EINVAL;
}
err = anx6345_set_bits(anx6345->map[I2C_IDX_TXCOM], SP_VID_CTRL1_REG,
SP_VIDEO_MUTE);
if (err)
return err;
err = anx6345_clear_bits(anx6345->map[I2C_IDX_TXCOM],
SP_VID_CTRL1_REG, SP_VIDEO_EN);
if (err)
return err;
/* Get DPCD info */
err = drm_dp_dpcd_read(&anx6345->aux, DP_DPCD_REV,
&anx6345->dpcd, DP_RECEIVER_CAP_SIZE);
if (err < 0) {
DRM_ERROR("Failed to read DPCD: %d\n", err);
return err;
}
/* Clear channel x SERDES power down */
err = anx6345_clear_bits(anx6345->map[I2C_IDX_DPTX],
SP_DP_ANALOG_POWER_DOWN_REG, SP_CH0_PD);
if (err)
return err;
/*
* Power up the sink (DP_SET_POWER register is only available on DPCD
* v1.1 and later).
*/
if (anx6345->dpcd[DP_DPCD_REV] >= 0x11) {
err = drm_dp_dpcd_readb(&anx6345->aux, DP_SET_POWER, &dpcd[0]);
if (err < 0) {
DRM_ERROR("Failed to read DP_SET_POWER register: %d\n",
err);
return err;
}
dpcd[0] &= ~DP_SET_POWER_MASK;
dpcd[0] |= DP_SET_POWER_D0;
err = drm_dp_dpcd_writeb(&anx6345->aux, DP_SET_POWER, dpcd[0]);
if (err < 0) {
DRM_ERROR("Failed to power up DisplayPort link: %d\n",
err);
return err;
}
/*
* According to the DP 1.1 specification, a "Sink Device must
* exit the power saving state within 1 ms" (Section 2.5.3.1,
* Table 5-52, "Sink Control Field" (register 0x600)).
*/
usleep_range(1000, 2000);
}
/* Possibly enable downspread on the sink */
err = regmap_write(anx6345->map[I2C_IDX_DPTX],
SP_DP_DOWNSPREAD_CTRL1_REG, 0);
if (err)
return err;
if (anx6345->dpcd[DP_MAX_DOWNSPREAD] & DP_MAX_DOWNSPREAD_0_5) {
DRM_DEBUG("Enable downspread on the sink\n");
/* 4000PPM */
err = regmap_write(anx6345->map[I2C_IDX_DPTX],
SP_DP_DOWNSPREAD_CTRL1_REG, 8);
if (err)
return err;
err = drm_dp_dpcd_writeb(&anx6345->aux, DP_DOWNSPREAD_CTRL,
DP_SPREAD_AMP_0_5);
if (err < 0)
return err;
} else {
err = drm_dp_dpcd_writeb(&anx6345->aux, DP_DOWNSPREAD_CTRL, 0);
if (err < 0)
return err;
}
/* Set the lane count and the link rate on the sink */
if (drm_dp_enhanced_frame_cap(anx6345->dpcd))
err = anx6345_set_bits(anx6345->map[I2C_IDX_DPTX],
SP_DP_SYSTEM_CTRL_BASE + 4,
SP_ENHANCED_MODE);
else
err = anx6345_clear_bits(anx6345->map[I2C_IDX_DPTX],
SP_DP_SYSTEM_CTRL_BASE + 4,
SP_ENHANCED_MODE);
if (err)
return err;
dpcd[0] = drm_dp_max_link_rate(anx6345->dpcd);
dpcd[0] = drm_dp_link_rate_to_bw_code(dpcd[0]);
err = regmap_write(anx6345->map[I2C_IDX_DPTX],
SP_DP_MAIN_LINK_BW_SET_REG, dpcd[0]);
if (err)
return err;
dpcd[1] = drm_dp_max_lane_count(anx6345->dpcd);
err = regmap_write(anx6345->map[I2C_IDX_DPTX],
SP_DP_LANE_COUNT_SET_REG, dpcd[1]);
if (err)
return err;
if (drm_dp_enhanced_frame_cap(anx6345->dpcd))
dpcd[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
err = drm_dp_dpcd_write(&anx6345->aux, DP_LINK_BW_SET, dpcd,
sizeof(dpcd));
if (err < 0) {
DRM_ERROR("Failed to configure link: %d\n", err);
return err;
}
/* Start training on the source */
err = regmap_write(anx6345->map[I2C_IDX_DPTX], SP_DP_LT_CTRL_REG,
SP_LT_EN);
if (err)
return err;
return regmap_read_poll_timeout(anx6345->map[I2C_IDX_DPTX],
SP_DP_LT_CTRL_REG,
value, !(value & SP_DP_LT_INPROGRESS),
POLL_DELAY, POLL_TIMEOUT);
}
static int anx6345_tx_initialization(struct anx6345 *anx6345)
{
int err, i;
/* FIXME: colordepth is hardcoded for now */
err = regmap_write(anx6345->map[I2C_IDX_TXCOM], SP_VID_CTRL2_REG,
SP_IN_BPC_6BIT << SP_IN_BPC_SHIFT);
if (err)
return err;
err = regmap_write(anx6345->map[I2C_IDX_DPTX], SP_DP_PLL_CTRL_REG, 0);
if (err)
return err;
err = regmap_write(anx6345->map[I2C_IDX_TXCOM],
SP_ANALOG_DEBUG1_REG, 0);
if (err)
return err;
err = regmap_write(anx6345->map[I2C_IDX_DPTX],
SP_DP_LINK_DEBUG_CTRL_REG,
SP_NEW_PRBS7 | SP_M_VID_DEBUG);
if (err)
return err;
err = regmap_write(anx6345->map[I2C_IDX_DPTX],
SP_DP_ANALOG_POWER_DOWN_REG, 0);
if (err)
return err;
/* Force HPD */
err = anx6345_set_bits(anx6345->map[I2C_IDX_DPTX],
SP_DP_SYSTEM_CTRL_BASE + 3,
SP_HPD_FORCE | SP_HPD_CTRL);
if (err)
return err;
for (i = 0; i < 4; i++) {
/* 4 lanes */
err = regmap_write(anx6345->map[I2C_IDX_DPTX],
SP_DP_LANE0_LT_CTRL_REG + i, 0);
if (err)
return err;
}
/* Reset AUX */
err = anx6345_set_bits(anx6345->map[I2C_IDX_TXCOM],
SP_RESET_CTRL2_REG, SP_AUX_RST);
if (err)
return err;
return anx6345_clear_bits(anx6345->map[I2C_IDX_TXCOM],
SP_RESET_CTRL2_REG, SP_AUX_RST);
}
static void anx6345_poweron(struct anx6345 *anx6345)
{
int err;
/* Ensure reset is asserted before starting power on sequence */
gpiod_set_value_cansleep(anx6345->gpiod_reset, 1);
usleep_range(1000, 2000);
err = regulator_enable(anx6345->dvdd12);
if (err) {
DRM_ERROR("Failed to enable dvdd12 regulator: %d\n",
err);
return;
}
/* T1 - delay between VDD12 and VDD25 should be 0-2ms */
usleep_range(1000, 2000);
err = regulator_enable(anx6345->dvdd25);
if (err) {
DRM_ERROR("Failed to enable dvdd25 regulator: %d\n",
err);
return;
}
/* T2 - delay between RESETN and all power rail stable,
* should be 2-5ms
*/
usleep_range(2000, 5000);
gpiod_set_value_cansleep(anx6345->gpiod_reset, 0);
/* Power on registers module */
anx6345_set_bits(anx6345->map[I2C_IDX_TXCOM], SP_POWERDOWN_CTRL_REG,
SP_HDCP_PD | SP_AUDIO_PD | SP_VIDEO_PD | SP_LINK_PD);
anx6345_clear_bits(anx6345->map[I2C_IDX_TXCOM], SP_POWERDOWN_CTRL_REG,
SP_REGISTER_PD | SP_TOTAL_PD);
if (anx6345->panel)
drm_panel_prepare(anx6345->panel);
anx6345->powered = true;
}
static void anx6345_poweroff(struct anx6345 *anx6345)
{
int err;
gpiod_set_value_cansleep(anx6345->gpiod_reset, 1);
usleep_range(1000, 2000);
if (anx6345->panel)
drm_panel_unprepare(anx6345->panel);
err = regulator_disable(anx6345->dvdd25);
if (err) {
DRM_ERROR("Failed to disable dvdd25 regulator: %d\n",
err);
return;
}
usleep_range(5000, 10000);
err = regulator_disable(anx6345->dvdd12);
if (err) {
DRM_ERROR("Failed to disable dvdd12 regulator: %d\n",
err);
return;
}
usleep_range(1000, 2000);
anx6345->powered = false;
}
static int anx6345_start(struct anx6345 *anx6345)
{
int err;
if (!anx6345->powered)
anx6345_poweron(anx6345);
/* Power on needed modules */
err = anx6345_clear_bits(anx6345->map[I2C_IDX_TXCOM],
SP_POWERDOWN_CTRL_REG,
SP_VIDEO_PD | SP_LINK_PD);
err = anx6345_tx_initialization(anx6345);
if (err) {
DRM_ERROR("Failed eDP transmitter initialization: %d\n", err);
anx6345_poweroff(anx6345);
return err;
}
err = anx6345_dp_link_training(anx6345);
if (err) {
DRM_ERROR("Failed link training: %d\n", err);
anx6345_poweroff(anx6345);
return err;
}
/*
* This delay seems to help keep the hardware in a good state. Without
* it, there are times where it fails silently.
*/
usleep_range(10000, 15000);
return 0;
}
static int anx6345_config_dp_output(struct anx6345 *anx6345)
{
int err;
err = anx6345_clear_bits(anx6345->map[I2C_IDX_TXCOM], SP_VID_CTRL1_REG,
SP_VIDEO_MUTE);
if (err)
return err;
/* Enable DP output */
err = anx6345_set_bits(anx6345->map[I2C_IDX_TXCOM], SP_VID_CTRL1_REG,
SP_VIDEO_EN);
if (err)
return err;
/* Force stream valid */
return anx6345_set_bits(anx6345->map[I2C_IDX_DPTX],
SP_DP_SYSTEM_CTRL_BASE + 3,
SP_STRM_FORCE | SP_STRM_CTRL);
}
static int anx6345_get_downstream_info(struct anx6345 *anx6345)
{
u8 value;
int err;
err = drm_dp_dpcd_readb(&anx6345->aux, DP_SINK_COUNT, &value);
if (err < 0) {
DRM_ERROR("Get sink count failed %d\n", err);
return err;
}
if (!DP_GET_SINK_COUNT(value)) {
DRM_ERROR("Downstream disconnected\n");
return -EIO;
}
return 0;
}
static int anx6345_get_modes(struct drm_connector *connector)
{
struct anx6345 *anx6345 = connector_to_anx6345(connector);
int err, num_modes = 0;
bool power_off = false;
mutex_lock(&anx6345->lock);
if (!anx6345->edid) {
if (!anx6345->powered) {
anx6345_poweron(anx6345);
power_off = true;
}
err = anx6345_get_downstream_info(anx6345);
if (err) {
DRM_ERROR("Failed to get downstream info: %d\n", err);
goto unlock;
}
anx6345->edid = drm_get_edid(connector, &anx6345->aux.ddc);
if (!anx6345->edid)
DRM_ERROR("Failed to read EDID from panel\n");
err = drm_connector_update_edid_property(connector,
anx6345->edid);
if (err) {
DRM_ERROR("Failed to update EDID property: %d\n", err);
goto unlock;
}
}
num_modes += drm_add_edid_modes(connector, anx6345->edid);
unlock:
if (power_off)
anx6345_poweroff(anx6345);
mutex_unlock(&anx6345->lock);
if (!num_modes && anx6345->panel)
num_modes += drm_panel_get_modes(anx6345->panel, connector);
return num_modes;
}
static const struct drm_connector_helper_funcs anx6345_connector_helper_funcs = {
.get_modes = anx6345_get_modes,
};
static void
anx6345_connector_destroy(struct drm_connector *connector)
{
struct anx6345 *anx6345 = connector_to_anx6345(connector);
if (anx6345->panel)
drm_panel_detach(anx6345->panel);
drm_connector_cleanup(connector);
}
static const struct drm_connector_funcs anx6345_connector_funcs = {
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = anx6345_connector_destroy,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static int anx6345_bridge_attach(struct drm_bridge *bridge)
{
struct anx6345 *anx6345 = bridge_to_anx6345(bridge);
int err;
if (!bridge->encoder) {
DRM_ERROR("Parent encoder object not found");
return -ENODEV;
}
/* Register aux channel */
anx6345->aux.name = "DP-AUX";
anx6345->aux.dev = &anx6345->client->dev;
anx6345->aux.transfer = anx6345_aux_transfer;
err = drm_dp_aux_register(&anx6345->aux);
if (err < 0) {
DRM_ERROR("Failed to register aux channel: %d\n", err);
return err;
}
err = drm_connector_init(bridge->dev, &anx6345->connector,
&anx6345_connector_funcs,
DRM_MODE_CONNECTOR_eDP);
if (err) {
DRM_ERROR("Failed to initialize connector: %d\n", err);
return err;
}
drm_connector_helper_add(&anx6345->connector,
&anx6345_connector_helper_funcs);
err = drm_connector_register(&anx6345->connector);
if (err) {
DRM_ERROR("Failed to register connector: %d\n", err);
return err;
}
anx6345->connector.polled = DRM_CONNECTOR_POLL_HPD;
err = drm_connector_attach_encoder(&anx6345->connector,
bridge->encoder);
if (err) {
DRM_ERROR("Failed to link up connector to encoder: %d\n", err);
return err;
}
if (anx6345->panel) {
err = drm_panel_attach(anx6345->panel, &anx6345->connector);
if (err) {
DRM_ERROR("Failed to attach panel: %d\n", err);
return err;
}
}
return 0;
}
static enum drm_mode_status
anx6345_bridge_mode_valid(struct drm_bridge *bridge,
const struct drm_display_mode *mode)
{
if (mode->flags & DRM_MODE_FLAG_INTERLACE)
return MODE_NO_INTERLACE;
/* Max 1200p at 5.4 GHz, one lane */
if (mode->clock > 154000)
return MODE_CLOCK_HIGH;
return MODE_OK;
}
static void anx6345_bridge_disable(struct drm_bridge *bridge)
{
struct anx6345 *anx6345 = bridge_to_anx6345(bridge);
/* Power off all modules except configuration registers access */
anx6345_set_bits(anx6345->map[I2C_IDX_TXCOM], SP_POWERDOWN_CTRL_REG,
SP_HDCP_PD | SP_AUDIO_PD | SP_VIDEO_PD | SP_LINK_PD);
if (anx6345->panel)
drm_panel_disable(anx6345->panel);
if (anx6345->powered)
anx6345_poweroff(anx6345);
}
static void anx6345_bridge_enable(struct drm_bridge *bridge)
{
struct anx6345 *anx6345 = bridge_to_anx6345(bridge);
int err;
if (anx6345->panel)
drm_panel_enable(anx6345->panel);
err = anx6345_start(anx6345);
if (err) {
DRM_ERROR("Failed to initialize: %d\n", err);
return;
}
err = anx6345_config_dp_output(anx6345);
if (err)
DRM_ERROR("Failed to enable DP output: %d\n", err);
}
static const struct drm_bridge_funcs anx6345_bridge_funcs = {
.attach = anx6345_bridge_attach,
.mode_valid = anx6345_bridge_mode_valid,
.disable = anx6345_bridge_disable,
.enable = anx6345_bridge_enable,
};
static void unregister_i2c_dummy_clients(struct anx6345 *anx6345)
{
unsigned int i;
for (i = 1; i < ARRAY_SIZE(anx6345->i2c_clients); i++)
if (anx6345->i2c_clients[i] &&
anx6345->i2c_clients[i]->addr != anx6345->client->addr)
i2c_unregister_device(anx6345->i2c_clients[i]);
}
static const struct regmap_config anx6345_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.max_register = 0xff,
.cache_type = REGCACHE_NONE,
};
static const u16 anx6345_chipid_list[] = {
0x6345,
};
static bool anx6345_get_chip_id(struct anx6345 *anx6345)
{
unsigned int i, idl, idh, version;
if (regmap_read(anx6345->map[I2C_IDX_TXCOM], SP_DEVICE_IDL_REG, &idl))
return false;
if (regmap_read(anx6345->map[I2C_IDX_TXCOM], SP_DEVICE_IDH_REG, &idh))
return false;
anx6345->chipid = (u8)idl | ((u8)idh << 8);
if (regmap_read(anx6345->map[I2C_IDX_TXCOM], SP_DEVICE_VERSION_REG,
&version))
return false;
for (i = 0; i < ARRAY_SIZE(anx6345_chipid_list); i++) {
if (anx6345->chipid == anx6345_chipid_list[i]) {
DRM_INFO("Found ANX%x (ver. %d) eDP Transmitter\n",
anx6345->chipid, version);
return true;
}
}
DRM_ERROR("ANX%x (ver. %d) not supported by this driver\n",
anx6345->chipid, version);
return false;
}
static int anx6345_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct anx6345 *anx6345;
struct device *dev;
int i, err;
anx6345 = devm_kzalloc(&client->dev, sizeof(*anx6345), GFP_KERNEL);
if (!anx6345)
return -ENOMEM;
mutex_init(&anx6345->lock);
anx6345->bridge.of_node = client->dev.of_node;
anx6345->client = client;
i2c_set_clientdata(client, anx6345);
dev = &anx6345->client->dev;
err = drm_of_find_panel_or_bridge(client->dev.of_node, 1, 0,
&anx6345->panel, NULL);
if (err == -EPROBE_DEFER)
return err;
if (err)
DRM_DEBUG("No panel found\n");
/* 1.2V digital core power regulator */
anx6345->dvdd12 = devm_regulator_get(dev, "dvdd12-supply");
if (IS_ERR(anx6345->dvdd12)) {
DRM_ERROR("dvdd12-supply not found\n");
return PTR_ERR(anx6345->dvdd12);
}
/* 2.5V digital core power regulator */
anx6345->dvdd25 = devm_regulator_get(dev, "dvdd25-supply");
if (IS_ERR(anx6345->dvdd25)) {
DRM_ERROR("dvdd25-supply not found\n");
return PTR_ERR(anx6345->dvdd25);
}
/* GPIO for chip reset */
anx6345->gpiod_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(anx6345->gpiod_reset)) {
DRM_ERROR("Reset gpio not found\n");
return PTR_ERR(anx6345->gpiod_reset);
}
/* Map slave addresses of ANX6345 */
for (i = 0; i < I2C_NUM_ADDRESSES; i++) {
if (anx6345_i2c_addresses[i] >> 1 != client->addr)
anx6345->i2c_clients[i] = i2c_new_dummy(client->adapter,
anx6345_i2c_addresses[i] >> 1);
else
anx6345->i2c_clients[i] = client;
if (!anx6345->i2c_clients[i]) {
err = -ENOMEM;
DRM_ERROR("Failed to reserve I2C bus %02x\n",
anx6345_i2c_addresses[i]);
goto err_unregister_i2c;
}
anx6345->map[i] = devm_regmap_init_i2c(anx6345->i2c_clients[i],
&anx6345_regmap_config);
if (IS_ERR(anx6345->map[i])) {
err = PTR_ERR(anx6345->map[i]);
DRM_ERROR("Failed regmap initialization %02x\n",
anx6345_i2c_addresses[i]);
goto err_unregister_i2c;
}
}
/* Look for supported chip ID */
anx6345_poweron(anx6345);
if (anx6345_get_chip_id(anx6345)) {
anx6345->bridge.funcs = &anx6345_bridge_funcs;
drm_bridge_add(&anx6345->bridge);
return 0;
} else {
anx6345_poweroff(anx6345);
err = -ENODEV;
}
err_unregister_i2c:
unregister_i2c_dummy_clients(anx6345);
return err;
}
static int anx6345_i2c_remove(struct i2c_client *client)
{
struct anx6345 *anx6345 = i2c_get_clientdata(client);
drm_bridge_remove(&anx6345->bridge);
unregister_i2c_dummy_clients(anx6345);
kfree(anx6345->edid);
mutex_destroy(&anx6345->lock);
return 0;
}
static const struct i2c_device_id anx6345_id[] = {
{ "anx6345", 0 },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(i2c, anx6345_id);
static const struct of_device_id anx6345_match_table[] = {
{ .compatible = "analogix,anx6345", },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, anx6345_match_table);
static struct i2c_driver anx6345_driver = {
.driver = {
.name = "anx6345",
.of_match_table = of_match_ptr(anx6345_match_table),
},
.probe = anx6345_i2c_probe,
.remove = anx6345_i2c_remove,
.id_table = anx6345_id,
};
module_i2c_driver(anx6345_driver);
MODULE_DESCRIPTION("ANX6345 eDP Transmitter driver");
MODULE_AUTHOR("Icenowy Zheng <icenowy@aosc.io>");
MODULE_LICENSE("GPL v2");


@ -36,8 +36,6 @@
#define I2C_IDX_RX_P1 4
#define XTAL_CLK 270 /* 27M */
#define AUX_CH_BUFFER_SIZE 16
#define AUX_WAIT_TIMEOUT_MS 15
static const u8 anx7808_i2c_addresses[] = {
[I2C_IDX_TX_P0] = 0x78,
@ -107,153 +105,11 @@ static int anx78xx_clear_bits(struct regmap *map, u8 reg, u8 mask)
return regmap_update_bits(map, reg, mask, 0);
}
static bool anx78xx_aux_op_finished(struct anx78xx *anx78xx)
{
unsigned int value;
int err;
err = regmap_read(anx78xx->map[I2C_IDX_TX_P0], SP_DP_AUX_CH_CTRL2_REG,
&value);
if (err < 0)
return false;
return (value & SP_AUX_EN) == 0;
}
static int anx78xx_aux_wait(struct anx78xx *anx78xx)
{
unsigned long timeout;
unsigned int status;
int err;
timeout = jiffies + msecs_to_jiffies(AUX_WAIT_TIMEOUT_MS) + 1;
while (!anx78xx_aux_op_finished(anx78xx)) {
if (time_after(jiffies, timeout)) {
if (!anx78xx_aux_op_finished(anx78xx)) {
DRM_ERROR("Timed out waiting AUX to finish\n");
return -ETIMEDOUT;
}
break;
}
usleep_range(1000, 2000);
}
/* Read the AUX channel access status */
err = regmap_read(anx78xx->map[I2C_IDX_TX_P0], SP_AUX_CH_STATUS_REG,
&status);
if (err < 0) {
DRM_ERROR("Failed to read from AUX channel: %d\n", err);
return err;
}
if (status & SP_AUX_STATUS) {
DRM_ERROR("Failed to wait for AUX channel (status: %02x)\n",
status);
return -ETIMEDOUT;
}
return 0;
}
static int anx78xx_aux_address(struct anx78xx *anx78xx, unsigned int addr)
{
int err;
err = regmap_write(anx78xx->map[I2C_IDX_TX_P0], SP_AUX_ADDR_7_0_REG,
addr & 0xff);
if (err)
return err;
err = regmap_write(anx78xx->map[I2C_IDX_TX_P0], SP_AUX_ADDR_15_8_REG,
(addr & 0xff00) >> 8);
if (err)
return err;
/*
* DP AUX CH Address Register #2, only update bits[3:0]
* [7:4] RESERVED
* [3:0] AUX_ADDR[19:16], Register control AUX CH address.
*/
err = regmap_update_bits(anx78xx->map[I2C_IDX_TX_P0],
SP_AUX_ADDR_19_16_REG,
SP_AUX_ADDR_19_16_MASK,
(addr & 0xf0000) >> 16);
if (err)
return err;
return 0;
}
static ssize_t anx78xx_aux_transfer(struct drm_dp_aux *aux,
struct drm_dp_aux_msg *msg)
{
struct anx78xx *anx78xx = container_of(aux, struct anx78xx, aux);
u8 ctrl1 = msg->request;
u8 ctrl2 = SP_AUX_EN;
u8 *buffer = msg->buffer;
int err;
/* The DP AUX transmit and receive buffer has 16 bytes. */
if (WARN_ON(msg->size > AUX_CH_BUFFER_SIZE))
return -E2BIG;
/* Zero-sized messages specify address-only transactions. */
if (msg->size < 1)
ctrl2 |= SP_ADDR_ONLY;
else /* For non-zero-sized set the length field. */
ctrl1 |= (msg->size - 1) << SP_AUX_LENGTH_SHIFT;
if ((msg->request & DP_AUX_I2C_READ) == 0) {
/* When WRITE | MOT write values to data buffer */
err = regmap_bulk_write(anx78xx->map[I2C_IDX_TX_P0],
SP_DP_BUF_DATA0_REG, buffer,
msg->size);
if (err)
return err;
}
/* Write address and request */
err = anx78xx_aux_address(anx78xx, msg->address);
if (err)
return err;
err = regmap_write(anx78xx->map[I2C_IDX_TX_P0], SP_DP_AUX_CH_CTRL1_REG,
ctrl1);
if (err)
return err;
/* Start transaction */
err = regmap_update_bits(anx78xx->map[I2C_IDX_TX_P0],
SP_DP_AUX_CH_CTRL2_REG, SP_ADDR_ONLY |
SP_AUX_EN, ctrl2);
if (err)
return err;
err = anx78xx_aux_wait(anx78xx);
if (err)
return err;
msg->reply = DP_AUX_I2C_REPLY_ACK;
if ((msg->size > 0) && (msg->request & DP_AUX_I2C_READ)) {
/* Read values from data buffer */
err = regmap_bulk_read(anx78xx->map[I2C_IDX_TX_P0],
SP_DP_BUF_DATA0_REG, buffer,
msg->size);
if (err)
return err;
}
err = anx78xx_clear_bits(anx78xx->map[I2C_IDX_TX_P0],
SP_DP_AUX_CH_CTRL2_REG, SP_ADDR_ONLY);
if (err)
return err;
return msg->size;
return anx_dp_aux_transfer(anx78xx->map[I2C_IDX_TX_P0], msg);
}
static int anx78xx_set_hpd(struct anx78xx *anx78xx)


@ -0,0 +1,249 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright(c) 2016, Analogix Semiconductor. All rights reserved.
*/
#ifndef __ANX78xx_H
#define __ANX78xx_H
#include "analogix-i2c-dptx.h"
#include "analogix-i2c-txcommon.h"
/***************************************************************/
/* Register definitions for RX_P0 */
/***************************************************************/
/*
* System Control and Status
*/
/* Software Reset Register 1 */
#define SP_SOFTWARE_RESET1_REG 0x11
#define SP_VIDEO_RST BIT(4)
#define SP_HDCP_MAN_RST BIT(2)
#define SP_TMDS_RST BIT(1)
#define SP_SW_MAN_RST BIT(0)
/* System Status Register */
#define SP_SYSTEM_STATUS_REG 0x14
#define SP_TMDS_CLOCK_DET BIT(1)
#define SP_TMDS_DE_DET BIT(0)
/* HDMI Status Register */
#define SP_HDMI_STATUS_REG 0x15
#define SP_HDMI_AUD_LAYOUT BIT(3)
#define SP_HDMI_DET BIT(0)
# define SP_DVI_MODE 0
# define SP_HDMI_MODE 1
/* HDMI Mute Control Register */
#define SP_HDMI_MUTE_CTRL_REG 0x16
#define SP_AUD_MUTE BIT(1)
#define SP_VID_MUTE BIT(0)
/* System Power Down Register 1 */
#define SP_SYSTEM_POWER_DOWN1_REG 0x18
#define SP_PWDN_CTRL BIT(0)
/*
* Audio and Video Auto Control
*/
/* Auto Audio and Video Control register */
#define SP_AUDVID_CTRL_REG 0x20
#define SP_AVC_OE BIT(7)
#define SP_AAC_OE BIT(6)
#define SP_AVC_EN BIT(1)
#define SP_AAC_EN BIT(0)
/* Audio Exception Enable Registers */
#define SP_AUD_EXCEPTION_ENABLE_BASE (0x24 - 1)
/* Bits for Audio Exception Enable Register 3 */
#define SP_AEC_EN21 BIT(5)
/*
* Interrupt
*/
/* Interrupt Status Register 1 */
#define SP_INT_STATUS1_REG 0x31
/* Bits for Interrupt Status Register 1 */
#define SP_HDMI_DVI BIT(7)
#define SP_CKDT_CHG BIT(6)
#define SP_SCDT_CHG BIT(5)
#define SP_PCLK_CHG BIT(4)
#define SP_PLL_UNLOCK BIT(3)
#define SP_CABLE_PLUG_CHG BIT(2)
#define SP_SET_MUTE BIT(1)
#define SP_SW_INTR BIT(0)
/* Bits for Interrupt Status Register 2 */
#define SP_HDCP_ERR BIT(5)
#define SP_AUDIO_SAMPLE_CHG BIT(0) /* undocumented */
/* Bits for Interrupt Status Register 3 */
#define SP_AUD_MODE_CHG BIT(0)
/* Bits for Interrupt Status Register 5 */
#define SP_AUDIO_RCV BIT(0)
/* Bits for Interrupt Status Register 6 */
#define SP_INT_STATUS6_REG 0x36
#define SP_CTS_RCV BIT(7)
#define SP_NEW_AUD_PKT BIT(4)
#define SP_NEW_AVI_PKT BIT(1)
#define SP_NEW_CP_PKT BIT(0)
/* Bits for Interrupt Status Register 7 */
#define SP_NO_VSI BIT(7)
#define SP_NEW_VS BIT(4)
/* Interrupt Mask 1 Status Registers */
#define SP_INT_MASK1_REG 0x41
/* HDMI US TIMER Control Register */
#define SP_HDMI_US_TIMER_CTRL_REG 0x49
#define SP_MS_TIMER_MARGIN_10_8_MASK 0x07
/*
* TMDS Control
*/
/* TMDS Control Registers */
#define SP_TMDS_CTRL_BASE (0x50 - 1)
/* Bits for TMDS Control Register 7 */
#define SP_PD_RT BIT(0)
/*
* Video Control
*/
/* Video Status Register */
#define SP_VIDEO_STATUS_REG 0x70
#define SP_COLOR_DEPTH_MASK 0xf0
#define SP_COLOR_DEPTH_SHIFT 4
# define SP_COLOR_DEPTH_MODE_LEGACY 0x00
# define SP_COLOR_DEPTH_MODE_24BIT 0x04
# define SP_COLOR_DEPTH_MODE_30BIT 0x05
# define SP_COLOR_DEPTH_MODE_36BIT 0x06
# define SP_COLOR_DEPTH_MODE_48BIT 0x07
/* Video Data Range Control Register */
#define SP_VID_DATA_RANGE_CTRL_REG 0x83
#define SP_R2Y_INPUT_LIMIT BIT(1)
/* Pixel Clock High Resolution Counter Registers */
#define SP_PCLK_HIGHRES_CNT_BASE (0x8c - 1)
/*
* Audio Control
*/
/* Number of Audio Channels Status Registers */
#define SP_AUD_CH_STATUS_REG_NUM 6
/* Audio IN S/PDIF Channel Status Registers */
#define SP_AUD_SPDIF_CH_STATUS_BASE 0xc7
/* Audio IN S/PDIF Channel Status Register 4 */
#define SP_FS_FREQ_MASK 0x0f
# define SP_FS_FREQ_44100HZ 0x00
# define SP_FS_FREQ_48000HZ 0x02
# define SP_FS_FREQ_32000HZ 0x03
# define SP_FS_FREQ_88200HZ 0x08
# define SP_FS_FREQ_96000HZ 0x0a
# define SP_FS_FREQ_176400HZ 0x0c
# define SP_FS_FREQ_192000HZ 0x0e
/*
* Miscellaneous Control Block
*/
/* CHIP Control Register */
#define SP_CHIP_CTRL_REG 0xe3
#define SP_MAN_HDMI5V_DET BIT(3)
#define SP_PLLLOCK_CKDT_EN BIT(2)
#define SP_ANALOG_CKDT_EN BIT(1)
#define SP_DIGITAL_CKDT_EN BIT(0)
/* Packet Receiving Status Register */
#define SP_PACKET_RECEIVING_STATUS_REG 0xf3
#define SP_AVI_RCVD BIT(5)
#define SP_VSI_RCVD BIT(1)
/***************************************************************/
/* Register definitions for RX_P1 */
/***************************************************************/
/* HDCP BCAPS Shadow Register */
#define SP_HDCP_BCAPS_SHADOW_REG 0x2a
#define SP_BCAPS_REPEATER BIT(5)
/* HDCP Status Register */
#define SP_RX_HDCP_STATUS_REG 0x3f
#define SP_AUTH_EN BIT(4)
/*
* InfoFrame and Control Packet Registers
*/
/* AVI InfoFrame packet checksum */
#define SP_AVI_INFOFRAME_CHECKSUM 0xa3
/* AVI InfoFrame Registers */
#define SP_AVI_INFOFRAME_DATA_BASE 0xa4
#define SP_AVI_COLOR_F_MASK 0x60
#define SP_AVI_COLOR_F_SHIFT 5
/* Audio InfoFrame Registers */
#define SP_AUD_INFOFRAME_DATA_BASE 0xc4
#define SP_AUD_INFOFRAME_LAYOUT_MASK 0x0f
/* MPEG/HDMI Vendor Specific InfoFrame Packet type code */
#define SP_MPEG_VS_INFOFRAME_TYPE_REG 0xe0
/* MPEG/HDMI Vendor Specific InfoFrame Packet length */
#define SP_MPEG_VS_INFOFRAME_LEN_REG 0xe2
/* MPEG/HDMI Vendor Specific InfoFrame Packet version number */
#define SP_MPEG_VS_INFOFRAME_VER_REG 0xe1
/* MPEG/HDMI Vendor Specific InfoFrame Packet content */
#define SP_MPEG_VS_INFOFRAME_DATA_BASE 0xe4
/* General Control Packet Register */
#define SP_GENERAL_CTRL_PACKET_REG 0x9f
#define SP_CLEAR_AVMUTE BIT(4)
#define SP_SET_AVMUTE BIT(0)
/***************************************************************/
/* Register definitions for TX_P1 */
/***************************************************************/
/* DP TX Link Training Control Register */
#define SP_DP_TX_LT_CTRL0_REG 0x30
/* DP 1.2 Link Training 80-bit Pattern Register */
#define SP_DP_LT_80BIT_PATTERN0_REG 0x80
#define SP_DP_LT_80BIT_PATTERN_REG_NUM 10
/* Audio Interface Control Register 0 */
#define SP_AUD_INTERFACE_CTRL0_REG 0x5f
#define SP_AUD_INTERFACE_DISABLE 0x80
/* Audio Interface Control Register 2 */
#define SP_AUD_INTERFACE_CTRL2_REG 0x60
#define SP_M_AUD_ADJUST_ST 0x04
/* Audio Interface Control Register 3 */
#define SP_AUD_INTERFACE_CTRL3_REG 0x62
/* Audio Interface Control Register 4 */
#define SP_AUD_INTERFACE_CTRL4_REG 0x67
/* Audio Interface Control Register 5 */
#define SP_AUD_INTERFACE_CTRL5_REG 0x68
/* Audio Interface Control Register 6 */
#define SP_AUD_INTERFACE_CTRL6_REG 0x69
/* Firmware Version Register */
#define SP_FW_VER_REG 0xb7
#endif


@ -0,0 +1,165 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright(c) 2016, Analogix Semiconductor.
*
* Based on anx7808 driver obtained from chromeos with copyright:
* Copyright(c) 2013, Google Inc.
*/
#include <linux/regmap.h>
#include <drm/drm.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_print.h>
#include "analogix-i2c-dptx.h"
#define AUX_WAIT_TIMEOUT_MS 15
#define AUX_CH_BUFFER_SIZE 16
static int anx_i2c_dp_clear_bits(struct regmap *map, u8 reg, u8 mask)
{
return regmap_update_bits(map, reg, mask, 0);
}
static bool anx_dp_aux_op_finished(struct regmap *map_dptx)
{
unsigned int value;
int err;
err = regmap_read(map_dptx, SP_DP_AUX_CH_CTRL2_REG, &value);
if (err < 0)
return false;
return (value & SP_AUX_EN) == 0;
}
static int anx_dp_aux_wait(struct regmap *map_dptx)
{
unsigned long timeout;
unsigned int status;
int err;
timeout = jiffies + msecs_to_jiffies(AUX_WAIT_TIMEOUT_MS) + 1;
while (!anx_dp_aux_op_finished(map_dptx)) {
if (time_after(jiffies, timeout)) {
if (!anx_dp_aux_op_finished(map_dptx)) {
DRM_ERROR("Timed out waiting AUX to finish\n");
return -ETIMEDOUT;
}
break;
}
usleep_range(1000, 2000);
}
/* Read the AUX channel access status */
err = regmap_read(map_dptx, SP_AUX_CH_STATUS_REG, &status);
if (err < 0) {
DRM_ERROR("Failed to read from AUX channel: %d\n", err);
return err;
}
if (status & SP_AUX_STATUS) {
DRM_ERROR("Failed to wait for AUX channel (status: %02x)\n",
status);
return -ETIMEDOUT;
}
return 0;
}
static int anx_dp_aux_address(struct regmap *map_dptx, unsigned int addr)
{
int err;
err = regmap_write(map_dptx, SP_AUX_ADDR_7_0_REG, addr & 0xff);
if (err)
return err;
err = regmap_write(map_dptx, SP_AUX_ADDR_15_8_REG,
(addr & 0xff00) >> 8);
if (err)
return err;
/*
* DP AUX CH Address Register #2, only update bits[3:0]
* [7:4] RESERVED
* [3:0] AUX_ADDR[19:16], Register control AUX CH address.
*/
err = regmap_update_bits(map_dptx, SP_AUX_ADDR_19_16_REG,
SP_AUX_ADDR_19_16_MASK,
(addr & 0xf0000) >> 16);
if (err)
return err;
return 0;
}
ssize_t anx_dp_aux_transfer(struct regmap *map_dptx,
struct drm_dp_aux_msg *msg)
{
u8 ctrl1 = msg->request;
u8 ctrl2 = SP_AUX_EN;
u8 *buffer = msg->buffer;
int err;
/* The DP AUX transmit and receive buffer has 16 bytes. */
if (WARN_ON(msg->size > AUX_CH_BUFFER_SIZE))
return -E2BIG;
/* Zero-sized messages specify address-only transactions. */
if (msg->size < 1)
ctrl2 |= SP_ADDR_ONLY;
else /* For non-zero-sized set the length field. */
ctrl1 |= (msg->size - 1) << SP_AUX_LENGTH_SHIFT;
if ((msg->size > 0) && ((msg->request & DP_AUX_I2C_READ) == 0)) {
/* When WRITE | MOT write values to data buffer */
err = regmap_bulk_write(map_dptx,
SP_DP_BUF_DATA0_REG, buffer,
msg->size);
if (err)
return err;
}
/* Write address and request */
err = anx_dp_aux_address(map_dptx, msg->address);
if (err)
return err;
err = regmap_write(map_dptx, SP_DP_AUX_CH_CTRL1_REG, ctrl1);
if (err)
return err;
/* Start transaction */
err = regmap_update_bits(map_dptx, SP_DP_AUX_CH_CTRL2_REG,
SP_ADDR_ONLY | SP_AUX_EN, ctrl2);
if (err)
return err;
err = anx_dp_aux_wait(map_dptx);
if (err)
return err;
msg->reply = DP_AUX_I2C_REPLY_ACK;
if ((msg->size > 0) && (msg->request & DP_AUX_I2C_READ)) {
/* Read values from data buffer */
err = regmap_bulk_read(map_dptx,
SP_DP_BUF_DATA0_REG, buffer,
msg->size);
if (err)
return err;
}
err = anx_i2c_dp_clear_bits(map_dptx, SP_DP_AUX_CH_CTRL2_REG,
SP_ADDR_ONLY);
if (err)
return err;
return msg->size;
}
EXPORT_SYMBOL_GPL(anx_dp_aux_transfer);
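
Both the new anx6345 driver and the trimmed-down anx78xx transfer hook earlier in this series now forward their struct drm_dp_aux callback to this shared helper. A minimal hedged sketch of that wiring; the foo_bridge names are illustrative, only anx_dp_aux_transfer() comes from this file:

#include <linux/kernel.h>
#include <linux/regmap.h>
#include <drm/drm_dp_helper.h>
#include "analogix-i2c-dptx.h"

struct foo_bridge {
        struct drm_dp_aux aux;
        struct regmap *map_dptx;        /* regmap of the TX_P0 (DPTX) i2c address */
};

static ssize_t foo_aux_transfer(struct drm_dp_aux *aux,
                                struct drm_dp_aux_msg *msg)
{
        struct foo_bridge *foo = container_of(aux, struct foo_bridge, aux);

        /* All AUX register handling is delegated to the shared helper. */
        return anx_dp_aux_transfer(foo->map_dptx, msg);
}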


@ -0,0 +1,256 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright(c) 2016, Analogix Semiconductor.
*
* Based on anx7808 driver obtained from chromeos with copyright:
* Copyright(c) 2013, Google Inc.
*/
#ifndef _ANALOGIX_I2C_DPTX_H_
#define _ANALOGIX_I2C_DPTX_H_
/***************************************************************/
/* Register definitions for TX_P0 */
/***************************************************************/
/* HDCP Status Register */
#define SP_TX_HDCP_STATUS_REG 0x00
#define SP_AUTH_FAIL BIT(5)
#define SP_AUTHEN_PASS BIT(1)
/* HDCP Control Register 0 */
#define SP_HDCP_CTRL0_REG 0x01
#define SP_RX_REPEATER BIT(6)
#define SP_RE_AUTH BIT(5)
#define SP_SW_AUTH_OK BIT(4)
#define SP_HARD_AUTH_EN BIT(3)
#define SP_HDCP_ENC_EN BIT(2)
#define SP_BKSV_SRM_PASS BIT(1)
#define SP_KSVLIST_VLD BIT(0)
/* HDCP Function Enabled */
#define SP_HDCP_FUNCTION_ENABLED (BIT(0) | BIT(1) | BIT(2) | BIT(3))
/* HDCP Receiver BSTATUS Register 0 */
#define SP_HDCP_RX_BSTATUS0_REG 0x1b
/* HDCP Receiver BSTATUS Register 1 */
#define SP_HDCP_RX_BSTATUS1_REG 0x1c
/* HDCP Embedded "Blue Screen" Content Registers */
#define SP_HDCP_VID0_BLUE_SCREEN_REG 0x2c
#define SP_HDCP_VID1_BLUE_SCREEN_REG 0x2d
#define SP_HDCP_VID2_BLUE_SCREEN_REG 0x2e
/* HDCP Wait R0 Timing Register */
#define SP_HDCP_WAIT_R0_TIME_REG 0x40
/* HDCP Link Integrity Check Timer Register */
#define SP_HDCP_LINK_CHECK_TIMER_REG 0x41
/* HDCP Repeater Ready Wait Timer Register */
#define SP_HDCP_RPTR_RDY_WAIT_TIME_REG 0x42
/* HDCP Auto Timer Register */
#define SP_HDCP_AUTO_TIMER_REG 0x51
/* HDCP Key Status Register */
#define SP_HDCP_KEY_STATUS_REG 0x5e
/* HDCP Key Command Register */
#define SP_HDCP_KEY_COMMAND_REG 0x5f
#define SP_DISABLE_SYNC_HDCP BIT(2)
/* OTP Memory Key Protection Registers */
#define SP_OTP_KEY_PROTECT1_REG 0x60
#define SP_OTP_KEY_PROTECT2_REG 0x61
#define SP_OTP_KEY_PROTECT3_REG 0x62
#define SP_OTP_PSW1 0xa2
#define SP_OTP_PSW2 0x7e
#define SP_OTP_PSW3 0xc6
/* DP System Control Registers */
#define SP_DP_SYSTEM_CTRL_BASE (0x80 - 1)
/* Bits for DP System Control Register 2 */
#define SP_CHA_STA BIT(2)
/* Bits for DP System Control Register 3 */
#define SP_HPD_STATUS BIT(6)
#define SP_HPD_FORCE BIT(5)
#define SP_HPD_CTRL BIT(4)
#define SP_STRM_VALID BIT(2)
#define SP_STRM_FORCE BIT(1)
#define SP_STRM_CTRL BIT(0)
/* Bits for DP System Control Register 4 */
#define SP_ENHANCED_MODE BIT(3)
/* DP Video Control Register */
#define SP_DP_VIDEO_CTRL_REG 0x84
#define SP_COLOR_F_MASK 0x06
#define SP_COLOR_F_SHIFT 1
#define SP_BPC_MASK 0xe0
#define SP_BPC_SHIFT 5
# define SP_BPC_6BITS 0x00
# define SP_BPC_8BITS 0x01
# define SP_BPC_10BITS 0x02
# define SP_BPC_12BITS 0x03
/* DP Audio Control Register */
#define SP_DP_AUDIO_CTRL_REG 0x87
#define SP_AUD_EN BIT(0)
/* 10us Pulse Generate Timer Registers */
#define SP_I2C_GEN_10US_TIMER0_REG 0x88
#define SP_I2C_GEN_10US_TIMER1_REG 0x89
/* Packet Send Control Register */
#define SP_PACKET_SEND_CTRL_REG 0x90
#define SP_AUD_IF_UP BIT(7)
#define SP_AVI_IF_UD BIT(6)
#define SP_MPEG_IF_UD BIT(5)
#define SP_SPD_IF_UD BIT(4)
#define SP_AUD_IF_EN BIT(3)
#define SP_AVI_IF_EN BIT(2)
#define SP_MPEG_IF_EN BIT(1)
#define SP_SPD_IF_EN BIT(0)
/* DP HDCP Control Register */
#define SP_DP_HDCP_CTRL_REG 0x92
#define SP_AUTO_EN BIT(7)
#define SP_AUTO_START BIT(5)
#define SP_LINK_POLLING BIT(1)
/* DP Main Link Bandwidth Setting Register */
#define SP_DP_MAIN_LINK_BW_SET_REG 0xa0
#define SP_LINK_BW_SET_MASK 0x1f
#define SP_INITIAL_SLIM_M_AUD_SEL BIT(5)
/* DP Lane Count Setting Register */
#define SP_DP_LANE_COUNT_SET_REG 0xa1
/* DP Training Pattern Set Register */
#define SP_DP_TRAINING_PATTERN_SET_REG 0xa2
/* DP Lane 0 Link Training Control Register */
#define SP_DP_LANE0_LT_CTRL_REG 0xa3
#define SP_TX_SW_SET_MASK 0x1b
#define SP_MAX_PRE_REACH BIT(5)
#define SP_MAX_DRIVE_REACH BIT(4)
#define SP_PRE_EMP_LEVEL1 BIT(3)
#define SP_DRVIE_CURRENT_LEVEL1 BIT(0)
/* DP Link Training Control Register */
#define SP_DP_LT_CTRL_REG 0xa8
#define SP_DP_LT_INPROGRESS 0x80
#define SP_LT_ERROR_TYPE_MASK 0x70
# define SP_LT_NO_ERROR 0x00
# define SP_LT_AUX_WRITE_ERROR 0x01
# define SP_LT_MAX_DRIVE_REACHED 0x02
# define SP_LT_WRONG_LANE_COUNT_SET 0x03
# define SP_LT_LOOP_SAME_5_TIME 0x04
# define SP_LT_CR_FAIL_IN_EQ 0x05
# define SP_LT_EQ_LOOP_5_TIME 0x06
#define SP_LT_EN BIT(0)
/* DP CEP Training Control Registers */
#define SP_DP_CEP_TRAINING_CTRL0_REG 0xa9
#define SP_DP_CEP_TRAINING_CTRL1_REG 0xaa
/* DP Debug Register 1 */
#define SP_DP_DEBUG1_REG 0xb0
#define SP_DEBUG_PLL_LOCK BIT(4)
#define SP_POLLING_EN BIT(1)
/* DP Polling Control Register */
#define SP_DP_POLLING_CTRL_REG 0xb4
#define SP_AUTO_POLLING_DISABLE BIT(0)
/* DP Link Debug Control Register */
#define SP_DP_LINK_DEBUG_CTRL_REG 0xb8
#define SP_M_VID_DEBUG BIT(5)
#define SP_NEW_PRBS7 BIT(4)
#define SP_INSERT_ER BIT(1)
#define SP_PRBS31_EN BIT(0)
/* AUX Misc control Register */
#define SP_AUX_MISC_CTRL_REG 0xbf
/* DP PLL control Register */
#define SP_DP_PLL_CTRL_REG 0xc7
#define SP_PLL_RST BIT(6)
/* DP Analog Power Down Register */
#define SP_DP_ANALOG_POWER_DOWN_REG 0xc8
#define SP_CH0_PD BIT(0)
/* DP Misc Control Register */
#define SP_DP_MISC_CTRL_REG 0xcd
#define SP_EQ_TRAINING_LOOP BIT(6)
/* DP Extra I2C Device Address Register */
#define SP_DP_EXTRA_I2C_DEV_ADDR_REG 0xce
#define SP_I2C_STRETCH_DISABLE BIT(7)
#define SP_I2C_EXTRA_ADDR 0x50
/* DP Downspread Control Register 1 */
#define SP_DP_DOWNSPREAD_CTRL1_REG 0xd0
/* DP M Value Calculation Control Register */
#define SP_DP_M_CALCULATION_CTRL_REG 0xd9
#define SP_M_GEN_CLK_SEL BIT(0)
/* AUX Channel Access Status Register */
#define SP_AUX_CH_STATUS_REG 0xe0
#define SP_AUX_STATUS 0x0f
/* AUX Channel DEFER Control Register */
#define SP_AUX_DEFER_CTRL_REG 0xe2
#define SP_DEFER_CTRL_EN BIT(7)
/* DP Buffer Data Count Register */
#define SP_BUF_DATA_COUNT_REG 0xe4
#define SP_BUF_DATA_COUNT_MASK 0x1f
#define SP_BUF_CLR BIT(7)
/* DP AUX Channel Control Register 1 */
#define SP_DP_AUX_CH_CTRL1_REG 0xe5
#define SP_AUX_TX_COMM_MASK 0x0f
#define SP_AUX_LENGTH_MASK 0xf0
#define SP_AUX_LENGTH_SHIFT 4
/* DP AUX CH Address Register 0 */
#define SP_AUX_ADDR_7_0_REG 0xe6
/* DP AUX CH Address Register 1 */
#define SP_AUX_ADDR_15_8_REG 0xe7
/* DP AUX CH Address Register 2 */
#define SP_AUX_ADDR_19_16_REG 0xe8
#define SP_AUX_ADDR_19_16_MASK 0x0f
/* DP AUX Channel Control Register 2 */
#define SP_DP_AUX_CH_CTRL2_REG 0xe9
#define SP_AUX_SEL_RXCM BIT(6)
#define SP_AUX_CHSEL BIT(3)
#define SP_AUX_PN_INV BIT(2)
#define SP_ADDR_ONLY BIT(1)
#define SP_AUX_EN BIT(0)
/* DP Video Stream Control InfoFrame Register */
#define SP_DP_3D_VSC_CTRL_REG 0xea
#define SP_INFO_FRAME_VSC_EN BIT(0)
/* DP Video Stream Data Byte 1 Register */
#define SP_DP_VSC_DB1_REG 0xeb
/* DP AUX Channel Control Register 3 */
#define SP_DP_AUX_CH_CTRL3_REG 0xec
#define SP_WAIT_COUNTER_7_0_MASK 0xff
/* DP AUX Channel Control Register 4 */
#define SP_DP_AUX_CH_CTRL4_REG 0xed
/* DP AUX Buffer Data Registers */
#define SP_DP_BUF_DATA0_REG 0xf0
ssize_t anx_dp_aux_transfer(struct regmap *map_dptx,
struct drm_dp_aux_msg *msg);
#endif


@ -0,0 +1,234 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright(c) 2016, Analogix Semiconductor. All rights reserved.
*/
#ifndef _ANALOGIX_I2C_TXCOMMON_H_
#define _ANALOGIX_I2C_TXCOMMON_H_
/***************************************************************/
/* Register definitions for TX_P2 */
/***************************************************************/
/*
* Core Register Definitions
*/
/* Device ID Low Byte Register */
#define SP_DEVICE_IDL_REG 0x02
/* Device ID High Byte Register */
#define SP_DEVICE_IDH_REG 0x03
/* Device version register */
#define SP_DEVICE_VERSION_REG 0x04
/* Power Down Control Register */
#define SP_POWERDOWN_CTRL_REG 0x05
#define SP_REGISTER_PD BIT(7)
#define SP_HDCP_PD BIT(5)
#define SP_AUDIO_PD BIT(4)
#define SP_VIDEO_PD BIT(3)
#define SP_LINK_PD BIT(2)
#define SP_TOTAL_PD BIT(1)
/* Reset Control Register 1 */
#define SP_RESET_CTRL1_REG 0x06
#define SP_MISC_RST BIT(7)
#define SP_VIDCAP_RST BIT(6)
#define SP_VIDFIF_RST BIT(5)
#define SP_AUDFIF_RST BIT(4)
#define SP_AUDCAP_RST BIT(3)
#define SP_HDCP_RST BIT(2)
#define SP_SW_RST BIT(1)
#define SP_HW_RST BIT(0)
/* Reset Control Register 2 */
#define SP_RESET_CTRL2_REG 0x07
#define SP_AUX_RST BIT(2)
#define SP_SERDES_FIFO_RST BIT(1)
#define SP_I2C_REG_RST BIT(0)
/* Video Control Register 1 */
#define SP_VID_CTRL1_REG 0x08
#define SP_VIDEO_EN BIT(7)
#define SP_VIDEO_MUTE BIT(2)
#define SP_DE_GEN BIT(1)
#define SP_DEMUX BIT(0)
/* Video Control Register 2 */
#define SP_VID_CTRL2_REG 0x09
#define SP_IN_COLOR_F_MASK 0x03
#define SP_IN_YC_BIT_SEL BIT(2)
#define SP_IN_BPC_MASK 0x70
#define SP_IN_BPC_SHIFT 4
# define SP_IN_BPC_12BIT 0x03
# define SP_IN_BPC_10BIT 0x02
# define SP_IN_BPC_8BIT 0x01
# define SP_IN_BPC_6BIT 0x00
#define SP_IN_D_RANGE BIT(7)
/* Video Control Register 3 */
#define SP_VID_CTRL3_REG 0x0a
#define SP_HPD_OUT BIT(6)
/* Video Control Register 5 */
#define SP_VID_CTRL5_REG 0x0c
#define SP_CSC_STD_SEL BIT(7)
#define SP_XVYCC_RNG_LMT BIT(6)
#define SP_RANGE_Y2R BIT(5)
#define SP_CSPACE_Y2R BIT(4)
#define SP_RGB_RNG_LMT BIT(3)
#define SP_Y_RNG_LMT BIT(2)
#define SP_RANGE_R2Y BIT(1)
#define SP_CSPACE_R2Y BIT(0)
/* Video Control Register 6 */
#define SP_VID_CTRL6_REG 0x0d
#define SP_TEST_PATTERN_EN BIT(7)
#define SP_VIDEO_PROCESS_EN BIT(6)
#define SP_VID_US_MODE BIT(3)
#define SP_VID_DS_MODE BIT(2)
#define SP_UP_SAMPLE BIT(1)
#define SP_DOWN_SAMPLE BIT(0)
/* Video Control Register 8 */
#define SP_VID_CTRL8_REG 0x0f
#define SP_VID_VRES_TH BIT(0)
/* Total Line Status Low Byte Register */
#define SP_TOTAL_LINE_STAL_REG 0x24
/* Total Line Status High Byte Register */
#define SP_TOTAL_LINE_STAH_REG 0x25
/* Active Line Status Low Byte Register */
#define SP_ACT_LINE_STAL_REG 0x26
/* Active Line Status High Byte Register */
#define SP_ACT_LINE_STAH_REG 0x27
/* Vertical Front Porch Status Register */
#define SP_V_F_PORCH_STA_REG 0x28
/* Vertical SYNC Width Status Register */
#define SP_V_SYNC_STA_REG 0x29
/* Vertical Back Porch Status Register */
#define SP_V_B_PORCH_STA_REG 0x2a
/* Total Pixel Status Low Byte Register */
#define SP_TOTAL_PIXEL_STAL_REG 0x2b
/* Total Pixel Status High Byte Register */
#define SP_TOTAL_PIXEL_STAH_REG 0x2c
/* Active Pixel Status Low Byte Register */
#define SP_ACT_PIXEL_STAL_REG 0x2d
/* Active Pixel Status High Byte Register */
#define SP_ACT_PIXEL_STAH_REG 0x2e
/* Horizontal Front Porch Status Low Byte Register */
#define SP_H_F_PORCH_STAL_REG 0x2f
/* Horizontal Front Porch Status High Byte Register */
#define SP_H_F_PORCH_STAH_REG 0x30
/* Horizontal SYNC Width Status Low Byte Register */
#define SP_H_SYNC_STAL_REG 0x31
/* Horizontal SYNC Width Status High Byte Register */
#define SP_H_SYNC_STAH_REG 0x32
/* Horizontal Back Porch Status Low Byte Register */
#define SP_H_B_PORCH_STAL_REG 0x33
/* Horizontal Back Porch Status High Byte Register */
#define SP_H_B_PORCH_STAH_REG 0x34
/* InfoFrame AVI Packet DB1 Register */
#define SP_INFOFRAME_AVI_DB1_REG 0x70
/* Bit Control Specific Register */
#define SP_BIT_CTRL_SPECIFIC_REG 0x80
#define SP_BIT_CTRL_SELECT_SHIFT 1
#define SP_ENABLE_BIT_CTRL BIT(0)
/* InfoFrame Audio Packet DB1 Register */
#define SP_INFOFRAME_AUD_DB1_REG 0x83
/* InfoFrame MPEG Packet DB1 Register */
#define SP_INFOFRAME_MPEG_DB1_REG 0xb0
/* Audio Channel Status Registers */
#define SP_AUD_CH_STATUS_BASE 0xd0
/* Audio Channel Num Register 5 */
#define SP_I2S_CHANNEL_NUM_MASK 0xe0
# define SP_I2S_CH_NUM_1 (0x00 << 5)
# define SP_I2S_CH_NUM_2 (0x01 << 5)
# define SP_I2S_CH_NUM_3 (0x02 << 5)
# define SP_I2S_CH_NUM_4 (0x03 << 5)
# define SP_I2S_CH_NUM_5 (0x04 << 5)
# define SP_I2S_CH_NUM_6 (0x05 << 5)
# define SP_I2S_CH_NUM_7 (0x06 << 5)
# define SP_I2S_CH_NUM_8 (0x07 << 5)
#define SP_EXT_VUCP BIT(2)
#define SP_VBIT BIT(1)
#define SP_AUDIO_LAYOUT BIT(0)
/* Analog Debug Register 1 */
#define SP_ANALOG_DEBUG1_REG 0xdc
/* Analog Debug Register 2 */
#define SP_ANALOG_DEBUG2_REG 0xdd
#define SP_FORCE_SW_OFF_BYPASS 0x20
#define SP_XTAL_FRQ 0x1c
# define SP_XTAL_FRQ_19M2 (0x00 << 2)
# define SP_XTAL_FRQ_24M (0x01 << 2)
# define SP_XTAL_FRQ_25M (0x02 << 2)
# define SP_XTAL_FRQ_26M (0x03 << 2)
# define SP_XTAL_FRQ_27M (0x04 << 2)
# define SP_XTAL_FRQ_38M4 (0x05 << 2)
# define SP_XTAL_FRQ_52M (0x06 << 2)
#define SP_POWERON_TIME_1P5MS 0x03
/* Analog Control 0 Register */
#define SP_ANALOG_CTRL0_REG 0xe1
/* Common Interrupt Status Register 1 */
#define SP_COMMON_INT_STATUS_BASE (0xf1 - 1)
#define SP_PLL_LOCK_CHG 0x40
/* Common Interrupt Status Register 2 */
#define SP_COMMON_INT_STATUS2 0xf2
#define SP_HDCP_AUTH_CHG BIT(1)
#define SP_HDCP_AUTH_DONE BIT(0)
#define SP_HDCP_LINK_CHECK_FAIL BIT(0)
/* Common Interrupt Status Register 4 */
#define SP_COMMON_INT_STATUS4_REG 0xf4
#define SP_HPD_IRQ BIT(6)
#define SP_HPD_ESYNC_ERR BIT(4)
#define SP_HPD_CHG BIT(2)
#define SP_HPD_LOST BIT(1)
#define SP_HPD_PLUG BIT(0)
/* DP Interrupt Status Register */
#define SP_DP_INT_STATUS1_REG 0xf7
#define SP_TRAINING_FINISH BIT(5)
#define SP_POLLING_ERR BIT(4)
/* Common Interrupt Mask Register */
#define SP_COMMON_INT_MASK_BASE (0xf8 - 1)
#define SP_COMMON_INT_MASK4_REG 0xfb
/* DP Interrupts Mask Register */
#define SP_DP_INT_MASK1_REG 0xfe
/* Interrupt Control Register */
#define SP_INT_CTRL_REG 0xff
#endif /* _ANALOGIX_I2C_TXCOMMON_H_ */


@ -1111,7 +1111,7 @@ static int analogix_dp_get_modes(struct drm_connector *connector)
int ret, num_modes = 0;
if (dp->plat_data->panel) {
num_modes += drm_panel_get_modes(dp->plat_data->panel);
num_modes += drm_panel_get_modes(dp->plat_data->panel, connector);
} else {
ret = analogix_dp_prepare_panel(dp, true, false);
if (ret) {


@ -37,7 +37,7 @@ static int panel_bridge_connector_get_modes(struct drm_connector *connector)
struct panel_bridge *panel_bridge =
drm_connector_to_panel_bridge(connector);
return drm_panel_get_modes(panel_bridge->panel);
return drm_panel_get_modes(panel_bridge->panel, connector);
}
static const struct drm_connector_helper_funcs
@ -289,3 +289,21 @@ struct drm_bridge *devm_drm_panel_bridge_add_typed(struct device *dev,
return bridge;
}
EXPORT_SYMBOL(devm_drm_panel_bridge_add_typed);
/**
* drm_panel_bridge_connector - return the connector for the panel bridge
*
* drm_panel_bridge creates the connector.
* This function gives external access to the connector.
*
* Returns: Pointer to drm_connector
*/
struct drm_connector *drm_panel_bridge_connector(struct drm_bridge *bridge)
{
struct panel_bridge *panel_bridge;
panel_bridge = drm_bridge_to_panel_bridge(bridge);
return &panel_bridge->connector;
}
EXPORT_SYMBOL(drm_panel_bridge_connector);
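
A rough usage sketch for the new accessor (the hypothetical foo_output_init() is not part of this series): a driver that wraps its panel with devm_drm_panel_bridge_add() can fetch the connector that the panel bridge created once the bridge has been attached:

#include <linux/err.h>
#include <drm/drm_bridge.h>
#include <drm/drm_connector.h>

static struct drm_connector *
foo_output_init(struct device *dev, struct drm_encoder *encoder,
                struct drm_panel *panel)
{
        struct drm_bridge *bridge;
        int ret;

        bridge = devm_drm_panel_bridge_add(dev, panel);
        if (IS_ERR(bridge))
                return ERR_CAST(bridge);

        /* The panel bridge creates its connector during attach. */
        ret = drm_bridge_attach(encoder, bridge, NULL);
        if (ret)
                return ERR_PTR(ret);

        /*
         * The connector is created and owned by the panel bridge; the
         * driver only borrows the pointer, e.g. to set properties on it.
         */
        return drm_panel_bridge_connector(bridge);
}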


@ -461,7 +461,7 @@ static int ps8622_get_modes(struct drm_connector *connector)
ps8622 = connector_to_ps8622(connector);
return drm_panel_get_modes(ps8622->panel);
return drm_panel_get_modes(ps8622->panel, connector);
}
static const struct drm_connector_helper_funcs ps8622_connector_helper_funcs = {


@ -282,7 +282,7 @@ static int tc358764_get_modes(struct drm_connector *connector)
{
struct tc358764 *ctx = connector_to_tc358764(connector);
return drm_panel_get_modes(ctx->panel);
return drm_panel_get_modes(ctx->panel, connector);
}
static const


@ -1346,7 +1346,7 @@ static int tc_connector_get_modes(struct drm_connector *connector)
return 0;
}
count = drm_panel_get_modes(tc->panel);
count = drm_panel_get_modes(tc->panel, connector);
if (count > 0)
return count;


@ -206,7 +206,7 @@ static int ti_sn_bridge_connector_get_modes(struct drm_connector *connector)
{
struct ti_sn_bridge *pdata = connector_to_ti_sn_bridge(connector);
return drm_panel_get_modes(pdata->panel);
return drm_panel_get_modes(pdata->panel, connector);
}
static enum drm_mode_status


@ -212,7 +212,7 @@ int drm_agp_alloc(struct drm_device *dev, struct drm_agp_buffer *request)
if (!entry)
return -ENOMEM;
pages = (request->size + PAGE_SIZE - 1) / PAGE_SIZE;
pages = DIV_ROUND_UP(request->size, PAGE_SIZE);
type = (u32) request->type;
memory = agp_allocate_memory(dev->agp->bridge, pages, type);
if (!memory) {
@ -325,7 +325,7 @@ int drm_agp_bind(struct drm_device *dev, struct drm_agp_binding *request)
entry = drm_agp_lookup_entry(dev, request->handle);
if (!entry || entry->bound)
return -EINVAL;
page = (request->offset + PAGE_SIZE - 1) / PAGE_SIZE;
page = DIV_ROUND_UP(request->offset, PAGE_SIZE);
retcode = drm_bind_agp(entry->memory, page);
if (retcode)
return retcode;
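
Both hunks in this file are cosmetic: DIV_ROUND_UP() expands to the same round-up division that was open-coded before. A trivial sketch of the equivalence (foo_agp_pages() is illustrative only):

#include <linux/kernel.h>       /* DIV_ROUND_UP() */
#include <linux/mm.h>           /* PAGE_SIZE */

/*
 * DIV_ROUND_UP(size, PAGE_SIZE) == (size + PAGE_SIZE - 1) / PAGE_SIZE,
 * so the page count computed here is unchanged by the conversion.
 */
static unsigned long foo_agp_pages(unsigned long size)
{
        return DIV_ROUND_UP(size, PAGE_SIZE);
}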


@ -688,10 +688,12 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
* associated state struct &drm_private_state.
*
* Similar to userspace-exposed objects, private state structures can be
* acquired by calling drm_atomic_get_private_obj_state(). Since this function
* does not take care of locking, drivers should wrap it for each type of
* private state object they have with the required call to drm_modeset_lock()
* for the corresponding &drm_modeset_lock.
* acquired by calling drm_atomic_get_private_obj_state(). This also takes care
* of locking, hence drivers should not have a need to call drm_modeset_lock()
* directly. Sequence of the actual hardware state commit is not handled,
* drivers might need to keep track of struct drm_crtc_commit within subclassed
* structure of &drm_private_state as necessary, e.g. similar to
* &drm_plane_state.commit. See also &drm_atomic_state.fake_commit.
*
* All private state structures contained in a &drm_atomic_state update can be
* iterated using for_each_oldnew_private_obj_in_state(),
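
A hedged sketch of what the reworded paragraph describes; the foo_* structures are hypothetical, only drm_atomic_get_private_obj_state() and the drm_private_* types are real:

#include <linux/err.h>
#include <linux/kernel.h>
#include <drm/drm_atomic.h>
#include <drm/drm_device.h>

struct foo_private_state {
        struct drm_private_state base;
        bool bandwidth_changed;
};

struct foo_device {
        struct drm_device drm;
        struct drm_private_obj priv_obj;
};

static int foo_check_private_state(struct foo_device *foo,
                                   struct drm_atomic_state *state)
{
        struct drm_private_state *priv;
        struct foo_private_state *foo_state;

        /* Locks the private object through the acquire context in @state. */
        priv = drm_atomic_get_private_obj_state(state, &foo->priv_obj);
        if (IS_ERR(priv))
                return PTR_ERR(priv);

        foo_state = container_of(priv, struct foo_private_state, base);
        foo_state->bandwidth_changed = true;

        return 0;
}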


@ -419,6 +419,7 @@ mode_fixup(struct drm_atomic_state *state)
for_each_new_connector_in_state(state, connector, new_conn_state, i) {
const struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
struct drm_bridge *bridge;
WARN_ON(!!new_conn_state->best_encoder != !!new_conn_state->crtc);
@ -435,8 +436,10 @@ mode_fixup(struct drm_atomic_state *state)
encoder = new_conn_state->best_encoder;
funcs = encoder->helper_private;
ret = drm_bridge_mode_fixup(encoder->bridge, &new_crtc_state->mode,
&new_crtc_state->adjusted_mode);
bridge = drm_bridge_chain_get_first_bridge(encoder);
ret = drm_bridge_chain_mode_fixup(bridge,
&new_crtc_state->mode,
&new_crtc_state->adjusted_mode);
if (!ret) {
DRM_DEBUG_ATOMIC("Bridge fixup failed\n");
return -EINVAL;
@ -492,6 +495,7 @@ static enum drm_mode_status mode_valid_path(struct drm_connector *connector,
struct drm_crtc *crtc,
const struct drm_display_mode *mode)
{
struct drm_bridge *bridge;
enum drm_mode_status ret;
ret = drm_encoder_mode_valid(encoder, mode);
@ -501,7 +505,8 @@ static enum drm_mode_status mode_valid_path(struct drm_connector *connector,
return ret;
}
ret = drm_bridge_mode_valid(encoder->bridge, mode);
bridge = drm_bridge_chain_get_first_bridge(encoder);
ret = drm_bridge_chain_mode_valid(bridge, mode);
if (ret != MODE_OK) {
DRM_DEBUG_ATOMIC("[BRIDGE] mode_valid() failed\n");
return ret;
@ -984,6 +989,7 @@ disable_outputs(struct drm_device *dev, struct drm_atomic_state *old_state)
for_each_oldnew_connector_in_state(old_state, connector, old_conn_state, new_conn_state, i) {
const struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
struct drm_bridge *bridge;
/* Shut down everything that's in the changeset and currently
* still on. So need to check the old, saved state. */
@ -1020,7 +1026,8 @@ disable_outputs(struct drm_device *dev, struct drm_atomic_state *old_state)
* Each encoder has at most one connector (since we always steal
* it away), so we won't call disable hooks twice.
*/
drm_atomic_bridge_disable(encoder->bridge, old_state);
bridge = drm_bridge_chain_get_first_bridge(encoder);
drm_atomic_bridge_chain_disable(bridge, old_state);
/* Right function depends upon target state. */
if (funcs) {
@ -1034,7 +1041,7 @@ disable_outputs(struct drm_device *dev, struct drm_atomic_state *old_state)
funcs->dpms(encoder, DRM_MODE_DPMS_OFF);
}
drm_atomic_bridge_post_disable(encoder->bridge, old_state);
drm_atomic_bridge_chain_post_disable(bridge, old_state);
}
for_each_oldnew_crtc_in_state(old_state, crtc, old_crtc_state, new_crtc_state, i) {
@ -1188,6 +1195,7 @@ crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *old_state)
const struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
struct drm_display_mode *mode, *adjusted_mode;
struct drm_bridge *bridge;
if (!new_conn_state->best_encoder)
continue;
@ -1215,7 +1223,8 @@ crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *old_state)
funcs->mode_set(encoder, mode, adjusted_mode);
}
drm_bridge_mode_set(encoder->bridge, mode, adjusted_mode);
bridge = drm_bridge_chain_get_first_bridge(encoder);
drm_bridge_chain_mode_set(bridge, mode, adjusted_mode);
}
}
@ -1314,6 +1323,7 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
for_each_new_connector_in_state(old_state, connector, new_conn_state, i) {
const struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
struct drm_bridge *bridge;
if (!new_conn_state->best_encoder)
continue;
@ -1332,7 +1342,8 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
* Each encoder has at most one connector (since we always steal
* it away), so we won't call enable hooks twice.
*/
drm_atomic_bridge_pre_enable(encoder->bridge, old_state);
bridge = drm_bridge_chain_get_first_bridge(encoder);
drm_atomic_bridge_chain_pre_enable(bridge, old_state);
if (funcs) {
if (funcs->atomic_enable)
@ -1343,7 +1354,7 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
funcs->commit(encoder);
}
drm_atomic_bridge_enable(encoder->bridge, old_state);
drm_atomic_bridge_chain_enable(bridge, old_state);
}
drm_atomic_helper_commit_writebacks(dev, old_state);
@ -1834,17 +1845,21 @@ EXPORT_SYMBOL(drm_atomic_helper_commit);
/**
* DOC: implementing nonblocking commit
*
* Nonblocking atomic commits have to be implemented in the following sequence:
* Nonblocking atomic commits should use struct &drm_crtc_commit to sequence
* different operations against each another. Locks, especially struct
* &drm_modeset_lock, should not be held in worker threads or any other
* asynchronous context used to commit the hardware state.
*
* 1. Run drm_atomic_helper_prepare_planes() first. This is the only function
* which commit needs to call which can fail, so we want to run it first and
* drm_atomic_helper_commit() implements the recommended sequence for
* nonblocking commits, using drm_atomic_helper_setup_commit() internally:
*
* 1. Run drm_atomic_helper_prepare_planes(). Since this can fail and we
* need to propagate out of memory/VRAM errors to userspace, it must be called
* synchronously.
*
* 2. Synchronize with any outstanding nonblocking commit worker threads which
* might be affected the new state update. This can be done by either cancelling
* or flushing the work items, depending upon whether the driver can deal with
* cancelled updates. Note that it is important to ensure that the framebuffer
* cleanup is still done when cancelling.
* might be affected by the new state update. This is handled by
* drm_atomic_helper_setup_commit().
*
* Asynchronous workers need to have sufficient parallelism to be able to run
* different atomic commits on different CRTCs in parallel. The simplest way to
@ -1855,21 +1870,29 @@ EXPORT_SYMBOL(drm_atomic_helper_commit);
* must be done as one global operation, and enabling or disabling a CRTC can
* take a long time. But even that is not required.
*
* IMPORTANT: A &drm_atomic_state update for multiple CRTCs is sequenced
* against all CRTCs therein. Therefore for atomic state updates which only flip
* planes the driver must not get the struct &drm_crtc_state of unrelated CRTCs
* in its atomic check code: This would prevent committing of atomic updates to
* multiple CRTCs in parallel. In general, adding additional state structures
* should be avoided as much as possible, because this reduces parallelism in
* (nonblocking) commits, both due to locking and due to commit sequencing
* requirements.
*
* 3. The software state is updated synchronously with
* drm_atomic_helper_swap_state(). Doing this under the protection of all modeset
* locks means concurrent callers never see inconsistent state. And doing this
* while it's guaranteed that no relevant nonblocking worker runs means that
* nonblocking workers do not need grab any locks. Actually they must not grab
* locks, for otherwise the work flushing will deadlock.
* locks means concurrent callers never see inconsistent state. Note that commit
* workers do not hold any locks; their access is only coordinated through
* ordering. If workers would access state only through the pointers in the
* free-standing state objects (currently not the case for any driver) then even
* multiple pending commits could be in-flight at the same time.
*
* 4. Schedule a work item to do all subsequent steps, using the split-out
* commit helpers: a) pre-plane commit b) plane commit c) post-plane commit and
* then cleaning up the framebuffers after the old framebuffer is no longer
* being displayed.
*
* The above scheme is implemented in the atomic helper libraries in
* drm_atomic_helper_commit() using a bunch of helper functions. See
* drm_atomic_helper_setup_commit() for a starting point.
* being displayed. The scheduled work should synchronize against other workers
* using the &drm_crtc_commit infrastructure as needed. See
* drm_atomic_helper_setup_commit() for more details.
*/
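As a concrete illustration of step 4 above, here is a minimal commit-tail sketch built only from the split-out helpers, mirroring (but not copied from) the default drm_atomic_helper_commit_tail():

static void example_commit_tail(struct drm_atomic_state *old_state)
{
        struct drm_device *dev = old_state->dev;

        /* a) pre-plane commit: shut down the outgoing configuration. */
        drm_atomic_helper_commit_modeset_disables(dev, old_state);

        /* b) plane commit. */
        drm_atomic_helper_commit_planes(dev, old_state, 0);

        /* c) post-plane commit: light up the new configuration. */
        drm_atomic_helper_commit_modeset_enables(dev, old_state);

        /* Signal hardware completion so stalled later commits can proceed. */
        drm_atomic_helper_commit_hw_done(old_state);

        /* Only now is it safe to clean up the old framebuffers. */
        drm_atomic_helper_wait_for_vblanks(dev, old_state);
        drm_atomic_helper_cleanup_planes(dev, old_state);
}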
static int stall_checks(struct drm_crtc *crtc, bool nonblock)
@ -2098,7 +2121,7 @@ EXPORT_SYMBOL(drm_atomic_helper_setup_commit);
*
* This function waits for all preceding commits that touch the same CRTC as
* @old_state to both be committed to the hardware (as signalled by
* drm_atomic_helper_commit_hw_done) and executed by the hardware (as signalled
* drm_atomic_helper_commit_hw_done()) and executed by the hardware (as signalled
* by calling drm_crtc_send_vblank_event() on the &drm_crtc_state.event).
*
* This is part of the atomic helper support for nonblocking commits, see


@ -55,7 +55,7 @@
* just provide additional hooks to get the desired output at the end of the
* encoder chain.
*
* Bridges can also be chained up using the &drm_bridge.next pointer.
* Bridges can also be chained up using the &drm_bridge.chain_node field.
*
* Both legacy CRTC helpers and the new atomic modeset helpers support bridges.
*/
@ -128,20 +128,21 @@ int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
bridge->dev = encoder->dev;
bridge->encoder = encoder;
if (previous)
list_add(&bridge->chain_node, &previous->chain_node);
else
list_add(&bridge->chain_node, &encoder->bridge_chain);
if (bridge->funcs->attach) {
ret = bridge->funcs->attach(bridge);
if (ret < 0) {
list_del(&bridge->chain_node);
bridge->dev = NULL;
bridge->encoder = NULL;
return ret;
}
}
if (previous)
previous->next = bridge;
else
encoder->bridge = bridge;
return 0;
}
EXPORT_SYMBOL(drm_bridge_attach);
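To illustrate the new list-based chaining, a hedged sketch of a driver building the chain encoder -> bridge_a -> bridge_b (both bridge pointers are hypothetical, obtained elsewhere by the driver):

static int example_attach_bridges(struct drm_encoder *encoder,
                                  struct drm_bridge *bridge_a,
                                  struct drm_bridge *bridge_b)
{
        int ret;

        /* First bridge, closest to the encoder. */
        ret = drm_bridge_attach(encoder, bridge_a, NULL);
        if (ret)
                return ret;

        /* @previous names the bridge already in the chain to hook after. */
        return drm_bridge_attach(encoder, bridge_b, bridge_a);
}

With that chain, drm_bridge_chain_pre_enable() below runs bridge_b before bridge_a, while drm_bridge_chain_enable() and drm_bridge_chain_mode_set() run bridge_a before bridge_b, matching the ordering of the old recursive helpers.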
@ -157,6 +158,7 @@ void drm_bridge_detach(struct drm_bridge *bridge)
if (bridge->funcs->detach)
bridge->funcs->detach(bridge);
list_del(&bridge->chain_node);
bridge->dev = NULL;
}
@ -172,8 +174,8 @@ void drm_bridge_detach(struct drm_bridge *bridge)
*/
/**
* drm_bridge_mode_fixup - fixup proposed mode for all bridges in the
* encoder chain
* drm_bridge_chain_mode_fixup - fixup proposed mode for all bridges in the
* encoder chain
* @bridge: bridge control structure
* @mode: desired mode to be set for the bridge
* @adjusted_mode: updated mode that works for this bridge
@ -186,27 +188,31 @@ void drm_bridge_detach(struct drm_bridge *bridge)
* RETURNS:
* true on success, false on failure
*/
bool drm_bridge_mode_fixup(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
bool drm_bridge_chain_mode_fixup(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
bool ret = true;
struct drm_encoder *encoder;
if (!bridge)
return true;
if (bridge->funcs->mode_fixup)
ret = bridge->funcs->mode_fixup(bridge, mode, adjusted_mode);
encoder = bridge->encoder;
list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
if (!bridge->funcs->mode_fixup)
continue;
ret = ret && drm_bridge_mode_fixup(bridge->next, mode, adjusted_mode);
if (!bridge->funcs->mode_fixup(bridge, mode, adjusted_mode))
return false;
}
return ret;
return true;
}
EXPORT_SYMBOL(drm_bridge_mode_fixup);
EXPORT_SYMBOL(drm_bridge_chain_mode_fixup);
/**
* drm_bridge_mode_valid - validate the mode against all bridges in the
* encoder chain.
* drm_bridge_chain_mode_valid - validate the mode against all bridges in the
* encoder chain.
* @bridge: bridge control structure
* @mode: desired mode to be validated
*
@ -219,26 +225,33 @@ EXPORT_SYMBOL(drm_bridge_mode_fixup);
* RETURNS:
* MODE_OK on success, drm_mode_status Enum error code on failure
*/
enum drm_mode_status drm_bridge_mode_valid(struct drm_bridge *bridge,
const struct drm_display_mode *mode)
enum drm_mode_status
drm_bridge_chain_mode_valid(struct drm_bridge *bridge,
const struct drm_display_mode *mode)
{
enum drm_mode_status ret = MODE_OK;
struct drm_encoder *encoder;
if (!bridge)
return ret;
return MODE_OK;
encoder = bridge->encoder;
list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
enum drm_mode_status ret;
if (!bridge->funcs->mode_valid)
continue;
if (bridge->funcs->mode_valid)
ret = bridge->funcs->mode_valid(bridge, mode);
if (ret != MODE_OK)
return ret;
}
if (ret != MODE_OK)
return ret;
return drm_bridge_mode_valid(bridge->next, mode);
return MODE_OK;
}
EXPORT_SYMBOL(drm_bridge_mode_valid);
EXPORT_SYMBOL(drm_bridge_chain_mode_valid);
/**
* drm_bridge_disable - disables all bridges in the encoder chain
* drm_bridge_chain_disable - disables all bridges in the encoder chain
* @bridge: bridge control structure
*
* Calls &drm_bridge_funcs.disable op for all the bridges in the encoder
@ -247,20 +260,28 @@ EXPORT_SYMBOL(drm_bridge_mode_valid);
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_bridge_disable(struct drm_bridge *bridge)
void drm_bridge_chain_disable(struct drm_bridge *bridge)
{
struct drm_encoder *encoder;
struct drm_bridge *iter;
if (!bridge)
return;
drm_bridge_disable(bridge->next);
encoder = bridge->encoder;
list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
if (iter->funcs->disable)
iter->funcs->disable(iter);
if (bridge->funcs->disable)
bridge->funcs->disable(bridge);
if (iter == bridge)
break;
}
}
EXPORT_SYMBOL(drm_bridge_disable);
EXPORT_SYMBOL(drm_bridge_chain_disable);
/**
* drm_bridge_post_disable - cleans up after disabling all bridges in the encoder chain
* drm_bridge_chain_post_disable - cleans up after disabling all bridges in the
* encoder chain
* @bridge: bridge control structure
*
* Calls &drm_bridge_funcs.post_disable op for all the bridges in the
@ -269,47 +290,53 @@ EXPORT_SYMBOL(drm_bridge_disable);
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_bridge_post_disable(struct drm_bridge *bridge)
void drm_bridge_chain_post_disable(struct drm_bridge *bridge)
{
struct drm_encoder *encoder;
if (!bridge)
return;
if (bridge->funcs->post_disable)
bridge->funcs->post_disable(bridge);
drm_bridge_post_disable(bridge->next);
encoder = bridge->encoder;
list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
if (bridge->funcs->post_disable)
bridge->funcs->post_disable(bridge);
}
}
EXPORT_SYMBOL(drm_bridge_post_disable);
EXPORT_SYMBOL(drm_bridge_chain_post_disable);
/**
* drm_bridge_mode_set - set proposed mode for all bridges in the
* encoder chain
* drm_bridge_chain_mode_set - set proposed mode for all bridges in the
* encoder chain
* @bridge: bridge control structure
* @mode: desired mode to be set for the bridge
* @adjusted_mode: updated mode that works for this bridge
* @mode: desired mode to be set for the encoder chain
* @adjusted_mode: updated mode that works for this encoder chain
*
* Calls &drm_bridge_funcs.mode_set op for all the bridges in the
* encoder chain, starting from the first bridge to the last.
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_bridge_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
void drm_bridge_chain_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct drm_encoder *encoder;
if (!bridge)
return;
if (bridge->funcs->mode_set)
bridge->funcs->mode_set(bridge, mode, adjusted_mode);
drm_bridge_mode_set(bridge->next, mode, adjusted_mode);
encoder = bridge->encoder;
list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
if (bridge->funcs->mode_set)
bridge->funcs->mode_set(bridge, mode, adjusted_mode);
}
}
EXPORT_SYMBOL(drm_bridge_mode_set);
EXPORT_SYMBOL(drm_bridge_chain_mode_set);
/**
* drm_bridge_pre_enable - prepares for enabling all
* bridges in the encoder chain
* drm_bridge_chain_pre_enable - prepares for enabling all bridges in the
* encoder chain
* @bridge: bridge control structure
*
* Calls &drm_bridge_funcs.pre_enable op for all the bridges in the encoder
@ -318,20 +345,24 @@ EXPORT_SYMBOL(drm_bridge_mode_set);
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_bridge_pre_enable(struct drm_bridge *bridge)
void drm_bridge_chain_pre_enable(struct drm_bridge *bridge)
{
struct drm_encoder *encoder;
struct drm_bridge *iter;
if (!bridge)
return;
drm_bridge_pre_enable(bridge->next);
if (bridge->funcs->pre_enable)
bridge->funcs->pre_enable(bridge);
encoder = bridge->encoder;
list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
if (iter->funcs->pre_enable)
iter->funcs->pre_enable(iter);
}
}
EXPORT_SYMBOL(drm_bridge_pre_enable);
EXPORT_SYMBOL(drm_bridge_chain_pre_enable);
/**
* drm_bridge_enable - enables all bridges in the encoder chain
* drm_bridge_chain_enable - enables all bridges in the encoder chain
* @bridge: bridge control structure
*
* Calls &drm_bridge_funcs.enable op for all the bridges in the encoder
@ -340,22 +371,25 @@ EXPORT_SYMBOL(drm_bridge_pre_enable);
*
* Note that the bridge passed should be the one closest to the encoder
*/
void drm_bridge_enable(struct drm_bridge *bridge)
void drm_bridge_chain_enable(struct drm_bridge *bridge)
{
struct drm_encoder *encoder;
if (!bridge)
return;
if (bridge->funcs->enable)
bridge->funcs->enable(bridge);
drm_bridge_enable(bridge->next);
encoder = bridge->encoder;
list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
if (bridge->funcs->enable)
bridge->funcs->enable(bridge);
}
}
EXPORT_SYMBOL(drm_bridge_enable);
EXPORT_SYMBOL(drm_bridge_chain_enable);
/**
* drm_atomic_bridge_disable - disables all bridges in the encoder chain
* drm_atomic_bridge_chain_disable - disables all bridges in the encoder chain
* @bridge: bridge control structure
* @state: atomic state being committed
* @old_state: old atomic state
*
* Calls &drm_bridge_funcs.atomic_disable (falls back on
* &drm_bridge_funcs.disable) op for all the bridges in the encoder chain,
@ -364,26 +398,33 @@ EXPORT_SYMBOL(drm_bridge_enable);
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_atomic_bridge_disable(struct drm_bridge *bridge,
struct drm_atomic_state *state)
void drm_atomic_bridge_chain_disable(struct drm_bridge *bridge,
struct drm_atomic_state *old_state)
{
struct drm_encoder *encoder;
struct drm_bridge *iter;
if (!bridge)
return;
drm_atomic_bridge_disable(bridge->next, state);
encoder = bridge->encoder;
list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
if (iter->funcs->atomic_disable)
iter->funcs->atomic_disable(iter, old_state);
else if (iter->funcs->disable)
iter->funcs->disable(iter);
if (bridge->funcs->atomic_disable)
bridge->funcs->atomic_disable(bridge, state);
else if (bridge->funcs->disable)
bridge->funcs->disable(bridge);
if (iter == bridge)
break;
}
}
EXPORT_SYMBOL(drm_atomic_bridge_disable);
EXPORT_SYMBOL(drm_atomic_bridge_chain_disable);
/**
* drm_atomic_bridge_post_disable - cleans up after disabling all bridges in the
* encoder chain
* drm_atomic_bridge_chain_post_disable - cleans up after disabling all bridges
* in the encoder chain
* @bridge: bridge control structure
* @state: atomic state being committed
* @old_state: old atomic state
*
* Calls &drm_bridge_funcs.atomic_post_disable (falls back on
* &drm_bridge_funcs.post_disable) op for all the bridges in the encoder chain,
@ -392,26 +433,29 @@ EXPORT_SYMBOL(drm_atomic_bridge_disable);
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_atomic_bridge_post_disable(struct drm_bridge *bridge,
struct drm_atomic_state *state)
void drm_atomic_bridge_chain_post_disable(struct drm_bridge *bridge,
struct drm_atomic_state *old_state)
{
struct drm_encoder *encoder;
if (!bridge)
return;
if (bridge->funcs->atomic_post_disable)
bridge->funcs->atomic_post_disable(bridge, state);
else if (bridge->funcs->post_disable)
bridge->funcs->post_disable(bridge);
drm_atomic_bridge_post_disable(bridge->next, state);
encoder = bridge->encoder;
list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
if (bridge->funcs->atomic_post_disable)
bridge->funcs->atomic_post_disable(bridge, old_state);
else if (bridge->funcs->post_disable)
bridge->funcs->post_disable(bridge);
}
}
EXPORT_SYMBOL(drm_atomic_bridge_post_disable);
EXPORT_SYMBOL(drm_atomic_bridge_chain_post_disable);
/**
* drm_atomic_bridge_pre_enable - prepares for enabling all bridges in the
* encoder chain
* drm_atomic_bridge_chain_pre_enable - prepares for enabling all bridges in
* the encoder chain
* @bridge: bridge control structure
* @state: atomic state being committed
* @old_state: old atomic state
*
* Calls &drm_bridge_funcs.atomic_pre_enable (falls back on
* &drm_bridge_funcs.pre_enable) op for all the bridges in the encoder chain,
@ -420,25 +464,32 @@ EXPORT_SYMBOL(drm_atomic_bridge_post_disable);
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_atomic_bridge_pre_enable(struct drm_bridge *bridge,
struct drm_atomic_state *state)
void drm_atomic_bridge_chain_pre_enable(struct drm_bridge *bridge,
struct drm_atomic_state *old_state)
{
struct drm_encoder *encoder;
struct drm_bridge *iter;
if (!bridge)
return;
drm_atomic_bridge_pre_enable(bridge->next, state);
encoder = bridge->encoder;
list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
if (iter->funcs->atomic_pre_enable)
iter->funcs->atomic_pre_enable(iter, old_state);
else if (iter->funcs->pre_enable)
iter->funcs->pre_enable(iter);
if (bridge->funcs->atomic_pre_enable)
bridge->funcs->atomic_pre_enable(bridge, state);
else if (bridge->funcs->pre_enable)
bridge->funcs->pre_enable(bridge);
if (iter == bridge)
break;
}
}
EXPORT_SYMBOL(drm_atomic_bridge_pre_enable);
EXPORT_SYMBOL(drm_atomic_bridge_chain_pre_enable);
/**
* drm_atomic_bridge_enable - enables all bridges in the encoder chain
* drm_atomic_bridge_chain_enable - enables all bridges in the encoder chain
* @bridge: bridge control structure
* @state: atomic state being committed
* @old_state: old atomic state
*
* Calls &drm_bridge_funcs.atomic_enable (falls back on
* &drm_bridge_funcs.enable) op for all the bridges in the encoder chain,
@ -447,20 +498,23 @@ EXPORT_SYMBOL(drm_atomic_bridge_pre_enable);
*
* Note: the bridge passed should be the one closest to the encoder
*/
void drm_atomic_bridge_enable(struct drm_bridge *bridge,
struct drm_atomic_state *state)
void drm_atomic_bridge_chain_enable(struct drm_bridge *bridge,
struct drm_atomic_state *old_state)
{
struct drm_encoder *encoder;
if (!bridge)
return;
if (bridge->funcs->atomic_enable)
bridge->funcs->atomic_enable(bridge, state);
else if (bridge->funcs->enable)
bridge->funcs->enable(bridge);
drm_atomic_bridge_enable(bridge->next, state);
encoder = bridge->encoder;
list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
if (bridge->funcs->atomic_enable)
bridge->funcs->atomic_enable(bridge, old_state);
else if (bridge->funcs->enable)
bridge->funcs->enable(bridge);
}
}
EXPORT_SYMBOL(drm_atomic_bridge_enable);
EXPORT_SYMBOL(drm_atomic_bridge_chain_enable);
#ifdef CONFIG_OF
/**


@ -109,28 +109,38 @@
*/
/**
* drm_color_lut_extract - clamp and round LUT entries
* @user_input: input value
* @bit_precision: number of bits the hw LUT supports
* drm_color_ctm_s31_32_to_qm_n
*
* Extract a degamma/gamma LUT value provided by user (in the form of
* &drm_color_lut entries) and round it to the precision supported by the
* hardware.
* @user_input: input value
* @m: number of integer bits, only support m <= 32, include the sign-bit
* @n: number of fractional bits, only support n <= 32
*
* Convert and clamp S31.32 sign-magnitude to Qm.n (signed 2's complement).
* The sign bit is BIT(m+n-1); it and the bits above it are 0 for a positive
* value and 1 for a negative one. The representable range is
* [-2^(m-1), 2^(m-1) - 2^-n].
*
* For example, a Q3.12 format number:
* - required bits: 3 + 12 = 15 bits
* - range: [-2^2, 2^2 - 2^-12]
*
* NOTE: m can be zero if all of the precision is used for fractional bits,
* as in Q0.32.
*/
uint32_t drm_color_lut_extract(uint32_t user_input, uint32_t bit_precision)
u64 drm_color_ctm_s31_32_to_qm_n(u64 user_input, u32 m, u32 n)
{
uint32_t val = user_input;
uint32_t max = 0xffff >> (16 - bit_precision);
u64 mag = (user_input & ~BIT_ULL(63)) >> (32 - n);
bool negative = !!(user_input & BIT_ULL(63));
s64 val;
/* Round only if we're not using full precision. */
if (bit_precision < 16) {
val += 1UL << (16 - bit_precision - 1);
val >>= 16 - bit_precision;
}
WARN_ON(m > 32 || n > 32);
return clamp_val(val, 0, max);
val = clamp_val(mag, 0, negative ?
BIT_ULL(n + m - 1) : BIT_ULL(n + m - 1) - 1);
return negative ? -val : val;
}
EXPORT_SYMBOL(drm_color_lut_extract);
EXPORT_SYMBOL(drm_color_ctm_s31_32_to_qm_n);
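A hedged usage sketch for a hardware CTM block that takes Q3.12 coefficients; the inputs are S31.32 sign-magnitude values as carried in &drm_color_ctm.matrix:

/* 1.0 in S31.32 is 0x1_0000_0000; in Q3.12 it becomes 0x1000. */
u16 one = drm_color_ctm_s31_32_to_qm_n(0x100000000ULL, 3, 12);

/* -0.5 is the sign bit plus a 0.5 magnitude; truncated to 16 bits: 0xf800. */
u16 minus_half = drm_color_ctm_s31_32_to_qm_n(BIT_ULL(63) | 0x80000000ULL,
                                              3, 12);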
/**
* drm_crtc_enable_color_mgmt - enable color management properties


@ -48,6 +48,8 @@
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>
#include "drm_crtc_helper_internal.h"
/**
* DOC: overview
*


@ -76,6 +76,11 @@ static int drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr,
static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_branch *mstb);
static void
drm_dp_send_clear_payload_id_table(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_branch *mstb);
static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_branch *mstb,
struct drm_dp_mst_port *port);
@ -517,8 +522,10 @@ drm_dp_decode_sideband_req(const struct drm_dp_sideband_msg_tx *raw,
}
if (failed) {
for (i = 0; i < r->num_transactions; i++)
for (i = 0; i < r->num_transactions; i++) {
tx = &r->transactions[i];
kfree(tx->bytes);
}
return -ENOMEM;
}
@ -950,6 +957,8 @@ static bool drm_dp_sideband_parse_reply(struct drm_dp_sideband_msg_rx *raw,
case DP_POWER_DOWN_PHY:
case DP_POWER_UP_PHY:
return drm_dp_sideband_parse_power_updown_phy_ack(raw, msg);
case DP_CLEAR_PAYLOAD_ID_TABLE:
return true; /* since there's nothing to parse */
default:
DRM_ERROR("Got unknown reply 0x%02x (%s)\n", msg->req_type,
drm_dp_mst_req_type_str(msg->req_type));
@ -1048,6 +1057,15 @@ static int build_link_address(struct drm_dp_sideband_msg_tx *msg)
return 0;
}
static int build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
{
struct drm_dp_sideband_msg_req_body req;
req.req_type = DP_CLEAR_PAYLOAD_ID_TABLE;
drm_dp_encode_sideband_req(&req, msg);
return 0;
}
static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg, int port_num)
{
struct drm_dp_sideband_msg_req_body req;
@ -2520,10 +2538,14 @@ static void drm_dp_mst_link_probe_work(struct work_struct *work)
struct drm_device *dev = mgr->dev;
struct drm_dp_mst_branch *mstb;
int ret;
bool clear_payload_id_table;
mutex_lock(&mgr->probe_lock);
mutex_lock(&mgr->lock);
clear_payload_id_table = !mgr->payload_id_table_cleared;
mgr->payload_id_table_cleared = true;
mstb = mgr->mst_primary;
if (mstb) {
ret = drm_dp_mst_topology_try_get_mstb(mstb);
@ -2536,6 +2558,19 @@ static void drm_dp_mst_link_probe_work(struct work_struct *work)
return;
}
/*
* Certain branch devices seem to incorrectly report an available_pbn
* of 0 on downstream sinks, even after clearing the
* DP_PAYLOAD_ALLOCATE_* registers in
* drm_dp_mst_topology_mgr_set_mst(). Namely, the CableMatters USB-C
* 2x DP hub. Sending a CLEAR_PAYLOAD_ID_TABLE message seems to make
* things work again.
*/
if (clear_payload_id_table) {
DRM_DEBUG_KMS("Clearing payload ID table\n");
drm_dp_send_clear_payload_id_table(mgr, mstb);
}
ret = drm_dp_check_and_send_link_address(mgr, mstb);
drm_dp_mst_topology_put_mstb(mstb);
@ -2859,6 +2894,28 @@ out:
return ret < 0 ? ret : changed;
}
void drm_dp_send_clear_payload_id_table(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_branch *mstb)
{
struct drm_dp_sideband_msg_tx *txmsg;
int len, ret;
txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
if (!txmsg)
return;
txmsg->dst = mstb;
len = build_clear_payload_id_table(txmsg);
drm_dp_queue_down_tx(mgr, txmsg);
ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);
if (ret > 0 && txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK)
DRM_DEBUG_KMS("clear payload table id nak received\n");
kfree(txmsg);
}
static int
drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_branch *mstb,
@ -3388,6 +3445,7 @@ static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8 dp_link_count)
int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool mst_state)
{
int ret = 0;
int i = 0;
struct drm_dp_mst_branch *mstb = NULL;
mutex_lock(&mgr->lock);
@ -3448,10 +3506,23 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
/* this can fail if the device is gone */
drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0);
ret = 0;
mutex_lock(&mgr->payload_lock);
memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct drm_dp_payload));
mgr->payload_mask = 0;
set_bit(0, &mgr->payload_mask);
for (i = 0; i < mgr->max_payloads; i++) {
struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i];
if (vcpi) {
vcpi->vcpi = 0;
vcpi->num_slots = 0;
}
mgr->proposed_vcpis[i] = NULL;
}
mgr->vcpi_mask = 0;
mutex_unlock(&mgr->payload_lock);
mgr->payload_id_table_cleared = false;
}
out_unlock:


@ -1391,25 +1391,25 @@ static const struct drm_display_mode edid_4k_modes[] = {
3840, 4016, 4104, 4400, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 30, },
.vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 2 - 3840x2160@25Hz */
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000,
3840, 4896, 4984, 5280, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 25, },
.vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 3 - 3840x2160@24Hz */
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000,
3840, 5116, 5204, 5500, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 24, },
.vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 4 - 4096x2160@24Hz (SMPTE) */
{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000,
4096, 5116, 5204, 5500, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 24, },
.vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
};
/*** DDC fetch and block validation ***/
@ -3214,20 +3214,18 @@ static enum hdmi_picture_aspect drm_get_cea_aspect_ratio(const u8 video_code)
return edid_cea_modes[video_code].picture_aspect_ratio;
}
static enum hdmi_picture_aspect drm_get_hdmi_aspect_ratio(const u8 video_code)
{
return edid_4k_modes[video_code].picture_aspect_ratio;
}
/*
* Calculate the alternate clock for HDMI modes (those from the HDMI vendor
* specific block).
*
* It's almost like cea_mode_alternate_clock(), we just need to add an
* exception for the VIC 4 mode (4096x2160@24Hz): no alternate clock for this
* one.
*/
static unsigned int
hdmi_mode_alternate_clock(const struct drm_display_mode *hdmi_mode)
{
if (hdmi_mode->hdisplay == 4096 && hdmi_mode->vdisplay == 2160)
return hdmi_mode->clock;
return cea_mode_alternate_clock(hdmi_mode);
}
@ -3240,6 +3238,9 @@ static u8 drm_match_hdmi_mode_clock_tolerance(const struct drm_display_mode *to_
if (!to_match->clock)
return 0;
if (to_match->picture_aspect_ratio)
match_flags |= DRM_MODE_MATCH_ASPECT_RATIO;
for (vic = 1; vic < ARRAY_SIZE(edid_4k_modes); vic++) {
const struct drm_display_mode *hdmi_mode = &edid_4k_modes[vic];
unsigned int clock1, clock2;
@ -3275,6 +3276,9 @@ static u8 drm_match_hdmi_mode(const struct drm_display_mode *to_match)
if (!to_match->clock)
return 0;
if (to_match->picture_aspect_ratio)
match_flags |= DRM_MODE_MATCH_ASPECT_RATIO;
for (vic = 1; vic < ARRAY_SIZE(edid_4k_modes); vic++) {
const struct drm_display_mode *hdmi_mode = &edid_4k_modes[vic];
unsigned int clock1, clock2;
@ -4279,12 +4283,12 @@ int drm_edid_to_sad(struct edid *edid, struct cea_sad **sads)
cea = drm_find_cea_extension(edid);
if (!cea) {
DRM_DEBUG_KMS("SAD: no CEA Extension found\n");
return -ENOENT;
return 0;
}
if (cea_revision(cea) < 3) {
DRM_DEBUG_KMS("SAD: wrong CEA revision\n");
return -EOPNOTSUPP;
return 0;
}
if (cea_db_offsets(cea, &start, &end)) {
@ -4340,12 +4344,12 @@ int drm_edid_to_speaker_allocation(struct edid *edid, u8 **sadb)
cea = drm_find_cea_extension(edid);
if (!cea) {
DRM_DEBUG_KMS("SAD: no CEA Extension found\n");
return -ENOENT;
return 0;
}
if (cea_revision(cea) < 3) {
DRM_DEBUG_KMS("SAD: wrong CEA revision\n");
return -EOPNOTSUPP;
return 0;
}
if (cea_db_offsets(cea, &start, &end)) {
@ -5222,6 +5226,7 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
const struct drm_display_mode *mode)
{
enum hdmi_picture_aspect picture_aspect;
u8 vic, hdmi_vic;
int err;
if (!frame || !mode)
@ -5234,7 +5239,8 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
if (mode->flags & DRM_MODE_FLAG_DBLCLK)
frame->pixel_repeat = 1;
frame->video_code = drm_mode_cea_vic(connector, mode);
vic = drm_mode_cea_vic(connector, mode);
hdmi_vic = drm_mode_hdmi_vic(connector, mode);
frame->picture_aspect = HDMI_PICTURE_ASPECT_NONE;
@ -5248,11 +5254,15 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
/*
* Populate picture aspect ratio from either
* user input (if specified) or from the CEA mode list.
* user input (if specified) or from the CEA/HDMI mode lists.
*/
picture_aspect = mode->picture_aspect_ratio;
if (picture_aspect == HDMI_PICTURE_ASPECT_NONE)
picture_aspect = drm_get_cea_aspect_ratio(frame->video_code);
if (picture_aspect == HDMI_PICTURE_ASPECT_NONE) {
if (vic)
picture_aspect = drm_get_cea_aspect_ratio(vic);
else if (hdmi_vic)
picture_aspect = drm_get_hdmi_aspect_ratio(hdmi_vic);
}
/*
* The infoframe can't convey anything but none, 4:3
@ -5260,12 +5270,20 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
* we can only satisfy it by specifying the right VIC.
*/
if (picture_aspect > HDMI_PICTURE_ASPECT_16_9) {
if (picture_aspect !=
drm_get_cea_aspect_ratio(frame->video_code))
if (vic) {
if (picture_aspect != drm_get_cea_aspect_ratio(vic))
return -EINVAL;
} else if (hdmi_vic) {
if (picture_aspect != drm_get_hdmi_aspect_ratio(hdmi_vic))
return -EINVAL;
} else {
return -EINVAL;
}
picture_aspect = HDMI_PICTURE_ASPECT_NONE;
}
frame->video_code = vic;
frame->picture_aspect = picture_aspect;
frame->active_aspect = HDMI_ACTIVE_ASPECT_PICTURE;
frame->scan_mode = HDMI_SCAN_MODE_UNDERSCAN;


@ -140,6 +140,7 @@ int drm_encoder_init(struct drm_device *dev,
goto out_put;
}
INIT_LIST_HEAD(&encoder->bridge_chain);
list_add_tail(&encoder->head, &dev->mode_config.encoder_list);
encoder->index = dev->mode_config.num_encoder++;
@ -160,22 +161,16 @@ EXPORT_SYMBOL(drm_encoder_init);
void drm_encoder_cleanup(struct drm_encoder *encoder)
{
struct drm_device *dev = encoder->dev;
struct drm_bridge *bridge, *next;
/* Note that the encoder_list is considered to be static; should we
* remove the drm_encoder at runtime we would have to decrement all
* the indices on the drm_encoder after us in the encoder_list.
*/
if (encoder->bridge) {
struct drm_bridge *bridge = encoder->bridge;
struct drm_bridge *next;
while (bridge) {
next = bridge->next;
drm_bridge_detach(bridge);
bridge = next;
}
}
list_for_each_entry_safe(bridge, next, &encoder->bridge_chain,
chain_node)
drm_bridge_detach(bridge);
drm_mode_object_unregister(dev, &encoder->base);
kfree(encoder->name);


@ -95,10 +95,6 @@ static DEFINE_MUTEX(kernel_fb_helper_lock);
* It will automatically set up deferred I/O if the driver requires a shadow
* buffer.
*
* For other drivers, setup fbdev emulation by calling
* drm_fb_helper_fbdev_setup() and tear it down by calling
* drm_fb_helper_fbdev_teardown().
*
* At runtime drivers should restore the fbdev console by using
* drm_fb_helper_lastclose() as their &drm_driver.lastclose callback.
* They should also notify the fb helper code from updates to the output
@ -567,8 +563,7 @@ EXPORT_SYMBOL(drm_fb_helper_unregister_fbi);
* drm_fb_helper_fini - finalize a &struct drm_fb_helper
* @fb_helper: driver-allocated fbdev helper, can be NULL
*
* This cleans up all remaining resources associated with @fb_helper. Must be
* called after drm_fb_helper_unlink_fbi() was called.
* This cleans up all remaining resources associated with @fb_helper.
*/
void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
{
@ -608,19 +603,6 @@ void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
}
EXPORT_SYMBOL(drm_fb_helper_fini);
/**
* drm_fb_helper_unlink_fbi - wrapper around unlink_framebuffer
* @fb_helper: driver-allocated fbdev helper, can be NULL
*
* A wrapper around unlink_framebuffer implemented by fbdev core
*/
void drm_fb_helper_unlink_fbi(struct drm_fb_helper *fb_helper)
{
if (fb_helper && fb_helper->fbdev)
unlink_framebuffer(fb_helper->fbdev);
}
EXPORT_SYMBOL(drm_fb_helper_unlink_fbi);
static bool drm_fbdev_use_shadow_fb(struct drm_fb_helper *fb_helper)
{
struct drm_device *dev = fb_helper->dev;
@ -1919,108 +1901,6 @@ int drm_fb_helper_hotplug_event(struct drm_fb_helper *fb_helper)
}
EXPORT_SYMBOL(drm_fb_helper_hotplug_event);
/**
* drm_fb_helper_fbdev_setup() - Setup fbdev emulation
* @dev: DRM device
* @fb_helper: fbdev helper structure to set up
* @funcs: fbdev helper functions
* @preferred_bpp: Preferred bits per pixel for the device.
* @dev->mode_config.preferred_depth is used if this is zero.
* @max_conn_count: Maximum number of connectors (not used)
*
* This function sets up fbdev emulation and registers fbdev for access by
* userspace. If all connectors are disconnected, setup is deferred to the next
* time drm_fb_helper_hotplug_event() is called.
* The caller must provide a &drm_fb_helper_funcs->fb_probe callback
* function.
*
* Use drm_fb_helper_fbdev_teardown() to destroy the fbdev.
*
* See also: drm_fb_helper_initial_config(), drm_fbdev_generic_setup().
*
* Returns:
* Zero on success or negative error code on failure.
*/
int drm_fb_helper_fbdev_setup(struct drm_device *dev,
struct drm_fb_helper *fb_helper,
const struct drm_fb_helper_funcs *funcs,
unsigned int preferred_bpp,
unsigned int max_conn_count)
{
int ret;
if (!preferred_bpp)
preferred_bpp = dev->mode_config.preferred_depth;
if (!preferred_bpp)
preferred_bpp = 32;
drm_fb_helper_prepare(dev, fb_helper, funcs);
ret = drm_fb_helper_init(dev, fb_helper, 0);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev, "fbdev: Failed to initialize (ret=%d)\n", ret);
return ret;
}
if (!drm_drv_uses_atomic_modeset(dev))
drm_helper_disable_unused_functions(dev);
ret = drm_fb_helper_initial_config(fb_helper, preferred_bpp);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev, "fbdev: Failed to set configuration (ret=%d)\n", ret);
goto err_drm_fb_helper_fini;
}
return 0;
err_drm_fb_helper_fini:
drm_fb_helper_fbdev_teardown(dev);
return ret;
}
EXPORT_SYMBOL(drm_fb_helper_fbdev_setup);
/**
* drm_fb_helper_fbdev_teardown - Tear down fbdev emulation
* @dev: DRM device
*
* This function unregisters fbdev if not already done and cleans up the
* associated resources including the &drm_framebuffer.
* The driver is responsible for freeing the &drm_fb_helper structure which is
* stored in &drm_device->fb_helper. Do note that this pointer has been cleared
* when this function returns.
*
* In order to support device removal/unplug while file handles are still open,
* drm_fb_helper_unregister_fbi() should be called on device removal and
* drm_fb_helper_fbdev_teardown() in the &drm_driver->release callback when
* file handles are closed.
*/
void drm_fb_helper_fbdev_teardown(struct drm_device *dev)
{
struct drm_fb_helper *fb_helper = dev->fb_helper;
struct fb_ops *fbops = NULL;
if (!fb_helper)
return;
/* Unregister if it hasn't been done already */
if (fb_helper->fbdev && fb_helper->fbdev->dev)
drm_fb_helper_unregister_fbi(fb_helper);
if (fb_helper->fbdev && fb_helper->fbdev->fbdefio) {
fb_deferred_io_cleanup(fb_helper->fbdev);
kfree(fb_helper->fbdev->fbdefio);
fbops = fb_helper->fbdev->fbops;
}
drm_fb_helper_fini(fb_helper);
kfree(fbops);
if (fb_helper->fb)
drm_framebuffer_remove(fb_helper->fb);
}
EXPORT_SYMBOL(drm_fb_helper_fbdev_teardown);
/**
* drm_fb_helper_lastclose - DRM driver lastclose helper for fbdev emulation
* @dev: DRM device
@ -2074,7 +1954,6 @@ static int drm_fbdev_fb_release(struct fb_info *info, int user)
static void drm_fbdev_cleanup(struct drm_fb_helper *fb_helper)
{
struct fb_info *fbi = fb_helper->fbdev;
struct fb_ops *fbops = NULL;
void *shadow = NULL;
if (!fb_helper->dev)
@ -2083,15 +1962,11 @@ static void drm_fbdev_cleanup(struct drm_fb_helper *fb_helper)
if (fbi && fbi->fbdefio) {
fb_deferred_io_cleanup(fbi);
shadow = fbi->screen_buffer;
fbops = fbi->fbops;
}
drm_fb_helper_fini(fb_helper);
if (shadow) {
vfree(shadow);
kfree(fbops);
}
vfree(shadow);
drm_client_framebuffer_delete(fb_helper->buffer);
}
@ -2122,7 +1997,7 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
return -ENODEV;
}
static struct fb_ops drm_fbdev_fb_ops = {
static const struct fb_ops drm_fbdev_fb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
.fb_open = drm_fbdev_fb_open,
@ -2141,21 +2016,14 @@ static struct fb_deferred_io drm_fbdev_defio = {
.deferred_io = drm_fb_helper_deferred_io,
};
/**
* drm_fb_helper_generic_probe - Generic fbdev emulation probe helper
* @fb_helper: fbdev helper structure
* @sizes: describes fbdev size and scanout surface size
*
/*
* This function uses the client API to create a framebuffer backed by a dumb buffer.
*
* The _sys_ versions are used for &fb_ops.fb_read, fb_write, fb_fillrect,
* fb_copyarea, fb_imageblit.
*
* Returns:
* Zero on success or negative error code on failure.
*/
int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
struct drm_fb_helper_surface_size *sizes)
static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
struct drm_fb_helper_surface_size *sizes)
{
struct drm_client_dev *client = &fb_helper->client;
struct drm_client_buffer *buffer;
@ -2189,24 +2057,10 @@ int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
drm_fb_helper_fill_info(fbi, fb_helper, sizes);
if (drm_fbdev_use_shadow_fb(fb_helper)) {
struct fb_ops *fbops;
void *shadow;
/*
* fb_deferred_io_cleanup() clears &fbops->fb_mmap so a per
* instance version is necessary.
*/
fbops = kzalloc(sizeof(*fbops), GFP_KERNEL);
shadow = vzalloc(fbi->screen_size);
if (!fbops || !shadow) {
kfree(fbops);
vfree(shadow);
fbi->screen_buffer = vzalloc(fbi->screen_size);
if (!fbi->screen_buffer)
return -ENOMEM;
}
*fbops = *fbi->fbops;
fbi->fbops = fbops;
fbi->screen_buffer = shadow;
fbi->fbdefio = &drm_fbdev_defio;
fb_deferred_io_init(fbi);
@ -2227,7 +2081,6 @@ int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
return 0;
}
EXPORT_SYMBOL(drm_fb_helper_generic_probe);
static const struct drm_fb_helper_funcs drm_fb_helper_generic_funcs = {
.fb_probe = drm_fb_helper_generic_probe,
@ -2309,8 +2162,7 @@ static const struct drm_client_funcs drm_fbdev_client_funcs = {
* @dev->mode_config.preferred_depth is used if this is zero.
*
* This function sets up generic fbdev emulation for drivers that supports
* dumb buffers with a virtual address and that can be mmap'ed. If the driver
* does not support these functions, it could use drm_fb_helper_fbdev_setup().
* dumb buffers with a virtual address and that can be mmap'ed.
*
* Restore, hotplug events and teardown are all taken care of. Drivers that do
* suspend/resume need to call drm_fb_helper_set_suspend_unlocked() themselves.
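Driver-side, the replacement for the removed setup helpers is a single call after registration; a hedged sketch (ddev is the driver's &struct drm_device, and ignoring the setup result mirrors common practice since failures are only logged):

static int example_driver_register(struct drm_device *ddev)
{
        int ret;

        ret = drm_dev_register(ddev, 0);
        if (ret)
                return ret;

        /* fbdev emulation is optional: setup errors are logged, not fatal. */
        drm_fbdev_generic_setup(ddev, 32);

        return 0;
}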


@ -285,7 +285,7 @@ static int drm_cpu_valid(void)
}
/*
* Called whenever a process opens /dev/drm.
* Called whenever a process opens a drm node.
*
* \param filp file pointer.
* \param minor acquired minor-object.


@ -253,17 +253,17 @@ const struct drm_format_info *__drm_format_info(u32 format)
.char_per_block = { 8, 0, 0 }, .block_w = { 2, 0, 0 }, .block_h = { 2, 0, 0 },
.hsub = 2, .vsub = 2, .is_yuv = true },
{ .format = DRM_FORMAT_P010, .depth = 0, .num_planes = 2,
.char_per_block = { 2, 4, 0 }, .block_w = { 1, 0, 0 }, .block_h = { 1, 0, 0 },
.char_per_block = { 2, 4, 0 }, .block_w = { 1, 1, 0 }, .block_h = { 1, 1, 0 },
.hsub = 2, .vsub = 2, .is_yuv = true},
{ .format = DRM_FORMAT_P012, .depth = 0, .num_planes = 2,
.char_per_block = { 2, 4, 0 }, .block_w = { 1, 0, 0 }, .block_h = { 1, 0, 0 },
.char_per_block = { 2, 4, 0 }, .block_w = { 1, 1, 0 }, .block_h = { 1, 1, 0 },
.hsub = 2, .vsub = 2, .is_yuv = true},
{ .format = DRM_FORMAT_P016, .depth = 0, .num_planes = 2,
.char_per_block = { 2, 4, 0 }, .block_w = { 1, 0, 0 }, .block_h = { 1, 0, 0 },
.char_per_block = { 2, 4, 0 }, .block_w = { 1, 1, 0 }, .block_h = { 1, 1, 0 },
.hsub = 2, .vsub = 2, .is_yuv = true},
{ .format = DRM_FORMAT_P210, .depth = 0,
.num_planes = 2, .char_per_block = { 2, 4, 0 },
.block_w = { 1, 0, 0 }, .block_h = { 1, 0, 0 }, .hsub = 2,
.block_w = { 1, 1, 0 }, .block_h = { 1, 1, 0 }, .hsub = 2,
.vsub = 1, .is_yuv = true },
{ .format = DRM_FORMAT_VUY101010, .depth = 0,
.num_planes = 1, .cpp = { 0, 0, 0 }, .hsub = 1, .vsub = 1,


@ -1114,9 +1114,6 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
drm_gem_object_get(obj);
if (obj->funcs && obj->funcs->mmap) {
/* Remove the fake offset */
vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
ret = obj->funcs->mmap(obj, vma);
if (ret) {
drm_gem_object_put_unlocked(obj);


@ -528,6 +528,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
struct drm_gem_shmem_object *shmem;
int ret;
/* Remove the fake offset */
vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
shmem = to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);


@ -45,12 +45,34 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor);
void drm_file_free(struct drm_file *file);
void drm_lastclose(struct drm_device *dev);
#ifdef CONFIG_PCI
/* drm_pci.c */
int drm_irq_by_busid(struct drm_device *dev, void *data,
struct drm_file *file_priv);
void drm_pci_agp_destroy(struct drm_device *dev);
int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master);
#else
static inline int drm_irq_by_busid(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
return -EINVAL;
}
static inline void drm_pci_agp_destroy(struct drm_device *dev)
{
}
static inline int drm_pci_set_busid(struct drm_device *dev,
struct drm_master *master)
{
return -EINVAL;
}
#endif
/* drm_prime.c */
int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);


@ -652,8 +652,8 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_mode_getresources, 0),
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, 0),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, 0),


@ -33,6 +33,7 @@
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <drm/drm_dsc.h>
#include <video/mipi_display.h>
/**
@ -373,6 +374,7 @@ bool mipi_dsi_packet_format_is_short(u8 type)
case MIPI_DSI_V_SYNC_END:
case MIPI_DSI_H_SYNC_START:
case MIPI_DSI_H_SYNC_END:
case MIPI_DSI_COMPRESSION_MODE:
case MIPI_DSI_END_OF_TRANSMISSION:
case MIPI_DSI_COLOR_MODE_OFF:
case MIPI_DSI_COLOR_MODE_ON:
@ -387,7 +389,7 @@ bool mipi_dsi_packet_format_is_short(u8 type)
case MIPI_DSI_DCS_SHORT_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
case MIPI_DSI_DCS_READ:
case MIPI_DSI_DCS_COMPRESSION_MODE:
case MIPI_DSI_EXECUTE_QUEUE:
case MIPI_DSI_SET_MAXIMUM_RETURN_PACKET_SIZE:
return true;
}
@ -406,11 +408,12 @@ EXPORT_SYMBOL(mipi_dsi_packet_format_is_short);
bool mipi_dsi_packet_format_is_long(u8 type)
{
switch (type) {
case MIPI_DSI_PPS_LONG_WRITE:
case MIPI_DSI_NULL_PACKET:
case MIPI_DSI_BLANKING_PACKET:
case MIPI_DSI_GENERIC_LONG_WRITE:
case MIPI_DSI_DCS_LONG_WRITE:
case MIPI_DSI_PICTURE_PARAMETER_SET:
case MIPI_DSI_COMPRESSED_PIXEL_STREAM:
case MIPI_DSI_LOOSELY_PACKED_PIXEL_STREAM_YCBCR20:
case MIPI_DSI_PACKED_PIXEL_STREAM_YCBCR24:
case MIPI_DSI_PACKED_PIXEL_STREAM_YCBCR16:
@ -546,6 +549,56 @@ int mipi_dsi_set_maximum_return_packet_size(struct mipi_dsi_device *dsi,
}
EXPORT_SYMBOL(mipi_dsi_set_maximum_return_packet_size);
/**
* mipi_dsi_compression_mode() - enable/disable DSC on the peripheral
* @dsi: DSI peripheral device
* @enable: Whether to enable or disable the DSC
*
* Enable or disable Display Stream Compression on the peripheral using the
* default Picture Parameter Set and VESA DSC 1.1 algorithm.
*
* Return: 0 on success or a negative error code on failure.
*/
ssize_t mipi_dsi_compression_mode(struct mipi_dsi_device *dsi, bool enable)
{
/* Note: Needs updating for non-default PPS or algorithm */
u8 tx[2] = { enable << 0, 0 };
struct mipi_dsi_msg msg = {
.channel = dsi->channel,
.type = MIPI_DSI_COMPRESSION_MODE,
.tx_len = sizeof(tx),
.tx_buf = tx,
};
int ret = mipi_dsi_device_transfer(dsi, &msg);
return (ret < 0) ? ret : 0;
}
EXPORT_SYMBOL(mipi_dsi_compression_mode);
/**
* mipi_dsi_picture_parameter_set() - transmit the DSC PPS to the peripheral
* @dsi: DSI peripheral device
* @pps: VESA DSC 1.1 Picture Parameter Set
*
* Transmit the VESA DSC 1.1 Picture Parameter Set to the peripheral.
*
* Return: 0 on success or a negative error code on failure.
*/
ssize_t mipi_dsi_picture_parameter_set(struct mipi_dsi_device *dsi,
const struct drm_dsc_picture_parameter_set *pps)
{
struct mipi_dsi_msg msg = {
.channel = dsi->channel,
.type = MIPI_DSI_PICTURE_PARAMETER_SET,
.tx_len = sizeof(*pps),
.tx_buf = pps,
};
int ret = mipi_dsi_device_transfer(dsi, &msg);
return (ret < 0) ? ret : 0;
}
EXPORT_SYMBOL(mipi_dsi_picture_parameter_set);
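A hedged sketch of how a DSC-capable DSI peripheral driver might use the two new helpers during enable; dsc_cfg is an assumed, already validated &struct drm_dsc_config, and the PPS is packed with drm_dsc_pps_payload_pack() from drm_dsc.c:

static int example_dsi_enable_dsc(struct mipi_dsi_device *dsi,
                                  const struct drm_dsc_config *dsc_cfg)
{
        struct drm_dsc_picture_parameter_set pps;
        ssize_t ret;

        drm_dsc_pps_payload_pack(&pps, dsc_cfg);

        ret = mipi_dsi_picture_parameter_set(dsi, &pps);
        if (ret < 0)
                return ret;

        return mipi_dsi_compression_mode(dsi, true);
}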
/**
* mipi_dsi_generic_write() - transmit data using a generic write packet
* @dsi: DSI peripheral device


@ -27,6 +27,7 @@
#include <drm/drm_file.h>
#include <drm/drm_mode_config.h>
#include <drm/drm_print.h>
#include <linux/dma-resv.h>
#include "drm_crtc_internal.h"
#include "drm_internal.h"
@ -415,6 +416,33 @@ void drm_mode_config_init(struct drm_device *dev)
dev->mode_config.num_crtc = 0;
dev->mode_config.num_encoder = 0;
dev->mode_config.num_total_plane = 0;
if (IS_ENABLED(CONFIG_LOCKDEP)) {
struct drm_modeset_acquire_ctx modeset_ctx;
struct ww_acquire_ctx resv_ctx;
struct dma_resv resv;
int ret;
dma_resv_init(&resv);
drm_modeset_acquire_init(&modeset_ctx, 0);
ret = drm_modeset_lock(&dev->mode_config.connection_mutex,
&modeset_ctx);
if (ret == -EDEADLK)
ret = drm_modeset_backoff(&modeset_ctx);
ww_acquire_init(&resv_ctx, &reservation_ww_class);
ret = dma_resv_lock(&resv, &resv_ctx);
if (ret == -EDEADLK)
dma_resv_lock_slow(&resv, &resv_ctx);
dma_resv_unlock(&resv);
ww_acquire_fini(&resv_ctx);
drm_modeset_drop_locks(&modeset_ctx);
drm_modeset_acquire_fini(&modeset_ctx);
dma_resv_fini(&resv);
}
}
EXPORT_SYMBOL(drm_mode_config_init);


@ -224,12 +224,26 @@ EXPORT_SYMBOL(drm_mode_object_get);
* This attaches the given property to the modeset object with the given initial
* value. Currently this function cannot fail since the properties are stored in
* a statically sized array.
*
* Note that all properties must be attached before the object itself is
* registered and accessible from userspace.
*/
void drm_object_attach_property(struct drm_mode_object *obj,
struct drm_property *property,
uint64_t init_val)
{
int count = obj->properties->count;
struct drm_device *dev = property->dev;
if (obj->type == DRM_MODE_OBJECT_CONNECTOR) {
struct drm_connector *connector = obj_to_connector(obj);
WARN_ON(!dev->driver->load &&
connector->registration_state == DRM_CONNECTOR_REGISTERED);
} else {
WARN_ON(!dev->driver->load && dev->registered);
}
if (count == DRM_OBJECT_MAX_PROPERTY) {
WARN(1, "Failed to attach object property (type: 0x%x). Please "


@ -21,11 +21,13 @@
* DEALINGS IN THE SOFTWARE.
*/
#include <linux/backlight.h>
#include <linux/err.h>
#include <linux/module.h>
#include <drm/drm_crtc.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
static DEFINE_MUTEX(panel_lock);
static LIST_HEAD(panel_list);
@ -112,12 +114,6 @@ EXPORT_SYMBOL(drm_panel_remove);
*/
int drm_panel_attach(struct drm_panel *panel, struct drm_connector *connector)
{
if (panel->connector)
return -EBUSY;
panel->connector = connector;
panel->drm = connector->dev;
return 0;
}
EXPORT_SYMBOL(drm_panel_attach);
@ -134,8 +130,6 @@ EXPORT_SYMBOL(drm_panel_attach);
*/
void drm_panel_detach(struct drm_panel *panel)
{
panel->connector = NULL;
panel->drm = NULL;
}
EXPORT_SYMBOL(drm_panel_detach);
@ -151,10 +145,13 @@ EXPORT_SYMBOL(drm_panel_detach);
*/
int drm_panel_prepare(struct drm_panel *panel)
{
if (panel && panel->funcs && panel->funcs->prepare)
if (!panel)
return -EINVAL;
if (panel->funcs && panel->funcs->prepare)
return panel->funcs->prepare(panel);
return panel ? -ENOSYS : -EINVAL;
return 0;
}
EXPORT_SYMBOL(drm_panel_prepare);
@ -171,10 +168,13 @@ EXPORT_SYMBOL(drm_panel_prepare);
*/
int drm_panel_unprepare(struct drm_panel *panel)
{
if (panel && panel->funcs && panel->funcs->unprepare)
if (!panel)
return -EINVAL;
if (panel->funcs && panel->funcs->unprepare)
return panel->funcs->unprepare(panel);
return panel ? -ENOSYS : -EINVAL;
return 0;
}
EXPORT_SYMBOL(drm_panel_unprepare);
@ -190,10 +190,23 @@ EXPORT_SYMBOL(drm_panel_unprepare);
*/
int drm_panel_enable(struct drm_panel *panel)
{
if (panel && panel->funcs && panel->funcs->enable)
return panel->funcs->enable(panel);
int ret;
return panel ? -ENOSYS : -EINVAL;
if (!panel)
return -EINVAL;
if (panel->funcs && panel->funcs->enable) {
ret = panel->funcs->enable(panel);
if (ret < 0)
return ret;
}
ret = backlight_enable(panel->backlight);
if (ret < 0)
DRM_DEV_INFO(panel->dev, "failed to enable backlight: %d\n",
ret);
return 0;
}
EXPORT_SYMBOL(drm_panel_enable);
@ -209,16 +222,27 @@ EXPORT_SYMBOL(drm_panel_enable);
*/
int drm_panel_disable(struct drm_panel *panel)
{
if (panel && panel->funcs && panel->funcs->disable)
int ret;
if (!panel)
return -EINVAL;
ret = backlight_disable(panel->backlight);
if (ret < 0)
DRM_DEV_INFO(panel->dev, "failed to disable backlight: %d\n",
ret);
if (panel->funcs && panel->funcs->disable)
return panel->funcs->disable(panel);
return panel ? -ENOSYS : -EINVAL;
return 0;
}
EXPORT_SYMBOL(drm_panel_disable);
/**
* drm_panel_get_modes - probe the available display modes of a panel
* @panel: DRM panel
* @connector: DRM connector
*
* The modes probed from the panel are automatically added to the connector
* that the panel is attached to.
@ -226,12 +250,16 @@ EXPORT_SYMBOL(drm_panel_disable);
* Return: The number of modes available from the panel on success or a
* negative error code on failure.
*/
int drm_panel_get_modes(struct drm_panel *panel)
int drm_panel_get_modes(struct drm_panel *panel,
struct drm_connector *connector)
{
if (panel && panel->funcs && panel->funcs->get_modes)
return panel->funcs->get_modes(panel);
if (!panel)
return -EINVAL;
return panel ? -ENOSYS : -EINVAL;
if (panel->funcs && panel->funcs->get_modes)
return panel->funcs->get_modes(panel, connector);
return -EOPNOTSUPP;
}
EXPORT_SYMBOL(drm_panel_get_modes);
@ -274,6 +302,45 @@ struct drm_panel *of_drm_find_panel(const struct device_node *np)
EXPORT_SYMBOL(of_drm_find_panel);
#endif
#if IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE)
/**
* drm_panel_of_backlight - use backlight device node for backlight
* @panel: DRM panel
*
* Use this function to enable backlight handling if your panel
* uses device tree and has a backlight phandle.
*
* When the panel is enabled, the backlight will be enabled after a
* successful call to &drm_panel_funcs.enable().
*
* When the panel is disabled, the backlight will be disabled before the
* call to &drm_panel_funcs.disable().
*
* A typical implementation for a panel driver supporting device tree
* will call this function at probe time. Backlight will then be handled
* transparently without requiring any intervention from the driver.
* drm_panel_of_backlight() must be called after the call to drm_panel_init().
*
* Return: 0 on success or a negative error code on failure.
*/
int drm_panel_of_backlight(struct drm_panel *panel)
{
struct backlight_device *backlight;
if (!panel || !panel->dev)
return -EINVAL;
backlight = devm_of_find_backlight(panel->dev);
if (IS_ERR(backlight))
return PTR_ERR(backlight);
panel->backlight = backlight;
return 0;
}
EXPORT_SYMBOL(drm_panel_of_backlight);
#endif
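A hedged probe-time sketch for a DSI panel driver picking this up; struct example_panel (with an embedded drm_panel) and example_panel_funcs are hypothetical:

static int example_panel_probe(struct mipi_dsi_device *dsi)
{
        struct example_panel *ctx;
        int ret;

        ctx = devm_kzalloc(&dsi->dev, sizeof(*ctx), GFP_KERNEL);
        if (!ctx)
                return -ENOMEM;

        drm_panel_init(&ctx->panel, &dsi->dev, &example_panel_funcs,
                       DRM_MODE_CONNECTOR_DSI);

        /* Resolves the "backlight" phandle; enable/disable is then automatic. */
        ret = drm_panel_of_backlight(&ctx->panel);
        if (ret)
                return ret;

        return drm_panel_add(&ctx->panel);
}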
MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
MODULE_DESCRIPTION("DRM panel infrastructure");
MODULE_LICENSE("GPL and additional rights");


@ -125,8 +125,6 @@ void drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
EXPORT_SYMBOL(drm_pci_free);
#ifdef CONFIG_PCI
static int drm_get_pci_domain(struct drm_device *dev)
{
#ifndef __alpha__
@ -284,6 +282,8 @@ err_free:
}
EXPORT_SYMBOL(drm_get_pci_dev);
#ifdef CONFIG_DRM_LEGACY
/**
* drm_legacy_pci_init - shadow-attach a legacy DRM PCI driver
* @driver: DRM device driver
@ -331,17 +331,6 @@ int drm_legacy_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
}
EXPORT_SYMBOL(drm_legacy_pci_init);
#else
void drm_pci_agp_destroy(struct drm_device *dev) {}
int drm_irq_by_busid(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
return -EINVAL;
}
#endif
/**
* drm_legacy_pci_exit - unregister shadow-attach legacy DRM driver
* @driver: DRM device driver
@ -367,3 +356,5 @@ void drm_legacy_pci_exit(struct drm_driver *driver, struct pci_driver *pdriver)
DRM_INFO("Module unloaded\n");
}
EXPORT_SYMBOL(drm_legacy_pci_exit);
#endif


@ -240,6 +240,7 @@ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv)
struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev,
struct dma_buf_export_info *exp_info)
{
struct drm_gem_object *obj = exp_info->priv;
struct dma_buf *dma_buf;
dma_buf = dma_buf_export(exp_info);
@ -247,7 +248,8 @@ struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev,
return dma_buf;
drm_dev_get(dev);
drm_gem_object_get(exp_info->priv);
drm_gem_object_get(obj);
dma_buf->file->f_mapping = obj->dev->anon_inode->i_mapping;
return dma_buf;
}
@ -713,6 +715,9 @@ int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
struct file *fil;
int ret;
/* Add the fake offset */
vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);
if (obj->funcs && obj->funcs->mmap) {
ret = obj->funcs->mmap(obj, vma);
if (ret)
@ -737,8 +742,6 @@ int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
if (ret)
goto out;
vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);
ret = obj->dev->driver->fops->mmap(fil, vma);
drm_vma_node_revoke(&obj->vma_node, priv);


@ -37,11 +37,11 @@
#include <drm/drm_print.h>
/*
* drm_debug: Enable debug output.
* __drm_debug: Enable debug output.
* Bitmask of DRM_UT_x. See include/drm/drm_print.h for details.
*/
unsigned int drm_debug;
EXPORT_SYMBOL(drm_debug);
unsigned int __drm_debug;
EXPORT_SYMBOL(__drm_debug);
MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug category.\n"
"\t\tBit 0 (0x01) will enable CORE messages (drm core code)\n"
@ -52,7 +52,7 @@ MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug cat
"\t\tBit 5 (0x20) will enable VBL messages (vblank code)\n"
"\t\tBit 7 (0x80) will enable LEASE messages (leasing code)\n"
"\t\tBit 8 (0x100) will enable DP messages (displayport code)");
module_param_named(debug, drm_debug, int, 0600);
module_param_named(debug, __drm_debug, int, 0600);
void __drm_puts_coredump(struct drm_printer *p, const char *str)
{
@ -256,7 +256,7 @@ void drm_dev_printk(const struct device *dev, const char *level,
}
EXPORT_SYMBOL(drm_dev_printk);
void drm_dev_dbg(const struct device *dev, unsigned int category,
void drm_dev_dbg(const struct device *dev, enum drm_debug_category category,
const char *format, ...)
{
struct va_format vaf;
@ -280,7 +280,7 @@ void drm_dev_dbg(const struct device *dev, unsigned int category,
}
EXPORT_SYMBOL(drm_dev_dbg);
void drm_dbg(unsigned int category, const char *format, ...)
void __drm_dbg(enum drm_debug_category category, const char *format, ...)
{
struct va_format vaf;
va_list args;
@ -297,9 +297,9 @@ void drm_dbg(unsigned int category, const char *format, ...)
va_end(args);
}
EXPORT_SYMBOL(drm_dbg);
EXPORT_SYMBOL(__drm_dbg);
void drm_err(const char *format, ...)
void __drm_err(const char *format, ...)
{
struct va_format vaf;
va_list args;
@ -313,7 +313,7 @@ void drm_err(const char *format, ...)
va_end(args);
}
EXPORT_SYMBOL(drm_err);
EXPORT_SYMBOL(__drm_err);
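The driver-side conversion mentioned in the changelog is mechanical; a hedged before/after sketch (dump_state() is hypothetical):

/* Before: testing the (now renamed) flag directly. */
if (drm_debug & DRM_UT_KMS)
        dump_state(dev);

/* After: keep __drm_debug private to drm_print and use the accessor. */
if (drm_debug_enabled(DRM_UT_KMS))
        dump_state(dev);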
/**
* drm_print_regset32 - print the contents of registers to a


@ -101,6 +101,7 @@ drm_mode_validate_pipeline(struct drm_display_mode *mode,
/* Step 2: Validate against encoders and crtcs */
drm_connector_for_each_possible_encoder(connector, encoder) {
struct drm_bridge *bridge;
struct drm_crtc *crtc;
ret = drm_encoder_mode_valid(encoder, mode);
@ -112,7 +113,8 @@ drm_mode_validate_pipeline(struct drm_display_mode *mode,
continue;
}
ret = drm_bridge_mode_valid(encoder->bridge, mode);
bridge = drm_bridge_chain_get_first_bridge(encoder);
ret = drm_bridge_chain_mode_valid(bridge, mode);
if (ret != MODE_OK) {
/* There is also no point in continuing for crtc check
* here. */


@ -52,9 +52,17 @@ bool drm_rect_intersect(struct drm_rect *r1, const struct drm_rect *r2)
}
EXPORT_SYMBOL(drm_rect_intersect);
static u32 clip_scaled(u32 src, u32 dst, u32 clip)
static u32 clip_scaled(int src, int dst, int *clip)
{
u64 tmp = mul_u32_u32(src, dst - clip);
u64 tmp;
if (dst == 0)
return 0;
/* Only clip what we have. Keeps the result bounded. */
*clip = min(*clip, dst);
tmp = mul_u32_u32(src, dst - *clip);
/*
* Round toward 1.0 when clipping so that we don't accidentally
@ -73,11 +81,13 @@ static u32 clip_scaled(u32 src, u32 dst, u32 clip)
* @clip: clip rectangle
*
* Clip rectangle @dst by rectangle @clip. Clip rectangle @src by the
* same amounts multiplied by @hscale and @vscale.
* the corresponding amounts, retaining the vertical and horizontal scaling
* factors from @src to @dst.
*
* RETURNS:
*
* %true if rectangle @dst is still visible after being clipped,
* %false otherwise
* %false otherwise.
*/
bool drm_rect_clip_scaled(struct drm_rect *src, struct drm_rect *dst,
const struct drm_rect *clip)
@ -87,34 +97,34 @@ bool drm_rect_clip_scaled(struct drm_rect *src, struct drm_rect *dst,
diff = clip->x1 - dst->x1;
if (diff > 0) {
u32 new_src_w = clip_scaled(drm_rect_width(src),
drm_rect_width(dst), diff);
drm_rect_width(dst), &diff);
src->x1 = clamp_t(int64_t, src->x2 - new_src_w, INT_MIN, INT_MAX);
dst->x1 = clip->x1;
src->x1 = src->x2 - new_src_w;
dst->x1 += diff;
}
diff = clip->y1 - dst->y1;
if (diff > 0) {
u32 new_src_h = clip_scaled(drm_rect_height(src),
drm_rect_height(dst), diff);
drm_rect_height(dst), &diff);
src->y1 = clamp_t(int64_t, src->y2 - new_src_h, INT_MIN, INT_MAX);
dst->y1 = clip->y1;
src->y1 = src->y2 - new_src_h;
dst->y1 += diff;
}
diff = dst->x2 - clip->x2;
if (diff > 0) {
u32 new_src_w = clip_scaled(drm_rect_width(src),
drm_rect_width(dst), diff);
drm_rect_width(dst), &diff);
src->x2 = clamp_t(int64_t, src->x1 + new_src_w, INT_MIN, INT_MAX);
dst->x2 = clip->x2;
src->x2 = src->x1 + new_src_w;
dst->x2 -= diff;
}
diff = dst->y2 - clip->y2;
if (diff > 0) {
u32 new_src_h = clip_scaled(drm_rect_height(src),
drm_rect_height(dst), diff);
drm_rect_height(dst), &diff);
src->y2 = clamp_t(int64_t, src->y1 + new_src_h, INT_MIN, INT_MAX);
dst->y2 = clip->y2;
src->y2 = src->y1 + new_src_h;
dst->y2 -= diff;
}
return drm_rect_visible(dst);

View File

@ -110,7 +110,6 @@ static int exynos_dp_bridge_attach(struct analogix_dp_plat_data *plat_data,
if (ret) {
DRM_DEV_ERROR(dp->dev,
"Failed to attach bridge to drm\n");
bridge->next = NULL;
return ret;
}
}


@ -43,7 +43,7 @@ exynos_dpi_detect(struct drm_connector *connector, bool force)
{
struct exynos_dpi *ctx = connector_to_dpi(connector);
if (ctx->panel && !ctx->panel->connector)
if (ctx->panel)
drm_panel_attach(ctx->panel, &ctx->connector);
return connector_status_connected;
@ -85,7 +85,7 @@ static int exynos_dpi_get_modes(struct drm_connector *connector)
}
if (ctx->panel)
return ctx->panel->funcs->get_modes(ctx->panel);
return drm_panel_get_modes(ctx->panel, connector);
return 0;
}

Some files were not shown because too many files have changed in this diff.