
main drm pull request for 4.14 merge window

-----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJZpRPIAAoJEAx081l5xIa+kCIP/2m2q0jBmCATvXXwrMBH0zNk
 4lm9yIfl9pmluJP97aklvkeKF77chhost76+hv+0sQ9ZsJD8koHWv5WyTHEs7Cfn
 NpmtGPqYlIZsWNSwW0OFF/XzllgLCVEWa+W/7ryYzPZrSEZr6Ge4HE0qS3LfuLJv
 K89amZWHkP5ysPZ1uxRBzHtZfNAhdyjYVTUntCR7gj3DYv3yNdeZu+/epfcWK2w/
 Q+ggoy644vX/yzy5L5zCGL/J1BjStDuec7sgAKTlNx4TwBUmp2wsfhEdovQBGFiu
 t5PHMajvrBRqSJWDIAZSUfjQzIMSz517J9LWeChU7KtAClNJQJEabbu4CoX4aEmG
 UbSzEe0IxnxQ4842jcqQXZ+mevlNIEIBVSNR7dXi17jL3Ts+APQgrYjRJYVk2ipg
 uQ9TwkeVVu2WRGyU8iRQrXAZI7+O3p4UnbNPjeG2qACD2Ur7Z3n7b0mhNFPOLzO4
 gbIv4D6CcUB/vltl+vhZTW3P50oMCVSq8ScCpY8CGo29mZ5vypj5PTS+W8FsyY3Z
 ypyMqWg/DyxKlOoO+aK8EmXuZmgtDR4kb8asltH/S1A0NZkzjrFkKgs10Cp6EjJy
 Zz1BWa1KKEpdN6yp+jrbJKjf9MJ7K2RPGv3bxWnCCdNv4j49rk4t3IHqvcihddsd
 XXFQB5zE7Pz0ROi/VkXR
 =5fxW
 -----END PGP SIGNATURE-----

Merge tag 'drm-for-v4.14' of git://people.freedesktop.org/~airlied/linux

Pull drm updates from Dave Airlie:
 "This is the main drm pull request for 4.14 merge window.

  I'm sending this early, as the next step in my continuing journey into
  fatherhood is due really soon now. I'm going to be mostly useless for
  the next couple of weeks; though I may be able to read email, I doubt
  I'll be doing much patch application or git sending. If anything urgent
  pops up I've asked Daniel/Jani/Alex/Sean to try and direct stuff towards
  you.

  Outside drm changes:

  Some rcar-du updates touch the V4L tree; all acks should be in place.
  One export is added to the radix tree code for a new i915 use case.
  There are some minor AGP cleanups (we don't see those too often), and
  changes to the vbox driver in staging to avoid breaking its compilation.

  Summary:

  core:
   - Atomic helper fixes
   - Atomic UAPI fixes
   - Add YCBCR 4:2:0 support
   - Drop set_busid hook
   - Refactor fb_helper locking
   - Remove a bunch of internal APIs
   - Add a bunch of better default handlers
   - Format modifier/blob plane property added
   - More internal header refactoring
   - Make more internal API names consistent
   - Enhanced syncobj APIs (wait/signal/reset/create signalled; see the userspace sketch after this summary)

  bridge:
   - Add Synopsys Designware MIPI DSI host bridge driver

  tiny:
   - Add Pervasive Displays RePaper displays
   - Add support for LEGO MINDSTORMS EV3 LCD

  i915:
   - Lots of GEN10/CNL support patches
   - drm syncobj support
   - Skylake+ watermark refactoring
   - GVT vGPU 48-bit ppgtt support
   - GVT performance improvements
   - NOA change ioctl
   - CCS (color compression) scanout support
   - GPU reset improvements

  amdgpu:
   - Initial hugepage support
   - BO migration logic rework
   - Vega10 improvements
   - Powerplay fixes
   - Stop reprogramming the MC
   - Fixes for ACP audio on stoney
   - SR-IOV fixes/improvements
   - Command submission overhead improvements

  amdkfd:
   - Non-dGPU upstreaming patches
   - Scratch VA ioctl
   - Image tiling modes
   - Update PM4 headers for new firmware
   - Drop all BUG_ONs.

  nouveau:
   - GP108 modesetting support.
   - Disable MSI on big endian.

  vmwgfx:
   - Add fence fd support.

  msm:
   - Runtime PM improvements

  exynos:
   - NV12MT support
   - Refactor KMS drivers

  imx-drm:
   - Lock scanout channel to improve memory bw
   - Cleanups

  etnaviv:
   - GEM object population fixes

  tegra:
   - Prep work for Tegra186 support
   - PRIME mmap support

  sunxi:
   - HDMI support improvements
   - HDMI CEC support

  omapdrm:
   - HDMI hotplug IRQ support
   - Big driver cleanup
   - OMAP5 DSI support

  rcar-du:
   - vblank fixes
   - VSP1 updates

  arcgpu:
   - Minor fixes

  stm:
   - Add STM32 DSI controller driver

  dw_hdmi:
   - Add support for Rockchip RK3399
   - HDMI CEC support

  atmel-hlcdc:
   - Add 8-bit color support

  vc4:
   - Atomic fixes
   - New ioctl to attach a label to a buffer object
   - HDMI CEC support
   - Allow userspace to dictate rendering order on submit ioctl"
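
The enhanced syncobj APIs called out in the core summary above (and in the
drm/syncobj commits listed below) are exposed to userspace as DRM ioctls. The
following is a hedged, illustrative sketch of how a client might exercise
them; the render-node path, the absolute-timeout interpretation and the
error handling are assumptions, not part of this pull request.

/* Illustrative only: create, reset, signal and wait on a DRM sync object. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <time.h>
#include <drm/drm.h>            /* DRM UAPI header; install location may vary */

int main(void)
{
	int fd = open("/dev/dri/renderD128", O_RDWR);   /* device path is an assumption */
	if (fd < 0)
		return 1;

	/* Create a sync object that starts out already signalled. */
	struct drm_syncobj_create create = { .flags = DRM_SYNCOBJ_CREATE_SIGNALED };
	if (ioctl(fd, DRM_IOCTL_SYNCOBJ_CREATE, &create))
		return 1;

	uint32_t handle = create.handle;
	struct drm_syncobj_array array = {
		.handles = (uintptr_t)&handle,
		.count_handles = 1,
	};

	/* Reset it to unsignalled, then signal it again from the CPU. */
	ioctl(fd, DRM_IOCTL_SYNCOBJ_RESET, &array);
	ioctl(fd, DRM_IOCTL_SYNCOBJ_SIGNAL, &array);

	/* Wait for it; timeout_nsec is treated here as an absolute
	 * CLOCK_MONOTONIC value (assumption). */
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	struct drm_syncobj_wait wait = {
		.handles = (uintptr_t)&handle,
		.count_handles = 1,
		.timeout_nsec = (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec,
		.flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL,
	};
	if (ioctl(fd, DRM_IOCTL_SYNCOBJ_WAIT, &wait))
		perror("syncobj wait");
	return 0;
}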

* tag 'drm-for-v4.14' of git://people.freedesktop.org/~airlied/linux: (1074 commits)
  drm/syncobj: Add a signal ioctl (v3)
  drm/syncobj: Add a reset ioctl (v3)
  drm/syncobj: Add a syncobj_array_find helper
  drm/syncobj: Allow wait for submit and signal behavior (v5)
  drm/syncobj: Add a CREATE_SIGNALED flag
  drm/syncobj: Add a callback mechanism for replace_fence (v3)
  drm/syncobj: add sync obj wait interface. (v8)
  i915: Use drm_syncobj_fence_get
  drm/syncobj: Add a race-free drm_syncobj_fence_get helper (v2)
  drm/syncobj: Rename fence_get to find_fence
  drm: kirin: Add mode_valid logic to avoid mode clocks we can't generate
  drm/vmwgfx: Bump the version for fence FD support
  drm/vmwgfx: Add export fence to file descriptor support
  drm/vmwgfx: Add support for imported Fence File Descriptor
  drm/vmwgfx: Prepare to support fence fd
  drm/vmwgfx: Fix incorrect command header offset at restart
  drm/vmwgfx: Support the NOP_ERROR command
  drm/vmwgfx: Restart command buffers after errors
  drm/vmwgfx: Move irq bottom half processing to threads
  drm/vmwgfx: Don't use drm_irq_[un]install
  ...
Linus Torvalds 2017-09-03 17:02:26 -07:00
commit 906dde0f35
858 changed files with 28982 additions and 45997 deletions


@ -0,0 +1,32 @@
Synopsys DesignWare MIPI DSI host controller
============================================
This document defines device tree properties for the Synopsys DesignWare MIPI
DSI host controller. It doesn't constitute a device tree binding specification
by itself but is meant to be referenced by platform-specific device tree
bindings.
When referenced from platform device tree bindings, the properties defined in
this document apply as follows. The platform device tree bindings are
responsible for defining whether each optional property is used or not.
- reg: Memory mapped base address and length of the DesignWare MIPI DSI
host controller registers. (mandatory)
- clocks: References to all the clocks specified in the clock-names property
as specified in [1]. (mandatory)
- clock-names:
- "pclk" is the peripheral clock for either AHB or APB. (mandatory)
- "px_clk" is the pixel clock for the DPI/RGB input. (optional)
- resets: References to all the resets specified in the reset-names property
as specified in [2]. (optional)
- reset-names: string reset name, must be "apb" if used. (optional)
- panel or bridge node: see [3]. (mandatory)
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/reset/reset.txt
[3] Documentation/devicetree/bindings/display/mipi-dsi-bus.txt


@ -25,12 +25,6 @@ Required properties:
size-cells must be 1 and 0, respectively.
- port: contains an endpoint node which is connected to the endpoint in the mic
node. The reg value must be 0.
- i80-if-timings: specify whether the panel connected to the decon uses the
i80 LCD interface or the MIPI video interface. Unlike the fimd node, this
node contains no timing information; since decon has no register for i80
interface timing values, none are needed, but the node is kept so that
fimd and exynos7 decon use the same kind of node.
Example:
SoC specific DT entry:
@ -59,9 +53,3 @@ decon: decon@13800000 {
};
};
};
Board specific DT entry:
&decon {
i80-if-timings {
};
};


@ -0,0 +1,52 @@
Pervasive Displays RePaper branded e-ink displays
Required properties:
- compatible: "pervasive,e1144cs021" for 1.44" display
"pervasive,e1190cs021" for 1.9" display
"pervasive,e2200cs021" for 2.0" display
"pervasive,e2271cs021" for 2.7" display
- panel-on-gpios: Timing controller power control
- discharge-gpios: Discharge control
- reset-gpios: RESET pin
- busy-gpios: BUSY pin
Required property for e2271cs021:
- border-gpios: Border control
The node for this driver must be a child node of a SPI controller, hence
all mandatory properties described in ../spi/spi-bus.txt must be specified.
Optional property:
- pervasive,thermal-zone: name of thermometer's thermal zone
Example:
display_temp: lm75@48 {
compatible = "lm75b";
reg = <0x48>;
#thermal-sensor-cells = <0>;
};
thermal-zones {
display {
polling-delay-passive = <0>;
polling-delay = <0>;
thermal-sensors = <&display_temp>;
};
};
papirus27@0{
compatible = "pervasive,e2271cs021";
reg = <0>;
spi-max-frequency = <8000000>;
panel-on-gpios = <&gpio 23 0>;
border-gpios = <&gpio 14 0>;
discharge-gpios = <&gpio 15 0>;
reset-gpios = <&gpio 24 0>;
busy-gpios = <&gpio 25 0>;
pervasive,thermal-zone = "display";
};


@ -11,7 +11,9 @@ following device-specific properties.
Required properties:
- compatible: Shall contain "rockchip,rk3288-dw-hdmi".
- compatible: should be one of the following:
"rockchip,rk3288-dw-hdmi"
"rockchip,rk3399-dw-hdmi"
- reg: See dw_hdmi.txt.
- reg-io-width: See dw_hdmi.txt. Shall be 4.
- interrupts: HDMI interrupt number
@ -30,7 +32,8 @@ Optional properties
I2C master controller.
- clock-names: See dw_hdmi.txt. The "cec" clock is optional.
- clock-names: May contain "cec" as defined in dw_hdmi.txt.
- clock-names: May contain "grf", power for grf io.
- clock-names: May contain "vpll", external clock for some hdmi phy.
Example:


@ -8,8 +8,12 @@ Required properties:
- compatible: value should be one of the following
"rockchip,rk3036-vop";
"rockchip,rk3288-vop";
"rockchip,rk3368-vop";
"rockchip,rk3366-vop";
"rockchip,rk3399-vop-big";
"rockchip,rk3399-vop-lit";
"rockchip,rk3228-vop";
"rockchip,rk3328-vop";
- interrupts: should contain a list of all VOP IP block interrupts in the
order: VSYNC, LCD_SYSTEM. The interrupt specifier


@ -0,0 +1,22 @@
Sitronix ST7586 display panel
Required properties:
- compatible: "lego,ev3-lcd".
- a0-gpios: The A0 signal (since this binding is for serial mode, this is
the pin labeled D1 on the controller, not the pin labeled A0)
- reset-gpios: Reset pin
The node for this driver must be a child node of a SPI controller, hence
all mandatory properties described in ../spi/spi-bus.txt must be specified.
Optional properties:
- rotation: panel rotation in degrees counter clockwise (0,90,180,270)
Example:
display@0{
compatible = "lego,ev3-lcd";
reg = <0>;
spi-max-frequency = <10000000>;
a0-gpios = <&gpio 43 GPIO_ACTIVE_HIGH>;
reset-gpios = <&gpio 80 GPIO_ACTIVE_HIGH>;
};


@ -1,7 +1,6 @@
* STMicroelectronics STM32 lcd-tft display controller
- ltdc: lcd-tft display controller host
must be a sub-node of st-display-subsystem
Required properties:
- compatible: "st,stm32-ltdc"
- reg: Physical base address of the IP registers and length of memory mapped region.
@ -13,8 +12,40 @@
Required nodes:
- Video port for RGB output.
Example:
* STMicroelectronics STM32 DSI controller specific extensions to Synopsys
DesignWare MIPI DSI host controller
The STMicroelectronics STM32 DSI controller uses the Synopsys DesignWare MIPI
DSI host controller. For all mandatory properties & nodes, please refer
to the related documentation in [5].
Mandatory properties specific to STM32 DSI:
- #address-cells: Should be <1>.
- #size-cells: Should be <0>.
- compatible: "st,stm32-dsi".
- clock-names:
- phy pll reference clock string name, must be "ref".
- resets: see [5].
- reset-names: see [5].
Mandatory nodes specific to STM32 DSI:
- ports: A node containing DSI input & output port nodes with endpoint
definitions as documented in [3] & [4].
- port@0: DSI input port node, connected to the ltdc rgb output port.
- port@1: DSI output port node, connected to a panel or a bridge input port.
- panel or bridge node: A node containing the panel or bridge description as
documented in [6].
- port: panel or bridge port node, connected to the DSI output port (port@1).
Note: You can find more documentation in the following references
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/reset/reset.txt
[3] Documentation/devicetree/bindings/media/video-interfaces.txt
[4] Documentation/devicetree/bindings/graph.txt
[5] Documentation/devicetree/bindings/display/bridge/dw_mipi_dsi.txt
[6] Documentation/devicetree/bindings/display/mipi-dsi-bus.txt
Example 1: RGB panel
/ {
...
soc {
@ -34,3 +65,73 @@ Example:
};
};
};
Example 2: DSI panel
/ {
...
soc {
...
ltdc: display-controller@40016800 {
compatible = "st,stm32-ltdc";
reg = <0x40016800 0x200>;
interrupts = <88>, <89>;
resets = <&rcc STM32F4_APB2_RESET(LTDC)>;
clocks = <&rcc 1 CLK_LCD>;
clock-names = "lcd";
port {
ltdc_out_dsi: endpoint {
remote-endpoint = <&dsi_in>;
};
};
};
dsi: dsi@40016c00 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "st,stm32-dsi";
reg = <0x40016c00 0x800>;
clocks = <&rcc 1 CLK_F469_DSI>, <&clk_hse>;
clock-names = "ref", "pclk";
resets = <&rcc STM32F4_APB2_RESET(DSI)>;
reset-names = "apb";
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
dsi_in: endpoint {
remote-endpoint = <&ltdc_out_dsi>;
};
};
port@1 {
reg = <1>;
dsi_out: endpoint {
remote-endpoint = <&dsi_in_panel>;
};
};
};
panel-dsi@0 {
reg = <0>; /* dsi virtual channel (0..3) */
compatible = ...;
enable-gpios = ...;
port {
dsi_in_panel: endpoint {
remote-endpoint = <&dsi_out>;
};
};
};
};
};
};


@ -4,15 +4,33 @@ Allwinner A10 Display Pipeline
The Allwinner A10 Display pipeline is composed of several components
that are going to be documented below:
For the input port of all components up to the TCON in the display
pipeline, if there are multiple components, the local endpoint IDs
must correspond to the index of the upstream block. For example, if
the remote endpoint is Frontend 1, then the local endpoint ID must
be 1.
For all connections between components up to the TCONs in the display
pipeline, when there are multiple components of the same type at the
same depth, the local endpoint ID must be the same as the remote
component's index. For example, if the remote endpoint is Frontend 1,
then the local endpoint ID must be 1.
Conversely, for the output ports of the same group, the remote endpoint
ID must be the index of the local hardware block. If the local backend
is backend 1, then the remote endpoint ID must be 1.
Frontend 0 [0] ------- [0] Backend 0 [0] ------- [0] TCON 0
[1] -- -- [1] [1] -- -- [1]
\ / \ /
X X
/ \ / \
[0] -- -- [0] [0] -- -- [0]
Frontend 1 [1] ------- [1] Backend 1 [1] ------- [1] TCON 1
For a two pipeline system such as the one depicted above, the lines
represent the connections between the components, while the numbers
within the square brackets correspond to the ID of the local endpoint.
The same rule also applies to DE 2.0 mixer-TCON connections:
Mixer 0 [0] ----------- [0] TCON 0
[1] ---- ---- [1]
\ /
X
/ \
[0] ---- ---- [0]
Mixer 1 [1] ----------- [1] TCON 1
HDMI Encoder
------------


@ -249,6 +249,7 @@ oxsemi Oxford Semiconductor, Ltd.
panasonic Panasonic Corporation
parade Parade Technologies Inc.
pericom Pericom Technology Inc.
pervasive Pervasive Displays, Inc.
phytec PHYTEC Messtechnik GmbH
picochip Picochip Ltd
pine64 Pine64


@ -201,6 +201,8 @@ drivers.
Open/Close, File Operations and IOCTLs
======================================
.. _drm_driver_fops:
File Operations
---------------


@ -296,3 +296,12 @@ Auxiliary Modeset Helpers
.. kernel-doc:: drivers/gpu/drm/drm_modeset_helper.c
:export:
Framebuffer GEM Helper Reference
================================
.. kernel-doc:: drivers/gpu/drm/drm_gem_framebuffer_helper.c
:doc: overview
.. kernel-doc:: drivers/gpu/drm/drm_gem_framebuffer_helper.c
:export:


@ -523,9 +523,6 @@ Color Management Properties
.. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
:doc: overview
.. kernel-doc:: include/drm/drm_color_mgmt.h
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
:export:
@ -554,60 +551,8 @@ various modules/drivers.
Vertical Blanking
=================
Vertical blanking plays a major role in graphics rendering. To achieve
tear-free display, users must synchronize page flips and/or rendering to
vertical blanking. The DRM API offers ioctls to perform page flips
synchronized to vertical blanking and wait for vertical blanking.
The DRM core handles most of the vertical blanking management logic,
which involves filtering out spurious interrupts, keeping race-free
blanking counters, coping with counter wrap-around and resets and
keeping use counts. It relies on the driver to generate vertical
blanking interrupts and optionally provide a hardware vertical blanking
counter. Drivers must implement the following operations.
- int (\*enable_vblank) (struct drm_device \*dev, int crtc); void
(\*disable_vblank) (struct drm_device \*dev, int crtc);
Enable or disable vertical blanking interrupts for the given CRTC.
- u32 (\*get_vblank_counter) (struct drm_device \*dev, int crtc);
Retrieve the value of the vertical blanking counter for the given
CRTC. If the hardware maintains a vertical blanking counter its value
should be returned. Otherwise drivers can use the
:c:func:`drm_vblank_count()` helper function to handle this
operation.
Drivers must initialize the vertical blanking handling core with a call
to :c:func:`drm_vblank_init()` in their load operation.
Vertical blanking interrupts can be enabled by the DRM core or by
drivers themselves (for instance to handle page flipping operations).
The DRM core maintains a vertical blanking use count to ensure that the
interrupts are not disabled while a user still needs them. To increment
the use count, drivers call :c:func:`drm_vblank_get()`. Upon
return vertical blanking interrupts are guaranteed to be enabled.
To decrement the use count drivers call
:c:func:`drm_vblank_put()`. Only when the use count drops to zero
will the DRM core disable the vertical blanking interrupts after a delay
by scheduling a timer. The delay is accessible through the
vblankoffdelay module parameter or the ``drm_vblank_offdelay`` global
variable and expressed in milliseconds. Its default value is 5000 ms.
Zero means never disable, and a negative value means disable
immediately. Drivers may override the behaviour by setting the
:c:type:`struct drm_device <drm_device>`
vblank_disable_immediate flag, which when set causes vblank interrupts
to be disabled immediately regardless of the drm_vblank_offdelay
value. The flag should only be set if there's a properly working
hardware vblank counter present.
When a vertical blanking interrupt occurs drivers only need to call the
:c:func:`drm_handle_vblank()` function to account for the
interrupt.
Resources allocated by :c:func:`drm_vblank_init()` must be freed
with a call to :c:func:`drm_vblank_cleanup()` in the driver unload
operation handler.
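
The paragraphs above (replaced in this diff by the pulled-in kernel-doc)
describe the classic vblank flow. As a rough orientation, a minimal
driver-side sketch using only the helpers named in that text might look as
follows; the foo_* names, the single-CRTC assumption and the interrupt wiring
are hypothetical.

/* Hypothetical sketch of the vblank flow described above. */
#include <linux/interrupt.h>
#include <drm/drmP.h>

static irqreturn_t foo_irq_handler(int irq, void *arg)
{
	struct drm_device *dev = arg;

	/* Account for the vertical blanking interrupt on pipe 0. */
	drm_handle_vblank(dev, 0);
	return IRQ_HANDLED;
}

static int foo_load(struct drm_device *dev)
{
	/* Initialise vblank bookkeeping for one CRTC before enabling IRQs. */
	int ret = drm_vblank_init(dev, 1);

	if (ret)
		return ret;

	/*
	 * Around a page flip the driver would bump the use count with
	 * drm_vblank_get(dev, 0) and drop it with drm_vblank_put(dev, 0)
	 * once the flip has completed, as the text above explains.
	 */
	return 0;
}
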
.. kernel-doc:: drivers/gpu/drm/drm_vblank.c
:doc: vblank handling
Vertical Blanking and Interrupt Handling Functions Reference
------------------------------------------------------------


@ -191,7 +191,7 @@ acquired and release by :c:func:`calling drm_gem_object_get()` and
holding the lock.
When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object
the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.
@ -492,7 +492,7 @@ DRM Sync Objects
:doc: Overview
.. kernel-doc:: include/drm/drm_syncobj.h
:export:
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
:export:


@ -160,6 +160,8 @@ other hand, a driver requires shared state between clients which is
visible to user-space and accessible beyond open-file boundaries, they
cannot support render nodes.
.. _drm_driver_ioctl:
IOCTL Support on Device Nodes
=============================


@ -417,6 +417,10 @@ integrate with drm/i915 and to handle the `DRM_I915_PERF_OPEN` ioctl.
:functions: i915_perf_open_ioctl
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_release
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_add_config_ioctl
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_remove_config_ioctl
i915 Perf Stream
----------------
@ -477,4 +481,16 @@ specific details than found in the more high-level sections.
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:internal:
.. WARNING: DOCPROC directive not supported: !Cdrivers/gpu/drm/i915/i915_irq.c
Style
=====
The drm/i915 driver codebase has some style rules in addition to (and, in some
cases, deviating from) the kernel coding style.
Register macro definition style
-------------------------------
The style guide for ``i915_reg.h``.
.. kernel-doc:: drivers/gpu/drm/i915/i915_reg.h
:doc: The i915 register macro definition style guide


@ -108,8 +108,8 @@ This would be especially useful for tinydrm:
crtc state, clear that to the max values, x/y = 0 and w/h = MAX_INT, in
__drm_atomic_helper_crtc_duplicate_state().
- Move tinydrm_merge_clips into drm_framebuffer.c, dropping the tinydrm_
prefix ofc and using drm_fb_. drm_framebuffer.c makes sense since this
- Move tinydrm_merge_clips into drm_framebuffer.c, dropping the tinydrm\_
prefix ofc and using drm_fb\_. drm_framebuffer.c makes sense since this
is a function useful to implement the fb->dirty function.
- Create a new drm_fb_dirty function which does essentially what e.g.


@ -4359,6 +4359,12 @@ S: Maintained
F: drivers/gpu/drm/qxl/
F: include/uapi/drm/qxl_drm.h
DRM DRIVER FOR PERVASIVE DISPLAYS REPAPER PANELS
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained
F: drivers/gpu/drm/tinydrm/repaper.c
F: Documentation/devicetree/bindings/display/repaper.txt
DRM DRIVER FOR RAGE 128 VIDEO CARDS
S: Orphan / Obsolete
F: drivers/gpu/drm/r128/
@ -4374,6 +4380,12 @@ S: Orphan / Obsolete
F: drivers/gpu/drm/sis/
F: include/uapi/drm/sis_drm.h
DRM DRIVER FOR SITRONIX ST7586 PANELS
M: David Lechner <david@lechnology.com>
S: Maintained
F: drivers/gpu/drm/tinydrm/st7586.c
F: Documentation/devicetree/bindings/display/st7586.txt
DRM DRIVER FOR TDFX VIDEO CARDS
S: Orphan / Obsolete
F: drivers/gpu/drm/tdfx/
@ -4622,6 +4634,14 @@ F: drivers/gpu/drm/panel/
F: include/drm/drm_panel.h
F: Documentation/devicetree/bindings/display/panel/
DRM TINYDRM DRIVERS
M: Noralf Trønnes <noralf@tronnes.org>
W: https://github.com/notro/tinydrm/wiki/Development
T: git git://anongit.freedesktop.org/drm/drm-misc
S: Maintained
F: drivers/gpu/drm/tinydrm/
F: include/drm/tinydrm/
DSBR100 USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org
@ -6744,8 +6764,9 @@ S: Supported
F: drivers/scsi/isci/
INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
M: Daniel Vetter <daniel.vetter@intel.com>
M: Jani Nikula <jani.nikula@linux.intel.com>
M: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
M: Rodrigo Vivi <rodrigo.vivi@intel.com>
L: intel-gfx@lists.freedesktop.org
W: https://01.org/linuxgraphics/
B: https://01.org/linuxgraphics/documentation/how-report-bugs


@ -8,7 +8,7 @@ ccflags-y := -I$(srctree)/$(src)/include \
# Common support
obj-y := id.o io.o control.o devices.o fb.o timer.o pm.o \
common.o dma.o wd_timer.o display.o i2c.o hdq1w.o omap_hwmod.o \
omap_device.o omap-headsmp.o sram.o drm.o
omap_device.o omap-headsmp.o sram.o
hwmod-common = omap_hwmod.o omap_hwmod_reset.o \
omap_hwmod_common_data.o


@ -33,6 +33,7 @@ static void __init __maybe_unused omap_generic_init(void)
pdata_quirks_init(omap_dt_match_table);
omapdss_init_of();
omap_soc_device_init();
}
#ifdef CONFIG_SOC_OMAP2420


@ -66,6 +66,7 @@
*/
#define FRAMEDONE_IRQ_TIMEOUT 100
#if defined(CONFIG_FB_OMAP2)
static struct platform_device omap_display_device = {
.name = "omapdss",
.id = -1,
@ -163,6 +164,65 @@ static enum omapdss_version __init omap_display_get_version(void)
return OMAPDSS_VER_UNKNOWN;
}
static int __init omapdss_init_fbdev(void)
{
static struct omap_dss_board_info board_data = {
.dsi_enable_pads = omap_dsi_enable_pads,
.dsi_disable_pads = omap_dsi_disable_pads,
.set_min_bus_tput = omap_dss_set_min_bus_tput,
};
struct device_node *node;
int r;
board_data.version = omap_display_get_version();
if (board_data.version == OMAPDSS_VER_UNKNOWN) {
pr_err("DSS not supported on this SoC\n");
return -ENODEV;
}
omap_display_device.dev.platform_data = &board_data;
r = platform_device_register(&omap_display_device);
if (r < 0) {
pr_err("Unable to register omapdss device\n");
return r;
}
/* create vrfb device */
r = omap_init_vrfb();
if (r < 0) {
pr_err("Unable to register omapvrfb device\n");
return r;
}
/* create FB device */
r = omap_init_fb();
if (r < 0) {
pr_err("Unable to register omapfb device\n");
return r;
}
/* create V4L2 display device */
r = omap_init_vout();
if (r < 0) {
pr_err("Unable to register omap_vout device\n");
return r;
}
/* add DSI info for omap4 */
node = of_find_node_by_name(NULL, "omap4_padconf_global");
if (node)
omap4_dsi_mux_syscon = syscon_node_to_regmap(node);
return 0;
}
#else
static inline int omapdss_init_fbdev(void)
{
return 0;
}
#endif /* CONFIG_FB_OMAP2 */
static void dispc_disable_outputs(void)
{
u32 v, irq_mask = 0;
@ -335,16 +395,9 @@ static struct device_node * __init omapdss_find_dss_of_node(void)
int __init omapdss_init_of(void)
{
int r;
enum omapdss_version ver;
struct device_node *node;
struct platform_device *pdev;
static struct omap_dss_board_info board_data = {
.dsi_enable_pads = omap_dsi_enable_pads,
.dsi_disable_pads = omap_dsi_disable_pads,
.set_min_bus_tput = omap_dss_set_min_bus_tput,
};
/* only create dss helper devices if dss is enabled in the .dts */
node = omapdss_find_dss_of_node();
@ -354,13 +407,6 @@ int __init omapdss_init_of(void)
if (!of_device_is_available(node))
return 0;
ver = omap_display_get_version();
if (ver == OMAPDSS_VER_UNKNOWN) {
pr_err("DSS not supported on this SoC\n");
return -ENODEV;
}
pdev = of_find_device_by_node(node);
if (!pdev) {
@ -374,48 +420,5 @@ int __init omapdss_init_of(void)
return r;
}
board_data.version = ver;
omap_display_device.dev.platform_data = &board_data;
r = platform_device_register(&omap_display_device);
if (r < 0) {
pr_err("Unable to register omapdss device\n");
return r;
}
/* create DRM device */
r = omap_init_drm();
if (r < 0) {
pr_err("Unable to register omapdrm device\n");
return r;
}
/* create vrfb device */
r = omap_init_vrfb();
if (r < 0) {
pr_err("Unable to register omapvrfb device\n");
return r;
}
/* create FB device */
r = omap_init_fb();
if (r < 0) {
pr_err("Unable to register omapfb device\n");
return r;
}
/* create V4L2 display device */
r = omap_init_vout();
if (r < 0) {
pr_err("Unable to register omap_vout device\n");
return r;
}
/* add DSI info for omap4 */
node = of_find_node_by_name(NULL, "omap4_padconf_global");
if (node)
omap4_dsi_mux_syscon = syscon_node_to_regmap(node);
return 0;
return omapdss_init_fbdev();
}


@ -26,7 +26,6 @@ struct omap_dss_dispc_dev_attr {
bool has_framedonetv_irq;
};
int omap_init_drm(void);
int omap_init_vrfb(void);
int omap_init_fb(void);
int omap_init_vout(void);


@ -1,53 +0,0 @@
/*
* DRM/KMS device registration for TI OMAP platforms
*
* Copyright (C) 2012 Texas Instruments
* Author: Rob Clark <rob.clark@linaro.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/platform_data/omap_drm.h>
#include "soc.h"
#include "display.h"
#if IS_ENABLED(CONFIG_DRM_OMAP)
static struct omap_drm_platform_data platform_data;
static struct platform_device omap_drm_device = {
.dev = {
.coherent_dma_mask = DMA_BIT_MASK(32),
.platform_data = &platform_data,
},
.name = "omapdrm",
.id = 0,
};
int __init omap_init_drm(void)
{
platform_data.omaprev = GET_OMAP_TYPE;
return platform_device_register(&omap_drm_device);
}
#else
int __init omap_init_drm(void) { return 0; }
#endif


@ -428,7 +428,6 @@ static void __init __maybe_unused omap_hwmod_init_postsetup(void)
static void __init __maybe_unused omap_common_late_init(void)
{
omap2_common_pm_late_init();
omap_soc_device_init();
}
#ifdef CONFIG_SOC_OMAP2420


@ -280,9 +280,6 @@
&decon {
status = "okay";
i80-if-timings {
};
};
&decon_tv {
@ -1116,9 +1113,6 @@
&mic {
status = "okay";
i80-if-timings {
};
};
&pmu_system_controller {


@ -527,6 +527,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
INTEL_BXT_IDS(&gen9_early_ops),
INTEL_KBL_IDS(&gen9_early_ops),
INTEL_GLK_IDS(&gen9_early_ops),
INTEL_CNL_IDS(&gen9_early_ops),
};
static void __init


@ -381,7 +381,7 @@ static void agp_ali_remove(struct pci_dev *pdev)
agp_put_bridge(bridge);
}
static struct pci_device_id agp_ali_pci_table[] = {
static const struct pci_device_id agp_ali_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -21,7 +21,7 @@
#define AMD_TLBFLUSH 0x0c /* In mmio region (32-bit register) */
#define AMD_CACHEENTRY 0x10 /* In mmio region (32-bit register) */
static struct pci_device_id agp_amdk7_pci_table[];
static const struct pci_device_id agp_amdk7_pci_table[];
struct amd_page_map {
unsigned long *real;
@ -508,7 +508,7 @@ static int agp_amdk7_resume(struct pci_dev *pdev)
#endif /* CONFIG_PM */
/* must be the same order as name table above */
static struct pci_device_id agp_amdk7_pci_table[] = {
static const struct pci_device_id agp_amdk7_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -610,7 +610,7 @@ static int agp_amd64_resume(struct pci_dev *pdev)
#endif /* CONFIG_PM */
static struct pci_device_id agp_amd64_pci_table[] = {
static const struct pci_device_id agp_amd64_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -540,7 +540,7 @@ static void agp_ati_remove(struct pci_dev *pdev)
agp_put_bridge(bridge);
}
static struct pci_device_id agp_ati_pci_table[] = {
static const struct pci_device_id agp_ati_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -427,7 +427,7 @@ static int agp_efficeon_resume(struct pci_dev *pdev)
}
#endif
static struct pci_device_id agp_efficeon_pci_table[] = {
static const struct pci_device_id agp_efficeon_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -828,7 +828,7 @@ static int agp_intel_resume(struct pci_dev *pdev)
}
#endif
static struct pci_device_id agp_intel_pci_table[] = {
static const struct pci_device_id agp_intel_pci_table[] = {
#define ID(x) \
{ \
.class = (PCI_CLASS_BRIDGE_HOST << 8), \


@ -420,7 +420,7 @@ static int agp_nvidia_resume(struct pci_dev *pdev)
#endif
static struct pci_device_id agp_nvidia_pci_table[] = {
static const struct pci_device_id agp_nvidia_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -237,7 +237,7 @@ static int agp_sis_resume(struct pci_dev *pdev)
#endif /* CONFIG_PM */
static struct pci_device_id agp_sis_pci_table[] = {
static const struct pci_device_id agp_sis_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -679,7 +679,7 @@ static void agp_uninorth_remove(struct pci_dev *pdev)
agp_put_bridge(bridge);
}
static struct pci_device_id agp_uninorth_pci_table[] = {
static const struct pci_device_id agp_uninorth_pci_table[] = {
{
.class = (PCI_CLASS_BRIDGE_HOST << 8),
.class_mask = ~0,


@ -48,7 +48,7 @@ static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(0);
*/
u64 dma_fence_context_alloc(unsigned num)
{
BUG_ON(!num);
WARN_ON(!num);
return atomic64_add_return(num, &dma_fence_context_counter) - num;
}
EXPORT_SYMBOL(dma_fence_context_alloc);
@ -172,7 +172,7 @@ void dma_fence_release(struct kref *kref)
trace_dma_fence_destroy(fence);
BUG_ON(!list_empty(&fence->cb_list));
WARN_ON(!list_empty(&fence->cb_list));
if (fence->ops->release)
fence->ops->release(fence);


@ -195,8 +195,7 @@ done:
if (old)
kfree_rcu(old, rcu);
if (old_fence)
dma_fence_put(old_fence);
dma_fence_put(old_fence);
}
/**
@ -258,11 +257,70 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
dma_fence_put(rcu_dereference_protected(old->shared[i],
reservation_object_held(obj)));
if (old_fence)
dma_fence_put(old_fence);
dma_fence_put(old_fence);
}
EXPORT_SYMBOL(reservation_object_add_excl_fence);
/**
* reservation_object_copy_fences - Copy all fences from src to dst.
* @dst: the destination reservation object
* @src: the source reservation object
*
* Copy all fences from src to dst. Both src->lock and dst->lock must be
* held.
*/
int reservation_object_copy_fences(struct reservation_object *dst,
struct reservation_object *src)
{
struct reservation_object_list *src_list, *dst_list;
struct dma_fence *old, *new;
size_t size;
unsigned i;
src_list = reservation_object_get_list(src);
if (src_list) {
size = offsetof(typeof(*src_list),
shared[src_list->shared_count]);
dst_list = kmalloc(size, GFP_KERNEL);
if (!dst_list)
return -ENOMEM;
dst_list->shared_count = src_list->shared_count;
dst_list->shared_max = src_list->shared_count;
for (i = 0; i < src_list->shared_count; ++i)
dst_list->shared[i] =
dma_fence_get(src_list->shared[i]);
} else {
dst_list = NULL;
}
kfree(dst->staged);
dst->staged = NULL;
src_list = reservation_object_get_list(dst);
old = reservation_object_get_excl(dst);
new = reservation_object_get_excl(src);
dma_fence_get(new);
preempt_disable();
write_seqcount_begin(&dst->seq);
/* write_seqcount_begin provides the necessary memory barrier */
RCU_INIT_POINTER(dst->fence_excl, new);
RCU_INIT_POINTER(dst->fence, dst_list);
write_seqcount_end(&dst->seq);
preempt_enable();
if (src_list)
kfree_rcu(src_list, rcu);
dma_fence_put(old);
return 0;
}
EXPORT_SYMBOL(reservation_object_copy_fences);
/**
* reservation_object_get_fences_rcu - Get an object's shared and exclusive
* fences without update side lock held
@ -373,12 +431,25 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
long ret = timeout ? timeout : 1;
retry:
fence = NULL;
shared_count = 0;
seq = read_seqcount_begin(&obj->seq);
rcu_read_lock();
if (wait_all) {
fence = rcu_dereference(obj->fence_excl);
if (fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
if (!dma_fence_get_rcu(fence))
goto unlock_retry;
if (dma_fence_is_signaled(fence)) {
dma_fence_put(fence);
fence = NULL;
}
} else {
fence = NULL;
}
if (!fence && wait_all) {
struct reservation_object_list *fobj =
rcu_dereference(obj->fence);
@ -405,22 +476,6 @@ retry:
}
}
if (!shared_count) {
struct dma_fence *fence_excl = rcu_dereference(obj->fence_excl);
if (fence_excl &&
!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
&fence_excl->flags)) {
if (!dma_fence_get_rcu(fence_excl))
goto unlock_retry;
if (dma_fence_is_signaled(fence_excl))
dma_fence_put(fence_excl);
else
fence = fence_excl;
}
}
rcu_read_unlock();
if (fence) {
if (read_seqcount_retry(&obj->seq, seq)) {


@ -96,9 +96,9 @@ static struct sync_timeline *sync_timeline_create(const char *name)
obj->context = dma_fence_context_alloc(1);
strlcpy(obj->name, name, sizeof(obj->name));
INIT_LIST_HEAD(&obj->child_list_head);
INIT_LIST_HEAD(&obj->active_list_head);
spin_lock_init(&obj->child_list_lock);
obj->pt_tree = RB_ROOT;
INIT_LIST_HEAD(&obj->pt_list);
spin_lock_init(&obj->lock);
sync_timeline_debug_add(obj);
@ -125,68 +125,6 @@ static void sync_timeline_put(struct sync_timeline *obj)
kref_put(&obj->kref, sync_timeline_free);
}
/**
* sync_timeline_signal() - signal a status change on a sync_timeline
* @obj: sync_timeline to signal
* @inc: num to increment on timeline->value
*
* A sync implementation should call this any time one of its fences
* has signaled or has an error condition.
*/
static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
{
unsigned long flags;
struct sync_pt *pt, *next;
trace_sync_timeline(obj);
spin_lock_irqsave(&obj->child_list_lock, flags);
obj->value += inc;
list_for_each_entry_safe(pt, next, &obj->active_list_head,
active_list) {
if (dma_fence_is_signaled_locked(&pt->base))
list_del_init(&pt->active_list);
}
spin_unlock_irqrestore(&obj->child_list_lock, flags);
}
/**
* sync_pt_create() - creates a sync pt
* @parent: fence's parent sync_timeline
* @size: size to allocate for this pt
* @inc: value of the fence
*
* Creates a new sync_pt as a child of @parent. @size bytes will be
* allocated allowing for implementation specific data to be kept after
* the generic sync_timeline struct. Returns the sync_pt object or
* NULL in case of error.
*/
static struct sync_pt *sync_pt_create(struct sync_timeline *obj, int size,
unsigned int value)
{
unsigned long flags;
struct sync_pt *pt;
if (size < sizeof(*pt))
return NULL;
pt = kzalloc(size, GFP_KERNEL);
if (!pt)
return NULL;
spin_lock_irqsave(&obj->child_list_lock, flags);
sync_timeline_get(obj);
dma_fence_init(&pt->base, &timeline_fence_ops, &obj->child_list_lock,
obj->context, value);
list_add_tail(&pt->child_list, &obj->child_list_head);
INIT_LIST_HEAD(&pt->active_list);
spin_unlock_irqrestore(&obj->child_list_lock, flags);
return pt;
}
static const char *timeline_fence_get_driver_name(struct dma_fence *fence)
{
return "sw_sync";
@ -203,13 +141,17 @@ static void timeline_fence_release(struct dma_fence *fence)
{
struct sync_pt *pt = dma_fence_to_sync_pt(fence);
struct sync_timeline *parent = dma_fence_parent(fence);
unsigned long flags;
spin_lock_irqsave(fence->lock, flags);
list_del(&pt->child_list);
if (!list_empty(&pt->active_list))
list_del(&pt->active_list);
spin_unlock_irqrestore(fence->lock, flags);
if (!list_empty(&pt->link)) {
unsigned long flags;
spin_lock_irqsave(fence->lock, flags);
if (!list_empty(&pt->link)) {
list_del(&pt->link);
rb_erase(&pt->node, &parent->pt_tree);
}
spin_unlock_irqrestore(fence->lock, flags);
}
sync_timeline_put(parent);
dma_fence_free(fence);
@ -219,18 +161,11 @@ static bool timeline_fence_signaled(struct dma_fence *fence)
{
struct sync_timeline *parent = dma_fence_parent(fence);
return (fence->seqno > parent->value) ? false : true;
return !__dma_fence_is_later(fence->seqno, parent->value);
}
static bool timeline_fence_enable_signaling(struct dma_fence *fence)
{
struct sync_pt *pt = dma_fence_to_sync_pt(fence);
struct sync_timeline *parent = dma_fence_parent(fence);
if (timeline_fence_signaled(fence))
return false;
list_add_tail(&pt->active_list, &parent->active_list_head);
return true;
}
@ -259,6 +194,107 @@ static const struct dma_fence_ops timeline_fence_ops = {
.timeline_value_str = timeline_fence_timeline_value_str,
};
/**
* sync_timeline_signal() - signal a status change on a sync_timeline
* @obj: sync_timeline to signal
* @inc: num to increment on timeline->value
*
* A sync implementation should call this any time one of its fences
* has signaled or has an error condition.
*/
static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
{
struct sync_pt *pt, *next;
trace_sync_timeline(obj);
spin_lock_irq(&obj->lock);
obj->value += inc;
list_for_each_entry_safe(pt, next, &obj->pt_list, link) {
if (!timeline_fence_signaled(&pt->base))
break;
list_del_init(&pt->link);
rb_erase(&pt->node, &obj->pt_tree);
/*
* A signal callback may release the last reference to this
* fence, causing it to be freed. That operation has to be
* last to avoid a use after free inside this loop, and must
* be after we remove the fence from the timeline in order to
* prevent deadlocking on timeline->lock inside
* timeline_fence_release().
*/
dma_fence_signal_locked(&pt->base);
}
spin_unlock_irq(&obj->lock);
}
/**
* sync_pt_create() - creates a sync pt
* @obj: parent sync_timeline
* @value: value of the fence
*
* Creates a new sync_pt (fence) as a child of @obj with the given timeline
* @value. Returns the sync_pt object or NULL in case of error.
*/
static struct sync_pt *sync_pt_create(struct sync_timeline *obj,
unsigned int value)
{
struct sync_pt *pt;
pt = kzalloc(sizeof(*pt), GFP_KERNEL);
if (!pt)
return NULL;
sync_timeline_get(obj);
dma_fence_init(&pt->base, &timeline_fence_ops, &obj->lock,
obj->context, value);
INIT_LIST_HEAD(&pt->link);
spin_lock_irq(&obj->lock);
if (!dma_fence_is_signaled_locked(&pt->base)) {
struct rb_node **p = &obj->pt_tree.rb_node;
struct rb_node *parent = NULL;
while (*p) {
struct sync_pt *other;
int cmp;
parent = *p;
other = rb_entry(parent, typeof(*pt), node);
cmp = value - other->base.seqno;
if (cmp > 0) {
p = &parent->rb_right;
} else if (cmp < 0) {
p = &parent->rb_left;
} else {
if (dma_fence_get_rcu(&other->base)) {
dma_fence_put(&pt->base);
pt = other;
goto unlock;
}
p = &parent->rb_left;
}
}
rb_link_node(&pt->node, parent, p);
rb_insert_color(&pt->node, &obj->pt_tree);
parent = rb_next(&pt->node);
list_add_tail(&pt->link,
parent ? &rb_entry(parent, typeof(*pt), node)->link : &obj->pt_list);
}
unlock:
spin_unlock_irq(&obj->lock);
return pt;
}
/*
* *WARNING*
*
@ -309,7 +345,7 @@ static long sw_sync_ioctl_create_fence(struct sync_timeline *obj,
goto err;
}
pt = sync_pt_create(obj, sizeof(*pt), data.value);
pt = sync_pt_create(obj, data.value);
if (!pt) {
err = -ENOMEM;
goto err;
@ -345,6 +381,11 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg)
if (copy_from_user(&value, (void __user *)arg, sizeof(value)))
return -EFAULT;
while (value > INT_MAX) {
sync_timeline_signal(obj, INT_MAX);
value -= INT_MAX;
}
sync_timeline_signal(obj, value);
return 0;


@ -116,17 +116,15 @@ static void sync_print_fence(struct seq_file *s,
static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
{
struct list_head *pos;
unsigned long flags;
seq_printf(s, "%s: %d\n", obj->name, obj->value);
spin_lock_irqsave(&obj->child_list_lock, flags);
list_for_each(pos, &obj->child_list_head) {
struct sync_pt *pt =
container_of(pos, struct sync_pt, child_list);
spin_lock_irq(&obj->lock);
list_for_each(pos, &obj->pt_list) {
struct sync_pt *pt = container_of(pos, struct sync_pt, link);
sync_print_fence(s, &pt->base, false);
}
spin_unlock_irqrestore(&obj->child_list_lock, flags);
spin_unlock_irq(&obj->lock);
}
static void sync_print_sync_file(struct seq_file *s,
@ -151,12 +149,11 @@ static void sync_print_sync_file(struct seq_file *s,
static int sync_debugfs_show(struct seq_file *s, void *unused)
{
unsigned long flags;
struct list_head *pos;
seq_puts(s, "objs:\n--------------\n");
spin_lock_irqsave(&sync_timeline_list_lock, flags);
spin_lock_irq(&sync_timeline_list_lock);
list_for_each(pos, &sync_timeline_list_head) {
struct sync_timeline *obj =
container_of(pos, struct sync_timeline,
@ -165,11 +162,11 @@ static int sync_debugfs_show(struct seq_file *s, void *unused)
sync_print_obj(s, obj);
seq_putc(s, '\n');
}
spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
spin_unlock_irq(&sync_timeline_list_lock);
seq_puts(s, "fences:\n--------------\n");
spin_lock_irqsave(&sync_file_list_lock, flags);
spin_lock_irq(&sync_file_list_lock);
list_for_each(pos, &sync_file_list_head) {
struct sync_file *sync_file =
container_of(pos, struct sync_file, sync_file_list);
@ -177,7 +174,7 @@ static int sync_debugfs_show(struct seq_file *s, void *unused)
sync_print_sync_file(s, sync_file);
seq_putc(s, '\n');
}
spin_unlock_irqrestore(&sync_file_list_lock, flags);
spin_unlock_irq(&sync_file_list_lock);
return 0;
}


@ -14,6 +14,7 @@
#define _LINUX_SYNC_H
#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/dma-fence.h>
@ -24,42 +25,41 @@
* struct sync_timeline - sync object
* @kref: reference count on fence.
* @name: name of the sync_timeline. Useful for debugging
* @child_list_head: list of children sync_pts for this sync_timeline
* @child_list_lock: lock protecting @child_list_head and fence.status
* @active_list_head: list of active (unsignaled/errored) sync_pts
* @lock: lock protecting @pt_list and @value
* @pt_tree: rbtree of active (unsignaled/errored) sync_pts
* @pt_list: list of active (unsignaled/errored) sync_pts
* @sync_timeline_list: membership in global sync_timeline_list
*/
struct sync_timeline {
struct kref kref;
char name[32];
/* protected by child_list_lock */
/* protected by lock */
u64 context;
int value;
struct list_head child_list_head;
spinlock_t child_list_lock;
struct list_head active_list_head;
struct rb_root pt_tree;
struct list_head pt_list;
spinlock_t lock;
struct list_head sync_timeline_list;
};
static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
{
return container_of(fence->lock, struct sync_timeline, child_list_lock);
return container_of(fence->lock, struct sync_timeline, lock);
}
/**
* struct sync_pt - sync_pt object
* @base: base fence object
* @child_list: sync timeline child's list
* @active_list: sync timeline active child's list
* @link: link on the sync timeline's list
* @node: node in the sync timeline's tree
*/
struct sync_pt {
struct dma_fence base;
struct list_head child_list;
struct list_head active_list;
struct list_head link;
struct rb_node node;
};
#ifdef CONFIG_SW_SYNC


@ -33,7 +33,7 @@ drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o \
drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
drm_kms_helper_common.o drm_dp_dual_mode_helper.o \
drm_simple_kms_helper.o drm_modeset_helper.o \
drm_scdc_helper.o
drm_scdc_helper.o drm_gem_framebuffer_helper.o
drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o
drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o


@ -25,7 +25,7 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_prime.o amdgpu_vm.o amdgpu_ib.o amdgpu_pll.o \
amdgpu_ucode.o amdgpu_bo_list.o amdgpu_ctx.o amdgpu_sync.o \
amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o amdgpu_atomfirmware.o \
amdgpu_queue_mgr.o
amdgpu_queue_mgr.o amdgpu_vf_error.o
# add asic specific block
amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \


@ -68,13 +68,16 @@
#include "gpu_scheduler.h"
#include "amdgpu_virt.h"
#include "amdgpu_gart.h"
/*
* Modules parameters.
*/
extern int amdgpu_modeset;
extern int amdgpu_vram_limit;
extern int amdgpu_gart_size;
extern int amdgpu_vis_vram_limit;
extern unsigned amdgpu_gart_size;
extern int amdgpu_gtt_size;
extern int amdgpu_moverate;
extern int amdgpu_benchmarking;
extern int amdgpu_testing;
@ -93,6 +96,7 @@ extern int amdgpu_bapm;
extern int amdgpu_deep_color;
extern int amdgpu_vm_size;
extern int amdgpu_vm_block_size;
extern int amdgpu_vm_fragment_size;
extern int amdgpu_vm_fault_stop;
extern int amdgpu_vm_debug;
extern int amdgpu_vm_update_mode;
@ -104,6 +108,7 @@ extern unsigned amdgpu_pcie_gen_cap;
extern unsigned amdgpu_pcie_lane_cap;
extern unsigned amdgpu_cg_mask;
extern unsigned amdgpu_pg_mask;
extern unsigned amdgpu_sdma_phase_quantum;
extern char *amdgpu_disable_cu;
extern char *amdgpu_virtual_display;
extern unsigned amdgpu_pp_feature_mask;
@ -369,78 +374,10 @@ struct amdgpu_clock {
};
/*
* BO.
* GEM.
*/
struct amdgpu_bo_list_entry {
struct amdgpu_bo *robj;
struct ttm_validate_buffer tv;
struct amdgpu_bo_va *bo_va;
uint32_t priority;
struct page **user_pages;
int user_invalidated;
};
struct amdgpu_bo_va_mapping {
struct list_head list;
struct rb_node rb;
uint64_t start;
uint64_t last;
uint64_t __subtree_last;
uint64_t offset;
uint64_t flags;
};
/* bo virtual addresses in a specific vm */
struct amdgpu_bo_va {
/* protected by bo being reserved */
struct list_head bo_list;
struct dma_fence *last_pt_update;
unsigned ref_count;
/* protected by vm mutex and spinlock */
struct list_head vm_status;
/* mappings for this bo_va */
struct list_head invalids;
struct list_head valids;
/* constant after initialization */
struct amdgpu_vm *vm;
struct amdgpu_bo *bo;
};
#define AMDGPU_GEM_DOMAIN_MAX 0x3
struct amdgpu_bo {
/* Protected by tbo.reserved */
u32 prefered_domains;
u32 allowed_domains;
struct ttm_place placements[AMDGPU_GEM_DOMAIN_MAX + 1];
struct ttm_placement placement;
struct ttm_buffer_object tbo;
struct ttm_bo_kmap_obj kmap;
u64 flags;
unsigned pin_count;
void *kptr;
u64 tiling_flags;
u64 metadata_flags;
void *metadata;
u32 metadata_size;
unsigned prime_shared_count;
/* list of all virtual address to which this bo
* is associated to
*/
struct list_head va;
/* Constant after initialization */
struct drm_gem_object gem_base;
struct amdgpu_bo *parent;
struct amdgpu_bo *shadow;
struct ttm_bo_kmap_obj dma_buf_vmap;
struct amdgpu_mn *mn;
struct list_head mn_list;
struct list_head shadow_list;
};
#define gem_to_amdgpu_bo(gobj) container_of((gobj), struct amdgpu_bo, gem_base)
void amdgpu_gem_object_free(struct drm_gem_object *obj);
@ -531,49 +468,6 @@ int amdgpu_mode_dumb_mmap(struct drm_file *filp,
int amdgpu_fence_slab_init(void);
void amdgpu_fence_slab_fini(void);
/*
* GART structures, functions & helpers
*/
struct amdgpu_mc;
#define AMDGPU_GPU_PAGE_SIZE 4096
#define AMDGPU_GPU_PAGE_MASK (AMDGPU_GPU_PAGE_SIZE - 1)
#define AMDGPU_GPU_PAGE_SHIFT 12
#define AMDGPU_GPU_PAGE_ALIGN(a) (((a) + AMDGPU_GPU_PAGE_MASK) & ~AMDGPU_GPU_PAGE_MASK)
struct amdgpu_gart {
dma_addr_t table_addr;
struct amdgpu_bo *robj;
void *ptr;
unsigned num_gpu_pages;
unsigned num_cpu_pages;
unsigned table_size;
#ifdef CONFIG_DRM_AMDGPU_GART_DEBUGFS
struct page **pages;
#endif
bool ready;
/* Asic default pte flags */
uint64_t gart_pte_flags;
const struct amdgpu_gart_funcs *gart_funcs;
};
int amdgpu_gart_table_ram_alloc(struct amdgpu_device *adev);
void amdgpu_gart_table_ram_free(struct amdgpu_device *adev);
int amdgpu_gart_table_vram_alloc(struct amdgpu_device *adev);
void amdgpu_gart_table_vram_free(struct amdgpu_device *adev);
int amdgpu_gart_table_vram_pin(struct amdgpu_device *adev);
void amdgpu_gart_table_vram_unpin(struct amdgpu_device *adev);
int amdgpu_gart_init(struct amdgpu_device *adev);
void amdgpu_gart_fini(struct amdgpu_device *adev);
int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
int pages);
int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
int pages, struct page **pagelist,
dma_addr_t *dma_addr, uint64_t flags);
int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
/*
* VMHUB structures, functions & helpers
*/
@ -598,22 +492,20 @@ struct amdgpu_mc {
* about vram size near mc fb location */
u64 mc_vram_size;
u64 visible_vram_size;
u64 gtt_size;
u64 gtt_start;
u64 gtt_end;
u64 gart_size;
u64 gart_start;
u64 gart_end;
u64 vram_start;
u64 vram_end;
unsigned vram_width;
u64 real_vram_size;
int vram_mtrr;
u64 gtt_base_align;
u64 mc_mask;
const struct firmware *fw; /* MC firmware */
uint32_t fw_version;
struct amdgpu_irq_src vm_fault;
uint32_t vram_type;
uint32_t srbm_soft_reset;
struct amdgpu_mode_mc_save save;
bool prt_warning;
uint64_t stolen_size;
/* apertures */
@ -719,15 +611,15 @@ typedef enum _AMDGPU_DOORBELL64_ASSIGNMENT
/* overlap the doorbell assignment with VCN as they are mutually exclusive
* VCE engine's doorbell is 32 bit and two VCE ring share one QWORD
*/
AMDGPU_DOORBELL64_RING0_1 = 0xF8,
AMDGPU_DOORBELL64_RING2_3 = 0xF9,
AMDGPU_DOORBELL64_RING4_5 = 0xFA,
AMDGPU_DOORBELL64_RING6_7 = 0xFB,
AMDGPU_DOORBELL64_UVD_RING0_1 = 0xF8,
AMDGPU_DOORBELL64_UVD_RING2_3 = 0xF9,
AMDGPU_DOORBELL64_UVD_RING4_5 = 0xFA,
AMDGPU_DOORBELL64_UVD_RING6_7 = 0xFB,
AMDGPU_DOORBELL64_UVD_RING0_1 = 0xFC,
AMDGPU_DOORBELL64_UVD_RING2_3 = 0xFD,
AMDGPU_DOORBELL64_UVD_RING4_5 = 0xFE,
AMDGPU_DOORBELL64_UVD_RING6_7 = 0xFF,
AMDGPU_DOORBELL64_VCE_RING0_1 = 0xFC,
AMDGPU_DOORBELL64_VCE_RING2_3 = 0xFD,
AMDGPU_DOORBELL64_VCE_RING4_5 = 0xFE,
AMDGPU_DOORBELL64_VCE_RING6_7 = 0xFF,
AMDGPU_DOORBELL64_MAX_ASSIGNMENT = 0xFF,
AMDGPU_DOORBELL64_INVALID = 0xFFFF
@ -857,6 +749,7 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
struct amdgpu_fpriv {
struct amdgpu_vm vm;
struct amdgpu_bo_va *prt_va;
struct amdgpu_bo_va *csa_va;
struct mutex bo_list_lock;
struct idr bo_list_handles;
struct amdgpu_ctx_mgr ctx_mgr;
@ -866,6 +759,14 @@ struct amdgpu_fpriv {
/*
* residency list
*/
struct amdgpu_bo_list_entry {
struct amdgpu_bo *robj;
struct ttm_validate_buffer tv;
struct amdgpu_bo_va *bo_va;
uint32_t priority;
struct page **user_pages;
int user_invalidated;
};
struct amdgpu_bo_list {
struct mutex lock;
@ -1159,7 +1060,9 @@ struct amdgpu_cs_parser {
struct list_head validated;
struct dma_fence *fence;
uint64_t bytes_moved_threshold;
uint64_t bytes_moved_vis_threshold;
uint64_t bytes_moved;
uint64_t bytes_moved_vis;
struct amdgpu_bo_list_entry *evictable;
/* user fence */
@ -1230,8 +1133,6 @@ struct amdgpu_wb {
int amdgpu_wb_get(struct amdgpu_device *adev, u32 *wb);
void amdgpu_wb_free(struct amdgpu_device *adev, u32 wb);
int amdgpu_wb_get_64bit(struct amdgpu_device *adev, u32 *wb);
void amdgpu_wb_free_64bit(struct amdgpu_device *adev, u32 wb);
void amdgpu_get_pcie_info(struct amdgpu_device *adev);
@ -1525,7 +1426,7 @@ struct amdgpu_device {
bool is_atom_fw;
uint8_t *bios;
uint32_t bios_size;
struct amdgpu_bo *stollen_vga_memory;
struct amdgpu_bo *stolen_vga_memory;
uint32_t bios_scratch_reg_offset;
uint32_t bios_scratch[AMDGPU_BIOS_NUM_SCRATCH];
@ -1557,6 +1458,10 @@ struct amdgpu_device {
spinlock_t gc_cac_idx_lock;
amdgpu_rreg_t gc_cac_rreg;
amdgpu_wreg_t gc_cac_wreg;
/* protects concurrent se_cac register access */
spinlock_t se_cac_idx_lock;
amdgpu_rreg_t se_cac_rreg;
amdgpu_wreg_t se_cac_wreg;
/* protects concurrent ENDPOINT (audio) register access */
spinlock_t audio_endpt_idx_lock;
amdgpu_block_rreg_t audio_endpt_rreg;
@ -1579,9 +1484,6 @@ struct amdgpu_device {
struct amdgpu_mman mman;
struct amdgpu_vram_scratch vram_scratch;
struct amdgpu_wb wb;
atomic64_t vram_usage;
atomic64_t vram_vis_usage;
atomic64_t gtt_usage;
atomic64_t num_bytes_moved;
atomic64_t num_evictions;
atomic64_t num_vram_cpu_page_faults;
@ -1593,6 +1495,7 @@ struct amdgpu_device {
spinlock_t lock;
s64 last_update_us;
s64 accum_us; /* accumulated microseconds */
s64 accum_us_vis; /* for visible VRAM */
u32 log2_max_MBps;
} mm_stats;
@ -1687,6 +1590,8 @@ struct amdgpu_device {
bool has_hw_reset;
u8 reset_magic[AMDGPU_RESET_MAGIC_NUM];
/* record last mm index being written through WREG32*/
unsigned long last_mm_index;
};
static inline struct amdgpu_device *amdgpu_ttm_adev(struct ttm_bo_device *bdev)
@ -1742,6 +1647,8 @@ void amdgpu_mm_wdoorbell64(struct amdgpu_device *adev, u32 index, u64 v);
#define WREG32_DIDT(reg, v) adev->didt_wreg(adev, (reg), (v))
#define RREG32_GC_CAC(reg) adev->gc_cac_rreg(adev, (reg))
#define WREG32_GC_CAC(reg, v) adev->gc_cac_wreg(adev, (reg), (v))
#define RREG32_SE_CAC(reg) adev->se_cac_rreg(adev, (reg))
#define WREG32_SE_CAC(reg, v) adev->se_cac_wreg(adev, (reg), (v))
#define RREG32_AUDIO_ENDPT(block, reg) adev->audio_endpt_rreg(adev, (block), (reg))
#define WREG32_AUDIO_ENDPT(block, reg, v) adev->audio_endpt_wreg(adev, (block), (reg), (v))
#define WREG32_P(reg, val, mask) \
@ -1792,50 +1699,6 @@ void amdgpu_mm_wdoorbell64(struct amdgpu_device *adev, u32 index, u64 v);
#define RBIOS16(i) (RBIOS8(i) | (RBIOS8((i)+1) << 8))
#define RBIOS32(i) ((RBIOS16(i)) | (RBIOS16((i)+2) << 16))
/*
* RING helpers.
*/
static inline void amdgpu_ring_write(struct amdgpu_ring *ring, uint32_t v)
{
if (ring->count_dw <= 0)
DRM_ERROR("amdgpu: writing more dwords to the ring than expected!\n");
ring->ring[ring->wptr++ & ring->buf_mask] = v;
ring->wptr &= ring->ptr_mask;
ring->count_dw--;
}
static inline void amdgpu_ring_write_multiple(struct amdgpu_ring *ring, void *src, int count_dw)
{
unsigned occupied, chunk1, chunk2;
void *dst;
if (unlikely(ring->count_dw < count_dw)) {
DRM_ERROR("amdgpu: writing more dwords to the ring than expected!\n");
return;
}
occupied = ring->wptr & ring->buf_mask;
dst = (void *)&ring->ring[occupied];
chunk1 = ring->buf_mask + 1 - occupied;
chunk1 = (chunk1 >= count_dw) ? count_dw: chunk1;
chunk2 = count_dw - chunk1;
chunk1 <<= 2;
chunk2 <<= 2;
if (chunk1)
memcpy(dst, src, chunk1);
if (chunk2) {
src += chunk1;
dst = (void *)ring->ring;
memcpy(dst, src, chunk2);
}
ring->wptr += count_dw;
ring->wptr &= ring->ptr_mask;
ring->count_dw -= count_dw;
}
static inline struct amdgpu_sdma_instance *
amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
{
@ -1898,7 +1761,6 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
#define amdgpu_ih_get_wptr(adev) (adev)->irq.ih_funcs->get_wptr((adev))
#define amdgpu_ih_decode_iv(adev, iv) (adev)->irq.ih_funcs->decode_iv((adev), (iv))
#define amdgpu_ih_set_rptr(adev) (adev)->irq.ih_funcs->set_rptr((adev))
#define amdgpu_display_set_vga_render_state(adev, r) (adev)->mode_info.funcs->set_vga_render_state((adev), (r))
#define amdgpu_display_vblank_get_counter(adev, crtc) (adev)->mode_info.funcs->vblank_get_counter((adev), (crtc))
#define amdgpu_display_vblank_wait(adev, crtc) (adev)->mode_info.funcs->vblank_wait((adev), (crtc))
#define amdgpu_display_backlight_set_level(adev, e, l) (adev)->mode_info.funcs->backlight_set_level((e), (l))
@ -1911,8 +1773,6 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
#define amdgpu_display_page_flip_get_scanoutpos(adev, crtc, vbl, pos) (adev)->mode_info.funcs->page_flip_get_scanoutpos((adev), (crtc), (vbl), (pos))
#define amdgpu_display_add_encoder(adev, e, s, c) (adev)->mode_info.funcs->add_encoder((adev), (e), (s), (c))
#define amdgpu_display_add_connector(adev, ci, sd, ct, ib, coi, h, r) (adev)->mode_info.funcs->add_connector((adev), (ci), (sd), (ct), (ib), (coi), (h), (r))
#define amdgpu_display_stop_mc_access(adev, s) (adev)->mode_info.funcs->stop_mc_access((adev), (s))
#define amdgpu_display_resume_mc_access(adev, s) (adev)->mode_info.funcs->resume_mc_access((adev), (s))
#define amdgpu_emit_copy_buffer(adev, ib, s, d, b) (adev)->mman.buffer_funcs->emit_copy_buffer((ib), (s), (d), (b))
#define amdgpu_emit_fill_buffer(adev, ib, s, d, b) (adev)->mman.buffer_funcs->emit_fill_buffer((ib), (s), (d), (b))
#define amdgpu_gfx_get_gpu_clock_counter(adev) (adev)->gfx.funcs->get_gpu_clock_counter((adev))
@ -1927,7 +1787,8 @@ void amdgpu_pci_config_reset(struct amdgpu_device *adev);
bool amdgpu_need_post(struct amdgpu_device *adev);
void amdgpu_update_display_priority(struct amdgpu_device *adev);
void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes);
void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
u64 num_vis_bytes);
void amdgpu_ttm_placement_from_domain(struct amdgpu_bo *abo, u32 domain);
bool amdgpu_ttm_bo_is_amdgpu_bo(struct ttm_buffer_object *bo);
int amdgpu_ttm_tt_get_user_pages(struct ttm_tt *ttm, struct page **pages);
@ -1943,7 +1804,7 @@ bool amdgpu_ttm_tt_is_readonly(struct ttm_tt *ttm);
uint64_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt *ttm,
struct ttm_mem_reg *mem);
void amdgpu_vram_location(struct amdgpu_device *adev, struct amdgpu_mc *mc, u64 base);
void amdgpu_gtt_location(struct amdgpu_device *adev, struct amdgpu_mc *mc);
void amdgpu_gart_location(struct amdgpu_device *adev, struct amdgpu_mc *mc);
void amdgpu_ttm_set_active_vram_size(struct amdgpu_device *adev, u64 size);
int amdgpu_ttm_init(struct amdgpu_device *adev);
void amdgpu_ttm_fini(struct amdgpu_device *adev);


@ -285,19 +285,20 @@ static int acp_hw_init(void *handle)
return 0;
else if (r)
return r;
if (adev->asic_type != CHIP_STONEY) {
adev->acp.acp_genpd = kzalloc(sizeof(struct acp_pm_domain), GFP_KERNEL);
if (adev->acp.acp_genpd == NULL)
return -ENOMEM;
adev->acp.acp_genpd = kzalloc(sizeof(struct acp_pm_domain), GFP_KERNEL);
if (adev->acp.acp_genpd == NULL)
return -ENOMEM;
adev->acp.acp_genpd->gpd.name = "ACP_AUDIO";
adev->acp.acp_genpd->gpd.power_off = acp_poweroff;
adev->acp.acp_genpd->gpd.power_on = acp_poweron;
adev->acp.acp_genpd->gpd.name = "ACP_AUDIO";
adev->acp.acp_genpd->gpd.power_off = acp_poweroff;
adev->acp.acp_genpd->gpd.power_on = acp_poweron;
adev->acp.acp_genpd->cgs_dev = adev->acp.cgs_device;
adev->acp.acp_genpd->cgs_dev = adev->acp.cgs_device;
pm_genpd_init(&adev->acp.acp_genpd->gpd, NULL, false);
pm_genpd_init(&adev->acp.acp_genpd->gpd, NULL, false);
}
adev->acp.acp_cell = kzalloc(sizeof(struct mfd_cell) * ACP_DEVS,
GFP_KERNEL);
@ -319,14 +320,29 @@ static int acp_hw_init(void *handle)
return -ENOMEM;
}
i2s_pdata[0].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET;
switch (adev->asic_type) {
case CHIP_STONEY:
i2s_pdata[0].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET |
DW_I2S_QUIRK_16BIT_IDX_OVERRIDE;
break;
default:
i2s_pdata[0].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET;
}
i2s_pdata[0].cap = DWC_I2S_PLAY;
i2s_pdata[0].snd_rates = SNDRV_PCM_RATE_8000_96000;
i2s_pdata[0].i2s_reg_comp1 = ACP_I2S_COMP1_PLAY_REG_OFFSET;
i2s_pdata[0].i2s_reg_comp2 = ACP_I2S_COMP2_PLAY_REG_OFFSET;
switch (adev->asic_type) {
case CHIP_STONEY:
i2s_pdata[1].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET |
DW_I2S_QUIRK_COMP_PARAM1 |
DW_I2S_QUIRK_16BIT_IDX_OVERRIDE;
break;
default:
i2s_pdata[1].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET |
DW_I2S_QUIRK_COMP_PARAM1;
}
i2s_pdata[1].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET |
DW_I2S_QUIRK_COMP_PARAM1;
i2s_pdata[1].cap = DWC_I2S_RECORD;
i2s_pdata[1].snd_rates = SNDRV_PCM_RATE_8000_96000;
i2s_pdata[1].i2s_reg_comp1 = ACP_I2S_COMP1_CAP_REG_OFFSET;
@ -373,12 +389,14 @@ static int acp_hw_init(void *handle)
if (r)
return r;
for (i = 0; i < ACP_DEVS ; i++) {
dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
if (r) {
dev_err(dev, "Failed to add dev to genpd\n");
return r;
if (adev->asic_type != CHIP_STONEY) {
for (i = 0; i < ACP_DEVS ; i++) {
dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
if (r) {
dev_err(dev, "Failed to add dev to genpd\n");
return r;
}
}
}
@ -398,20 +416,22 @@ static int acp_hw_fini(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
/* return early if no ACP */
if (!adev->acp.acp_genpd)
if (!adev->acp.acp_cell)
return 0;
for (i = 0; i < ACP_DEVS ; i++) {
dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
ret = pm_genpd_remove_device(&adev->acp.acp_genpd->gpd, dev);
/* If removal fails, don't give up and try the rest */
if (ret)
dev_err(dev, "remove dev from genpd failed\n");
if (adev->acp.acp_genpd) {
for (i = 0; i < ACP_DEVS ; i++) {
dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
ret = pm_genpd_remove_device(&adev->acp.acp_genpd->gpd, dev);
/* If removal fails, don't give up and try the rest */
if (ret)
dev_err(dev, "remove dev from genpd failed\n");
}
kfree(adev->acp.acp_genpd);
}
mfd_remove_devices(adev->acp.parent);
kfree(adev->acp.acp_res);
kfree(adev->acp.acp_genpd);
kfree(adev->acp.acp_cell);
return 0;


@ -30,10 +30,10 @@
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include "amdgpu.h"
#include "amdgpu_pm.h"
#include "amd_acpi.h"
#include "atom.h"
extern void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);
/* Call the ATIF method
*/
/**
@ -289,7 +289,7 @@ out:
* handles it.
* Returns NOTIFY code
*/
int amdgpu_atif_handler(struct amdgpu_device *adev,
static int amdgpu_atif_handler(struct amdgpu_device *adev,
struct acpi_bus_event *event)
{
struct amdgpu_atif *atif = &adev->atif;


@ -27,16 +27,15 @@
#include "amdgpu_gfx.h"
#include <linux/module.h>
const struct kfd2kgd_calls *kfd2kgd;
const struct kgd2kfd_calls *kgd2kfd;
bool (*kgd2kfd_init_p)(unsigned, const struct kgd2kfd_calls**);
bool (*kgd2kfd_init_p)(unsigned int, const struct kgd2kfd_calls**);
int amdgpu_amdkfd_init(void)
{
int ret;
#if defined(CONFIG_HSA_AMD_MODULE)
int (*kgd2kfd_init_p)(unsigned, const struct kgd2kfd_calls**);
int (*kgd2kfd_init_p)(unsigned int, const struct kgd2kfd_calls**);
kgd2kfd_init_p = symbol_request(kgd2kfd_init);
@ -61,24 +60,6 @@ int amdgpu_amdkfd_init(void)
return ret;
}
bool amdgpu_amdkfd_load_interface(struct amdgpu_device *adev)
{
switch (adev->asic_type) {
#ifdef CONFIG_DRM_AMDGPU_CIK
case CHIP_KAVERI:
kfd2kgd = amdgpu_amdkfd_gfx_7_get_functions();
break;
#endif
case CHIP_CARRIZO:
kfd2kgd = amdgpu_amdkfd_gfx_8_0_get_functions();
break;
default:
return false;
}
return true;
}
void amdgpu_amdkfd_fini(void)
{
if (kgd2kfd) {
@ -89,9 +70,27 @@ void amdgpu_amdkfd_fini(void)
void amdgpu_amdkfd_device_probe(struct amdgpu_device *adev)
{
if (kgd2kfd)
adev->kfd = kgd2kfd->probe((struct kgd_dev *)adev,
adev->pdev, kfd2kgd);
const struct kfd2kgd_calls *kfd2kgd;
if (!kgd2kfd)
return;
switch (adev->asic_type) {
#ifdef CONFIG_DRM_AMDGPU_CIK
case CHIP_KAVERI:
kfd2kgd = amdgpu_amdkfd_gfx_7_get_functions();
break;
#endif
case CHIP_CARRIZO:
kfd2kgd = amdgpu_amdkfd_gfx_8_0_get_functions();
break;
default:
dev_info(adev->dev, "kfd not supported on this ASIC\n");
return;
}
adev->kfd = kgd2kfd->probe((struct kgd_dev *)adev,
adev->pdev, kfd2kgd);
}
void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
@ -184,7 +183,8 @@ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
return -ENOMEM;
r = amdgpu_bo_create(adev, size, PAGE_SIZE, true, AMDGPU_GEM_DOMAIN_GTT,
AMDGPU_GEM_CREATE_CPU_GTT_USWC, NULL, NULL, &(*mem)->bo);
AMDGPU_GEM_CREATE_CPU_GTT_USWC, NULL, NULL, 0,
&(*mem)->bo);
if (r) {
dev_err(adev->dev,
"failed to allocate BO for amdkfd (%d)\n", r);


@ -26,6 +26,7 @@
#define AMDGPU_AMDKFD_H_INCLUDED
#include <linux/types.h>
#include <linux/mmu_context.h>
#include <kgd_kfd_interface.h>
struct amdgpu_device;
@ -39,8 +40,6 @@ struct kgd_mem {
int amdgpu_amdkfd_init(void);
void amdgpu_amdkfd_fini(void);
bool amdgpu_amdkfd_load_interface(struct amdgpu_device *adev);
void amdgpu_amdkfd_suspend(struct amdgpu_device *adev);
int amdgpu_amdkfd_resume(struct amdgpu_device *adev);
void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
@ -62,4 +61,19 @@ uint64_t get_gpu_clock_counter(struct kgd_dev *kgd);
uint32_t get_max_engine_clock_in_mhz(struct kgd_dev *kgd);
#define read_user_wptr(mmptr, wptr, dst) \
({ \
bool valid = false; \
if ((mmptr) && (wptr)) { \
if ((mmptr) == current->mm) { \
valid = !get_user((dst), (wptr)); \
} else if (current->mm == NULL) { \
use_mm(mmptr); \
valid = !get_user((dst), (wptr)); \
unuse_mm(mmptr); \
} \
} \
valid; \
})
#endif /* AMDGPU_AMDKFD_H_INCLUDED */
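The read_user_wptr() macro added above fetches the queue's user-space write pointer either directly, when the caller already runs in the owning process' address space, or by temporarily adopting that process' mm when called from a kernel thread. A rough non-macro equivalent, shown here only to illustrate the control flow (not part of the patch):

static inline bool read_user_wptr_sketch(struct mm_struct *mmptr,
					 uint32_t __user *wptr, uint32_t *dst)
{
	bool valid = false;

	if (mmptr && wptr) {
		if (mmptr == current->mm) {
			/* Caller runs in the owning process: read directly. */
			valid = !get_user(*dst, wptr);
		} else if (current->mm == NULL) {
			/* Kernel thread: borrow the owning process' mm. */
			use_mm(mmptr);
			valid = !get_user(*dst, wptr);
			unuse_mm(mmptr);
		}
	}
	return valid;
}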


@ -39,6 +39,12 @@
#include "gmc/gmc_7_1_sh_mask.h"
#include "cik_structs.h"
enum hqd_dequeue_request_type {
NO_ACTION = 0,
DRAIN_PIPE,
RESET_WAVES
};
enum {
MAX_TRAPID = 8, /* 3 bits in the bitfield. */
MAX_WATCH_ADDRESSES = 4
@ -96,12 +102,15 @@ static int kgd_init_pipeline(struct kgd_dev *kgd, uint32_t pipe_id,
uint32_t hpd_size, uint64_t hpd_gpu_addr);
static int kgd_init_interrupts(struct kgd_dev *kgd, uint32_t pipe_id);
static int kgd_hqd_load(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr);
uint32_t queue_id, uint32_t __user *wptr,
uint32_t wptr_shift, uint32_t wptr_mask,
struct mm_struct *mm);
static int kgd_hqd_sdma_load(struct kgd_dev *kgd, void *mqd);
static bool kgd_hqd_is_occupied(struct kgd_dev *kgd, uint64_t queue_address,
uint32_t pipe_id, uint32_t queue_id);
static int kgd_hqd_destroy(struct kgd_dev *kgd, uint32_t reset_type,
static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
enum kfd_preempt_type reset_type,
unsigned int utimeout, uint32_t pipe_id,
uint32_t queue_id);
static bool kgd_hqd_sdma_is_occupied(struct kgd_dev *kgd, void *mqd);
@ -126,6 +135,33 @@ static uint16_t get_atc_vmid_pasid_mapping_pasid(struct kgd_dev *kgd,
static void write_vmid_invalidate_request(struct kgd_dev *kgd, uint8_t vmid);
static uint16_t get_fw_version(struct kgd_dev *kgd, enum kgd_engine_type type);
static void set_scratch_backing_va(struct kgd_dev *kgd,
uint64_t va, uint32_t vmid);
/* Because of REG_GET_FIELD() being used, we put this function in the
* asic specific file.
*/
static int get_tile_config(struct kgd_dev *kgd,
struct tile_config *config)
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
config->gb_addr_config = adev->gfx.config.gb_addr_config;
config->num_banks = REG_GET_FIELD(adev->gfx.config.mc_arb_ramcfg,
MC_ARB_RAMCFG, NOOFBANK);
config->num_ranks = REG_GET_FIELD(adev->gfx.config.mc_arb_ramcfg,
MC_ARB_RAMCFG, NOOFRANKS);
config->tile_config_ptr = adev->gfx.config.tile_mode_array;
config->num_tile_configs =
ARRAY_SIZE(adev->gfx.config.tile_mode_array);
config->macro_tile_config_ptr =
adev->gfx.config.macrotile_mode_array;
config->num_macro_tile_configs =
ARRAY_SIZE(adev->gfx.config.macrotile_mode_array);
return 0;
}
static const struct kfd2kgd_calls kfd2kgd = {
.init_gtt_mem_allocation = alloc_gtt_mem,
@ -150,7 +186,9 @@ static const struct kfd2kgd_calls kfd2kgd = {
.get_atc_vmid_pasid_mapping_pasid = get_atc_vmid_pasid_mapping_pasid,
.get_atc_vmid_pasid_mapping_valid = get_atc_vmid_pasid_mapping_valid,
.write_vmid_invalidate_request = write_vmid_invalidate_request,
.get_fw_version = get_fw_version
.get_fw_version = get_fw_version,
.set_scratch_backing_va = set_scratch_backing_va,
.get_tile_config = get_tile_config,
};
struct kfd2kgd_calls *amdgpu_amdkfd_gfx_7_get_functions(void)
@ -186,7 +224,7 @@ static void acquire_queue(struct kgd_dev *kgd, uint32_t pipe_id,
{
struct amdgpu_device *adev = get_amdgpu_device(kgd);
uint32_t mec = (++pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
uint32_t mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
uint32_t pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
lock_srbm(kgd, mec, pipe, queue_id, 0);
@ -290,20 +328,38 @@ static inline struct cik_sdma_rlc_registers *get_sdma_mqd(void *mqd)
}
static int kgd_hqd_load(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr)
uint32_t queue_id, uint32_t __user *wptr,
uint32_t wptr_shift, uint32_t wptr_mask,
struct mm_struct *mm)
{
struct amdgpu_device *adev = get_amdgpu_device(kgd);
uint32_t wptr_shadow, is_wptr_shadow_valid;
struct cik_mqd *m;
uint32_t *mqd_hqd;
uint32_t reg, wptr_val, data;
m = get_mqd(mqd);
is_wptr_shadow_valid = !get_user(wptr_shadow, wptr);
if (is_wptr_shadow_valid)
m->cp_hqd_pq_wptr = wptr_shadow;
acquire_queue(kgd, pipe_id, queue_id);
gfx_v7_0_mqd_commit(adev, m);
/* HQD registers extend from CP_MQD_BASE_ADDR to CP_MQD_CONTROL. */
mqd_hqd = &m->cp_mqd_base_addr_lo;
for (reg = mmCP_MQD_BASE_ADDR; reg <= mmCP_MQD_CONTROL; reg++)
WREG32(reg, mqd_hqd[reg - mmCP_MQD_BASE_ADDR]);
/* Copy userspace write pointer value to register.
* Activate doorbell logic to monitor subsequent changes.
*/
data = REG_SET_FIELD(m->cp_hqd_pq_doorbell_control,
CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
WREG32(mmCP_HQD_PQ_DOORBELL_CONTROL, data);
if (read_user_wptr(mm, wptr, wptr_val))
WREG32(mmCP_HQD_PQ_WPTR, (wptr_val << wptr_shift) & wptr_mask);
data = REG_SET_FIELD(m->cp_hqd_active, CP_HQD_ACTIVE, ACTIVE, 1);
WREG32(mmCP_HQD_ACTIVE, data);
release_queue(kgd);
return 0;
@ -382,30 +438,99 @@ static bool kgd_hqd_sdma_is_occupied(struct kgd_dev *kgd, void *mqd)
return false;
}
static int kgd_hqd_destroy(struct kgd_dev *kgd, uint32_t reset_type,
static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
enum kfd_preempt_type reset_type,
unsigned int utimeout, uint32_t pipe_id,
uint32_t queue_id)
{
struct amdgpu_device *adev = get_amdgpu_device(kgd);
uint32_t temp;
int timeout = utimeout;
enum hqd_dequeue_request_type type;
unsigned long flags, end_jiffies;
int retry;
acquire_queue(kgd, pipe_id, queue_id);
WREG32(mmCP_HQD_PQ_DOORBELL_CONTROL, 0);
WREG32(mmCP_HQD_DEQUEUE_REQUEST, reset_type);
switch (reset_type) {
case KFD_PREEMPT_TYPE_WAVEFRONT_DRAIN:
type = DRAIN_PIPE;
break;
case KFD_PREEMPT_TYPE_WAVEFRONT_RESET:
type = RESET_WAVES;
break;
default:
type = DRAIN_PIPE;
break;
}
/* Workaround: If IQ timer is active and the wait time is close to or
* equal to 0, dequeueing is not safe. Wait until either the wait time
* is larger or timer is cleared. Also, ensure that IQ_REQ_PEND is
* cleared before continuing. Also, ensure wait times are set to at
* least 0x3.
*/
local_irq_save(flags);
preempt_disable();
retry = 5000; /* wait for 500 usecs at maximum */
while (true) {
temp = RREG32(mmCP_HQD_IQ_TIMER);
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, PROCESSING_IQ)) {
pr_debug("HW is processing IQ\n");
goto loop;
}
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, ACTIVE)) {
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, RETRY_TYPE)
== 3) /* SEM-rearm is safe */
break;
/* Wait time 3 is safe for CP, but our MMIO read/write
* time is close to 1 microsecond, so check for 10 to
* leave more buffer room
*/
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, WAIT_TIME)
>= 10)
break;
pr_debug("IQ timer is active\n");
} else
break;
loop:
if (!retry) {
pr_err("CP HQD IQ timer status time out\n");
break;
}
ndelay(100);
--retry;
}
retry = 1000;
while (true) {
temp = RREG32(mmCP_HQD_DEQUEUE_REQUEST);
if (!(temp & CP_HQD_DEQUEUE_REQUEST__IQ_REQ_PEND_MASK))
break;
pr_debug("Dequeue request is pending\n");
if (!retry) {
pr_err("CP HQD dequeue request time out\n");
break;
}
ndelay(100);
--retry;
}
local_irq_restore(flags);
preempt_enable();
WREG32(mmCP_HQD_DEQUEUE_REQUEST, type);
end_jiffies = (utimeout * HZ / 1000) + jiffies;
while (true) {
temp = RREG32(mmCP_HQD_ACTIVE);
if (temp & CP_HQD_ACTIVE__ACTIVE_MASK)
if (!(temp & CP_HQD_ACTIVE__ACTIVE_MASK))
break;
if (timeout <= 0) {
pr_err("kfd: cp queue preemption time out.\n");
if (time_after(jiffies, end_jiffies)) {
pr_err("cp queue preemption time out\n");
release_queue(kgd);
return -ETIME;
}
msleep(20);
timeout -= 20;
usleep_range(500, 1000);
}
release_queue(kgd);
@ -556,6 +681,16 @@ static void write_vmid_invalidate_request(struct kgd_dev *kgd, uint8_t vmid)
WREG32(mmVM_INVALIDATE_REQUEST, 1 << vmid);
}
static void set_scratch_backing_va(struct kgd_dev *kgd,
uint64_t va, uint32_t vmid)
{
struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
lock_srbm(kgd, 0, 0, 0, vmid);
WREG32(mmSH_HIDDEN_PRIVATE_BASE_VMID, va);
unlock_srbm(kgd);
}
static uint16_t get_fw_version(struct kgd_dev *kgd, enum kgd_engine_type type)
{
struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
@ -566,42 +701,42 @@ static uint16_t get_fw_version(struct kgd_dev *kgd, enum kgd_engine_type type)
switch (type) {
case KGD_ENGINE_PFP:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.pfp_fw->data;
adev->gfx.pfp_fw->data;
break;
case KGD_ENGINE_ME:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.me_fw->data;
adev->gfx.me_fw->data;
break;
case KGD_ENGINE_CE:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.ce_fw->data;
adev->gfx.ce_fw->data;
break;
case KGD_ENGINE_MEC1:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.mec_fw->data;
adev->gfx.mec_fw->data;
break;
case KGD_ENGINE_MEC2:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.mec2_fw->data;
adev->gfx.mec2_fw->data;
break;
case KGD_ENGINE_RLC:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.rlc_fw->data;
adev->gfx.rlc_fw->data;
break;
case KGD_ENGINE_SDMA1:
hdr = (const union amdgpu_firmware_header *)
adev->sdma.instance[0].fw->data;
adev->sdma.instance[0].fw->data;
break;
case KGD_ENGINE_SDMA2:
hdr = (const union amdgpu_firmware_header *)
adev->sdma.instance[1].fw->data;
adev->sdma.instance[1].fw->data;
break;
default:


@ -39,6 +39,12 @@
#include "vi_structs.h"
#include "vid.h"
enum hqd_dequeue_request_type {
NO_ACTION = 0,
DRAIN_PIPE,
RESET_WAVES
};
struct cik_sdma_rlc_registers;
/*
@ -55,12 +61,15 @@ static int kgd_init_pipeline(struct kgd_dev *kgd, uint32_t pipe_id,
uint32_t hpd_size, uint64_t hpd_gpu_addr);
static int kgd_init_interrupts(struct kgd_dev *kgd, uint32_t pipe_id);
static int kgd_hqd_load(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr);
uint32_t queue_id, uint32_t __user *wptr,
uint32_t wptr_shift, uint32_t wptr_mask,
struct mm_struct *mm);
static int kgd_hqd_sdma_load(struct kgd_dev *kgd, void *mqd);
static bool kgd_hqd_is_occupied(struct kgd_dev *kgd, uint64_t queue_address,
uint32_t pipe_id, uint32_t queue_id);
static bool kgd_hqd_sdma_is_occupied(struct kgd_dev *kgd, void *mqd);
static int kgd_hqd_destroy(struct kgd_dev *kgd, uint32_t reset_type,
static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
enum kfd_preempt_type reset_type,
unsigned int utimeout, uint32_t pipe_id,
uint32_t queue_id);
static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
@ -85,6 +94,33 @@ static uint16_t get_atc_vmid_pasid_mapping_pasid(struct kgd_dev *kgd,
uint8_t vmid);
static void write_vmid_invalidate_request(struct kgd_dev *kgd, uint8_t vmid);
static uint16_t get_fw_version(struct kgd_dev *kgd, enum kgd_engine_type type);
static void set_scratch_backing_va(struct kgd_dev *kgd,
uint64_t va, uint32_t vmid);
/* Because of REG_GET_FIELD() being used, we put this function in the
* asic specific file.
*/
static int get_tile_config(struct kgd_dev *kgd,
struct tile_config *config)
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
config->gb_addr_config = adev->gfx.config.gb_addr_config;
config->num_banks = REG_GET_FIELD(adev->gfx.config.mc_arb_ramcfg,
MC_ARB_RAMCFG, NOOFBANK);
config->num_ranks = REG_GET_FIELD(adev->gfx.config.mc_arb_ramcfg,
MC_ARB_RAMCFG, NOOFRANKS);
config->tile_config_ptr = adev->gfx.config.tile_mode_array;
config->num_tile_configs =
ARRAY_SIZE(adev->gfx.config.tile_mode_array);
config->macro_tile_config_ptr =
adev->gfx.config.macrotile_mode_array;
config->num_macro_tile_configs =
ARRAY_SIZE(adev->gfx.config.macrotile_mode_array);
return 0;
}
static const struct kfd2kgd_calls kfd2kgd = {
.init_gtt_mem_allocation = alloc_gtt_mem,
@ -111,12 +147,15 @@ static const struct kfd2kgd_calls kfd2kgd = {
.get_atc_vmid_pasid_mapping_valid =
get_atc_vmid_pasid_mapping_valid,
.write_vmid_invalidate_request = write_vmid_invalidate_request,
.get_fw_version = get_fw_version
.get_fw_version = get_fw_version,
.set_scratch_backing_va = set_scratch_backing_va,
.get_tile_config = get_tile_config,
};
struct kfd2kgd_calls *amdgpu_amdkfd_gfx_8_0_get_functions(void)
{
return (struct kfd2kgd_calls *)&kfd2kgd;
return (struct kfd2kgd_calls *)&kfd2kgd;
}
static inline struct amdgpu_device *get_amdgpu_device(struct kgd_dev *kgd)
@ -147,7 +186,7 @@ static void acquire_queue(struct kgd_dev *kgd, uint32_t pipe_id,
{
struct amdgpu_device *adev = get_amdgpu_device(kgd);
uint32_t mec = (++pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
uint32_t mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
uint32_t pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
lock_srbm(kgd, mec, pipe, queue_id, 0);
@ -216,7 +255,7 @@ static int kgd_init_interrupts(struct kgd_dev *kgd, uint32_t pipe_id)
uint32_t mec;
uint32_t pipe;
mec = (++pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
lock_srbm(kgd, mec, pipe, 0, 0);
@ -244,20 +283,67 @@ static inline struct cik_sdma_rlc_registers *get_sdma_mqd(void *mqd)
}
static int kgd_hqd_load(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr)
uint32_t queue_id, uint32_t __user *wptr,
uint32_t wptr_shift, uint32_t wptr_mask,
struct mm_struct *mm)
{
struct vi_mqd *m;
uint32_t shadow_wptr, valid_wptr;
struct amdgpu_device *adev = get_amdgpu_device(kgd);
struct vi_mqd *m;
uint32_t *mqd_hqd;
uint32_t reg, wptr_val, data;
m = get_mqd(mqd);
valid_wptr = copy_from_user(&shadow_wptr, wptr, sizeof(shadow_wptr));
if (valid_wptr == 0)
m->cp_hqd_pq_wptr = shadow_wptr;
acquire_queue(kgd, pipe_id, queue_id);
gfx_v8_0_mqd_commit(adev, mqd);
/* HIQ is set during driver init period with vmid set to 0*/
if (m->cp_hqd_vmid == 0) {
uint32_t value, mec, pipe;
mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
pr_debug("kfd: set HIQ, mec:%d, pipe:%d, queue:%d.\n",
mec, pipe, queue_id);
value = RREG32(mmRLC_CP_SCHEDULERS);
value = REG_SET_FIELD(value, RLC_CP_SCHEDULERS, scheduler1,
((mec << 5) | (pipe << 3) | queue_id | 0x80));
WREG32(mmRLC_CP_SCHEDULERS, value);
}
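	/* Worked example of the scheduler1 encoding above (illustration only):
	 * with mec = 1, pipe = 0 and queue_id = 0 the programmed value is
	 * (1 << 5) | (0 << 3) | 0 | 0x80 = 0xa0.
	 */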
/* HQD registers extend from CP_MQD_BASE_ADDR to CP_HQD_EOP_WPTR_MEM. */
mqd_hqd = &m->cp_mqd_base_addr_lo;
for (reg = mmCP_MQD_BASE_ADDR; reg <= mmCP_HQD_EOP_CONTROL; reg++)
WREG32(reg, mqd_hqd[reg - mmCP_MQD_BASE_ADDR]);
/* Tonga errata: EOP RPTR/WPTR should be left unmodified.
* This is safe since EOP RPTR==WPTR for any inactive HQD
* on ASICs that do not support context-save.
* EOP writes/reads can start anywhere in the ring.
*/
if (get_amdgpu_device(kgd)->asic_type != CHIP_TONGA) {
WREG32(mmCP_HQD_EOP_RPTR, m->cp_hqd_eop_rptr);
WREG32(mmCP_HQD_EOP_WPTR, m->cp_hqd_eop_wptr);
WREG32(mmCP_HQD_EOP_WPTR_MEM, m->cp_hqd_eop_wptr_mem);
}
for (reg = mmCP_HQD_EOP_EVENTS; reg <= mmCP_HQD_ERROR; reg++)
WREG32(reg, mqd_hqd[reg - mmCP_MQD_BASE_ADDR]);
/* Copy userspace write pointer value to register.
* Activate doorbell logic to monitor subsequent changes.
*/
data = REG_SET_FIELD(m->cp_hqd_pq_doorbell_control,
CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
WREG32(mmCP_HQD_PQ_DOORBELL_CONTROL, data);
if (read_user_wptr(mm, wptr, wptr_val))
WREG32(mmCP_HQD_PQ_WPTR, (wptr_val << wptr_shift) & wptr_mask);
data = REG_SET_FIELD(m->cp_hqd_active, CP_HQD_ACTIVE, ACTIVE, 1);
WREG32(mmCP_HQD_ACTIVE, data);
release_queue(kgd);
return 0;
@ -308,29 +394,102 @@ static bool kgd_hqd_sdma_is_occupied(struct kgd_dev *kgd, void *mqd)
return false;
}
static int kgd_hqd_destroy(struct kgd_dev *kgd, uint32_t reset_type,
static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
enum kfd_preempt_type reset_type,
unsigned int utimeout, uint32_t pipe_id,
uint32_t queue_id)
{
struct amdgpu_device *adev = get_amdgpu_device(kgd);
uint32_t temp;
int timeout = utimeout;
enum hqd_dequeue_request_type type;
unsigned long flags, end_jiffies;
int retry;
struct vi_mqd *m = get_mqd(mqd);
acquire_queue(kgd, pipe_id, queue_id);
WREG32(mmCP_HQD_DEQUEUE_REQUEST, reset_type);
if (m->cp_hqd_vmid == 0)
WREG32_FIELD(RLC_CP_SCHEDULERS, scheduler1, 0);
switch (reset_type) {
case KFD_PREEMPT_TYPE_WAVEFRONT_DRAIN:
type = DRAIN_PIPE;
break;
case KFD_PREEMPT_TYPE_WAVEFRONT_RESET:
type = RESET_WAVES;
break;
default:
type = DRAIN_PIPE;
break;
}
/* Workaround: If IQ timer is active and the wait time is close to or
* equal to 0, dequeueing is not safe. Wait until either the wait time
* is larger or timer is cleared. Also, ensure that IQ_REQ_PEND is
* cleared before continuing. Also, ensure wait times are set to at
* least 0x3.
*/
local_irq_save(flags);
preempt_disable();
retry = 5000; /* wait for 500 usecs at maximum */
while (true) {
temp = RREG32(mmCP_HQD_IQ_TIMER);
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, PROCESSING_IQ)) {
pr_debug("HW is processing IQ\n");
goto loop;
}
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, ACTIVE)) {
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, RETRY_TYPE)
== 3) /* SEM-rearm is safe */
break;
/* Wait time 3 is safe for CP, but our MMIO read/write
* time is close to 1 microsecond, so check for 10 to
* leave more buffer room
*/
if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, WAIT_TIME)
>= 10)
break;
pr_debug("IQ timer is active\n");
} else
break;
loop:
if (!retry) {
pr_err("CP HQD IQ timer status time out\n");
break;
}
ndelay(100);
--retry;
}
retry = 1000;
while (true) {
temp = RREG32(mmCP_HQD_DEQUEUE_REQUEST);
if (!(temp & CP_HQD_DEQUEUE_REQUEST__IQ_REQ_PEND_MASK))
break;
pr_debug("Dequeue request is pending\n");
if (!retry) {
pr_err("CP HQD dequeue request time out\n");
break;
}
ndelay(100);
--retry;
}
local_irq_restore(flags);
preempt_enable();
WREG32(mmCP_HQD_DEQUEUE_REQUEST, type);
end_jiffies = (utimeout * HZ / 1000) + jiffies;
while (true) {
temp = RREG32(mmCP_HQD_ACTIVE);
if (temp & CP_HQD_ACTIVE__ACTIVE_MASK)
if (!(temp & CP_HQD_ACTIVE__ACTIVE_MASK))
break;
if (timeout <= 0) {
pr_err("kfd: cp queue preemption time out.\n");
if (time_after(jiffies, end_jiffies)) {
pr_err("cp queue preemption time out.\n");
release_queue(kgd);
return -ETIME;
}
msleep(20);
timeout -= 20;
usleep_range(500, 1000);
}
release_queue(kgd);
@ -444,6 +603,16 @@ static uint32_t kgd_address_watch_get_offset(struct kgd_dev *kgd,
return 0;
}
static void set_scratch_backing_va(struct kgd_dev *kgd,
uint64_t va, uint32_t vmid)
{
struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
lock_srbm(kgd, 0, 0, 0, vmid);
WREG32(mmSH_HIDDEN_PRIVATE_BASE_VMID, va);
unlock_srbm(kgd);
}
static uint16_t get_fw_version(struct kgd_dev *kgd, enum kgd_engine_type type)
{
struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
@ -454,42 +623,42 @@ static uint16_t get_fw_version(struct kgd_dev *kgd, enum kgd_engine_type type)
switch (type) {
case KGD_ENGINE_PFP:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.pfp_fw->data;
adev->gfx.pfp_fw->data;
break;
case KGD_ENGINE_ME:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.me_fw->data;
adev->gfx.me_fw->data;
break;
case KGD_ENGINE_CE:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.ce_fw->data;
adev->gfx.ce_fw->data;
break;
case KGD_ENGINE_MEC1:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.mec_fw->data;
adev->gfx.mec_fw->data;
break;
case KGD_ENGINE_MEC2:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.mec2_fw->data;
adev->gfx.mec2_fw->data;
break;
case KGD_ENGINE_RLC:
hdr = (const union amdgpu_firmware_header *)
adev->gfx.rlc_fw->data;
adev->gfx.rlc_fw->data;
break;
case KGD_ENGINE_SDMA1:
hdr = (const union amdgpu_firmware_header *)
adev->sdma.instance[0].fw->data;
adev->sdma.instance[0].fw->data;
break;
case KGD_ENGINE_SDMA2:
hdr = (const union amdgpu_firmware_header *)
adev->sdma.instance[1].fw->data;
adev->sdma.instance[1].fw->data;
break;
default:


@ -1686,7 +1686,7 @@ void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device *adev, bool lock)
{
uint32_t bios_6_scratch;
bios_6_scratch = RREG32(mmBIOS_SCRATCH_6);
bios_6_scratch = RREG32(adev->bios_scratch_reg_offset + 6);
if (lock) {
bios_6_scratch |= ATOM_S6_CRITICAL_STATE;
@ -1696,15 +1696,17 @@ void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device *adev, bool lock)
bios_6_scratch |= ATOM_S6_ACC_MODE;
}
WREG32(mmBIOS_SCRATCH_6, bios_6_scratch);
WREG32(adev->bios_scratch_reg_offset + 6, bios_6_scratch);
}
void amdgpu_atombios_scratch_regs_init(struct amdgpu_device *adev)
{
uint32_t bios_2_scratch, bios_6_scratch;
bios_2_scratch = RREG32(mmBIOS_SCRATCH_2);
bios_6_scratch = RREG32(mmBIOS_SCRATCH_6);
adev->bios_scratch_reg_offset = mmBIOS_SCRATCH_0;
bios_2_scratch = RREG32(adev->bios_scratch_reg_offset + 2);
bios_6_scratch = RREG32(adev->bios_scratch_reg_offset + 6);
/* let the bios control the backlight */
bios_2_scratch &= ~ATOM_S2_VRI_BRIGHT_ENABLE;
@ -1715,8 +1717,8 @@ void amdgpu_atombios_scratch_regs_init(struct amdgpu_device *adev)
/* clear the vbios dpms state */
bios_2_scratch &= ~ATOM_S2_DEVICE_DPMS_STATE;
WREG32(mmBIOS_SCRATCH_2, bios_2_scratch);
WREG32(mmBIOS_SCRATCH_6, bios_6_scratch);
WREG32(adev->bios_scratch_reg_offset + 2, bios_2_scratch);
WREG32(adev->bios_scratch_reg_offset + 6, bios_6_scratch);
}
void amdgpu_atombios_scratch_regs_save(struct amdgpu_device *adev)
@ -1724,7 +1726,7 @@ void amdgpu_atombios_scratch_regs_save(struct amdgpu_device *adev)
int i;
for (i = 0; i < AMDGPU_BIOS_NUM_SCRATCH; i++)
adev->bios_scratch[i] = RREG32(mmBIOS_SCRATCH_0 + i);
adev->bios_scratch[i] = RREG32(adev->bios_scratch_reg_offset + i);
}
void amdgpu_atombios_scratch_regs_restore(struct amdgpu_device *adev)
@ -1738,20 +1740,30 @@ void amdgpu_atombios_scratch_regs_restore(struct amdgpu_device *adev)
adev->bios_scratch[7] &= ~ATOM_S7_ASIC_INIT_COMPLETE_MASK;
for (i = 0; i < AMDGPU_BIOS_NUM_SCRATCH; i++)
WREG32(mmBIOS_SCRATCH_0 + i, adev->bios_scratch[i]);
WREG32(adev->bios_scratch_reg_offset + i, adev->bios_scratch[i]);
}
void amdgpu_atombios_scratch_regs_engine_hung(struct amdgpu_device *adev,
bool hung)
{
u32 tmp = RREG32(mmBIOS_SCRATCH_3);
u32 tmp = RREG32(adev->bios_scratch_reg_offset + 3);
if (hung)
tmp |= ATOM_S3_ASIC_GUI_ENGINE_HUNG;
else
tmp &= ~ATOM_S3_ASIC_GUI_ENGINE_HUNG;
WREG32(mmBIOS_SCRATCH_3, tmp);
WREG32(adev->bios_scratch_reg_offset + 3, tmp);
}
bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev)
{
u32 tmp = RREG32(adev->bios_scratch_reg_offset + 7);
if (tmp & ATOM_S7_ASIC_INIT_COMPLETE_MASK)
return false;
else
return true;
}
/* Atom needs data in little endian format


@ -200,6 +200,7 @@ void amdgpu_atombios_scratch_regs_save(struct amdgpu_device *adev);
void amdgpu_atombios_scratch_regs_restore(struct amdgpu_device *adev);
void amdgpu_atombios_scratch_regs_engine_hung(struct amdgpu_device *adev,
bool hung);
bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev);
void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,


@ -66,41 +66,6 @@ void amdgpu_atomfirmware_scratch_regs_init(struct amdgpu_device *adev)
}
}
void amdgpu_atomfirmware_scratch_regs_save(struct amdgpu_device *adev)
{
int i;
for (i = 0; i < AMDGPU_BIOS_NUM_SCRATCH; i++)
adev->bios_scratch[i] = RREG32(adev->bios_scratch_reg_offset + i);
}
void amdgpu_atomfirmware_scratch_regs_restore(struct amdgpu_device *adev)
{
int i;
/*
* VBIOS will check ASIC_INIT_COMPLETE bit to decide if
* execute ASIC_Init posting via driver
*/
adev->bios_scratch[7] &= ~ATOM_S7_ASIC_INIT_COMPLETE_MASK;
for (i = 0; i < AMDGPU_BIOS_NUM_SCRATCH; i++)
WREG32(adev->bios_scratch_reg_offset + i, adev->bios_scratch[i]);
}
void amdgpu_atomfirmware_scratch_regs_engine_hung(struct amdgpu_device *adev,
bool hung)
{
u32 tmp = RREG32(adev->bios_scratch_reg_offset + 3);
if (hung)
tmp |= ATOM_S3_ASIC_GUI_ENGINE_HUNG;
else
tmp &= ~ATOM_S3_ASIC_GUI_ENGINE_HUNG;
WREG32(adev->bios_scratch_reg_offset + 3, tmp);
}
int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev)
{
struct atom_context *ctx = adev->mode_info.atom_context;
@ -130,3 +95,129 @@ int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev)
ctx->scratch_size_bytes = usage_bytes;
return 0;
}
union igp_info {
struct atom_integrated_system_info_v1_11 v11;
};
/*
* Return vram width from integrated system info table, if available,
* or 0 if not.
*/
int amdgpu_atomfirmware_get_vram_width(struct amdgpu_device *adev)
{
struct amdgpu_mode_info *mode_info = &adev->mode_info;
int index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
integratedsysteminfo);
u16 data_offset, size;
union igp_info *igp_info;
u8 frev, crev;
/* get any igp specific overrides */
if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, &size,
&frev, &crev, &data_offset)) {
igp_info = (union igp_info *)
(mode_info->atom_context->bios + data_offset);
switch (crev) {
case 11:
return igp_info->v11.umachannelnumber * 64;
default:
return 0;
}
}
return 0;
}
union firmware_info {
struct atom_firmware_info_v3_1 v31;
};
union smu_info {
struct atom_smu_info_v3_1 v31;
};
union umc_info {
struct atom_umc_info_v3_1 v31;
};
int amdgpu_atomfirmware_get_clock_info(struct amdgpu_device *adev)
{
struct amdgpu_mode_info *mode_info = &adev->mode_info;
struct amdgpu_pll *spll = &adev->clock.spll;
struct amdgpu_pll *mpll = &adev->clock.mpll;
uint8_t frev, crev;
uint16_t data_offset;
int ret = -EINVAL, index;
index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
firmwareinfo);
if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
&frev, &crev, &data_offset)) {
union firmware_info *firmware_info =
(union firmware_info *)(mode_info->atom_context->bios +
data_offset);
adev->clock.default_sclk =
le32_to_cpu(firmware_info->v31.bootup_sclk_in10khz);
adev->clock.default_mclk =
le32_to_cpu(firmware_info->v31.bootup_mclk_in10khz);
adev->pm.current_sclk = adev->clock.default_sclk;
adev->pm.current_mclk = adev->clock.default_mclk;
/* not technically a clock, but... */
adev->mode_info.firmware_flags =
le32_to_cpu(firmware_info->v31.firmware_capability);
ret = 0;
}
index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
smu_info);
if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
&frev, &crev, &data_offset)) {
union smu_info *smu_info =
(union smu_info *)(mode_info->atom_context->bios +
data_offset);
/* system clock */
spll->reference_freq = le32_to_cpu(smu_info->v31.core_refclk_10khz);
spll->reference_div = 0;
spll->min_post_div = 1;
spll->max_post_div = 1;
spll->min_ref_div = 2;
spll->max_ref_div = 0xff;
spll->min_feedback_div = 4;
spll->max_feedback_div = 0xff;
spll->best_vco = 0;
ret = 0;
}
index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
umc_info);
if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
&frev, &crev, &data_offset)) {
union umc_info *umc_info =
(union umc_info *)(mode_info->atom_context->bios +
data_offset);
/* memory clock */
mpll->reference_freq = le32_to_cpu(umc_info->v31.mem_refclk_10khz);
mpll->reference_div = 0;
mpll->min_post_div = 1;
mpll->max_post_div = 1;
mpll->min_ref_div = 2;
mpll->max_ref_div = 0xff;
mpll->min_feedback_div = 4;
mpll->max_feedback_div = 0xff;
mpll->best_vco = 0;
ret = 0;
}
return ret;
}


@ -26,10 +26,8 @@
bool amdgpu_atomfirmware_gpu_supports_virtualization(struct amdgpu_device *adev);
void amdgpu_atomfirmware_scratch_regs_init(struct amdgpu_device *adev);
void amdgpu_atomfirmware_scratch_regs_save(struct amdgpu_device *adev);
void amdgpu_atomfirmware_scratch_regs_restore(struct amdgpu_device *adev);
void amdgpu_atomfirmware_scratch_regs_engine_hung(struct amdgpu_device *adev,
bool hung);
int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev);
int amdgpu_atomfirmware_get_vram_width(struct amdgpu_device *adev);
int amdgpu_atomfirmware_get_clock_info(struct amdgpu_device *adev);
#endif


@ -40,7 +40,7 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
for (i = 0; i < n; i++) {
struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
false);
false, false);
if (r)
goto exit_do_move;
r = dma_fence_wait(fence, false);
@ -81,7 +81,7 @@ static void amdgpu_benchmark_move(struct amdgpu_device *adev, unsigned size,
n = AMDGPU_BENCHMARK_ITERATIONS;
r = amdgpu_bo_create(adev, size, PAGE_SIZE, true, sdomain, 0, NULL,
NULL, &sobj);
NULL, 0, &sobj);
if (r) {
goto out_cleanup;
}
@ -94,7 +94,7 @@ static void amdgpu_benchmark_move(struct amdgpu_device *adev, unsigned size,
goto out_cleanup;
}
r = amdgpu_bo_create(adev, size, PAGE_SIZE, true, ddomain, 0, NULL,
NULL, &dobj);
NULL, 0, &dobj);
if (r) {
goto out_cleanup;
}


@ -86,19 +86,6 @@ static bool check_atom_bios(uint8_t *bios, size_t size)
return false;
}
static bool is_atom_fw(uint8_t *bios)
{
uint16_t bios_header_start = bios[0x48] | (bios[0x49] << 8);
uint8_t frev = bios[bios_header_start + 2];
uint8_t crev = bios[bios_header_start + 3];
if ((frev < 3) ||
((frev == 3) && (crev < 3)))
return false;
return true;
}
/* If you boot an IGP board with a discrete card as the primary,
* the IGP rom is not accessible via the rom bar as the IGP rom is
* part of the system bios. On boot, the system bios puts a
@ -117,7 +104,7 @@ static bool igp_read_bios_from_vram(struct amdgpu_device *adev)
adev->bios = NULL;
vram_base = pci_resource_start(adev->pdev, 0);
bios = ioremap(vram_base, size);
bios = ioremap_wc(vram_base, size);
if (!bios) {
return false;
}
@ -455,6 +442,6 @@ bool amdgpu_get_bios(struct amdgpu_device *adev)
return false;
success:
adev->is_atom_fw = is_atom_fw(adev->bios);
adev->is_atom_fw = (adev->asic_type >= CHIP_VEGA10) ? true : false;
return true;
}


@ -83,7 +83,7 @@ static int amdgpu_bo_list_create(struct amdgpu_device *adev,
r = idr_alloc(&fpriv->bo_list_handles, list, 1, 0, GFP_KERNEL);
mutex_unlock(&fpriv->bo_list_lock);
if (r < 0) {
kfree(list);
amdgpu_bo_list_free(list);
return r;
}
*id = r;
@ -136,7 +136,7 @@ static int amdgpu_bo_list_set(struct amdgpu_device *adev,
}
bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj));
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm);
if (usermm) {
@ -156,11 +156,11 @@ static int amdgpu_bo_list_set(struct amdgpu_device *adev,
entry->tv.bo = &entry->robj->tbo;
entry->tv.shared = !entry->robj->prime_shared_count;
if (entry->robj->prefered_domains == AMDGPU_GEM_DOMAIN_GDS)
if (entry->robj->preferred_domains == AMDGPU_GEM_DOMAIN_GDS)
gds_obj = entry->robj;
if (entry->robj->prefered_domains == AMDGPU_GEM_DOMAIN_GWS)
if (entry->robj->preferred_domains == AMDGPU_GEM_DOMAIN_GWS)
gws_obj = entry->robj;
if (entry->robj->prefered_domains == AMDGPU_GEM_DOMAIN_OA)
if (entry->robj->preferred_domains == AMDGPU_GEM_DOMAIN_OA)
oa_obj = entry->robj;
total_size += amdgpu_bo_size(entry->robj);
@ -270,7 +270,7 @@ int amdgpu_bo_list_ioctl(struct drm_device *dev, void *data,
struct amdgpu_fpriv *fpriv = filp->driver_priv;
union drm_amdgpu_bo_list *args = data;
uint32_t handle = args->in.list_handle;
const void __user *uptr = (const void*)(uintptr_t)args->in.bo_info_ptr;
const void __user *uptr = u64_to_user_ptr(args->in.bo_info_ptr);
struct drm_amdgpu_bo_list_entry *info;
struct amdgpu_bo_list *list;


@ -124,7 +124,7 @@ static int amdgpu_cgs_alloc_gpu_mem(struct cgs_device *cgs_device,
ret = amdgpu_bo_create_restricted(adev, size, PAGE_SIZE,
true, domain, flags,
NULL, &placement, NULL,
&obj);
0, &obj);
if (ret) {
DRM_ERROR("(%d) bo create failed\n", ret);
return ret;
@ -166,7 +166,7 @@ static int amdgpu_cgs_gmap_gpu_mem(struct cgs_device *cgs_device, cgs_handle_t h
r = amdgpu_bo_reserve(obj, true);
if (unlikely(r != 0))
return r;
r = amdgpu_bo_pin_restricted(obj, obj->prefered_domains,
r = amdgpu_bo_pin_restricted(obj, obj->preferred_domains,
min_offset, max_offset, mcaddr);
amdgpu_bo_unreserve(obj);
return r;
@ -240,6 +240,8 @@ static uint32_t amdgpu_cgs_read_ind_register(struct cgs_device *cgs_device,
return RREG32_DIDT(index);
case CGS_IND_REG_GC_CAC:
return RREG32_GC_CAC(index);
case CGS_IND_REG_SE_CAC:
return RREG32_SE_CAC(index);
case CGS_IND_REG__AUDIO_ENDPT:
DRM_ERROR("audio endpt register access not implemented.\n");
return 0;
@ -266,6 +268,8 @@ static void amdgpu_cgs_write_ind_register(struct cgs_device *cgs_device,
return WREG32_DIDT(index, value);
case CGS_IND_REG_GC_CAC:
return WREG32_GC_CAC(index, value);
case CGS_IND_REG_SE_CAC:
return WREG32_SE_CAC(index, value);
case CGS_IND_REG__AUDIO_ENDPT:
DRM_ERROR("audio endpt register access not implemented.\n");
return;
@ -610,6 +614,17 @@ static int amdgpu_cgs_enter_safe_mode(struct cgs_device *cgs_device,
return 0;
}
static void amdgpu_cgs_lock_grbm_idx(struct cgs_device *cgs_device,
bool lock)
{
CGS_FUNC_ADEV;
if (lock)
mutex_lock(&adev->grbm_idx_mutex);
else
mutex_unlock(&adev->grbm_idx_mutex);
}
static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
enum cgs_ucode_id type,
struct cgs_firmware_info *info)
@ -644,7 +659,7 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
info->version = (uint16_t)le32_to_cpu(header->header.ucode_version);
if (CGS_UCODE_ID_CP_MEC == type)
info->image_size = (header->jt_offset) << 2;
info->image_size = le32_to_cpu(header->jt_offset) << 2;
info->fw_version = amdgpu_get_firmware_version(cgs_device, type);
info->feature_version = (uint16_t)le32_to_cpu(header->ucode_feature_version);
@ -719,7 +734,13 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
strcpy(fw_name, "amdgpu/polaris12_smc.bin");
break;
case CHIP_VEGA10:
strcpy(fw_name, "amdgpu/vega10_smc.bin");
if ((adev->pdev->device == 0x687f) &&
((adev->pdev->revision == 0xc0) ||
(adev->pdev->revision == 0xc1) ||
(adev->pdev->revision == 0xc3)))
strcpy(fw_name, "amdgpu/vega10_acg_smc.bin");
else
strcpy(fw_name, "amdgpu/vega10_smc.bin");
break;
default:
DRM_ERROR("SMC firmware not supported\n");
@ -1117,6 +1138,7 @@ static const struct cgs_ops amdgpu_cgs_ops = {
.query_system_info = amdgpu_cgs_query_system_info,
.is_virtualization_enabled = amdgpu_cgs_is_virtualization_enabled,
.enter_safe_mode = amdgpu_cgs_enter_safe_mode,
.lock_grbm_idx = amdgpu_cgs_lock_grbm_idx,
};
static const struct cgs_os_ops amdgpu_cgs_os_ops = {


@ -54,7 +54,7 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
*offset = data->offset;
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
if (amdgpu_ttm_tt_get_usermm(p->uf_entry.robj->tbo.ttm)) {
amdgpu_bo_unref(&p->uf_entry.robj);
@ -90,7 +90,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
}
/* get chunks */
chunk_array_user = (uint64_t __user *)(uintptr_t)(cs->in.chunks);
chunk_array_user = u64_to_user_ptr(cs->in.chunks);
if (copy_from_user(chunk_array, chunk_array_user,
sizeof(uint64_t)*cs->in.num_chunks)) {
ret = -EFAULT;
@ -110,7 +110,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
struct drm_amdgpu_cs_chunk user_chunk;
uint32_t __user *cdata;
chunk_ptr = (void __user *)(uintptr_t)chunk_array[i];
chunk_ptr = u64_to_user_ptr(chunk_array[i]);
if (copy_from_user(&user_chunk, chunk_ptr,
sizeof(struct drm_amdgpu_cs_chunk))) {
ret = -EFAULT;
@ -121,7 +121,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
p->chunks[i].length_dw = user_chunk.length_dw;
size = p->chunks[i].length_dw;
cdata = (void __user *)(uintptr_t)user_chunk.chunk_data;
cdata = u64_to_user_ptr(user_chunk.chunk_data);
p->chunks[i].kdata = kvmalloc_array(size, sizeof(uint32_t), GFP_KERNEL);
if (p->chunks[i].kdata == NULL) {
@ -223,10 +223,11 @@ static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes)
* ticks. The accumulated microseconds (us) are converted to bytes and
* returned.
*/
static u64 amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev)
static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
u64 *max_bytes,
u64 *max_vis_bytes)
{
s64 time_us, increment_us;
u64 max_bytes;
u64 free_vram, total_vram, used_vram;
/* Allow a maximum of 200 accumulated ms. This is basically per-IB
@ -238,11 +239,14 @@ static u64 amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev)
*/
const s64 us_upper_bound = 200000;
if (!adev->mm_stats.log2_max_MBps)
return 0;
if (!adev->mm_stats.log2_max_MBps) {
*max_bytes = 0;
*max_vis_bytes = 0;
return;
}
total_vram = adev->mc.real_vram_size - adev->vram_pin_size;
used_vram = atomic64_read(&adev->vram_usage);
used_vram = amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
spin_lock(&adev->mm_stats.lock);
@ -280,23 +284,46 @@ static u64 amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev)
adev->mm_stats.accum_us = max(min_us, adev->mm_stats.accum_us);
}
/* This returns 0 if the driver is in debt to disallow (optional)
/* This is set to 0 if the driver is in debt to disallow (optional)
* buffer moves.
*/
max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us);
*max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us);
/* Do the same for visible VRAM if half of it is free */
if (adev->mc.visible_vram_size < adev->mc.real_vram_size) {
u64 total_vis_vram = adev->mc.visible_vram_size;
u64 used_vis_vram =
amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
if (used_vis_vram < total_vis_vram) {
u64 free_vis_vram = total_vis_vram - used_vis_vram;
adev->mm_stats.accum_us_vis = min(adev->mm_stats.accum_us_vis +
increment_us, us_upper_bound);
if (free_vis_vram >= total_vis_vram / 2)
adev->mm_stats.accum_us_vis =
max(bytes_to_us(adev, free_vis_vram / 2),
adev->mm_stats.accum_us_vis);
}
*max_vis_bytes = us_to_bytes(adev, adev->mm_stats.accum_us_vis);
} else {
*max_vis_bytes = 0;
}
spin_unlock(&adev->mm_stats.lock);
return max_bytes;
}
/* Report how many bytes have really been moved for the last command
* submission. This can result in a debt that can stop buffer migrations
* temporarily.
*/
void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes)
void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
u64 num_vis_bytes)
{
spin_lock(&adev->mm_stats.lock);
adev->mm_stats.accum_us -= bytes_to_us(adev, num_bytes);
adev->mm_stats.accum_us_vis -= bytes_to_us(adev, num_vis_bytes);
spin_unlock(&adev->mm_stats.lock);
}
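The whole budget is tracked in microseconds and translated to bytes via log2_max_MBps. The us_to_bytes()/bytes_to_us() helpers used above are not shown in this hunk; a hedged sketch of what they plausibly look like, assuming log2_max_MBps is the log2 of the peak transfer rate in MB/s and approximating one second as 2^20 microseconds:

/* Sketch only: with 1 MB = 2^20 bytes and 1 s ~= 2^20 us, a rate of
 * 2^n MB/s is roughly 2^n bytes per microsecond. */
static u64 us_to_bytes(struct amdgpu_device *adev, s64 us)
{
	if (us <= 0 || !adev->mm_stats.log2_max_MBps)
		return 0;
	return us << adev->mm_stats.log2_max_MBps;
}

static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes)
{
	if (!adev->mm_stats.log2_max_MBps)
		return 0;
	return bytes >> adev->mm_stats.log2_max_MBps;
}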
@ -304,7 +331,7 @@ static int amdgpu_cs_bo_validate(struct amdgpu_cs_parser *p,
struct amdgpu_bo *bo)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
u64 initial_bytes_moved;
u64 initial_bytes_moved, bytes_moved;
uint32_t domain;
int r;
@ -314,17 +341,35 @@ static int amdgpu_cs_bo_validate(struct amdgpu_cs_parser *p,
/* Don't move this buffer if we have depleted our allowance
* to move it. Don't move anything if the threshold is zero.
*/
if (p->bytes_moved < p->bytes_moved_threshold)
domain = bo->prefered_domains;
else
if (p->bytes_moved < p->bytes_moved_threshold) {
if (adev->mc.visible_vram_size < adev->mc.real_vram_size &&
(bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) {
/* And don't move a CPU_ACCESS_REQUIRED BO to limited
* visible VRAM if we've depleted our allowance to do
* that.
*/
if (p->bytes_moved_vis < p->bytes_moved_vis_threshold)
domain = bo->preferred_domains;
else
domain = bo->allowed_domains;
} else {
domain = bo->preferred_domains;
}
} else {
domain = bo->allowed_domains;
}
retry:
amdgpu_ttm_placement_from_domain(bo, domain);
initial_bytes_moved = atomic64_read(&adev->num_bytes_moved);
r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false);
p->bytes_moved += atomic64_read(&adev->num_bytes_moved) -
initial_bytes_moved;
bytes_moved = atomic64_read(&adev->num_bytes_moved) -
initial_bytes_moved;
p->bytes_moved += bytes_moved;
if (adev->mc.visible_vram_size < adev->mc.real_vram_size &&
bo->tbo.mem.mem_type == TTM_PL_VRAM &&
bo->tbo.mem.start < adev->mc.visible_vram_size >> PAGE_SHIFT)
p->bytes_moved_vis += bytes_moved;
if (unlikely(r == -ENOMEM) && domain != bo->allowed_domains) {
domain = bo->allowed_domains;
@ -350,7 +395,8 @@ static bool amdgpu_cs_try_evict(struct amdgpu_cs_parser *p,
struct amdgpu_bo_list_entry *candidate = p->evictable;
struct amdgpu_bo *bo = candidate->robj;
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
u64 initial_bytes_moved;
u64 initial_bytes_moved, bytes_moved;
bool update_bytes_moved_vis;
uint32_t other;
/* If we reached our current BO we can forget it */
@ -370,10 +416,17 @@ static bool amdgpu_cs_try_evict(struct amdgpu_cs_parser *p,
/* Good we can try to move this BO somewhere else */
amdgpu_ttm_placement_from_domain(bo, other);
update_bytes_moved_vis =
adev->mc.visible_vram_size < adev->mc.real_vram_size &&
bo->tbo.mem.mem_type == TTM_PL_VRAM &&
bo->tbo.mem.start < adev->mc.visible_vram_size >> PAGE_SHIFT;
initial_bytes_moved = atomic64_read(&adev->num_bytes_moved);
r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false);
p->bytes_moved += atomic64_read(&adev->num_bytes_moved) -
bytes_moved = atomic64_read(&adev->num_bytes_moved) -
initial_bytes_moved;
p->bytes_moved += bytes_moved;
if (update_bytes_moved_vis)
p->bytes_moved_vis += bytes_moved;
if (unlikely(r))
break;
@ -554,8 +607,10 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
list_splice(&need_pages, &p->validated);
}
p->bytes_moved_threshold = amdgpu_cs_get_threshold_for_moves(p->adev);
amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold,
&p->bytes_moved_vis_threshold);
p->bytes_moved = 0;
p->bytes_moved_vis = 0;
p->evictable = list_last_entry(&p->validated,
struct amdgpu_bo_list_entry,
tv.head);
@ -579,8 +634,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
goto error_validate;
}
amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved);
amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
p->bytes_moved_vis);
fpriv->vm.last_eviction_counter =
atomic64_read(&p->adev->num_evictions);
@ -619,10 +674,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
}
error_validate:
if (r) {
amdgpu_vm_move_pt_bos_in_lru(p->adev, &fpriv->vm);
if (r)
ttm_eu_backoff_reservation(&p->ticket, &p->validated);
}
error_free_pages:
@ -670,21 +723,18 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
* If error is set then unvalidate the buffer, otherwise just free memory
* used by parsing context.
**/
static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bool backoff)
static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
bool backoff)
{
struct amdgpu_fpriv *fpriv = parser->filp->driver_priv;
unsigned i;
if (!error) {
amdgpu_vm_move_pt_bos_in_lru(parser->adev, &fpriv->vm);
if (!error)
ttm_eu_fence_buffer_objects(&parser->ticket,
&parser->validated,
parser->fence);
} else if (backoff) {
else if (backoff)
ttm_eu_backoff_reservation(&parser->ticket,
&parser->validated);
}
for (i = 0; i < parser->num_post_dep_syncobjs; i++)
drm_syncobj_put(parser->post_dep_syncobjs[i]);
@ -737,7 +787,8 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p)
if (amdgpu_sriov_vf(adev)) {
struct dma_fence *f;
bo_va = vm->csa_bo_va;
bo_va = fpriv->csa_va;
BUG_ON(!bo_va);
r = amdgpu_vm_bo_update(adev, bo_va, false);
if (r)
@ -774,7 +825,7 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p)
}
r = amdgpu_vm_clear_invalids(adev, vm, &p->job->sync);
r = amdgpu_vm_clear_moved(adev, vm, &p->job->sync);
if (amdgpu_vm_debug && p->bo_list) {
/* Invalidate all BOs to test for userspace bugs */
@ -984,7 +1035,7 @@ static int amdgpu_syncobj_lookup_and_add_to_sync(struct amdgpu_cs_parser *p,
{
int r;
struct dma_fence *fence;
r = drm_syncobj_fence_get(p->filp, handle, &fence);
r = drm_syncobj_find_fence(p->filp, handle, &fence);
if (r)
return r;
@ -1383,7 +1434,7 @@ int amdgpu_cs_wait_fences_ioctl(struct drm_device *dev, void *data,
if (fences == NULL)
return -ENOMEM;
fences_user = (void __user *)(uintptr_t)(wait->in.fences);
fences_user = u64_to_user_ptr(wait->in.fences);
if (copy_from_user(fences, fences_user,
sizeof(struct drm_amdgpu_fence) * fence_count)) {
r = -EFAULT;
@ -1436,7 +1487,7 @@ amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
addr > mapping->last)
continue;
*bo = lobj->bo_va->bo;
*bo = lobj->bo_va->base.bo;
return mapping;
}
@ -1445,7 +1496,7 @@ amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
addr > mapping->last)
continue;
*bo = lobj->bo_va->bo;
*bo = lobj->bo_va->base.bo;
return mapping;
}
}


@ -53,6 +53,9 @@
#include "bif/bif_4_1_d.h"
#include <linux/pci.h>
#include <linux/firmware.h>
#include "amdgpu_vf_error.h"
#include "amdgpu_amdkfd.h"
MODULE_FIRMWARE("amdgpu/vega10_gpu_info.bin");
MODULE_FIRMWARE("amdgpu/raven_gpu_info.bin");
@ -128,6 +131,10 @@ void amdgpu_mm_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v,
{
trace_amdgpu_mm_wreg(adev->pdev->device, reg, v);
if (adev->asic_type >= CHIP_VEGA10 && reg == 0) {
adev->last_mm_index = v;
}
if (!(acc_flags & AMDGPU_REGS_NO_KIQ) && amdgpu_sriov_runtime(adev)) {
BUG_ON(in_interrupt());
return amdgpu_virt_kiq_wreg(adev, reg, v);
@ -143,6 +150,10 @@ void amdgpu_mm_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v,
writel(v, ((void __iomem *)adev->rmmio) + (mmMM_DATA * 4));
spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
}
if (adev->asic_type >= CHIP_VEGA10 && reg == 1 && adev->last_mm_index == 0x5702C) {
udelay(500);
}
}
u32 amdgpu_io_rreg(struct amdgpu_device *adev, u32 reg)
@ -157,6 +168,9 @@ u32 amdgpu_io_rreg(struct amdgpu_device *adev, u32 reg)
void amdgpu_io_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
{
if (adev->asic_type >= CHIP_VEGA10 && reg == 0) {
adev->last_mm_index = v;
}
if ((reg * 4) < adev->rio_mem_size)
iowrite32(v, adev->rio_mem + (reg * 4));
@ -164,6 +178,10 @@ void amdgpu_io_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
iowrite32((reg * 4), adev->rio_mem + (mmMM_INDEX * 4));
iowrite32(v, adev->rio_mem + (mmMM_DATA * 4));
}
if (adev->asic_type >= CHIP_VEGA10 && reg == 1 && adev->last_mm_index == 0x5702C) {
udelay(500);
}
}
/**
@ -318,51 +336,16 @@ static void amdgpu_block_invalid_wreg(struct amdgpu_device *adev,
static int amdgpu_vram_scratch_init(struct amdgpu_device *adev)
{
int r;
if (adev->vram_scratch.robj == NULL) {
r = amdgpu_bo_create(adev, AMDGPU_GPU_PAGE_SIZE,
PAGE_SIZE, true, AMDGPU_GEM_DOMAIN_VRAM,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, &adev->vram_scratch.robj);
if (r) {
return r;
}
}
r = amdgpu_bo_reserve(adev->vram_scratch.robj, false);
if (unlikely(r != 0))
return r;
r = amdgpu_bo_pin(adev->vram_scratch.robj,
AMDGPU_GEM_DOMAIN_VRAM, &adev->vram_scratch.gpu_addr);
if (r) {
amdgpu_bo_unreserve(adev->vram_scratch.robj);
return r;
}
r = amdgpu_bo_kmap(adev->vram_scratch.robj,
(void **)&adev->vram_scratch.ptr);
if (r)
amdgpu_bo_unpin(adev->vram_scratch.robj);
amdgpu_bo_unreserve(adev->vram_scratch.robj);
return r;
return amdgpu_bo_create_kernel(adev, AMDGPU_GPU_PAGE_SIZE,
PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM,
&adev->vram_scratch.robj,
&adev->vram_scratch.gpu_addr,
(void **)&adev->vram_scratch.ptr);
}
static void amdgpu_vram_scratch_fini(struct amdgpu_device *adev)
{
int r;
if (adev->vram_scratch.robj == NULL) {
return;
}
r = amdgpu_bo_reserve(adev->vram_scratch.robj, true);
if (likely(r == 0)) {
amdgpu_bo_kunmap(adev->vram_scratch.robj);
amdgpu_bo_unpin(adev->vram_scratch.robj);
amdgpu_bo_unreserve(adev->vram_scratch.robj);
}
amdgpu_bo_unref(&adev->vram_scratch.robj);
amdgpu_bo_free_kernel(&adev->vram_scratch.robj, NULL, NULL);
}
/**
@ -521,7 +504,8 @@ static int amdgpu_wb_init(struct amdgpu_device *adev)
int r;
if (adev->wb.wb_obj == NULL) {
r = amdgpu_bo_create_kernel(adev, AMDGPU_MAX_WB * sizeof(uint32_t),
/* AMDGPU_MAX_WB * sizeof(uint32_t) * 8 = AMDGPU_MAX_WB 256bit slots */
r = amdgpu_bo_create_kernel(adev, AMDGPU_MAX_WB * sizeof(uint32_t) * 8,
PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
&adev->wb.wb_obj, &adev->wb.gpu_addr,
(void **)&adev->wb.wb);
@ -552,32 +536,10 @@ static int amdgpu_wb_init(struct amdgpu_device *adev)
int amdgpu_wb_get(struct amdgpu_device *adev, u32 *wb)
{
unsigned long offset = find_first_zero_bit(adev->wb.used, adev->wb.num_wb);
if (offset < adev->wb.num_wb) {
__set_bit(offset, adev->wb.used);
*wb = offset;
return 0;
} else {
return -EINVAL;
}
}
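Editorial note: the reworked amdgpu_wb_get hands out one 256-bit writeback slot per allocation and converts the slot index into a dword offset before returning it to the caller. Below is a minimal user-space sketch of that arithmetic; the slot count and 8-dword slot width are assumptions taken from the surrounding hunk, not code copied from the driver.

#include <stdint.h>
#include <stdio.h>

#define MAX_WB          128   /* assumed slot count, for illustration only */
#define DWORDS_PER_SLOT 8     /* 8 x 32-bit = one 256-bit slot */

/* Convert a slot index into the dword offset callers use to index wb[]. */
static uint32_t slot_to_dw_offset(uint32_t slot)
{
	return slot * DWORDS_PER_SLOT;
}

int main(void)
{
	/* slot 3 starts 24 dwords (96 bytes) into the writeback buffer */
	printf("slot 3 -> dword %u, byte %u\n",
	       slot_to_dw_offset(3), slot_to_dw_offset(3) * 4);
	/* whole buffer: MAX_WB slots of 32 bytes each */
	printf("buffer size: %u bytes\n", MAX_WB * DWORDS_PER_SLOT * 4);
	return 0;
}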
/**
* amdgpu_wb_get_64bit - Allocate a wb entry
*
* @adev: amdgpu_device pointer
* @wb: wb index
*
* Allocate a wb slot for use by the driver (all asics).
* Returns 0 on success or -EINVAL on failure.
*/
int amdgpu_wb_get_64bit(struct amdgpu_device *adev, u32 *wb)
{
unsigned long offset = bitmap_find_next_zero_area_off(adev->wb.used,
adev->wb.num_wb, 0, 2, 7, 0);
if ((offset + 1) < adev->wb.num_wb) {
__set_bit(offset, adev->wb.used);
__set_bit(offset + 1, adev->wb.used);
*wb = offset;
*wb = offset * 8; /* convert to dw offset */
return 0;
} else {
return -EINVAL;
@ -598,22 +560,6 @@ void amdgpu_wb_free(struct amdgpu_device *adev, u32 wb)
__clear_bit(wb, adev->wb.used);
}
/**
* amdgpu_wb_free_64bit - Free a wb entry
*
* @adev: amdgpu_device pointer
* @wb: wb index
*
* Free a wb slot allocated for use by the driver (all asics)
*/
void amdgpu_wb_free_64bit(struct amdgpu_device *adev, u32 wb)
{
if ((wb + 1) < adev->wb.num_wb) {
__clear_bit(wb, adev->wb.used);
__clear_bit(wb + 1, adev->wb.used);
}
}
/**
* amdgpu_vram_location - try to find VRAM location
* @adev: amdgpu device structure holding all necessary informations
@ -665,7 +611,7 @@ void amdgpu_vram_location(struct amdgpu_device *adev, struct amdgpu_mc *mc, u64
}
/**
* amdgpu_gtt_location - try to find GTT location
* amdgpu_gart_location - try to find GTT location
* @adev: amdgpu device structure holding all necessary informations
* @mc: memory controller structure holding memory informations
*
@ -676,28 +622,28 @@ void amdgpu_vram_location(struct amdgpu_device *adev, struct amdgpu_mc *mc, u64
*
* FIXME: when reducing GTT size align new size on power of 2.
*/
void amdgpu_gtt_location(struct amdgpu_device *adev, struct amdgpu_mc *mc)
void amdgpu_gart_location(struct amdgpu_device *adev, struct amdgpu_mc *mc)
{
u64 size_af, size_bf;
size_af = ((adev->mc.mc_mask - mc->vram_end) + mc->gtt_base_align) & ~mc->gtt_base_align;
size_bf = mc->vram_start & ~mc->gtt_base_align;
size_af = adev->mc.mc_mask - mc->vram_end;
size_bf = mc->vram_start;
if (size_bf > size_af) {
if (mc->gtt_size > size_bf) {
if (mc->gart_size > size_bf) {
dev_warn(adev->dev, "limiting GTT\n");
mc->gtt_size = size_bf;
mc->gart_size = size_bf;
}
mc->gtt_start = 0;
mc->gart_start = 0;
} else {
if (mc->gtt_size > size_af) {
if (mc->gart_size > size_af) {
dev_warn(adev->dev, "limiting GTT\n");
mc->gtt_size = size_af;
mc->gart_size = size_af;
}
mc->gtt_start = (mc->vram_end + 1 + mc->gtt_base_align) & ~mc->gtt_base_align;
mc->gart_start = mc->vram_end + 1;
}
mc->gtt_end = mc->gtt_start + mc->gtt_size - 1;
mc->gart_end = mc->gart_start + mc->gart_size - 1;
dev_info(adev->dev, "GTT: %lluM 0x%016llX - 0x%016llX\n",
mc->gtt_size >> 20, mc->gtt_start, mc->gtt_end);
mc->gart_size >> 20, mc->gart_start, mc->gart_end);
}
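Editorial note: amdgpu_gart_location places the GART aperture in whichever gap of the MC address space (below or above the VRAM range) is larger, shrinking gart_size if neither gap can hold it. A standalone sketch of that decision follows; the field names mirror the hunk, while the sizes in main() are made up for illustration.

#include <stdint.h>
#include <stdio.h>

struct mc {
	uint64_t mc_mask;               /* highest MC address */
	uint64_t vram_start, vram_end;  /* inclusive VRAM range */
	uint64_t gart_size, gart_start, gart_end;
};

static void gart_location(struct mc *mc)
{
	uint64_t size_af = mc->mc_mask - mc->vram_end;  /* space after VRAM */
	uint64_t size_bf = mc->vram_start;              /* space before VRAM */

	if (size_bf > size_af) {
		if (mc->gart_size > size_bf)
			mc->gart_size = size_bf;
		mc->gart_start = 0;
	} else {
		if (mc->gart_size > size_af)
			mc->gart_size = size_af;
		mc->gart_start = mc->vram_end + 1;
	}
	mc->gart_end = mc->gart_start + mc->gart_size - 1;
}

int main(void)
{
	/* 8 GiB VRAM mapped at 0 in a 40-bit MC space, 256 MiB GART */
	struct mc mc = {
		.mc_mask    = (1ULL << 40) - 1,
		.vram_start = 0,
		.vram_end   = (8ULL << 30) - 1,
		.gart_size  = 256ULL << 20,
	};

	gart_location(&mc);
	/* GART lands right after VRAM: 0x200000000 - 0x20FFFFFFF */
	printf("GART: %lluM 0x%016llX - 0x%016llX\n",
	       (unsigned long long)(mc.gart_size >> 20),
	       (unsigned long long)mc.gart_start,
	       (unsigned long long)mc.gart_end);
	return 0;
}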
/*
@ -720,7 +666,12 @@ bool amdgpu_need_post(struct amdgpu_device *adev)
adev->has_hw_reset = false;
return true;
}
/* then check MEM_SIZE, in case the crtcs are off */
/* bios scratch used on CIK+ */
if (adev->asic_type >= CHIP_BONAIRE)
return amdgpu_atombios_scratch_need_asic_init(adev);
/* check MEM_SIZE for older asics */
reg = amdgpu_asic_get_config_memsize(adev);
if ((reg != 0) && (reg != 0xffffffff))
@ -1031,19 +982,6 @@ static unsigned int amdgpu_vga_set_decode(void *cookie, bool state)
return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
}
/**
* amdgpu_check_pot_argument - check that argument is a power of two
*
* @arg: value to check
*
* Validates that a certain argument is a power of two (all asics).
* Returns true if argument is valid.
*/
static bool amdgpu_check_pot_argument(int arg)
{
return (arg & (arg - 1)) == 0;
}
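Editorial note: the removed helper duplicated the classic `(arg & (arg - 1)) == 0` test, which the patch replaces with the kernel's generic is_power_of_2(). One subtlety, shown in the comparison sketch below: the bit trick alone also accepts 0, while the generic helper (roughly `n != 0 && !(n & (n - 1))`) rejects it.

#include <stdbool.h>
#include <stdio.h>

/* the removed driver-local check: accepts 0 as a "power of two" */
static bool check_pot_argument(int arg)
{
	return (arg & (arg - 1)) == 0;
}

/* rough shape of the generic kernel helper, for comparison */
static bool is_power_of_2_sketch(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	int vals[] = { 0, 1, 3, 4, 64, 100 };

	for (unsigned i = 0; i < sizeof(vals) / sizeof(vals[0]); i++)
		printf("%3d: old=%d generic=%d\n", vals[i],
		       check_pot_argument(vals[i]),
		       is_power_of_2_sketch((unsigned long)vals[i]));
	return 0;
}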
static void amdgpu_check_block_size(struct amdgpu_device *adev)
{
/* defines number of bits in page table versus page directory,
@ -1077,7 +1015,7 @@ static void amdgpu_check_vm_size(struct amdgpu_device *adev)
if (amdgpu_vm_size == -1)
return;
if (!amdgpu_check_pot_argument(amdgpu_vm_size)) {
if (!is_power_of_2(amdgpu_vm_size)) {
dev_warn(adev->dev, "VM size (%d) must be a power of 2\n",
amdgpu_vm_size);
goto def_value;
@ -1118,19 +1056,31 @@ static void amdgpu_check_arguments(struct amdgpu_device *adev)
dev_warn(adev->dev, "sched jobs (%d) must be at least 4\n",
amdgpu_sched_jobs);
amdgpu_sched_jobs = 4;
} else if (!amdgpu_check_pot_argument(amdgpu_sched_jobs)){
} else if (!is_power_of_2(amdgpu_sched_jobs)){
dev_warn(adev->dev, "sched jobs (%d) must be a power of 2\n",
amdgpu_sched_jobs);
amdgpu_sched_jobs = roundup_pow_of_two(amdgpu_sched_jobs);
}
if (amdgpu_gart_size != -1) {
if (amdgpu_gart_size < 32) {
/* gart size must be greater or equal to 32M */
dev_warn(adev->dev, "gart size (%d) too small\n",
amdgpu_gart_size);
amdgpu_gart_size = 32;
}
if (amdgpu_gtt_size != -1 && amdgpu_gtt_size < 32) {
/* gtt size must be greater or equal to 32M */
if (amdgpu_gart_size < 32) {
dev_warn(adev->dev, "gart size (%d) too small\n",
amdgpu_gart_size);
amdgpu_gart_size = -1;
}
dev_warn(adev->dev, "gtt size (%d) too small\n",
amdgpu_gtt_size);
amdgpu_gtt_size = -1;
}
/* valid range is between 4 and 9 inclusive */
if (amdgpu_vm_fragment_size != -1 &&
(amdgpu_vm_fragment_size > 9 || amdgpu_vm_fragment_size < 4)) {
dev_warn(adev->dev, "valid range is between 4 and 9\n");
amdgpu_vm_fragment_size = -1;
}
amdgpu_check_vm_size(adev);
@ -1138,7 +1088,7 @@ static void amdgpu_check_arguments(struct amdgpu_device *adev)
amdgpu_check_block_size(adev);
if (amdgpu_vram_page_split != -1 && (amdgpu_vram_page_split < 16 ||
!amdgpu_check_pot_argument(amdgpu_vram_page_split))) {
!is_power_of_2(amdgpu_vram_page_split))) {
dev_warn(adev->dev, "invalid VRAM page split (%d)\n",
amdgpu_vram_page_split);
amdgpu_vram_page_split = 1024;
@ -1901,7 +1851,8 @@ static int amdgpu_sriov_reinit_late(struct amdgpu_device *adev)
AMD_IP_BLOCK_TYPE_DCE,
AMD_IP_BLOCK_TYPE_GFX,
AMD_IP_BLOCK_TYPE_SDMA,
AMD_IP_BLOCK_TYPE_VCE,
AMD_IP_BLOCK_TYPE_UVD,
AMD_IP_BLOCK_TYPE_VCE
};
for (i = 0; i < ARRAY_SIZE(ip_order); i++) {
@ -2019,7 +1970,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
adev->flags = flags;
adev->asic_type = flags & AMD_ASIC_MASK;
adev->usec_timeout = AMDGPU_MAX_USEC_TIMEOUT;
adev->mc.gtt_size = 512 * 1024 * 1024;
adev->mc.gart_size = 512 * 1024 * 1024;
adev->accel_working = false;
adev->num_rings = 0;
adev->mman.buffer_funcs = NULL;
@ -2068,6 +2019,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
spin_lock_init(&adev->uvd_ctx_idx_lock);
spin_lock_init(&adev->didt_idx_lock);
spin_lock_init(&adev->gc_cac_idx_lock);
spin_lock_init(&adev->se_cac_idx_lock);
spin_lock_init(&adev->audio_endpt_idx_lock);
spin_lock_init(&adev->mm_stats.lock);
@ -2143,6 +2095,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
r = amdgpu_atombios_init(adev);
if (r) {
dev_err(adev->dev, "amdgpu_atombios_init failed\n");
amdgpu_vf_error_put(AMDGIM_ERROR_VF_ATOMBIOS_INIT_FAIL, 0, 0);
goto failed;
}
@ -2153,6 +2106,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
if (amdgpu_vpost_needed(adev)) {
if (!adev->bios) {
dev_err(adev->dev, "no vBIOS found\n");
amdgpu_vf_error_put(AMDGIM_ERROR_VF_NO_VBIOS, 0, 0);
r = -EINVAL;
goto failed;
}
@ -2160,18 +2114,28 @@ int amdgpu_device_init(struct amdgpu_device *adev,
r = amdgpu_atom_asic_init(adev->mode_info.atom_context);
if (r) {
dev_err(adev->dev, "gpu post error!\n");
amdgpu_vf_error_put(AMDGIM_ERROR_VF_GPU_POST_ERROR, 0, 0);
goto failed;
}
} else {
DRM_INFO("GPU post is not needed\n");
}
if (!adev->is_atom_fw) {
if (adev->is_atom_fw) {
/* Initialize clocks */
r = amdgpu_atomfirmware_get_clock_info(adev);
if (r) {
dev_err(adev->dev, "amdgpu_atomfirmware_get_clock_info failed\n");
amdgpu_vf_error_put(AMDGIM_ERROR_VF_ATOMBIOS_GET_CLOCK_FAIL, 0, 0);
goto failed;
}
} else {
/* Initialize clocks */
r = amdgpu_atombios_get_clock_info(adev);
if (r) {
dev_err(adev->dev, "amdgpu_atombios_get_clock_info failed\n");
return r;
amdgpu_vf_error_put(AMDGIM_ERROR_VF_ATOMBIOS_GET_CLOCK_FAIL, 0, 0);
goto failed;
}
/* init i2c buses */
amdgpu_atombios_i2c_init(adev);
@ -2181,6 +2145,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
r = amdgpu_fence_driver_init(adev);
if (r) {
dev_err(adev->dev, "amdgpu_fence_driver_init failed\n");
amdgpu_vf_error_put(AMDGIM_ERROR_VF_FENCE_INIT_FAIL, 0, 0);
goto failed;
}
@ -2190,6 +2155,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
r = amdgpu_init(adev);
if (r) {
dev_err(adev->dev, "amdgpu_init failed\n");
amdgpu_vf_error_put(AMDGIM_ERROR_VF_AMDGPU_INIT_FAIL, 0, 0);
amdgpu_fini(adev);
goto failed;
}
@ -2209,6 +2175,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
r = amdgpu_ib_pool_init(adev);
if (r) {
dev_err(adev->dev, "IB initialization failed (%d).\n", r);
amdgpu_vf_error_put(AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
goto failed;
}
@ -2253,12 +2220,14 @@ int amdgpu_device_init(struct amdgpu_device *adev,
r = amdgpu_late_init(adev);
if (r) {
dev_err(adev->dev, "amdgpu_late_init failed\n");
amdgpu_vf_error_put(AMDGIM_ERROR_VF_AMDGPU_LATE_INIT_FAIL, 0, r);
goto failed;
}
return 0;
failed:
amdgpu_vf_error_trans_all(adev);
if (runtime)
vga_switcheroo_fini_domain_pm_ops(adev->dev);
return r;
@ -2351,6 +2320,8 @@ int amdgpu_device_suspend(struct drm_device *dev, bool suspend, bool fbcon)
}
drm_modeset_unlock_all(dev);
amdgpu_amdkfd_suspend(adev);
/* unpin the front buffers and cursors */
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
@ -2392,10 +2363,7 @@ int amdgpu_device_suspend(struct drm_device *dev, bool suspend, bool fbcon)
*/
amdgpu_bo_evict_vram(adev);
if (adev->is_atom_fw)
amdgpu_atomfirmware_scratch_regs_save(adev);
else
amdgpu_atombios_scratch_regs_save(adev);
amdgpu_atombios_scratch_regs_save(adev);
pci_save_state(dev->pdev);
if (suspend) {
/* Shut down the device */
@ -2444,10 +2412,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool resume, bool fbcon)
if (r)
goto unlock;
}
if (adev->is_atom_fw)
amdgpu_atomfirmware_scratch_regs_restore(adev);
else
amdgpu_atombios_scratch_regs_restore(adev);
amdgpu_atombios_scratch_regs_restore(adev);
/* post card */
if (amdgpu_need_post(adev)) {
@ -2490,6 +2455,9 @@ int amdgpu_device_resume(struct drm_device *dev, bool resume, bool fbcon)
}
}
}
r = amdgpu_amdkfd_resume(adev);
if (r)
return r;
/* blat the mode back in */
if (fbcon) {
@ -2860,21 +2828,9 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
r = amdgpu_suspend(adev);
retry:
/* Disable fb access */
if (adev->mode_info.num_crtc) {
struct amdgpu_mode_mc_save save;
amdgpu_display_stop_mc_access(adev, &save);
amdgpu_wait_for_idle(adev, AMD_IP_BLOCK_TYPE_GMC);
}
if (adev->is_atom_fw)
amdgpu_atomfirmware_scratch_regs_save(adev);
else
amdgpu_atombios_scratch_regs_save(adev);
amdgpu_atombios_scratch_regs_save(adev);
r = amdgpu_asic_reset(adev);
if (adev->is_atom_fw)
amdgpu_atomfirmware_scratch_regs_restore(adev);
else
amdgpu_atombios_scratch_regs_restore(adev);
amdgpu_atombios_scratch_regs_restore(adev);
/* post card */
amdgpu_atom_asic_init(adev->mode_info.atom_context);
@ -2952,6 +2908,7 @@ out:
}
} else {
dev_err(adev->dev, "asic resume failed (%d).\n", r);
amdgpu_vf_error_put(AMDGIM_ERROR_VF_ASIC_RESUME_FAIL, 0, r);
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
if (adev->rings[i] && adev->rings[i]->sched.thread) {
kthread_unpark(adev->rings[i]->sched.thread);
@ -2962,12 +2919,16 @@ out:
drm_helper_resume_force_mode(adev->ddev);
ttm_bo_unlock_delayed_workqueue(&adev->mman.bdev, resched);
if (r)
if (r) {
/* bad news, how to tell it to userspace ? */
dev_info(adev->dev, "GPU reset failed\n");
else
amdgpu_vf_error_put(AMDGIM_ERROR_VF_GPU_RESET_FAIL, 0, r);
}
else {
dev_info(adev->dev, "GPU reset succeeded!\n");
}
amdgpu_vf_error_trans_all(adev);
return r;
}

View File

@ -482,7 +482,7 @@ static void amdgpu_user_framebuffer_destroy(struct drm_framebuffer *fb)
{
struct amdgpu_framebuffer *amdgpu_fb = to_amdgpu_framebuffer(fb);
drm_gem_object_unreference_unlocked(amdgpu_fb->obj);
drm_gem_object_put_unlocked(amdgpu_fb->obj);
drm_framebuffer_cleanup(fb);
kfree(amdgpu_fb);
}
@ -542,14 +542,14 @@ amdgpu_user_framebuffer_create(struct drm_device *dev,
amdgpu_fb = kzalloc(sizeof(*amdgpu_fb), GFP_KERNEL);
if (amdgpu_fb == NULL) {
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ERR_PTR(-ENOMEM);
}
ret = amdgpu_framebuffer_init(dev, amdgpu_fb, mode_cmd, obj);
if (ret) {
kfree(amdgpu_fb);
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ERR_PTR(ret);
}

View File

@ -68,13 +68,16 @@
* - 3.16.0 - Add reserved vmid support
* - 3.17.0 - Add AMDGPU_NUM_VRAM_CPU_PAGE_FAULTS.
* - 3.18.0 - Export gpu always on cu bitmap
* - 3.19.0 - Add support for UVD MJPEG decode
*/
#define KMS_DRIVER_MAJOR 3
#define KMS_DRIVER_MINOR 18
#define KMS_DRIVER_MINOR 19
#define KMS_DRIVER_PATCHLEVEL 0
int amdgpu_vram_limit = 0;
int amdgpu_gart_size = -1; /* auto */
int amdgpu_vis_vram_limit = 0;
unsigned amdgpu_gart_size = 256;
int amdgpu_gtt_size = -1; /* auto */
int amdgpu_moverate = -1; /* auto */
int amdgpu_benchmarking = 0;
int amdgpu_testing = 0;
@ -92,6 +95,7 @@ unsigned amdgpu_ip_block_mask = 0xffffffff;
int amdgpu_bapm = -1;
int amdgpu_deep_color = 0;
int amdgpu_vm_size = -1;
int amdgpu_vm_fragment_size = -1;
int amdgpu_vm_block_size = -1;
int amdgpu_vm_fault_stop = 0;
int amdgpu_vm_debug = 0;
@ -106,6 +110,7 @@ unsigned amdgpu_pcie_gen_cap = 0;
unsigned amdgpu_pcie_lane_cap = 0;
unsigned amdgpu_cg_mask = 0xffffffff;
unsigned amdgpu_pg_mask = 0xffffffff;
unsigned amdgpu_sdma_phase_quantum = 32;
char *amdgpu_disable_cu = NULL;
char *amdgpu_virtual_display = NULL;
unsigned amdgpu_pp_feature_mask = 0xffffffff;
@ -120,8 +125,14 @@ int amdgpu_lbpw = -1;
MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
MODULE_PARM_DESC(gartsize, "Size of PCIE/IGP gart to setup in megabytes (32, 64, etc., -1 = auto)");
module_param_named(gartsize, amdgpu_gart_size, int, 0600);
MODULE_PARM_DESC(vis_vramlimit, "Restrict visible VRAM for testing, in megabytes");
module_param_named(vis_vramlimit, amdgpu_vis_vram_limit, int, 0444);
MODULE_PARM_DESC(gartsize, "Size of PCIE/IGP gart to setup in megabytes (32, 64, etc.)");
module_param_named(gartsize, amdgpu_gart_size, uint, 0600);
MODULE_PARM_DESC(gttsize, "Size of the GTT domain in megabytes (-1 = auto)");
module_param_named(gttsize, amdgpu_gtt_size, int, 0600);
MODULE_PARM_DESC(moverate, "Maximum buffer migration rate in MB/s. (32, 64, etc., -1=auto, 0=1=disabled)");
module_param_named(moverate, amdgpu_moverate, int, 0600);
@ -174,6 +185,9 @@ module_param_named(deep_color, amdgpu_deep_color, int, 0444);
MODULE_PARM_DESC(vm_size, "VM address space size in gigabytes (default 64GB)");
module_param_named(vm_size, amdgpu_vm_size, int, 0444);
MODULE_PARM_DESC(vm_fragment_size, "VM fragment size in bits (4, 5, etc. 4 = 64K (default), Max 9 = 2M)");
module_param_named(vm_fragment_size, amdgpu_vm_fragment_size, int, 0444);
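Editorial note: vm_fragment_size is expressed in bits of 4 KiB GPU pages, so the resulting fragment is `(1 << bits) * 4096` bytes. The sketch below just reproduces that arithmetic for the documented range (4 = 64K default, up to 9 = 2M); the page size constant is the usual AMDGPU_GPU_PAGE_SIZE value, assumed here rather than quoted from this hunk.

#include <stdint.h>
#include <stdio.h>

#define GPU_PAGE_SIZE 4096u   /* AMDGPU_GPU_PAGE_SIZE */

int main(void)
{
	for (unsigned bits = 4; bits <= 9; bits++) {
		uint64_t frag = (uint64_t)(1u << bits) * GPU_PAGE_SIZE;
		printf("vm_fragment_size=%u -> %lluK fragments\n",
		       bits, (unsigned long long)(frag >> 10));
	}
	return 0;
}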
MODULE_PARM_DESC(vm_block_size, "VM page table size in bits (default depending on vm_size)");
module_param_named(vm_block_size, amdgpu_vm_block_size, int, 0444);
@ -186,7 +200,7 @@ module_param_named(vm_debug, amdgpu_vm_debug, int, 0644);
MODULE_PARM_DESC(vm_update_mode, "VM update using CPU (0 = never (default except for large BAR(LB)), 1 = Graphics only, 2 = Compute only (default for LB), 3 = Both");
module_param_named(vm_update_mode, amdgpu_vm_update_mode, int, 0444);
MODULE_PARM_DESC(vram_page_split, "Number of pages after we split VRAM allocations (default 1024, -1 = disable)");
MODULE_PARM_DESC(vram_page_split, "Number of pages after we split VRAM allocations (default 512, -1 = disable)");
module_param_named(vram_page_split, amdgpu_vram_page_split, int, 0444);
MODULE_PARM_DESC(exp_hw_support, "experimental hw support (1 = enable, 0 = disable (default))");
@ -199,7 +213,7 @@ MODULE_PARM_DESC(sched_hw_submission, "the max number of HW submissions (default
module_param_named(sched_hw_submission, amdgpu_sched_hw_submission, int, 0444);
MODULE_PARM_DESC(ppfeaturemask, "all power features enabled (default))");
module_param_named(ppfeaturemask, amdgpu_pp_feature_mask, int, 0444);
module_param_named(ppfeaturemask, amdgpu_pp_feature_mask, uint, 0444);
MODULE_PARM_DESC(no_evict, "Support pinning request from user space (1 = enable, 0 = disable (default))");
module_param_named(no_evict, amdgpu_no_evict, int, 0444);
@ -219,6 +233,9 @@ module_param_named(cg_mask, amdgpu_cg_mask, uint, 0444);
MODULE_PARM_DESC(pg_mask, "Powergating flags mask (0 = disable power gating)");
module_param_named(pg_mask, amdgpu_pg_mask, uint, 0444);
MODULE_PARM_DESC(sdma_phase_quantum, "SDMA context switch phase quantum (x 1K GPU clock cycles, 0 = no change (default 32))");
module_param_named(sdma_phase_quantum, amdgpu_sdma_phase_quantum, uint, 0444);
MODULE_PARM_DESC(disable_cu, "Disable CUs (se.sh.cu,...)");
module_param_named(disable_cu, amdgpu_disable_cu, charp, 0444);
@ -803,7 +820,6 @@ static struct drm_driver kms_driver = {
.open = amdgpu_driver_open_kms,
.postclose = amdgpu_driver_postclose_kms,
.lastclose = amdgpu_driver_lastclose_kms,
.set_busid = drm_pci_set_busid,
.unload = amdgpu_driver_unload_kms,
.get_vblank_counter = amdgpu_get_vblank_counter_kms,
.enable_vblank = amdgpu_enable_vblank_kms,
@ -823,7 +839,6 @@ static struct drm_driver kms_driver = {
.gem_close_object = amdgpu_gem_object_close,
.dumb_create = amdgpu_mode_dumb_create,
.dumb_map_offset = amdgpu_mode_dumb_mmap,
.dumb_destroy = drm_gem_dumb_destroy,
.fops = &amdgpu_driver_kms_fops,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,

View File

@ -118,7 +118,7 @@ static void amdgpufb_destroy_pinned_object(struct drm_gem_object *gobj)
amdgpu_bo_unpin(abo);
amdgpu_bo_unreserve(abo);
}
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
}
static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
@ -245,13 +245,12 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
info->fbops = &amdgpufb_ops;
tmp = amdgpu_bo_gpu_offset(abo) - adev->mc.vram_start;
info->fix.smem_start = adev->mc.aper_base + tmp;
info->fix.smem_len = amdgpu_bo_size(abo);
info->screen_base = abo->kptr;
info->screen_base = amdgpu_bo_kptr(abo);
info->screen_size = amdgpu_bo_size(abo);
drm_fb_helper_fill_var(info, &rfbdev->helper, sizes->fb_width, sizes->fb_height);
@ -281,7 +280,7 @@ out:
}
if (fb && ret) {
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
drm_framebuffer_unregister_private(fb);
drm_framebuffer_cleanup(fb);
kfree(fb);
@ -312,31 +311,7 @@ static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfb
return 0;
}
/** Sets the color ramps on behalf of fbcon */
static void amdgpu_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
u16 blue, int regno)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
amdgpu_crtc->lut_r[regno] = red >> 6;
amdgpu_crtc->lut_g[regno] = green >> 6;
amdgpu_crtc->lut_b[regno] = blue >> 6;
}
/** Gets the color ramps on behalf of fbcon */
static void amdgpu_crtc_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, int regno)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
*red = amdgpu_crtc->lut_r[regno] << 6;
*green = amdgpu_crtc->lut_g[regno] << 6;
*blue = amdgpu_crtc->lut_b[regno] << 6;
}
static const struct drm_fb_helper_funcs amdgpu_fb_helper_funcs = {
.gamma_set = amdgpu_crtc_fb_gamma_set,
.gamma_get = amdgpu_crtc_fb_gamma_get,
.fb_probe = amdgpufb_create,
};

View File

@ -55,6 +55,19 @@
/*
* Common GART table functions.
*/
/**
* amdgpu_gart_set_defaults - set the default gart_size
*
* @adev: amdgpu_device pointer
*
* Set the default gart_size based on parameters and available VRAM.
*/
void amdgpu_gart_set_defaults(struct amdgpu_device *adev)
{
adev->mc.gart_size = (uint64_t)amdgpu_gart_size << 20;
}
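Editorial note: the new helper simply promotes the megabyte module parameter to a byte count. A one-line sketch of the conversion, using the 256 MiB default from the parameter block above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned gart_size_mb = 256;                        /* module default */
	uint64_t gart_size = (uint64_t)gart_size_mb << 20;  /* MiB -> bytes */

	printf("gart_size = %llu bytes (%u MiB)\n",
	       (unsigned long long)gart_size, gart_size_mb);
	return 0;
}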
/**
* amdgpu_gart_table_ram_alloc - allocate system ram for gart page table
*
@ -131,7 +144,7 @@ int amdgpu_gart_table_vram_alloc(struct amdgpu_device *adev)
PAGE_SIZE, true, AMDGPU_GEM_DOMAIN_VRAM,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, &adev->gart.robj);
NULL, NULL, 0, &adev->gart.robj);
if (r) {
return r;
}
@ -262,6 +275,41 @@ int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
return 0;
}
/**
* amdgpu_gart_map - map dma_addresses into GART entries
*
* @adev: amdgpu_device pointer
* @offset: offset into the GPU's gart aperture
* @pages: number of pages to bind
* @dma_addr: DMA addresses of pages
*
* Map the dma_addresses into GART entries (all asics).
* Returns 0 for success, -EINVAL for failure.
*/
int amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
int pages, dma_addr_t *dma_addr, uint64_t flags,
void *dst)
{
uint64_t page_base;
unsigned i, j, t;
if (!adev->gart.ready) {
WARN(1, "trying to bind memory to uninitialized GART !\n");
return -EINVAL;
}
t = offset / AMDGPU_GPU_PAGE_SIZE;
for (i = 0; i < pages; i++) {
page_base = dma_addr[i];
for (j = 0; j < (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); j++, t++) {
amdgpu_gart_set_pte_pde(adev, dst, t, page_base, flags);
page_base += AMDGPU_GPU_PAGE_SIZE;
}
}
return 0;
}
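Editorial note: amdgpu_gart_map walks each CPU page and writes one PTE per 4 KiB GPU page, so on a kernel with larger CPU pages a single dma_addr entry expands into several consecutive GART entries. The user-space sketch below keeps only the index arithmetic, with the PTE write replaced by a printout; the 64 KiB CPU page size is an assumption chosen to make the expansion visible.

#include <stdint.h>
#include <stdio.h>

#define CPU_PAGE_SIZE  65536u   /* e.g. a 64 KiB PAGE_SIZE kernel */
#define GPU_PAGE_SIZE  4096u    /* AMDGPU_GPU_PAGE_SIZE */

static void gart_map(uint64_t offset, int pages, const uint64_t *dma_addr)
{
	unsigned t = offset / GPU_PAGE_SIZE;    /* first GART entry index */

	for (int i = 0; i < pages; i++) {
		uint64_t page_base = dma_addr[i];

		for (unsigned j = 0; j < CPU_PAGE_SIZE / GPU_PAGE_SIZE; j++, t++) {
			/* stand-in for amdgpu_gart_set_pte_pde() */
			printf("GART[%u] = 0x%llx\n", t,
			       (unsigned long long)page_base);
			page_base += GPU_PAGE_SIZE;
		}
	}
}

int main(void)
{
	uint64_t dma[] = { 0x100000000ULL, 0x200000000ULL };

	/* two CPU pages starting 128 KiB into the GART aperture */
	gart_map(128 * 1024, 2, dma);
	return 0;
}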
/**
* amdgpu_gart_bind - bind pages into the gart page table
*
@ -279,31 +327,30 @@ int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
int pages, struct page **pagelist, dma_addr_t *dma_addr,
uint64_t flags)
{
unsigned t;
unsigned p;
uint64_t page_base;
int i, j;
#ifdef CONFIG_DRM_AMDGPU_GART_DEBUGFS
unsigned i,t,p;
#endif
int r;
if (!adev->gart.ready) {
WARN(1, "trying to bind memory to uninitialized GART !\n");
return -EINVAL;
}
#ifdef CONFIG_DRM_AMDGPU_GART_DEBUGFS
t = offset / AMDGPU_GPU_PAGE_SIZE;
p = t / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE);
for (i = 0; i < pages; i++, p++) {
#ifdef CONFIG_DRM_AMDGPU_GART_DEBUGFS
for (i = 0; i < pages; i++, p++)
adev->gart.pages[p] = pagelist[i];
#endif
if (adev->gart.ptr) {
page_base = dma_addr[i];
for (j = 0; j < (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); j++, t++) {
amdgpu_gart_set_pte_pde(adev, adev->gart.ptr, t, page_base, flags);
page_base += AMDGPU_GPU_PAGE_SIZE;
}
}
if (adev->gart.ptr) {
r = amdgpu_gart_map(adev, offset, pages, dma_addr, flags,
adev->gart.ptr);
if (r)
return r;
}
mb();
amdgpu_gart_flush_gpu_tlb(adev, 0);
return 0;
@ -333,8 +380,8 @@ int amdgpu_gart_init(struct amdgpu_device *adev)
if (r)
return r;
/* Compute table size */
adev->gart.num_cpu_pages = adev->mc.gtt_size / PAGE_SIZE;
adev->gart.num_gpu_pages = adev->mc.gtt_size / AMDGPU_GPU_PAGE_SIZE;
adev->gart.num_cpu_pages = adev->mc.gart_size / PAGE_SIZE;
adev->gart.num_gpu_pages = adev->mc.gart_size / AMDGPU_GPU_PAGE_SIZE;
DRM_INFO("GART: num cpu pages %u, num gpu pages %u\n",
adev->gart.num_cpu_pages, adev->gart.num_gpu_pages);

View File

@ -0,0 +1,77 @@
/*
* Copyright 2017 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef __AMDGPU_GART_H__
#define __AMDGPU_GART_H__
#include <linux/types.h>
/*
* GART structures, functions & helpers
*/
struct amdgpu_device;
struct amdgpu_bo;
struct amdgpu_gart_funcs;
#define AMDGPU_GPU_PAGE_SIZE 4096
#define AMDGPU_GPU_PAGE_MASK (AMDGPU_GPU_PAGE_SIZE - 1)
#define AMDGPU_GPU_PAGE_SHIFT 12
#define AMDGPU_GPU_PAGE_ALIGN(a) (((a) + AMDGPU_GPU_PAGE_MASK) & ~AMDGPU_GPU_PAGE_MASK)
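Editorial note: the mask/align macros implement the usual round-up-to-page trick: add `PAGE_SIZE - 1`, then clear the low bits. A quick standalone check of the arithmetic (constants renamed to avoid clashing with the header):

#include <stdio.h>

#define GPU_PAGE_SIZE     4096ULL
#define GPU_PAGE_MASK     (GPU_PAGE_SIZE - 1)
#define GPU_PAGE_ALIGN(a) (((a) + GPU_PAGE_MASK) & ~GPU_PAGE_MASK)

int main(void)
{
	/* 1 -> 4096, 4096 -> 4096, 5000 -> 8192 */
	printf("%llu %llu %llu\n",
	       GPU_PAGE_ALIGN(1ULL), GPU_PAGE_ALIGN(4096ULL),
	       GPU_PAGE_ALIGN(5000ULL));
	return 0;
}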
struct amdgpu_gart {
dma_addr_t table_addr;
struct amdgpu_bo *robj;
void *ptr;
unsigned num_gpu_pages;
unsigned num_cpu_pages;
unsigned table_size;
#ifdef CONFIG_DRM_AMDGPU_GART_DEBUGFS
struct page **pages;
#endif
bool ready;
/* Asic default pte flags */
uint64_t gart_pte_flags;
const struct amdgpu_gart_funcs *gart_funcs;
};
void amdgpu_gart_set_defaults(struct amdgpu_device *adev);
int amdgpu_gart_table_ram_alloc(struct amdgpu_device *adev);
void amdgpu_gart_table_ram_free(struct amdgpu_device *adev);
int amdgpu_gart_table_vram_alloc(struct amdgpu_device *adev);
void amdgpu_gart_table_vram_free(struct amdgpu_device *adev);
int amdgpu_gart_table_vram_pin(struct amdgpu_device *adev);
void amdgpu_gart_table_vram_unpin(struct amdgpu_device *adev);
int amdgpu_gart_init(struct amdgpu_device *adev);
void amdgpu_gart_fini(struct amdgpu_device *adev);
int amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
int pages);
int amdgpu_gart_map(struct amdgpu_device *adev, uint64_t offset,
int pages, dma_addr_t *dma_addr, uint64_t flags,
void *dst);
int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
int pages, struct page **pagelist,
dma_addr_t *dma_addr, uint64_t flags);
#endif

View File

@ -49,7 +49,6 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
struct drm_gem_object **obj)
{
struct amdgpu_bo *robj;
unsigned long max_size;
int r;
*obj = NULL;
@ -58,20 +57,9 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
alignment = PAGE_SIZE;
}
if (!(initial_domain & (AMDGPU_GEM_DOMAIN_GDS | AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA))) {
/* Maximum bo size is the unpinned gtt size since we use the gtt to
* handle vram to system pool migrations.
*/
max_size = adev->mc.gtt_size - adev->gart_pin_size;
if (size > max_size) {
DRM_DEBUG("Allocation size %ldMb bigger than %ldMb limit\n",
size >> 20, max_size >> 20);
return -ENOMEM;
}
}
retry:
r = amdgpu_bo_create(adev, size, alignment, kernel, initial_domain,
flags, NULL, NULL, &robj);
flags, NULL, NULL, 0, &robj);
if (r) {
if (r != -ERESTARTSYS) {
if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
@ -103,7 +91,7 @@ void amdgpu_gem_force_release(struct amdgpu_device *adev)
spin_lock(&file->table_lock);
idr_for_each_entry(&file->object_idr, gobj, handle) {
WARN_ONCE(1, "And also active allocations!\n");
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
}
idr_destroy(&file->object_idr);
spin_unlock(&file->table_lock);
@ -237,9 +225,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
if (args->in.domain_flags & ~(AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_NO_CPU_ACCESS |
AMDGPU_GEM_CREATE_CPU_GTT_USWC |
AMDGPU_GEM_CREATE_VRAM_CLEARED|
AMDGPU_GEM_CREATE_SHADOW |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS))
AMDGPU_GEM_CREATE_VRAM_CLEARED))
return -EINVAL;
/* reject invalid gem domains */
@ -275,7 +261,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
r = drm_gem_handle_create(filp, gobj, &handle);
/* drop reference from allocate - handle holds it now */
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
if (r)
return r;
@ -318,7 +304,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void *data,
return r;
bo = gem_to_amdgpu_bo(gobj);
bo->prefered_domains = AMDGPU_GEM_DOMAIN_GTT;
bo->preferred_domains = AMDGPU_GEM_DOMAIN_GTT;
bo->allowed_domains = AMDGPU_GEM_DOMAIN_GTT;
r = amdgpu_ttm_tt_set_userptr(bo->tbo.ttm, args->addr, args->flags);
if (r)
@ -353,7 +339,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void *data,
r = drm_gem_handle_create(filp, gobj, &handle);
/* drop reference from allocate - handle holds it now */
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
if (r)
return r;
@ -367,7 +353,7 @@ unlock_mmap_sem:
up_read(&current->mm->mmap_sem);
release_object:
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
return r;
}
@ -386,11 +372,11 @@ int amdgpu_mode_dumb_mmap(struct drm_file *filp,
robj = gem_to_amdgpu_bo(gobj);
if (amdgpu_ttm_tt_get_usermm(robj->tbo.ttm) ||
(robj->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS)) {
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
return -EPERM;
}
*offset_p = amdgpu_bo_mmap_offset(robj);
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
return 0;
}
@ -460,7 +446,7 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
} else
r = ret;
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
return r;
}
@ -503,7 +489,7 @@ int amdgpu_gem_metadata_ioctl(struct drm_device *dev, void *data,
unreserve:
amdgpu_bo_unreserve(robj);
out:
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
return r;
}
@ -635,7 +621,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
switch (args->operation) {
case AMDGPU_VA_OP_MAP:
r = amdgpu_vm_alloc_pts(adev, bo_va->vm, args->va_address,
r = amdgpu_vm_alloc_pts(adev, bo_va->base.vm, args->va_address,
args->map_size);
if (r)
goto error_backoff;
@ -655,7 +641,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
args->map_size);
break;
case AMDGPU_VA_OP_REPLACE:
r = amdgpu_vm_alloc_pts(adev, bo_va->vm, args->va_address,
r = amdgpu_vm_alloc_pts(adev, bo_va->base.vm, args->va_address,
args->map_size);
if (r)
goto error_backoff;
@ -676,7 +662,7 @@ error_backoff:
ttm_eu_backoff_reservation(&ticket, &list);
error_unref:
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
return r;
}
@ -701,11 +687,11 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
switch (args->op) {
case AMDGPU_GEM_OP_GET_GEM_CREATE_INFO: {
struct drm_amdgpu_gem_create_in info;
void __user *out = (void __user *)(uintptr_t)args->value;
void __user *out = u64_to_user_ptr(args->value);
info.bo_size = robj->gem_base.size;
info.alignment = robj->tbo.mem.page_alignment << PAGE_SHIFT;
info.domains = robj->prefered_domains;
info.domains = robj->preferred_domains;
info.domain_flags = robj->flags;
amdgpu_bo_unreserve(robj);
if (copy_to_user(out, &info, sizeof(info)))
@ -723,10 +709,10 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
amdgpu_bo_unreserve(robj);
break;
}
robj->prefered_domains = args->value & (AMDGPU_GEM_DOMAIN_VRAM |
robj->preferred_domains = args->value & (AMDGPU_GEM_DOMAIN_VRAM |
AMDGPU_GEM_DOMAIN_GTT |
AMDGPU_GEM_DOMAIN_CPU);
robj->allowed_domains = robj->prefered_domains;
robj->allowed_domains = robj->preferred_domains;
if (robj->allowed_domains == AMDGPU_GEM_DOMAIN_VRAM)
robj->allowed_domains |= AMDGPU_GEM_DOMAIN_GTT;
@ -738,7 +724,7 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
}
out:
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
return r;
}
@ -766,7 +752,7 @@ int amdgpu_mode_dumb_create(struct drm_file *file_priv,
r = drm_gem_handle_create(file_priv, gobj, &handle);
/* drop reference from allocate - handle holds it now */
drm_gem_object_unreference_unlocked(gobj);
drm_gem_object_put_unlocked(gobj);
if (r) {
return r;
}
@ -784,6 +770,7 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
unsigned domain;
const char *placement;
unsigned pin_count;
uint64_t offset;
domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
switch (domain) {
@ -798,9 +785,12 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
placement = " CPU";
break;
}
seq_printf(m, "\t0x%08x: %12ld byte %s @ 0x%010Lx",
id, amdgpu_bo_size(bo), placement,
amdgpu_bo_gpu_offset(bo));
seq_printf(m, "\t0x%08x: %12ld byte %s",
id, amdgpu_bo_size(bo), placement);
offset = ACCESS_ONCE(bo->tbo.mem.start);
if (offset != AMDGPU_BO_INVALID_OFFSET)
seq_printf(m, " @ 0x%010Lx", offset);
pin_count = ACCESS_ONCE(bo->pin_count);
if (pin_count)

View File

@ -125,7 +125,8 @@ void amdgpu_gfx_compute_queue_acquire(struct amdgpu_device *adev)
if (mec >= adev->gfx.mec.num_mec)
break;
if (adev->gfx.mec.num_mec > 1) {
/* FIXME: spreading the queues across pipes causes perf regressions */
if (0) {
/* policy: amdgpu owns the first two queues of the first MEC */
if (mec == 0 && queue < 2)
set_bit(i, adev->gfx.mec.queue_bitmap);

View File

@ -28,7 +28,7 @@
struct amdgpu_gtt_mgr {
struct drm_mm mm;
spinlock_t lock;
uint64_t available;
atomic64_t available;
};
/**
@ -42,15 +42,19 @@ struct amdgpu_gtt_mgr {
static int amdgpu_gtt_mgr_init(struct ttm_mem_type_manager *man,
unsigned long p_size)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
struct amdgpu_gtt_mgr *mgr;
uint64_t start, size;
mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
if (!mgr)
return -ENOMEM;
drm_mm_init(&mgr->mm, 0, p_size);
start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
size = (adev->mc.gart_size >> PAGE_SHIFT) - start;
drm_mm_init(&mgr->mm, start, size);
spin_lock_init(&mgr->lock);
mgr->available = p_size;
atomic64_set(&mgr->available, p_size);
man->priv = mgr;
return 0;
}
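Editorial note: the reworked amdgpu_gtt_mgr_init carves the first AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS pages out of the managed range so they stay free for buffer-move transfer windows, and the drm_mm only hands out offsets above that point. The sketch below shows the arithmetic with assumed values for the two constants (they are defined elsewhere in the driver, not in this hunk).

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT           12
#define MAX_TRANSFER_SIZE    512   /* assumed pages per transfer window */
#define NUM_TRANSFER_WINDOWS 2     /* assumed number of windows */

int main(void)
{
	uint64_t gart_size = 256ULL << 20;   /* 256 MiB GART aperture */
	uint64_t start = MAX_TRANSFER_SIZE * NUM_TRANSFER_WINDOWS;
	uint64_t size  = (gart_size >> PAGE_SHIFT) - start;

	/* drm_mm manages pages [start, start + size) of the GART */
	printf("reserved %llu pages, manageable %llu pages\n",
	       (unsigned long long)start, (unsigned long long)size);
	return 0;
}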
@ -80,6 +84,20 @@ static int amdgpu_gtt_mgr_fini(struct ttm_mem_type_manager *man)
return 0;
}
/**
* amdgpu_gtt_mgr_is_allocated - Check if mem has address space
*
* @mem: the mem object to check
*
* Check if a mem object has already address space allocated.
*/
bool amdgpu_gtt_mgr_is_allocated(struct ttm_mem_reg *mem)
{
struct drm_mm_node *node = mem->mm_node;
return (node->start != AMDGPU_BO_INVALID_OFFSET);
}
/**
* amdgpu_gtt_mgr_alloc - allocate new ranges
*
@ -95,13 +113,14 @@ int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
const struct ttm_place *place,
struct ttm_mem_reg *mem)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
struct amdgpu_gtt_mgr *mgr = man->priv;
struct drm_mm_node *node = mem->mm_node;
enum drm_mm_insert_mode mode;
unsigned long fpfn, lpfn;
int r;
if (node->start != AMDGPU_BO_INVALID_OFFSET)
if (amdgpu_gtt_mgr_is_allocated(mem))
return 0;
if (place)
@ -112,7 +131,7 @@ int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
if (place && place->lpfn)
lpfn = place->lpfn;
else
lpfn = man->size;
lpfn = adev->gart.num_cpu_pages;
mode = DRM_MM_INSERT_BEST;
if (place && place->flags & TTM_PL_FLAG_TOPDOWN)
@ -134,15 +153,6 @@ int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
return r;
}
void amdgpu_gtt_mgr_print(struct seq_file *m, struct ttm_mem_type_manager *man)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
struct amdgpu_gtt_mgr *mgr = man->priv;
seq_printf(m, "man size:%llu pages, gtt available:%llu pages, usage:%lluMB\n",
man->size, mgr->available, (u64)atomic64_read(&adev->gtt_usage) >> 20);
}
/**
* amdgpu_gtt_mgr_new - allocate a new node
*
@ -163,11 +173,11 @@ static int amdgpu_gtt_mgr_new(struct ttm_mem_type_manager *man,
int r;
spin_lock(&mgr->lock);
if (mgr->available < mem->num_pages) {
if (atomic64_read(&mgr->available) < mem->num_pages) {
spin_unlock(&mgr->lock);
return 0;
}
mgr->available -= mem->num_pages;
atomic64_sub(mem->num_pages, &mgr->available);
spin_unlock(&mgr->lock);
node = kzalloc(sizeof(*node), GFP_KERNEL);
@ -194,9 +204,7 @@ static int amdgpu_gtt_mgr_new(struct ttm_mem_type_manager *man,
return 0;
err_out:
spin_lock(&mgr->lock);
mgr->available += mem->num_pages;
spin_unlock(&mgr->lock);
atomic64_add(mem->num_pages, &mgr->available);
return r;
}
@ -223,30 +231,47 @@ static void amdgpu_gtt_mgr_del(struct ttm_mem_type_manager *man,
spin_lock(&mgr->lock);
if (node->start != AMDGPU_BO_INVALID_OFFSET)
drm_mm_remove_node(node);
mgr->available += mem->num_pages;
spin_unlock(&mgr->lock);
atomic64_add(mem->num_pages, &mgr->available);
kfree(node);
mem->mm_node = NULL;
}
/**
* amdgpu_gtt_mgr_usage - return usage of GTT domain
*
* @man: TTM memory type manager
*
* Return how many bytes are used in the GTT domain
*/
uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man)
{
struct amdgpu_gtt_mgr *mgr = man->priv;
return (u64)(man->size - atomic64_read(&mgr->available)) * PAGE_SIZE;
}
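Editorial note: amdgpu_gtt_mgr_usage derives the byte count from the page counters rather than keeping a separate byte counter: used pages are the manager size minus what is still available, multiplied by the CPU page size. A tiny worked example with assumed numbers:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void)
{
	uint64_t man_size  = 65536;   /* total pages managed (256 MiB GTT) */
	int64_t  available = 61440;   /* pages still free */
	uint64_t usage = (man_size - (uint64_t)available) * PAGE_SIZE;

	printf("GTT usage: %llu bytes (%llu MiB)\n",
	       (unsigned long long)usage,
	       (unsigned long long)(usage >> 20));
	return 0;
}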
/**
* amdgpu_gtt_mgr_debug - dump VRAM table
*
* @man: TTM memory type manager
* @prefix: text prefix
* @printer: DRM printer to use
*
* Dump the table content using printk.
*/
static void amdgpu_gtt_mgr_debug(struct ttm_mem_type_manager *man,
const char *prefix)
struct drm_printer *printer)
{
struct amdgpu_gtt_mgr *mgr = man->priv;
struct drm_printer p = drm_debug_printer(prefix);
spin_lock(&mgr->lock);
drm_mm_print(&mgr->mm, &p);
drm_mm_print(&mgr->mm, printer);
spin_unlock(&mgr->lock);
drm_printf(printer, "man size:%llu pages, gtt available:%llu pages, usage:%lluMB\n",
man->size, (u64)atomic64_read(&mgr->available),
amdgpu_gtt_mgr_usage(man) >> 20);
}
const struct ttm_mem_type_manager_func amdgpu_gtt_mgr_func = {

View File

@ -130,6 +130,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
unsigned i;
int r = 0;
bool need_pipe_sync = false;
if (num_ibs == 0)
return -EINVAL;
@ -165,15 +166,15 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
if (ring->funcs->emit_pipeline_sync && job &&
((tmp = amdgpu_sync_get_fence(&job->sched_sync)) ||
amdgpu_vm_need_pipeline_sync(ring, job))) {
amdgpu_ring_emit_pipeline_sync(ring);
need_pipe_sync = true;
dma_fence_put(tmp);
}
if (ring->funcs->insert_start)
ring->funcs->insert_start(ring);
if (vm) {
r = amdgpu_vm_flush(ring, job);
if (job) {
r = amdgpu_vm_flush(ring, job, need_pipe_sync);
if (r) {
amdgpu_ring_undo(ring);
return r;

View File

@ -220,6 +220,10 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
int r = 0;
spin_lock_init(&adev->irq.lock);
/* Disable vblank irqs aggressively for power-saving */
adev->ddev->vblank_disable_immediate = true;
r = drm_vblank_init(adev->ddev, adev->mode_info.num_crtc);
if (r) {
return r;
@ -263,7 +267,6 @@ void amdgpu_irq_fini(struct amdgpu_device *adev)
{
unsigned i, j;
drm_vblank_cleanup(adev->ddev);
if (adev->irq.installed) {
drm_irq_uninstall(adev->ddev);
adev->irq.installed = false;

View File

@ -81,6 +81,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size,
r = amdgpu_ib_get(adev, NULL, size, &(*job)->ibs[0]);
if (r)
kfree(*job);
else
(*job)->vm_pd_addr = adev->gart.table_addr;
return r;
}

View File

@ -158,7 +158,6 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
"Error during ACPI methods call\n");
}
amdgpu_amdkfd_load_interface(adev);
amdgpu_amdkfd_device_probe(adev);
amdgpu_amdkfd_device_init(adev);
@ -456,13 +455,13 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
ui64 = atomic64_read(&adev->num_vram_cpu_page_faults);
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_VRAM_USAGE:
ui64 = atomic64_read(&adev->vram_usage);
ui64 = amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_VIS_VRAM_USAGE:
ui64 = atomic64_read(&adev->vram_vis_usage);
ui64 = amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_GTT_USAGE:
ui64 = atomic64_read(&adev->gtt_usage);
ui64 = amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]);
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_GDS_CONFIG: {
struct drm_amdgpu_info_gds gds_info;
@ -485,7 +484,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
vram_gtt.vram_size -= adev->vram_pin_size;
vram_gtt.vram_cpu_accessible_size = adev->mc.visible_vram_size;
vram_gtt.vram_cpu_accessible_size -= (adev->vram_pin_size - adev->invisible_pin_size);
vram_gtt.gtt_size = adev->mc.gtt_size;
vram_gtt.gtt_size = adev->mman.bdev.man[TTM_PL_TT].size;
vram_gtt.gtt_size *= PAGE_SIZE;
vram_gtt.gtt_size -= adev->gart_pin_size;
return copy_to_user(out, &vram_gtt,
min((size_t)size, sizeof(vram_gtt))) ? -EFAULT : 0;
@ -497,7 +497,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
mem.vram.total_heap_size = adev->mc.real_vram_size;
mem.vram.usable_heap_size =
adev->mc.real_vram_size - adev->vram_pin_size;
mem.vram.heap_usage = atomic64_read(&adev->vram_usage);
mem.vram.heap_usage =
amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
mem.vram.max_allocation = mem.vram.usable_heap_size * 3 / 4;
mem.cpu_accessible_vram.total_heap_size =
@ -506,14 +507,16 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
adev->mc.visible_vram_size -
(adev->vram_pin_size - adev->invisible_pin_size);
mem.cpu_accessible_vram.heap_usage =
atomic64_read(&adev->vram_vis_usage);
amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
mem.cpu_accessible_vram.max_allocation =
mem.cpu_accessible_vram.usable_heap_size * 3 / 4;
mem.gtt.total_heap_size = adev->mc.gtt_size;
mem.gtt.usable_heap_size =
adev->mc.gtt_size - adev->gart_pin_size;
mem.gtt.heap_usage = atomic64_read(&adev->gtt_usage);
mem.gtt.total_heap_size = adev->mman.bdev.man[TTM_PL_TT].size;
mem.gtt.total_heap_size *= PAGE_SIZE;
mem.gtt.usable_heap_size = mem.gtt.total_heap_size
- adev->gart_pin_size;
mem.gtt.heap_usage =
amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]);
mem.gtt.max_allocation = mem.gtt.usable_heap_size * 3 / 4;
return copy_to_user(out, &mem,
@ -571,8 +574,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
dev_info.max_engine_clock = amdgpu_dpm_get_sclk(adev, false) * 10;
dev_info.max_memory_clock = amdgpu_dpm_get_mclk(adev, false) * 10;
} else {
dev_info.max_engine_clock = adev->pm.default_sclk * 10;
dev_info.max_memory_clock = adev->pm.default_mclk * 10;
dev_info.max_engine_clock = adev->clock.default_sclk * 10;
dev_info.max_memory_clock = adev->clock.default_mclk * 10;
}
dev_info.enabled_rb_pipes_mask = adev->gfx.config.backend_enable_mask;
dev_info.num_rb_pipes = adev->gfx.config.max_backends_per_se *
@ -587,10 +590,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
dev_info.virtual_address_offset = AMDGPU_VA_RESERVED_SIZE;
dev_info.virtual_address_max = (uint64_t)adev->vm_manager.max_pfn * AMDGPU_GPU_PAGE_SIZE;
dev_info.virtual_address_alignment = max((int)PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE);
dev_info.pte_fragment_size = (1 << AMDGPU_LOG2_PAGES_PER_FRAG) *
AMDGPU_GPU_PAGE_SIZE;
dev_info.pte_fragment_size = (1 << adev->vm_manager.fragment_size) * AMDGPU_GPU_PAGE_SIZE;
dev_info.gart_page_size = AMDGPU_GPU_PAGE_SIZE;
dev_info.cu_active_number = adev->gfx.cu_info.number;
dev_info.cu_ao_mask = adev->gfx.cu_info.ao_cu_mask;
dev_info.ce_ram_size = adev->gfx.ce_ram_size;
@ -839,7 +840,7 @@ int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
}
if (amdgpu_sriov_vf(adev)) {
r = amdgpu_map_static_csa(adev, &fpriv->vm);
r = amdgpu_map_static_csa(adev, &fpriv->vm, &fpriv->csa_va);
if (r)
goto out_suspend;
}
@ -892,8 +893,8 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,
if (amdgpu_sriov_vf(adev)) {
/* TODO: how to handle reserve failure */
BUG_ON(amdgpu_bo_reserve(adev->virt.csa_obj, true));
amdgpu_vm_bo_rmv(adev, fpriv->vm.csa_bo_va);
fpriv->vm.csa_bo_va = NULL;
amdgpu_vm_bo_rmv(adev, fpriv->csa_va);
fpriv->csa_va = NULL;
amdgpu_bo_unreserve(adev->virt.csa_obj);
}

View File

@ -257,15 +257,7 @@ struct amdgpu_audio {
int num_pins;
};
struct amdgpu_mode_mc_save {
u32 vga_render_control;
u32 vga_hdp_control;
bool crtc_enabled[AMDGPU_MAX_CRTCS];
};
struct amdgpu_display_funcs {
/* vga render */
void (*set_vga_render_state)(struct amdgpu_device *adev, bool render);
/* display watermarks */
void (*bandwidth_update)(struct amdgpu_device *adev);
/* get frame count */
@ -300,10 +292,6 @@ struct amdgpu_display_funcs {
uint16_t connector_object_id,
struct amdgpu_hpd *hpd,
struct amdgpu_router *router);
void (*stop_mc_access)(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save);
void (*resume_mc_access)(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save);
};
struct amdgpu_mode_info {
@ -369,7 +357,6 @@ struct amdgpu_atom_ss {
struct amdgpu_crtc {
struct drm_crtc base;
int crtc_id;
u16 lut_r[256], lut_g[256], lut_b[256];
bool enabled;
bool can_tile;
uint32_t crtc_offset;

View File

@ -37,55 +37,6 @@
#include "amdgpu.h"
#include "amdgpu_trace.h"
static u64 amdgpu_get_vis_part_size(struct amdgpu_device *adev,
struct ttm_mem_reg *mem)
{
if (mem->start << PAGE_SHIFT >= adev->mc.visible_vram_size)
return 0;
return ((mem->start << PAGE_SHIFT) + mem->size) >
adev->mc.visible_vram_size ?
adev->mc.visible_vram_size - (mem->start << PAGE_SHIFT) :
mem->size;
}
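Editorial note: the helper being removed here clamps a BO's extent against the CPU-visible VRAM limit so only the visible portion is accounted; the same visibility comparison reappears later in this series in the bytes-moved reporting. A self-contained sketch of the clamping arithmetic, with an illustrative 256 MiB BAR:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* How many bytes of [start_page << PAGE_SHIFT, + size) fall below the
 * CPU-visible VRAM limit. */
static uint64_t vis_part_size(uint64_t start_page, uint64_t size,
			      uint64_t visible_vram_size)
{
	uint64_t start = start_page << PAGE_SHIFT;

	if (start >= visible_vram_size)
		return 0;
	return (start + size > visible_vram_size) ?
		visible_vram_size - start : size;
}

int main(void)
{
	uint64_t visible = 256ULL << 20;   /* 256 MiB BAR */

	/* a 64 MiB BO starting 224 MiB in: only 32 MiB is CPU visible */
	printf("%llu MiB visible\n",
	       (unsigned long long)(vis_part_size((224ULL << 20) >> PAGE_SHIFT,
						  64ULL << 20, visible) >> 20));
	return 0;
}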
static void amdgpu_update_memory_usage(struct amdgpu_device *adev,
struct ttm_mem_reg *old_mem,
struct ttm_mem_reg *new_mem)
{
u64 vis_size;
if (!adev)
return;
if (new_mem) {
switch (new_mem->mem_type) {
case TTM_PL_TT:
atomic64_add(new_mem->size, &adev->gtt_usage);
break;
case TTM_PL_VRAM:
atomic64_add(new_mem->size, &adev->vram_usage);
vis_size = amdgpu_get_vis_part_size(adev, new_mem);
atomic64_add(vis_size, &adev->vram_vis_usage);
break;
}
}
if (old_mem) {
switch (old_mem->mem_type) {
case TTM_PL_TT:
atomic64_sub(old_mem->size, &adev->gtt_usage);
break;
case TTM_PL_VRAM:
atomic64_sub(old_mem->size, &adev->vram_usage);
vis_size = amdgpu_get_vis_part_size(adev, old_mem);
atomic64_sub(vis_size, &adev->vram_vis_usage);
break;
}
}
}
static void amdgpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
@ -93,7 +44,7 @@ static void amdgpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
bo = container_of(tbo, struct amdgpu_bo, tbo);
amdgpu_update_memory_usage(adev, &bo->tbo.mem, NULL);
amdgpu_bo_kunmap(bo);
drm_gem_object_release(&bo->gem_base);
amdgpu_bo_unref(&bo->parent);
@ -219,7 +170,7 @@ static void amdgpu_fill_placement_to_bo(struct amdgpu_bo *bo,
}
/**
* amdgpu_bo_create_kernel - create BO for kernel use
* amdgpu_bo_create_reserved - create reserved BO for kernel use
*
* @adev: amdgpu device object
* @size: size for the new BO
@ -229,24 +180,30 @@ static void amdgpu_fill_placement_to_bo(struct amdgpu_bo *bo,
* @gpu_addr: GPU addr of the pinned BO
* @cpu_addr: optional CPU address mapping
*
* Allocates and pins a BO for kernel internal use.
* Allocates and pins a BO for kernel internal use, and returns it still
* reserved.
*
* Returns 0 on success, negative error code otherwise.
*/
int amdgpu_bo_create_kernel(struct amdgpu_device *adev,
unsigned long size, int align,
u32 domain, struct amdgpu_bo **bo_ptr,
u64 *gpu_addr, void **cpu_addr)
int amdgpu_bo_create_reserved(struct amdgpu_device *adev,
unsigned long size, int align,
u32 domain, struct amdgpu_bo **bo_ptr,
u64 *gpu_addr, void **cpu_addr)
{
bool free = false;
int r;
r = amdgpu_bo_create(adev, size, align, true, domain,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, bo_ptr);
if (r) {
dev_err(adev->dev, "(%d) failed to allocate kernel bo\n", r);
return r;
if (!*bo_ptr) {
r = amdgpu_bo_create(adev, size, align, true, domain,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, 0, bo_ptr);
if (r) {
dev_err(adev->dev, "(%d) failed to allocate kernel bo\n",
r);
return r;
}
free = true;
}
r = amdgpu_bo_reserve(*bo_ptr, false);
@ -269,19 +226,51 @@ int amdgpu_bo_create_kernel(struct amdgpu_device *adev,
}
}
amdgpu_bo_unreserve(*bo_ptr);
return 0;
error_unreserve:
amdgpu_bo_unreserve(*bo_ptr);
error_free:
amdgpu_bo_unref(bo_ptr);
if (free)
amdgpu_bo_unref(bo_ptr);
return r;
}
/**
* amdgpu_bo_create_kernel - create BO for kernel use
*
* @adev: amdgpu device object
* @size: size for the new BO
* @align: alignment for the new BO
* @domain: where to place it
* @bo_ptr: resulting BO
* @gpu_addr: GPU addr of the pinned BO
* @cpu_addr: optional CPU address mapping
*
* Allocates and pins a BO for kernel internal use.
*
* Returns 0 on success, negative error code otherwise.
*/
int amdgpu_bo_create_kernel(struct amdgpu_device *adev,
unsigned long size, int align,
u32 domain, struct amdgpu_bo **bo_ptr,
u64 *gpu_addr, void **cpu_addr)
{
int r;
r = amdgpu_bo_create_reserved(adev, size, align, domain, bo_ptr,
gpu_addr, cpu_addr);
if (r)
return r;
amdgpu_bo_unreserve(*bo_ptr);
return 0;
}
/**
* amdgpu_bo_free_kernel - free BO for kernel use
*
@ -317,12 +306,13 @@ int amdgpu_bo_create_restricted(struct amdgpu_device *adev,
struct sg_table *sg,
struct ttm_placement *placement,
struct reservation_object *resv,
uint64_t init_value,
struct amdgpu_bo **bo_ptr)
{
struct amdgpu_bo *bo;
enum ttm_bo_type type;
unsigned long page_align;
u64 initial_bytes_moved;
u64 initial_bytes_moved, bytes_moved;
size_t acc_size;
int r;
@ -351,13 +341,13 @@ int amdgpu_bo_create_restricted(struct amdgpu_device *adev,
}
INIT_LIST_HEAD(&bo->shadow_list);
INIT_LIST_HEAD(&bo->va);
bo->prefered_domains = domain & (AMDGPU_GEM_DOMAIN_VRAM |
bo->preferred_domains = domain & (AMDGPU_GEM_DOMAIN_VRAM |
AMDGPU_GEM_DOMAIN_GTT |
AMDGPU_GEM_DOMAIN_CPU |
AMDGPU_GEM_DOMAIN_GDS |
AMDGPU_GEM_DOMAIN_GWS |
AMDGPU_GEM_DOMAIN_OA);
bo->allowed_domains = bo->prefered_domains;
bo->allowed_domains = bo->preferred_domains;
if (!kernel && bo->allowed_domains == AMDGPU_GEM_DOMAIN_VRAM)
bo->allowed_domains |= AMDGPU_GEM_DOMAIN_GTT;
@ -398,8 +388,14 @@ int amdgpu_bo_create_restricted(struct amdgpu_device *adev,
r = ttm_bo_init_reserved(&adev->mman.bdev, &bo->tbo, size, type,
&bo->placement, page_align, !kernel, NULL,
acc_size, sg, resv, &amdgpu_ttm_bo_destroy);
amdgpu_cs_report_moved_bytes(adev,
atomic64_read(&adev->num_bytes_moved) - initial_bytes_moved);
bytes_moved = atomic64_read(&adev->num_bytes_moved) -
initial_bytes_moved;
if (adev->mc.visible_vram_size < adev->mc.real_vram_size &&
bo->tbo.mem.mem_type == TTM_PL_VRAM &&
bo->tbo.mem.start < adev->mc.visible_vram_size >> PAGE_SHIFT)
amdgpu_cs_report_moved_bytes(adev, bytes_moved, bytes_moved);
else
amdgpu_cs_report_moved_bytes(adev, bytes_moved, 0);
if (unlikely(r != 0))
return r;
@ -411,7 +407,7 @@ int amdgpu_bo_create_restricted(struct amdgpu_device *adev,
bo->tbo.mem.placement & TTM_PL_FLAG_VRAM) {
struct dma_fence *fence;
r = amdgpu_fill_buffer(bo, 0, bo->tbo.resv, &fence);
r = amdgpu_fill_buffer(bo, init_value, bo->tbo.resv, &fence);
if (unlikely(r))
goto fail_unreserve;
@ -426,6 +422,10 @@ int amdgpu_bo_create_restricted(struct amdgpu_device *adev,
trace_amdgpu_bo_create(bo);
/* Treat CPU_ACCESS_REQUIRED only as a hint if given by UMD */
if (type == ttm_bo_type_device)
bo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
return 0;
fail_unreserve:
@ -459,6 +459,7 @@ static int amdgpu_bo_create_shadow(struct amdgpu_device *adev,
AMDGPU_GEM_CREATE_CPU_GTT_USWC,
NULL, &placement,
bo->tbo.resv,
0,
&bo->shadow);
if (!r) {
bo->shadow->parent = amdgpu_bo_ref(bo);
@ -470,11 +471,15 @@ static int amdgpu_bo_create_shadow(struct amdgpu_device *adev,
return r;
}
/* init_value will only take effect when flags contains
* AMDGPU_GEM_CREATE_VRAM_CLEARED.
*/
int amdgpu_bo_create(struct amdgpu_device *adev,
unsigned long size, int byte_align,
bool kernel, u32 domain, u64 flags,
struct sg_table *sg,
struct reservation_object *resv,
uint64_t init_value,
struct amdgpu_bo **bo_ptr)
{
struct ttm_placement placement = {0};
@ -489,7 +494,7 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
r = amdgpu_bo_create_restricted(adev, size, byte_align, kernel,
domain, flags, sg, &placement,
resv, bo_ptr);
resv, init_value, bo_ptr);
if (r)
return r;
@ -535,7 +540,7 @@ int amdgpu_bo_backup_to_shadow(struct amdgpu_device *adev,
r = amdgpu_copy_buffer(ring, bo_addr, shadow_addr,
amdgpu_bo_size(bo), resv, fence,
direct);
direct, false);
if (!r)
amdgpu_bo_fence(bo, *fence, true);
@ -551,7 +556,7 @@ int amdgpu_bo_validate(struct amdgpu_bo *bo)
if (bo->pin_count)
return 0;
domain = bo->prefered_domains;
domain = bo->preferred_domains;
retry:
amdgpu_ttm_placement_from_domain(bo, domain);
@ -588,7 +593,7 @@ int amdgpu_bo_restore_from_shadow(struct amdgpu_device *adev,
r = amdgpu_copy_buffer(ring, shadow_addr, bo_addr,
amdgpu_bo_size(bo), resv, fence,
direct);
direct, false);
if (!r)
amdgpu_bo_fence(bo, *fence, true);
@ -598,16 +603,16 @@ err:
int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
{
bool is_iomem;
void *kptr;
long r;
if (bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS)
return -EPERM;
if (bo->kptr) {
if (ptr) {
*ptr = bo->kptr;
}
kptr = amdgpu_bo_kptr(bo);
if (kptr) {
if (ptr)
*ptr = kptr;
return 0;
}
@ -620,19 +625,23 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
if (r)
return r;
bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
if (ptr)
*ptr = bo->kptr;
*ptr = amdgpu_bo_kptr(bo);
return 0;
}
void *amdgpu_bo_kptr(struct amdgpu_bo *bo)
{
bool is_iomem;
return ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
}
void amdgpu_bo_kunmap(struct amdgpu_bo *bo)
{
if (bo->kptr == NULL)
return;
bo->kptr = NULL;
ttm_bo_kunmap(&bo->kmap);
if (bo->kmap.bo)
ttm_bo_kunmap(&bo->kmap);
}
struct amdgpu_bo *amdgpu_bo_ref(struct amdgpu_bo *bo)
@ -724,15 +733,16 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
dev_err(adev->dev, "%p pin failed\n", bo);
goto error;
}
r = amdgpu_ttm_bind(&bo->tbo, &bo->tbo.mem);
if (unlikely(r)) {
dev_err(adev->dev, "%p bind failed\n", bo);
goto error;
}
bo->pin_count = 1;
if (gpu_addr != NULL)
if (gpu_addr != NULL) {
r = amdgpu_ttm_bind(&bo->tbo, &bo->tbo.mem);
if (unlikely(r)) {
dev_err(adev->dev, "%p bind failed\n", bo);
goto error;
}
*gpu_addr = amdgpu_bo_gpu_offset(bo);
}
if (domain == AMDGPU_GEM_DOMAIN_VRAM) {
adev->vram_pin_size += amdgpu_bo_size(bo);
if (bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS)
@ -921,6 +931,8 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
abo = container_of(bo, struct amdgpu_bo, tbo);
amdgpu_vm_bo_invalidate(adev, abo);
amdgpu_bo_kunmap(abo);
/* remember the eviction */
if (evict)
atomic64_inc(&adev->num_evictions);
@ -930,8 +942,6 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
return;
/* move_notify is called before move happens */
amdgpu_update_memory_usage(adev, &bo->mem, new_mem);
trace_amdgpu_ttm_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
}
@ -939,19 +949,22 @@ int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
struct amdgpu_bo *abo;
unsigned long offset, size, lpfn;
int i, r;
unsigned long offset, size;
int r;
if (!amdgpu_ttm_bo_is_amdgpu_bo(bo))
return 0;
abo = container_of(bo, struct amdgpu_bo, tbo);
/* Remember that this BO was accessed by the CPU */
abo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
if (bo->mem.mem_type != TTM_PL_VRAM)
return 0;
size = bo->mem.num_pages << PAGE_SHIFT;
offset = bo->mem.start << PAGE_SHIFT;
/* TODO: figure out how to map scattered VRAM to the CPU */
if ((offset + size) <= adev->mc.visible_vram_size)
return 0;
@ -961,26 +974,21 @@ int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
/* hurrah the memory is not visible ! */
atomic64_inc(&adev->num_vram_cpu_page_faults);
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM);
lpfn = adev->mc.visible_vram_size >> PAGE_SHIFT;
for (i = 0; i < abo->placement.num_placement; i++) {
/* Force into visible VRAM */
if ((abo->placements[i].flags & TTM_PL_FLAG_VRAM) &&
(!abo->placements[i].lpfn ||
abo->placements[i].lpfn > lpfn))
abo->placements[i].lpfn = lpfn;
}
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM |
AMDGPU_GEM_DOMAIN_GTT);
/* Avoid costly evictions; only set GTT as a busy placement */
abo->placement.num_busy_placement = 1;
abo->placement.busy_placement = &abo->placements[1];
r = ttm_bo_validate(bo, &abo->placement, false, false);
if (unlikely(r == -ENOMEM)) {
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT);
return ttm_bo_validate(bo, &abo->placement, false, false);
} else if (unlikely(r != 0)) {
if (unlikely(r != 0))
return r;
}
offset = bo->mem.start << PAGE_SHIFT;
/* this should never happen */
if ((offset + size) > adev->mc.visible_vram_size)
if (bo->mem.mem_type == TTM_PL_VRAM &&
(offset + size) > adev->mc.visible_vram_size)
return -EINVAL;
return 0;

View File

@ -33,6 +33,61 @@
#define AMDGPU_BO_INVALID_OFFSET LONG_MAX
/* bo virtual addresses in a vm */
struct amdgpu_bo_va_mapping {
struct list_head list;
struct rb_node rb;
uint64_t start;
uint64_t last;
uint64_t __subtree_last;
uint64_t offset;
uint64_t flags;
};
/* User space allocated BO in a VM */
struct amdgpu_bo_va {
struct amdgpu_vm_bo_base base;
/* protected by bo being reserved */
struct dma_fence *last_pt_update;
unsigned ref_count;
/* mappings for this bo_va */
struct list_head invalids;
struct list_head valids;
};
struct amdgpu_bo {
/* Protected by tbo.reserved */
u32 preferred_domains;
u32 allowed_domains;
struct ttm_place placements[AMDGPU_GEM_DOMAIN_MAX + 1];
struct ttm_placement placement;
struct ttm_buffer_object tbo;
struct ttm_bo_kmap_obj kmap;
u64 flags;
unsigned pin_count;
u64 tiling_flags;
u64 metadata_flags;
void *metadata;
u32 metadata_size;
unsigned prime_shared_count;
/* list of all virtual address to which this bo is associated to */
struct list_head va;
/* Constant after initialization */
struct drm_gem_object gem_base;
struct amdgpu_bo *parent;
struct amdgpu_bo *shadow;
struct ttm_bo_kmap_obj dma_buf_vmap;
struct amdgpu_mn *mn;
union {
struct list_head mn_list;
struct list_head shadow_list;
};
};
/**
* amdgpu_mem_type_to_domain - return domain corresponding to mem_type
* @mem_type: ttm memory type
@ -120,7 +175,11 @@ static inline u64 amdgpu_bo_mmap_offset(struct amdgpu_bo *bo)
*/
static inline bool amdgpu_bo_gpu_accessible(struct amdgpu_bo *bo)
{
return bo->tbo.mem.mem_type != TTM_PL_SYSTEM;
switch (bo->tbo.mem.mem_type) {
case TTM_PL_TT: return amdgpu_ttm_is_bound(bo->tbo.ttm);
case TTM_PL_VRAM: return true;
default: return false;
}
}
int amdgpu_bo_create(struct amdgpu_device *adev,
@ -128,6 +187,7 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
bool kernel, u32 domain, u64 flags,
struct sg_table *sg,
struct reservation_object *resv,
uint64_t init_value,
struct amdgpu_bo **bo_ptr);
int amdgpu_bo_create_restricted(struct amdgpu_device *adev,
unsigned long size, int byte_align,
@ -135,7 +195,12 @@ int amdgpu_bo_create_restricted(struct amdgpu_device *adev,
struct sg_table *sg,
struct ttm_placement *placement,
struct reservation_object *resv,
uint64_t init_value,
struct amdgpu_bo **bo_ptr);
int amdgpu_bo_create_reserved(struct amdgpu_device *adev,
unsigned long size, int align,
u32 domain, struct amdgpu_bo **bo_ptr,
u64 *gpu_addr, void **cpu_addr);
int amdgpu_bo_create_kernel(struct amdgpu_device *adev,
unsigned long size, int align,
u32 domain, struct amdgpu_bo **bo_ptr,
@ -143,6 +208,7 @@ int amdgpu_bo_create_kernel(struct amdgpu_device *adev,
void amdgpu_bo_free_kernel(struct amdgpu_bo **bo, u64 *gpu_addr,
void **cpu_addr);
int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr);
void *amdgpu_bo_kptr(struct amdgpu_bo *bo);
void amdgpu_bo_kunmap(struct amdgpu_bo *bo);
struct amdgpu_bo *amdgpu_bo_ref(struct amdgpu_bo *bo);
void amdgpu_bo_unref(struct amdgpu_bo **bo);


@ -30,6 +30,7 @@ struct cg_flag_name
const char *name;
};
void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);
int amdgpu_pm_sysfs_init(struct amdgpu_device *adev);
void amdgpu_pm_sysfs_fini(struct amdgpu_device *adev);
void amdgpu_pm_print_power_states(struct amdgpu_device *adev);


@ -69,7 +69,7 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
ww_mutex_lock(&resv->lock, NULL);
ret = amdgpu_bo_create(adev, attach->dmabuf->size, PAGE_SIZE, false,
AMDGPU_GEM_DOMAIN_GTT, 0, sg, resv, &bo);
AMDGPU_GEM_DOMAIN_GTT, 0, sg, resv, 0, &bo);
ww_mutex_unlock(&resv->lock);
if (ret)
return ERR_PTR(ret);


@ -63,8 +63,13 @@ static int psp_sw_init(void *handle)
psp->smu_reload_quirk = psp_v3_1_smu_reload_quirk;
break;
case CHIP_RAVEN:
#if 0
psp->init_microcode = psp_v10_0_init_microcode;
#endif
psp->prep_cmd_buf = psp_v10_0_prep_cmd_buf;
psp->ring_init = psp_v10_0_ring_init;
psp->ring_create = psp_v10_0_ring_create;
psp->ring_destroy = psp_v10_0_ring_destroy;
psp->cmd_submit = psp_v10_0_cmd_submit;
psp->compare_sram_data = psp_v10_0_compare_sram_data;
break;
@ -95,9 +100,8 @@ int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
int i;
struct amdgpu_device *adev = psp->adev;
val = RREG32(reg_index);
for (i = 0; i < adev->usec_timeout; i++) {
val = RREG32(reg_index);
if (check_changed) {
if (val != reg_val)
return 0;
@ -118,33 +122,18 @@ psp_cmd_submit_buf(struct psp_context *psp,
int index)
{
int ret;
struct amdgpu_bo *cmd_buf_bo;
uint64_t cmd_buf_mc_addr;
struct psp_gfx_cmd_resp *cmd_buf_mem;
struct amdgpu_device *adev = psp->adev;
ret = amdgpu_bo_create_kernel(adev, PSP_CMD_BUFFER_SIZE, PAGE_SIZE,
AMDGPU_GEM_DOMAIN_VRAM,
&cmd_buf_bo, &cmd_buf_mc_addr,
(void **)&cmd_buf_mem);
if (ret)
return ret;
memset(psp->cmd_buf_mem, 0, PSP_CMD_BUFFER_SIZE);
memset(cmd_buf_mem, 0, PSP_CMD_BUFFER_SIZE);
memcpy(psp->cmd_buf_mem, cmd, sizeof(struct psp_gfx_cmd_resp));
memcpy(cmd_buf_mem, cmd, sizeof(struct psp_gfx_cmd_resp));
ret = psp_cmd_submit(psp, ucode, cmd_buf_mc_addr,
ret = psp_cmd_submit(psp, ucode, psp->cmd_buf_mc_addr,
fence_mc_addr, index);
while (*((unsigned int *)psp->fence_buf) != index) {
msleep(1);
}
amdgpu_bo_free_kernel(&cmd_buf_bo,
&cmd_buf_mc_addr,
(void **)&cmd_buf_mem);
return ret;
}
@ -351,6 +340,13 @@ static int psp_load_fw(struct amdgpu_device *adev)
&psp->fence_buf_bo,
&psp->fence_buf_mc_addr,
&psp->fence_buf);
if (ret)
goto failed_mem2;
ret = amdgpu_bo_create_kernel(adev, PSP_CMD_BUFFER_SIZE, PAGE_SIZE,
AMDGPU_GEM_DOMAIN_VRAM,
&psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
(void **)&psp->cmd_buf_mem);
if (ret)
goto failed_mem1;
@ -358,7 +354,7 @@ static int psp_load_fw(struct amdgpu_device *adev)
ret = psp_ring_init(psp, PSP_RING_TYPE__KM);
if (ret)
goto failed_mem1;
goto failed_mem;
ret = psp_tmr_init(psp);
if (ret)
@ -379,9 +375,13 @@ static int psp_load_fw(struct amdgpu_device *adev)
return 0;
failed_mem:
amdgpu_bo_free_kernel(&psp->cmd_buf_bo,
&psp->cmd_buf_mc_addr,
(void **)&psp->cmd_buf_mem);
failed_mem1:
amdgpu_bo_free_kernel(&psp->fence_buf_bo,
&psp->fence_buf_mc_addr, &psp->fence_buf);
failed_mem1:
failed_mem2:
amdgpu_bo_free_kernel(&psp->fw_pri_bo,
&psp->fw_pri_mc_addr, &psp->fw_pri_buf);
failed:
@ -435,16 +435,15 @@ static int psp_hw_fini(void *handle)
psp_ring_destroy(psp, PSP_RING_TYPE__KM);
if (psp->tmr_buf)
amdgpu_bo_free_kernel(&psp->tmr_bo, &psp->tmr_mc_addr, &psp->tmr_buf);
if (psp->fw_pri_buf)
amdgpu_bo_free_kernel(&psp->fw_pri_bo,
&psp->fw_pri_mc_addr, &psp->fw_pri_buf);
if (psp->fence_buf_bo)
amdgpu_bo_free_kernel(&psp->fence_buf_bo,
&psp->fence_buf_mc_addr, &psp->fence_buf);
amdgpu_bo_free_kernel(&psp->tmr_bo, &psp->tmr_mc_addr, &psp->tmr_buf);
amdgpu_bo_free_kernel(&psp->fw_pri_bo,
&psp->fw_pri_mc_addr, &psp->fw_pri_buf);
amdgpu_bo_free_kernel(&psp->fence_buf_bo,
&psp->fence_buf_mc_addr, &psp->fence_buf);
amdgpu_bo_free_kernel(&psp->asd_shared_bo, &psp->asd_shared_mc_addr,
&psp->asd_shared_buf);
amdgpu_bo_free_kernel(&psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
(void **)&psp->cmd_buf_mem);
kfree(psp->cmd);
psp->cmd = NULL;


@ -108,6 +108,11 @@ struct psp_context
struct amdgpu_bo *fence_buf_bo;
uint64_t fence_buf_mc_addr;
void *fence_buf;
/* cmd buffer */
struct amdgpu_bo *cmd_buf_bo;
uint64_t cmd_buf_mc_addr;
struct psp_gfx_cmd_resp *cmd_buf_mem;
};
struct amdgpu_psp_funcs {


@ -184,32 +184,16 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
return r;
}
if (ring->funcs->support_64bit_ptrs) {
r = amdgpu_wb_get_64bit(adev, &ring->rptr_offs);
if (r) {
dev_err(adev->dev, "(%d) ring rptr_offs wb alloc failed\n", r);
return r;
}
r = amdgpu_wb_get_64bit(adev, &ring->wptr_offs);
if (r) {
dev_err(adev->dev, "(%d) ring wptr_offs wb alloc failed\n", r);
return r;
}
} else {
r = amdgpu_wb_get(adev, &ring->rptr_offs);
if (r) {
dev_err(adev->dev, "(%d) ring rptr_offs wb alloc failed\n", r);
return r;
}
r = amdgpu_wb_get(adev, &ring->wptr_offs);
if (r) {
dev_err(adev->dev, "(%d) ring wptr_offs wb alloc failed\n", r);
return r;
}
r = amdgpu_wb_get(adev, &ring->rptr_offs);
if (r) {
dev_err(adev->dev, "(%d) ring rptr_offs wb alloc failed\n", r);
return r;
}
r = amdgpu_wb_get(adev, &ring->wptr_offs);
if (r) {
dev_err(adev->dev, "(%d) ring wptr_offs wb alloc failed\n", r);
return r;
}
r = amdgpu_wb_get(adev, &ring->fence_offs);
@ -277,18 +261,15 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
{
ring->ready = false;
if (ring->funcs->support_64bit_ptrs) {
amdgpu_wb_free_64bit(ring->adev, ring->cond_exe_offs);
amdgpu_wb_free_64bit(ring->adev, ring->fence_offs);
amdgpu_wb_free_64bit(ring->adev, ring->rptr_offs);
amdgpu_wb_free_64bit(ring->adev, ring->wptr_offs);
} else {
amdgpu_wb_free(ring->adev, ring->cond_exe_offs);
amdgpu_wb_free(ring->adev, ring->fence_offs);
amdgpu_wb_free(ring->adev, ring->rptr_offs);
amdgpu_wb_free(ring->adev, ring->wptr_offs);
}
/* Not to finish a ring which is not initialized */
if (!(ring->adev) || !(ring->adev->rings[ring->idx]))
return;
amdgpu_wb_free(ring->adev, ring->rptr_offs);
amdgpu_wb_free(ring->adev, ring->wptr_offs);
amdgpu_wb_free(ring->adev, ring->cond_exe_offs);
amdgpu_wb_free(ring->adev, ring->fence_offs);
amdgpu_bo_free_kernel(&ring->ring_obj,
&ring->gpu_addr,


@ -212,4 +212,44 @@ static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring)
}
static inline void amdgpu_ring_write(struct amdgpu_ring *ring, uint32_t v)
{
if (ring->count_dw <= 0)
DRM_ERROR("amdgpu: writing more dwords to the ring than expected!\n");
ring->ring[ring->wptr++ & ring->buf_mask] = v;
ring->wptr &= ring->ptr_mask;
ring->count_dw--;
}
static inline void amdgpu_ring_write_multiple(struct amdgpu_ring *ring,
void *src, int count_dw)
{
unsigned occupied, chunk1, chunk2;
void *dst;
if (unlikely(ring->count_dw < count_dw))
DRM_ERROR("amdgpu: writing more dwords to the ring than expected!\n");
occupied = ring->wptr & ring->buf_mask;
dst = (void *)&ring->ring[occupied];
chunk1 = ring->buf_mask + 1 - occupied;
chunk1 = (chunk1 >= count_dw) ? count_dw: chunk1;
chunk2 = count_dw - chunk1;
chunk1 <<= 2;
chunk2 <<= 2;
if (chunk1)
memcpy(dst, src, chunk1);
if (chunk2) {
src += chunk1;
dst = (void *)ring->ring;
memcpy(dst, src, chunk2);
}
ring->wptr += count_dw;
ring->wptr &= ring->ptr_mask;
ring->count_dw -= count_dw;
}
#endif
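
The amdgpu_ring_write_multiple() helper above copies a packet into the ring buffer with at most two memcpy() calls: one chunk up to the wrap point and, if needed, a second chunk restarting at index 0. A minimal userspace sketch of the same wrap arithmetic (ring size and contents invented; this is not kernel code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_DW  8                     /* ring size in dwords, power of two */
#define BUF_MASK (RING_DW - 1)

static uint32_t ring[RING_DW];
static uint64_t wptr;                  /* free-running write pointer */

static void ring_write_multiple(const uint32_t *src, unsigned count_dw)
{
	unsigned occupied = wptr & BUF_MASK;
	unsigned chunk1 = RING_DW - occupied;       /* dwords until the wrap point */

	if (chunk1 > count_dw)
		chunk1 = count_dw;
	memcpy(&ring[occupied], src, chunk1 * sizeof(uint32_t));
	if (count_dw > chunk1)                      /* remainder restarts at index 0 */
		memcpy(ring, src + chunk1, (count_dw - chunk1) * sizeof(uint32_t));
	wptr += count_dw;
}

int main(void)
{
	uint32_t pkt[5] = { 1, 2, 3, 4, 5 };

	wptr = 6;                          /* pretend 6 dwords were already written */
	ring_write_multiple(pkt, 5);       /* 2 dwords fit before the wrap, 3 after */
	for (unsigned i = 0; i < RING_DW; i++)
		printf("ring[%u] = %u\n", i, ring[i]);
	return 0;
}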


@ -64,7 +64,7 @@ int amdgpu_sa_bo_manager_init(struct amdgpu_device *adev,
INIT_LIST_HEAD(&sa_manager->flist[i]);
r = amdgpu_bo_create(adev, size, align, true, domain,
0, NULL, NULL, &sa_manager->bo);
0, NULL, NULL, 0, &sa_manager->bo);
if (r) {
dev_err(adev->dev, "(%d) failed to allocate bo for manager\n", r);
return r;


@ -33,7 +33,7 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
struct amdgpu_bo *vram_obj = NULL;
struct amdgpu_bo **gtt_obj = NULL;
uint64_t gtt_addr, vram_addr;
uint64_t gart_addr, vram_addr;
unsigned n, size;
int i, r;
@ -42,7 +42,7 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
/* Number of tests =
* (Total GTT - IB pool - writeback page - ring buffers) / test size
*/
n = adev->mc.gtt_size - AMDGPU_IB_POOL_SIZE*64*1024;
n = adev->mc.gart_size - AMDGPU_IB_POOL_SIZE*64*1024;
for (i = 0; i < AMDGPU_MAX_RINGS; ++i)
if (adev->rings[i])
n -= adev->rings[i]->ring_size;
@ -61,7 +61,7 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
r = amdgpu_bo_create(adev, size, PAGE_SIZE, true,
AMDGPU_GEM_DOMAIN_VRAM, 0,
NULL, NULL, &vram_obj);
NULL, NULL, 0, &vram_obj);
if (r) {
DRM_ERROR("Failed to create VRAM object\n");
goto out_cleanup;
@ -76,13 +76,13 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
}
for (i = 0; i < n; i++) {
void *gtt_map, *vram_map;
void **gtt_start, **gtt_end;
void **gart_start, **gart_end;
void **vram_start, **vram_end;
struct dma_fence *fence = NULL;
r = amdgpu_bo_create(adev, size, PAGE_SIZE, true,
AMDGPU_GEM_DOMAIN_GTT, 0, NULL,
NULL, gtt_obj + i);
NULL, 0, gtt_obj + i);
if (r) {
DRM_ERROR("Failed to create GTT object %d\n", i);
goto out_lclean;
@ -91,7 +91,7 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
r = amdgpu_bo_reserve(gtt_obj[i], false);
if (unlikely(r != 0))
goto out_lclean_unref;
r = amdgpu_bo_pin(gtt_obj[i], AMDGPU_GEM_DOMAIN_GTT, &gtt_addr);
r = amdgpu_bo_pin(gtt_obj[i], AMDGPU_GEM_DOMAIN_GTT, &gart_addr);
if (r) {
DRM_ERROR("Failed to pin GTT object %d\n", i);
goto out_lclean_unres;
@ -103,15 +103,15 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
goto out_lclean_unpin;
}
for (gtt_start = gtt_map, gtt_end = gtt_map + size;
gtt_start < gtt_end;
gtt_start++)
*gtt_start = gtt_start;
for (gart_start = gtt_map, gart_end = gtt_map + size;
gart_start < gart_end;
gart_start++)
*gart_start = gart_start;
amdgpu_bo_kunmap(gtt_obj[i]);
r = amdgpu_copy_buffer(ring, gtt_addr, vram_addr,
size, NULL, &fence, false);
r = amdgpu_copy_buffer(ring, gart_addr, vram_addr,
size, NULL, &fence, false, false);
if (r) {
DRM_ERROR("Failed GTT->VRAM copy %d\n", i);
@ -132,21 +132,21 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
goto out_lclean_unpin;
}
for (gtt_start = gtt_map, gtt_end = gtt_map + size,
for (gart_start = gtt_map, gart_end = gtt_map + size,
vram_start = vram_map, vram_end = vram_map + size;
vram_start < vram_end;
gtt_start++, vram_start++) {
if (*vram_start != gtt_start) {
gart_start++, vram_start++) {
if (*vram_start != gart_start) {
DRM_ERROR("Incorrect GTT->VRAM copy %d: Got 0x%p, "
"expected 0x%p (GTT/VRAM offset "
"0x%16llx/0x%16llx)\n",
i, *vram_start, gtt_start,
i, *vram_start, gart_start,
(unsigned long long)
(gtt_addr - adev->mc.gtt_start +
(void*)gtt_start - gtt_map),
(gart_addr - adev->mc.gart_start +
(void*)gart_start - gtt_map),
(unsigned long long)
(vram_addr - adev->mc.vram_start +
(void*)gtt_start - gtt_map));
(void*)gart_start - gtt_map));
amdgpu_bo_kunmap(vram_obj);
goto out_lclean_unpin;
}
@ -155,8 +155,8 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
amdgpu_bo_kunmap(vram_obj);
r = amdgpu_copy_buffer(ring, vram_addr, gtt_addr,
size, NULL, &fence, false);
r = amdgpu_copy_buffer(ring, vram_addr, gart_addr,
size, NULL, &fence, false, false);
if (r) {
DRM_ERROR("Failed VRAM->GTT copy %d\n", i);
@ -177,20 +177,20 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
goto out_lclean_unpin;
}
for (gtt_start = gtt_map, gtt_end = gtt_map + size,
for (gart_start = gtt_map, gart_end = gtt_map + size,
vram_start = vram_map, vram_end = vram_map + size;
gtt_start < gtt_end;
gtt_start++, vram_start++) {
if (*gtt_start != vram_start) {
gart_start < gart_end;
gart_start++, vram_start++) {
if (*gart_start != vram_start) {
DRM_ERROR("Incorrect VRAM->GTT copy %d: Got 0x%p, "
"expected 0x%p (VRAM/GTT offset "
"0x%16llx/0x%16llx)\n",
i, *gtt_start, vram_start,
i, *gart_start, vram_start,
(unsigned long long)
(vram_addr - adev->mc.vram_start +
(void*)vram_start - vram_map),
(unsigned long long)
(gtt_addr - adev->mc.gtt_start +
(gart_addr - adev->mc.gart_start +
(void*)vram_start - vram_map));
amdgpu_bo_kunmap(gtt_obj[i]);
goto out_lclean_unpin;
@ -200,7 +200,7 @@ static void amdgpu_do_test_moves(struct amdgpu_device *adev)
amdgpu_bo_kunmap(gtt_obj[i]);
DRM_INFO("Tested GTT->VRAM and VRAM->GTT copy for GTT offset 0x%llx\n",
gtt_addr - adev->mc.gtt_start);
gart_addr - adev->mc.gart_start);
continue;
out_lclean_unpin:


@ -14,6 +14,62 @@
#define AMDGPU_JOB_GET_TIMELINE_NAME(job) \
job->base.s_fence->finished.ops->get_timeline_name(&job->base.s_fence->finished)
TRACE_EVENT(amdgpu_ttm_tt_populate,
TP_PROTO(struct amdgpu_device *adev, uint64_t dma_address, uint64_t phys_address),
TP_ARGS(adev, dma_address, phys_address),
TP_STRUCT__entry(
__field(uint16_t, domain)
__field(uint8_t, bus)
__field(uint8_t, slot)
__field(uint8_t, func)
__field(uint64_t, dma)
__field(uint64_t, phys)
),
TP_fast_assign(
__entry->domain = pci_domain_nr(adev->pdev->bus);
__entry->bus = adev->pdev->bus->number;
__entry->slot = PCI_SLOT(adev->pdev->devfn);
__entry->func = PCI_FUNC(adev->pdev->devfn);
__entry->dma = dma_address;
__entry->phys = phys_address;
),
TP_printk("%04x:%02x:%02x.%x: 0x%llx => 0x%llx",
(unsigned)__entry->domain,
(unsigned)__entry->bus,
(unsigned)__entry->slot,
(unsigned)__entry->func,
(unsigned long long)__entry->dma,
(unsigned long long)__entry->phys)
);
TRACE_EVENT(amdgpu_ttm_tt_unpopulate,
TP_PROTO(struct amdgpu_device *adev, uint64_t dma_address, uint64_t phys_address),
TP_ARGS(adev, dma_address, phys_address),
TP_STRUCT__entry(
__field(uint16_t, domain)
__field(uint8_t, bus)
__field(uint8_t, slot)
__field(uint8_t, func)
__field(uint64_t, dma)
__field(uint64_t, phys)
),
TP_fast_assign(
__entry->domain = pci_domain_nr(adev->pdev->bus);
__entry->bus = adev->pdev->bus->number;
__entry->slot = PCI_SLOT(adev->pdev->devfn);
__entry->func = PCI_FUNC(adev->pdev->devfn);
__entry->dma = dma_address;
__entry->phys = phys_address;
),
TP_printk("%04x:%02x:%02x.%x: 0x%llx => 0x%llx",
(unsigned)__entry->domain,
(unsigned)__entry->bus,
(unsigned)__entry->slot,
(unsigned)__entry->func,
(unsigned long long)__entry->dma,
(unsigned long long)__entry->phys)
);
TRACE_EVENT(amdgpu_mm_rreg,
TP_PROTO(unsigned did, uint32_t reg, uint32_t value),
TP_ARGS(did, reg, value),
@ -105,12 +161,12 @@ TRACE_EVENT(amdgpu_bo_create,
__entry->bo = bo;
__entry->pages = bo->tbo.num_pages;
__entry->type = bo->tbo.mem.mem_type;
__entry->prefer = bo->prefered_domains;
__entry->prefer = bo->preferred_domains;
__entry->allow = bo->allowed_domains;
__entry->visible = bo->flags;
),
TP_printk("bo=%p, pages=%u, type=%d, prefered=%d, allowed=%d, visible=%d",
TP_printk("bo=%p, pages=%u, type=%d, preferred=%d, allowed=%d, visible=%d",
__entry->bo, __entry->pages, __entry->type,
__entry->prefer, __entry->allow, __entry->visible)
);
@ -224,17 +280,17 @@ TRACE_EVENT(amdgpu_vm_bo_map,
__field(long, start)
__field(long, last)
__field(u64, offset)
__field(u32, flags)
__field(u64, flags)
),
TP_fast_assign(
__entry->bo = bo_va ? bo_va->bo : NULL;
__entry->bo = bo_va ? bo_va->base.bo : NULL;
__entry->start = mapping->start;
__entry->last = mapping->last;
__entry->offset = mapping->offset;
__entry->flags = mapping->flags;
),
TP_printk("bo=%p, start=%lx, last=%lx, offset=%010llx, flags=%08x",
TP_printk("bo=%p, start=%lx, last=%lx, offset=%010llx, flags=%llx",
__entry->bo, __entry->start, __entry->last,
__entry->offset, __entry->flags)
);
@ -248,17 +304,17 @@ TRACE_EVENT(amdgpu_vm_bo_unmap,
__field(long, start)
__field(long, last)
__field(u64, offset)
__field(u32, flags)
__field(u64, flags)
),
TP_fast_assign(
__entry->bo = bo_va->bo;
__entry->bo = bo_va->base.bo;
__entry->start = mapping->start;
__entry->last = mapping->last;
__entry->offset = mapping->offset;
__entry->flags = mapping->flags;
),
TP_printk("bo=%p, start=%lx, last=%lx, offset=%010llx, flags=%08x",
TP_printk("bo=%p, start=%lx, last=%lx, offset=%010llx, flags=%llx",
__entry->bo, __entry->start, __entry->last,
__entry->offset, __entry->flags)
);
@ -269,7 +325,7 @@ DECLARE_EVENT_CLASS(amdgpu_vm_mapping,
TP_STRUCT__entry(
__field(u64, soffset)
__field(u64, eoffset)
__field(u32, flags)
__field(u64, flags)
),
TP_fast_assign(
@ -277,7 +333,7 @@ DECLARE_EVENT_CLASS(amdgpu_vm_mapping,
__entry->eoffset = mapping->last + 1;
__entry->flags = mapping->flags;
),
TP_printk("soffs=%010llx, eoffs=%010llx, flags=%08x",
TP_printk("soffs=%010llx, eoffs=%010llx, flags=%llx",
__entry->soffset, __entry->eoffset, __entry->flags)
);
@ -293,14 +349,14 @@ DEFINE_EVENT(amdgpu_vm_mapping, amdgpu_vm_bo_mapping,
TRACE_EVENT(amdgpu_vm_set_ptes,
TP_PROTO(uint64_t pe, uint64_t addr, unsigned count,
uint32_t incr, uint32_t flags),
uint32_t incr, uint64_t flags),
TP_ARGS(pe, addr, count, incr, flags),
TP_STRUCT__entry(
__field(u64, pe)
__field(u64, addr)
__field(u32, count)
__field(u32, incr)
__field(u32, flags)
__field(u64, flags)
),
TP_fast_assign(
@ -310,7 +366,7 @@ TRACE_EVENT(amdgpu_vm_set_ptes,
__entry->incr = incr;
__entry->flags = flags;
),
TP_printk("pe=%010Lx, addr=%010Lx, incr=%u, flags=%08x, count=%u",
TP_printk("pe=%010Lx, addr=%010Lx, incr=%u, flags=%llx, count=%u",
__entry->pe, __entry->addr, __entry->incr,
__entry->flags, __entry->count)
);


@ -43,14 +43,20 @@
#include <linux/pagemap.h>
#include <linux/debugfs.h>
#include "amdgpu.h"
#include "amdgpu_trace.h"
#include "bif/bif_4_1_d.h"
#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
static int amdgpu_map_buffer(struct ttm_buffer_object *bo,
struct ttm_mem_reg *mem, unsigned num_pages,
uint64_t offset, unsigned window,
struct amdgpu_ring *ring,
uint64_t *addr);
static int amdgpu_ttm_debugfs_init(struct amdgpu_device *adev);
static void amdgpu_ttm_debugfs_fini(struct amdgpu_device *adev);
/*
* Global memory.
*/
@ -97,6 +103,8 @@ static int amdgpu_ttm_global_init(struct amdgpu_device *adev)
goto error_bo;
}
mutex_init(&adev->mman.gtt_window_lock);
ring = adev->mman.buffer_funcs_ring;
rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_KERNEL];
r = amd_sched_entity_init(&ring->sched, &adev->mman.entity,
@ -123,6 +131,7 @@ static void amdgpu_ttm_global_fini(struct amdgpu_device *adev)
if (adev->mman.mem_global_referenced) {
amd_sched_entity_fini(adev->mman.entity.sched,
&adev->mman.entity);
mutex_destroy(&adev->mman.gtt_window_lock);
drm_global_item_unref(&adev->mman.bo_global_ref.ref);
drm_global_item_unref(&adev->mman.mem_global_ref);
adev->mman.mem_global_referenced = false;
@ -150,7 +159,7 @@ static int amdgpu_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
break;
case TTM_PL_TT:
man->func = &amdgpu_gtt_mgr_func;
man->gpu_offset = adev->mc.gtt_start;
man->gpu_offset = adev->mc.gart_start;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE | TTM_MEMTYPE_FLAG_CMA;
@ -186,12 +195,11 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
struct amdgpu_bo *abo;
static struct ttm_place placements = {
static const struct ttm_place placements = {
.fpfn = 0,
.lpfn = 0,
.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM
};
unsigned i;
if (!amdgpu_ttm_bo_is_amdgpu_bo(bo)) {
placement->placement = &placements;
@ -207,22 +215,36 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
adev->mman.buffer_funcs_ring &&
adev->mman.buffer_funcs_ring->ready == false) {
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_CPU);
} else {
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT);
for (i = 0; i < abo->placement.num_placement; ++i) {
if (!(abo->placements[i].flags &
TTM_PL_FLAG_TT))
continue;
} else if (adev->mc.visible_vram_size < adev->mc.real_vram_size &&
!(abo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) {
unsigned fpfn = adev->mc.visible_vram_size >> PAGE_SHIFT;
struct drm_mm_node *node = bo->mem.mm_node;
unsigned long pages_left;
if (abo->placements[i].lpfn)
continue;
/* set an upper limit to force directly
* allocating address space for the BO.
*/
abo->placements[i].lpfn =
adev->mc.gtt_size >> PAGE_SHIFT;
for (pages_left = bo->mem.num_pages;
pages_left;
pages_left -= node->size, node++) {
if (node->start < fpfn)
break;
}
if (!pages_left)
goto gtt;
/* Try evicting to the CPU inaccessible part of VRAM
* first, but only set GTT as busy placement, so this
* BO will be evicted to GTT rather than causing other
* BOs to be evicted from VRAM
*/
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM |
AMDGPU_GEM_DOMAIN_GTT);
abo->placements[0].fpfn = fpfn;
abo->placements[0].lpfn = 0;
abo->placement.busy_placement = &abo->placements[1];
abo->placement.num_busy_placement = 1;
} else {
gtt:
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT);
}
break;
case TTM_PL_TT:
@ -252,29 +274,18 @@ static void amdgpu_move_null(struct ttm_buffer_object *bo,
new_mem->mm_node = NULL;
}
static int amdgpu_mm_node_addr(struct ttm_buffer_object *bo,
struct drm_mm_node *mm_node,
struct ttm_mem_reg *mem,
uint64_t *addr)
static uint64_t amdgpu_mm_node_addr(struct ttm_buffer_object *bo,
struct drm_mm_node *mm_node,
struct ttm_mem_reg *mem)
{
int r;
uint64_t addr = 0;
switch (mem->mem_type) {
case TTM_PL_TT:
r = amdgpu_ttm_bind(bo, mem);
if (r)
return r;
case TTM_PL_VRAM:
*addr = mm_node->start << PAGE_SHIFT;
*addr += bo->bdev->man[mem->mem_type].gpu_offset;
break;
default:
DRM_ERROR("Unknown placement %d\n", mem->mem_type);
return -EINVAL;
if (mem->mem_type != TTM_PL_TT ||
amdgpu_gtt_mgr_is_allocated(mem)) {
addr = mm_node->start << PAGE_SHIFT;
addr += bo->bdev->man[mem->mem_type].gpu_offset;
}
return 0;
return addr;
}
static int amdgpu_move_blit(struct ttm_buffer_object *bo,
@ -299,26 +310,40 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
}
old_mm = old_mem->mm_node;
r = amdgpu_mm_node_addr(bo, old_mm, old_mem, &old_start);
if (r)
return r;
old_size = old_mm->size;
old_start = amdgpu_mm_node_addr(bo, old_mm, old_mem);
new_mm = new_mem->mm_node;
r = amdgpu_mm_node_addr(bo, new_mm, new_mem, &new_start);
if (r)
return r;
new_size = new_mm->size;
new_start = amdgpu_mm_node_addr(bo, new_mm, new_mem);
num_pages = new_mem->num_pages;
mutex_lock(&adev->mman.gtt_window_lock);
while (num_pages) {
unsigned long cur_pages = min(old_size, new_size);
unsigned long cur_pages = min(min(old_size, new_size),
(u64)AMDGPU_GTT_MAX_TRANSFER_SIZE);
uint64_t from = old_start, to = new_start;
struct dma_fence *next;
r = amdgpu_copy_buffer(ring, old_start, new_start,
if (old_mem->mem_type == TTM_PL_TT &&
!amdgpu_gtt_mgr_is_allocated(old_mem)) {
r = amdgpu_map_buffer(bo, old_mem, cur_pages,
old_start, 0, ring, &from);
if (r)
goto error;
}
if (new_mem->mem_type == TTM_PL_TT &&
!amdgpu_gtt_mgr_is_allocated(new_mem)) {
r = amdgpu_map_buffer(bo, new_mem, cur_pages,
new_start, 1, ring, &to);
if (r)
goto error;
}
r = amdgpu_copy_buffer(ring, from, to,
cur_pages * PAGE_SIZE,
bo->resv, &next, false);
bo->resv, &next, false, true);
if (r)
goto error;
@ -331,10 +356,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
old_size -= cur_pages;
if (!old_size) {
r = amdgpu_mm_node_addr(bo, ++old_mm, old_mem,
&old_start);
if (r)
goto error;
old_start = amdgpu_mm_node_addr(bo, ++old_mm, old_mem);
old_size = old_mm->size;
} else {
old_start += cur_pages * PAGE_SIZE;
@ -342,22 +364,21 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
new_size -= cur_pages;
if (!new_size) {
r = amdgpu_mm_node_addr(bo, ++new_mm, new_mem,
&new_start);
if (r)
goto error;
new_start = amdgpu_mm_node_addr(bo, ++new_mm, new_mem);
new_size = new_mm->size;
} else {
new_start += cur_pages * PAGE_SIZE;
}
}
mutex_unlock(&adev->mman.gtt_window_lock);
r = ttm_bo_pipeline_move(bo, fence, evict, new_mem);
dma_fence_put(fence);
return r;
error:
mutex_unlock(&adev->mman.gtt_window_lock);
if (fence)
dma_fence_wait(fence, false);
dma_fence_put(fence);
@ -384,7 +405,7 @@ static int amdgpu_move_vram_ram(struct ttm_buffer_object *bo,
placement.num_busy_placement = 1;
placement.busy_placement = &placements;
placements.fpfn = 0;
placements.lpfn = adev->mc.gtt_size >> PAGE_SHIFT;
placements.lpfn = 0;
placements.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
r = ttm_bo_mem_space(bo, &placement, &tmp_mem,
interruptible, no_wait_gpu);
@ -431,7 +452,7 @@ static int amdgpu_move_ram_vram(struct ttm_buffer_object *bo,
placement.num_busy_placement = 1;
placement.busy_placement = &placements;
placements.fpfn = 0;
placements.lpfn = adev->mc.gtt_size >> PAGE_SHIFT;
placements.lpfn = 0;
placements.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
r = ttm_bo_mem_space(bo, &placement, &tmp_mem,
interruptible, no_wait_gpu);
@ -507,6 +528,15 @@ memcpy:
}
}
if (bo->type == ttm_bo_type_device &&
new_mem->mem_type == TTM_PL_VRAM &&
old_mem->mem_type != TTM_PL_VRAM) {
/* amdgpu_bo_fault_reserve_notify will re-set this if the CPU
* accesses the BO after it's moved.
*/
abo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
}
/* update statistics */
atomic64_add((u64)bo->num_pages << PAGE_SHIFT, &adev->num_bytes_moved);
return 0;
@ -633,6 +663,38 @@ release_pages:
return r;
}
static void amdgpu_trace_dma_map(struct ttm_tt *ttm)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(ttm->bdev);
struct amdgpu_ttm_tt *gtt = (void *)ttm;
unsigned i;
if (unlikely(trace_amdgpu_ttm_tt_populate_enabled())) {
for (i = 0; i < ttm->num_pages; i++) {
trace_amdgpu_ttm_tt_populate(
adev,
gtt->ttm.dma_address[i],
page_to_phys(ttm->pages[i]));
}
}
}
static void amdgpu_trace_dma_unmap(struct ttm_tt *ttm)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(ttm->bdev);
struct amdgpu_ttm_tt *gtt = (void *)ttm;
unsigned i;
if (unlikely(trace_amdgpu_ttm_tt_unpopulate_enabled())) {
for (i = 0; i < ttm->num_pages; i++) {
trace_amdgpu_ttm_tt_unpopulate(
adev,
gtt->ttm.dma_address[i],
page_to_phys(ttm->pages[i]));
}
}
}
/* prepare the sg table with the user pages */
static int amdgpu_ttm_tt_pin_userptr(struct ttm_tt *ttm)
{
@ -659,6 +721,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_tt *ttm)
drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
gtt->ttm.dma_address, ttm->num_pages);
amdgpu_trace_dma_map(ttm);
return 0;
release_sg:
@ -692,14 +756,41 @@ static void amdgpu_ttm_tt_unpin_userptr(struct ttm_tt *ttm)
put_page(page);
}
amdgpu_trace_dma_unmap(ttm);
sg_free_table(ttm->sg);
}
static int amdgpu_ttm_do_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem)
{
struct amdgpu_ttm_tt *gtt = (void *)ttm;
uint64_t flags;
int r;
spin_lock(&gtt->adev->gtt_list_lock);
flags = amdgpu_ttm_tt_pte_flags(gtt->adev, ttm, mem);
gtt->offset = (u64)mem->start << PAGE_SHIFT;
r = amdgpu_gart_bind(gtt->adev, gtt->offset, ttm->num_pages,
ttm->pages, gtt->ttm.dma_address, flags);
if (r) {
DRM_ERROR("failed to bind %lu pages at 0x%08llX\n",
ttm->num_pages, gtt->offset);
goto error_gart_bind;
}
list_add_tail(&gtt->list, &gtt->adev->gtt_list);
error_gart_bind:
spin_unlock(&gtt->adev->gtt_list_lock);
return r;
}
static int amdgpu_ttm_backend_bind(struct ttm_tt *ttm,
struct ttm_mem_reg *bo_mem)
{
struct amdgpu_ttm_tt *gtt = (void*)ttm;
int r;
int r = 0;
if (gtt->userptr) {
r = amdgpu_ttm_tt_pin_userptr(ttm);
@ -718,7 +809,10 @@ static int amdgpu_ttm_backend_bind(struct ttm_tt *ttm,
bo_mem->mem_type == AMDGPU_PL_OA)
return -EINVAL;
return 0;
if (amdgpu_gtt_mgr_is_allocated(bo_mem))
r = amdgpu_ttm_do_bind(ttm, bo_mem);
return r;
}
bool amdgpu_ttm_is_bound(struct ttm_tt *ttm)
@ -731,8 +825,6 @@ bool amdgpu_ttm_is_bound(struct ttm_tt *ttm)
int amdgpu_ttm_bind(struct ttm_buffer_object *bo, struct ttm_mem_reg *bo_mem)
{
struct ttm_tt *ttm = bo->ttm;
struct amdgpu_ttm_tt *gtt = (void *)bo->ttm;
uint64_t flags;
int r;
if (!ttm || amdgpu_ttm_is_bound(ttm))
@ -745,22 +837,7 @@ int amdgpu_ttm_bind(struct ttm_buffer_object *bo, struct ttm_mem_reg *bo_mem)
return r;
}
spin_lock(&gtt->adev->gtt_list_lock);
flags = amdgpu_ttm_tt_pte_flags(gtt->adev, ttm, bo_mem);
gtt->offset = (u64)bo_mem->start << PAGE_SHIFT;
r = amdgpu_gart_bind(gtt->adev, gtt->offset, ttm->num_pages,
ttm->pages, gtt->ttm.dma_address, flags);
if (r) {
DRM_ERROR("failed to bind %lu pages at 0x%08llX\n",
ttm->num_pages, gtt->offset);
goto error_gart_bind;
}
list_add_tail(&gtt->list, &gtt->adev->gtt_list);
error_gart_bind:
spin_unlock(&gtt->adev->gtt_list_lock);
return r;
return amdgpu_ttm_do_bind(ttm, bo_mem);
}
int amdgpu_ttm_recover_gart(struct amdgpu_device *adev)
@ -852,7 +929,7 @@ static struct ttm_tt *amdgpu_ttm_tt_create(struct ttm_bo_device *bdev,
static int amdgpu_ttm_tt_populate(struct ttm_tt *ttm)
{
struct amdgpu_device *adev;
struct amdgpu_device *adev = amdgpu_ttm_adev(ttm->bdev);
struct amdgpu_ttm_tt *gtt = (void *)ttm;
unsigned i;
int r;
@ -875,14 +952,14 @@ static int amdgpu_ttm_tt_populate(struct ttm_tt *ttm)
drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
gtt->ttm.dma_address, ttm->num_pages);
ttm->state = tt_unbound;
return 0;
r = 0;
goto trace_mappings;
}
adev = amdgpu_ttm_adev(ttm->bdev);
#ifdef CONFIG_SWIOTLB
if (swiotlb_nr_tbl()) {
return ttm_dma_populate(&gtt->ttm, adev->dev);
r = ttm_dma_populate(&gtt->ttm, adev->dev);
goto trace_mappings;
}
#endif
@ -905,7 +982,12 @@ static int amdgpu_ttm_tt_populate(struct ttm_tt *ttm)
return -EFAULT;
}
}
return 0;
r = 0;
trace_mappings:
if (likely(!r))
amdgpu_trace_dma_map(ttm);
return r;
}
static void amdgpu_ttm_tt_unpopulate(struct ttm_tt *ttm)
@ -926,6 +1008,8 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_tt *ttm)
adev = amdgpu_ttm_adev(ttm->bdev);
amdgpu_trace_dma_unmap(ttm);
#ifdef CONFIG_SWIOTLB
if (swiotlb_nr_tbl()) {
ttm_dma_unpopulate(&gtt->ttm, adev->dev);
@ -1075,6 +1159,67 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
return ttm_bo_eviction_valuable(bo, place);
}
static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
unsigned long offset,
void *buf, int len, int write)
{
struct amdgpu_bo *abo = container_of(bo, struct amdgpu_bo, tbo);
struct amdgpu_device *adev = amdgpu_ttm_adev(abo->tbo.bdev);
struct drm_mm_node *nodes = abo->tbo.mem.mm_node;
uint32_t value = 0;
int ret = 0;
uint64_t pos;
unsigned long flags;
if (bo->mem.mem_type != TTM_PL_VRAM)
return -EIO;
while (offset >= (nodes->size << PAGE_SHIFT)) {
offset -= nodes->size << PAGE_SHIFT;
++nodes;
}
pos = (nodes->start << PAGE_SHIFT) + offset;
while (len && pos < adev->mc.mc_vram_size) {
uint64_t aligned_pos = pos & ~(uint64_t)3;
uint32_t bytes = 4 - (pos & 3);
uint32_t shift = (pos & 3) * 8;
uint32_t mask = 0xffffffff << shift;
if (len < bytes) {
mask &= 0xffffffff >> (bytes - len) * 8;
bytes = len;
}
spin_lock_irqsave(&adev->mmio_idx_lock, flags);
WREG32(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
WREG32(mmMM_INDEX_HI, aligned_pos >> 31);
if (!write || mask != 0xffffffff)
value = RREG32(mmMM_DATA);
if (write) {
value &= ~mask;
value |= (*(uint32_t *)buf << shift) & mask;
WREG32(mmMM_DATA, value);
}
spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
if (!write) {
value = (value & mask) >> shift;
memcpy(buf, &value, bytes);
}
ret += bytes;
buf = (uint8_t *)buf + bytes;
pos += bytes;
len -= bytes;
if (pos >= (nodes->start + nodes->size) << PAGE_SHIFT) {
++nodes;
pos = (nodes->start << PAGE_SHIFT);
}
}
return ret;
}
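
The access_memory callback above goes through the MM_INDEX/MM_DATA register pair, which can only transfer aligned 32-bit words, so partial accesses become a shift-and-mask read-modify-write. A standalone model of just that masking step, with the register replaced by a plain variable (values invented; a little-endian host is assumed):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t fake_mm_data = 0xaabbccdd;   /* stands in for the MM_DATA register */

/* Write len bytes (1..4, not crossing the word) at byte offset pos & 3. */
static void write_partial(uint64_t pos, const void *buf, int len)
{
	uint32_t shift = (pos & 3) * 8;
	uint32_t mask = 0xffffffffu << shift;
	int bytes = 4 - (int)(pos & 3);
	uint32_t value = 0;

	if (len < bytes)
		mask &= 0xffffffffu >> ((bytes - len) * 8);
	memcpy(&value, buf, len);

	fake_mm_data = (fake_mm_data & ~mask) | ((value << shift) & mask);
}

int main(void)
{
	uint8_t two_bytes[2] = { 0x11, 0x22 };

	write_partial(1, two_bytes, 2);      /* patch bytes 1..2 of the word */
	printf("0x%08x\n", fake_mm_data);    /* prints 0xaa2211dd */
	return 0;
}
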
static struct ttm_bo_driver amdgpu_bo_driver = {
.ttm_tt_create = &amdgpu_ttm_tt_create,
.ttm_tt_populate = &amdgpu_ttm_tt_populate,
@ -1090,11 +1235,14 @@ static struct ttm_bo_driver amdgpu_bo_driver = {
.io_mem_reserve = &amdgpu_ttm_io_mem_reserve,
.io_mem_free = &amdgpu_ttm_io_mem_free,
.io_mem_pfn = amdgpu_ttm_io_mem_pfn,
.access_memory = &amdgpu_ttm_access_memory
};
int amdgpu_ttm_init(struct amdgpu_device *adev)
{
uint64_t gtt_size;
int r;
u64 vis_vram_limit;
r = amdgpu_ttm_global_init(adev);
if (r) {
@ -1118,36 +1266,37 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
DRM_ERROR("Failed initializing VRAM heap.\n");
return r;
}
/* Reduce size of CPU-visible VRAM if requested */
vis_vram_limit = (u64)amdgpu_vis_vram_limit * 1024 * 1024;
if (amdgpu_vis_vram_limit > 0 &&
vis_vram_limit <= adev->mc.visible_vram_size)
adev->mc.visible_vram_size = vis_vram_limit;
/* Change the size here instead of the init above so only lpfn is affected */
amdgpu_ttm_set_active_vram_size(adev, adev->mc.visible_vram_size);
r = amdgpu_bo_create(adev, adev->mc.stolen_size, PAGE_SIZE, true,
AMDGPU_GEM_DOMAIN_VRAM,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, &adev->stollen_vga_memory);
if (r) {
return r;
}
r = amdgpu_bo_reserve(adev->stollen_vga_memory, false);
r = amdgpu_bo_create_kernel(adev, adev->mc.stolen_size, PAGE_SIZE,
AMDGPU_GEM_DOMAIN_VRAM,
&adev->stolen_vga_memory,
NULL, NULL);
if (r)
return r;
r = amdgpu_bo_pin(adev->stollen_vga_memory, AMDGPU_GEM_DOMAIN_VRAM, NULL);
amdgpu_bo_unreserve(adev->stollen_vga_memory);
if (r) {
amdgpu_bo_unref(&adev->stollen_vga_memory);
return r;
}
DRM_INFO("amdgpu: %uM of VRAM memory ready\n",
(unsigned) (adev->mc.real_vram_size / (1024 * 1024)));
r = ttm_bo_init_mm(&adev->mman.bdev, TTM_PL_TT,
adev->mc.gtt_size >> PAGE_SHIFT);
if (amdgpu_gtt_size == -1)
gtt_size = max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
adev->mc.mc_vram_size);
else
gtt_size = (uint64_t)amdgpu_gtt_size << 20;
r = ttm_bo_init_mm(&adev->mman.bdev, TTM_PL_TT, gtt_size >> PAGE_SHIFT);
if (r) {
DRM_ERROR("Failed initializing GTT heap.\n");
return r;
}
DRM_INFO("amdgpu: %uM of GTT memory ready.\n",
(unsigned)(adev->mc.gtt_size / (1024 * 1024)));
(unsigned)(gtt_size / (1024 * 1024)));
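
The sizing logic just above picks the GTT heap as the larger of a fixed default and the VRAM size unless the amdgpu_gtt_size module parameter overrides it. A quick standalone sketch of that decision (the 3072 MiB default is an assumption about AMDGPU_DEFAULT_GTT_SIZE_MB, and the VRAM size is invented):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t default_gtt = 3072ULL << 20;   /* assumed AMDGPU_DEFAULT_GTT_SIZE_MB */
	uint64_t vram_size   = 8ULL << 30;      /* e.g. an 8 GiB board */
	int64_t  gtt_param   = -1;              /* module parameter left at auto */

	uint64_t gtt_size = (gtt_param == -1)
		? (default_gtt > vram_size ? default_gtt : vram_size)
		: (uint64_t)gtt_param << 20;

	printf("GTT heap: %llu MiB\n", (unsigned long long)(gtt_size >> 20));
	return 0;
}
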
adev->gds.mem.total_size = adev->gds.mem.total_size << AMDGPU_GDS_SHIFT;
adev->gds.mem.gfx_partition_size = adev->gds.mem.gfx_partition_size << AMDGPU_GDS_SHIFT;
@ -1203,13 +1352,13 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
if (!adev->mman.initialized)
return;
amdgpu_ttm_debugfs_fini(adev);
if (adev->stollen_vga_memory) {
r = amdgpu_bo_reserve(adev->stollen_vga_memory, true);
if (adev->stolen_vga_memory) {
r = amdgpu_bo_reserve(adev->stolen_vga_memory, true);
if (r == 0) {
amdgpu_bo_unpin(adev->stollen_vga_memory);
amdgpu_bo_unreserve(adev->stollen_vga_memory);
amdgpu_bo_unpin(adev->stolen_vga_memory);
amdgpu_bo_unreserve(adev->stolen_vga_memory);
}
amdgpu_bo_unref(&adev->stollen_vga_memory);
amdgpu_bo_unref(&adev->stolen_vga_memory);
}
ttm_bo_clean_mm(&adev->mman.bdev, TTM_PL_VRAM);
ttm_bo_clean_mm(&adev->mman.bdev, TTM_PL_TT);
@ -1256,12 +1405,77 @@ int amdgpu_mmap(struct file *filp, struct vm_area_struct *vma)
return ttm_bo_mmap(filp, vma, &adev->mman.bdev);
}
int amdgpu_copy_buffer(struct amdgpu_ring *ring,
uint64_t src_offset,
uint64_t dst_offset,
uint32_t byte_count,
static int amdgpu_map_buffer(struct ttm_buffer_object *bo,
struct ttm_mem_reg *mem, unsigned num_pages,
uint64_t offset, unsigned window,
struct amdgpu_ring *ring,
uint64_t *addr)
{
struct amdgpu_ttm_tt *gtt = (void *)bo->ttm;
struct amdgpu_device *adev = ring->adev;
struct ttm_tt *ttm = bo->ttm;
struct amdgpu_job *job;
unsigned num_dw, num_bytes;
dma_addr_t *dma_address;
struct dma_fence *fence;
uint64_t src_addr, dst_addr;
uint64_t flags;
int r;
BUG_ON(adev->mman.buffer_funcs->copy_max_bytes <
AMDGPU_GTT_MAX_TRANSFER_SIZE * 8);
*addr = adev->mc.gart_start;
*addr += (u64)window * AMDGPU_GTT_MAX_TRANSFER_SIZE *
AMDGPU_GPU_PAGE_SIZE;
num_dw = adev->mman.buffer_funcs->copy_num_dw;
while (num_dw & 0x7)
num_dw++;
num_bytes = num_pages * 8;
r = amdgpu_job_alloc_with_ib(adev, num_dw * 4 + num_bytes, &job);
if (r)
return r;
src_addr = num_dw * 4;
src_addr += job->ibs[0].gpu_addr;
dst_addr = adev->gart.table_addr;
dst_addr += window * AMDGPU_GTT_MAX_TRANSFER_SIZE * 8;
amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr,
dst_addr, num_bytes);
amdgpu_ring_pad_ib(ring, &job->ibs[0]);
WARN_ON(job->ibs[0].length_dw > num_dw);
dma_address = &gtt->ttm.dma_address[offset >> PAGE_SHIFT];
flags = amdgpu_ttm_tt_pte_flags(adev, ttm, mem);
r = amdgpu_gart_map(adev, 0, num_pages, dma_address, flags,
&job->ibs[0].ptr[num_dw]);
if (r)
goto error_free;
r = amdgpu_job_submit(job, ring, &adev->mman.entity,
AMDGPU_FENCE_OWNER_UNDEFINED, &fence);
if (r)
goto error_free;
dma_fence_put(fence);
return r;
error_free:
amdgpu_job_free(job);
return r;
}
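
amdgpu_map_buffer() above stages unbound GTT pages into one of AMDGPU_GTT_NUM_TRANSFER_WINDOWS fixed windows of AMDGPU_GTT_MAX_TRANSFER_SIZE GPU pages at the start of the GART space, so the copy engine always sees a mapped address. A small sketch of just the window address arithmetic (the gart_start value is invented):

#include <stdint.h>
#include <stdio.h>

#define GPU_PAGE_SIZE          4096ULL
#define GTT_MAX_TRANSFER_SIZE  512     /* pages per window, as in amdgpu_ttm.h */
#define GTT_NUM_WINDOWS        2

int main(void)
{
	uint64_t gart_start = 0x0000008000000000ULL;   /* illustrative base only */

	for (unsigned window = 0; window < GTT_NUM_WINDOWS; window++) {
		uint64_t addr = gart_start +
				(uint64_t)window * GTT_MAX_TRANSFER_SIZE * GPU_PAGE_SIZE;
		printf("window %u -> 0x%llx (%llu KiB)\n", window,
		       (unsigned long long)addr,
		       (unsigned long long)(GTT_MAX_TRANSFER_SIZE * GPU_PAGE_SIZE >> 10));
	}
	return 0;
}
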
int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
uint64_t dst_offset, uint32_t byte_count,
struct reservation_object *resv,
struct dma_fence **fence, bool direct_submit)
struct dma_fence **fence, bool direct_submit,
bool vm_needs_flush)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_job *job;
@ -1283,6 +1497,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
if (r)
return r;
job->vm_needs_flush = vm_needs_flush;
if (resv) {
r = amdgpu_sync_resv(adev, &job->sync, resv,
AMDGPU_FENCE_OWNER_UNDEFINED);
@ -1327,11 +1542,12 @@ error_free:
}
int amdgpu_fill_buffer(struct amdgpu_bo *bo,
uint32_t src_data,
uint64_t src_data,
struct reservation_object *resv,
struct dma_fence **fence)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
/* max_bytes applies to SDMA_OP_PTEPDE as well as SDMA_OP_CONST_FILL*/
uint32_t max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
@ -1347,6 +1563,12 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
return -EINVAL;
}
if (bo->tbo.mem.mem_type == TTM_PL_TT) {
r = amdgpu_ttm_bind(&bo->tbo, &bo->tbo.mem);
if (r)
return r;
}
num_pages = bo->tbo.num_pages;
mm_node = bo->tbo.mem.mm_node;
num_loops = 0;
@ -1357,7 +1579,9 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
num_pages -= mm_node->size;
++mm_node;
}
num_dw = num_loops * adev->mman.buffer_funcs->fill_num_dw;
/* 10 double words for each SDMA_OP_PTEPDE cmd */
num_dw = num_loops * 10;
/* for IB padding */
num_dw += 64;
@ -1382,16 +1606,16 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
uint32_t byte_count = mm_node->size << PAGE_SHIFT;
uint64_t dst_addr;
r = amdgpu_mm_node_addr(&bo->tbo, mm_node,
&bo->tbo.mem, &dst_addr);
if (r)
return r;
WARN_ONCE(byte_count & 0x7, "size should be a multiple of 8");
dst_addr = amdgpu_mm_node_addr(&bo->tbo, mm_node, &bo->tbo.mem);
while (byte_count) {
uint32_t cur_size_in_bytes = min(byte_count, max_bytes);
amdgpu_emit_fill_buffer(adev, &job->ibs[0], src_data,
dst_addr, cur_size_in_bytes);
amdgpu_vm_set_pte_pde(adev, &job->ibs[0],
dst_addr, 0,
cur_size_in_bytes >> 3, 0,
src_data);
dst_addr += cur_size_in_bytes;
byte_count -= cur_size_in_bytes;
@ -1417,32 +1641,16 @@ error_free:
#if defined(CONFIG_DEBUG_FS)
extern void amdgpu_gtt_mgr_print(struct seq_file *m, struct ttm_mem_type_manager
*man);
static int amdgpu_mm_dump_table(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *)m->private;
unsigned ttm_pl = *(int *)node->info_ent->data;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
struct drm_mm *mm = (struct drm_mm *)adev->mman.bdev.man[ttm_pl].priv;
struct ttm_bo_global *glob = adev->mman.bdev.glob;
struct ttm_mem_type_manager *man = &adev->mman.bdev.man[ttm_pl];
struct drm_printer p = drm_seq_file_printer(m);
spin_lock(&glob->lru_lock);
drm_mm_print(mm, &p);
spin_unlock(&glob->lru_lock);
switch (ttm_pl) {
case TTM_PL_VRAM:
seq_printf(m, "man size:%llu pages, ram usage:%lluMB, vis usage:%lluMB\n",
adev->mman.bdev.man[ttm_pl].size,
(u64)atomic64_read(&adev->vram_usage) >> 20,
(u64)atomic64_read(&adev->vram_vis_usage) >> 20);
break;
case TTM_PL_TT:
amdgpu_gtt_mgr_print(m, &adev->mman.bdev.man[TTM_PL_TT]);
break;
}
man->func->debug(man, &p);
return 0;
}
@ -1574,7 +1782,7 @@ static int amdgpu_ttm_debugfs_init(struct amdgpu_device *adev)
adev, &amdgpu_ttm_gtt_fops);
if (IS_ERR(ent))
return PTR_ERR(ent);
i_size_write(ent->d_inode, adev->mc.gtt_size);
i_size_write(ent->d_inode, adev->mc.gart_size);
adev->mman.gtt = ent;
#endif


@ -34,6 +34,9 @@
#define AMDGPU_PL_FLAG_GWS (TTM_PL_FLAG_PRIV << 1)
#define AMDGPU_PL_FLAG_OA (TTM_PL_FLAG_PRIV << 2)
#define AMDGPU_GTT_MAX_TRANSFER_SIZE 512
#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS 2
struct amdgpu_mman {
struct ttm_bo_global_ref bo_global_ref;
struct drm_global_reference mem_global_ref;
@ -49,6 +52,8 @@ struct amdgpu_mman {
/* buffer handling */
const struct amdgpu_buffer_funcs *buffer_funcs;
struct amdgpu_ring *buffer_funcs_ring;
struct mutex gtt_window_lock;
/* Scheduler entity for buffer moves */
struct amd_sched_entity entity;
};
@ -56,24 +61,29 @@ struct amdgpu_mman {
extern const struct ttm_mem_type_manager_func amdgpu_gtt_mgr_func;
extern const struct ttm_mem_type_manager_func amdgpu_vram_mgr_func;
bool amdgpu_gtt_mgr_is_allocated(struct ttm_mem_reg *mem);
int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
struct ttm_buffer_object *tbo,
const struct ttm_place *place,
struct ttm_mem_reg *mem);
uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man);
int amdgpu_copy_buffer(struct amdgpu_ring *ring,
uint64_t src_offset,
uint64_t dst_offset,
uint32_t byte_count,
uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man);
uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man);
int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
uint64_t dst_offset, uint32_t byte_count,
struct reservation_object *resv,
struct dma_fence **fence, bool direct_submit);
struct dma_fence **fence, bool direct_submit,
bool vm_needs_flush);
int amdgpu_fill_buffer(struct amdgpu_bo *bo,
uint32_t src_data,
uint64_t src_data,
struct reservation_object *resv,
struct dma_fence **fence);
int amdgpu_mmap(struct file *filp, struct vm_area_struct *vma);
bool amdgpu_ttm_is_bound(struct ttm_tt *ttm);
int amdgpu_ttm_bind(struct ttm_buffer_object *bo, struct ttm_mem_reg *bo_mem);
int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
#endif


@ -275,14 +275,10 @@ amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type)
else
return AMDGPU_FW_LOAD_PSP;
case CHIP_RAVEN:
#if 0
if (!load_type)
if (load_type != 2)
return AMDGPU_FW_LOAD_DIRECT;
else
return AMDGPU_FW_LOAD_PSP;
#else
return AMDGPU_FW_LOAD_DIRECT;
#endif
default:
DRM_ERROR("Unknow firmware load type\n");
}
@ -362,8 +358,6 @@ static int amdgpu_ucode_patch_jt(struct amdgpu_firmware_info *ucode,
(le32_to_cpu(header->jt_offset) * 4);
memcpy(dst_addr, src_addr, le32_to_cpu(header->jt_size) * 4);
ucode->ucode_size += le32_to_cpu(header->jt_size) * 4;
return 0;
}
@ -377,10 +371,15 @@ int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
struct amdgpu_firmware_info *ucode = NULL;
const struct common_firmware_header *header = NULL;
if (!adev->firmware.fw_size) {
dev_warn(adev->dev, "No IP firmware to load\n");
return 0;
}
err = amdgpu_bo_create(adev, adev->firmware.fw_size, PAGE_SIZE, true,
amdgpu_sriov_vf(adev) ? AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, bo);
NULL, NULL, 0, bo);
if (err) {
dev_err(adev->dev, "(%d) Firmware buffer allocate failed\n", err);
goto failed;
@ -459,6 +458,9 @@ int amdgpu_ucode_fini_bo(struct amdgpu_device *adev)
int i;
struct amdgpu_firmware_info *ucode = NULL;
if (!adev->firmware.fw_size)
return 0;
for (i = 0; i < adev->firmware.max_ucodes; i++) {
ucode = &adev->firmware.ucode[i];
if (ucode->fw) {


@ -588,6 +588,10 @@ static int amdgpu_uvd_cs_msg_decode(struct amdgpu_device *adev, uint32_t *msg,
}
break;
case 8: /* MJPEG */
min_dpb_size = 0;
break;
case 16: /* H265 */
image_size = (ALIGN(width, 16) * ALIGN(height, 16) * 3) / 2;
image_size = ALIGN(image_size, 256);
@ -1051,7 +1055,7 @@ int amdgpu_uvd_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
AMDGPU_GEM_DOMAIN_VRAM,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, &bo);
NULL, NULL, 0, &bo);
if (r)
return r;
@ -1101,7 +1105,7 @@ int amdgpu_uvd_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
AMDGPU_GEM_DOMAIN_VRAM,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, &bo);
NULL, NULL, 0, &bo);
if (r)
return r;


@ -937,9 +937,9 @@ int amdgpu_vce_ring_test_ring(struct amdgpu_ring *ring)
unsigned i;
int r, timeout = adev->usec_timeout;
/* workaround VCE ring test slow issue for sriov*/
/* skip ring test for sriov*/
if (amdgpu_sriov_vf(adev))
timeout *= 10;
return 0;
r = amdgpu_ring_alloc(ring, 16);
if (r) {


@ -209,9 +209,9 @@ static void amdgpu_vcn_idle_work_handler(struct work_struct *work)
if (fences == 0) {
if (adev->pm.dpm_enabled) {
/* might be used when with pg/cg
amdgpu_dpm_enable_uvd(adev, false);
} else {
amdgpu_asic_set_uvd_clocks(adev, 0, 0);
*/
}
} else {
schedule_delayed_work(&adev->vcn.idle_work, VCN_IDLE_TIMEOUT);
@ -223,12 +223,10 @@ void amdgpu_vcn_ring_begin_use(struct amdgpu_ring *ring)
struct amdgpu_device *adev = ring->adev;
bool set_clocks = !cancel_delayed_work_sync(&adev->vcn.idle_work);
if (set_clocks) {
if (adev->pm.dpm_enabled) {
amdgpu_dpm_enable_uvd(adev, true);
} else {
amdgpu_asic_set_uvd_clocks(adev, 53300, 40000);
}
if (set_clocks && adev->pm.dpm_enabled) {
/* might be used when with pg/cg
amdgpu_dpm_enable_uvd(adev, true);
*/
}
}
@ -361,7 +359,7 @@ static int amdgpu_vcn_dec_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
AMDGPU_GEM_DOMAIN_VRAM,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, &bo);
NULL, NULL, 0, &bo);
if (r)
return r;
@ -413,7 +411,7 @@ static int amdgpu_vcn_dec_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
AMDGPU_GEM_DOMAIN_VRAM,
AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS,
NULL, NULL, &bo);
NULL, NULL, 0, &bo);
if (r)
return r;


@ -0,0 +1,85 @@
/*
* Copyright 2017 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "amdgpu.h"
#include "amdgpu_vf_error.h"
#include "mxgpu_ai.h"
#define AMDGPU_VF_ERROR_ENTRY_SIZE 16
/* struct amdgpu_vf_error_buffer - amdgpu VF error information. */
struct amdgpu_vf_error_buffer {
int read_count;
int write_count;
uint16_t code[AMDGPU_VF_ERROR_ENTRY_SIZE];
uint16_t flags[AMDGPU_VF_ERROR_ENTRY_SIZE];
uint64_t data[AMDGPU_VF_ERROR_ENTRY_SIZE];
};
struct amdgpu_vf_error_buffer admgpu_vf_errors;
void amdgpu_vf_error_put(uint16_t sub_error_code, uint16_t error_flags, uint64_t error_data)
{
int index;
uint16_t error_code = AMDGIM_ERROR_CODE(AMDGIM_ERROR_CATEGORY_VF, sub_error_code);
index = admgpu_vf_errors.write_count % AMDGPU_VF_ERROR_ENTRY_SIZE;
admgpu_vf_errors.code [index] = error_code;
admgpu_vf_errors.flags [index] = error_flags;
admgpu_vf_errors.data [index] = error_data;
admgpu_vf_errors.write_count ++;
}
void amdgpu_vf_error_trans_all(struct amdgpu_device *adev)
{
/* u32 pf2vf_flags = 0; */
u32 data1, data2, data3;
int index;
if ((NULL == adev) || (!amdgpu_sriov_vf(adev)) || (!adev->virt.ops) || (!adev->virt.ops->trans_msg)) {
return;
}
/*
TODO: Enable these code when pv2vf_info is merged
AMDGPU_FW_VRAM_PF2VF_READ (adev, feature_flags, &pf2vf_flags);
if (!(pf2vf_flags & AMDGIM_FEATURE_ERROR_LOG_COLLECT)) {
return;
}
*/
/* The ring has wrapped; advance read_count so only the newest entries remain. */
if (admgpu_vf_errors.write_count - admgpu_vf_errors.read_count > AMDGPU_VF_ERROR_ENTRY_SIZE) {
admgpu_vf_errors.read_count = admgpu_vf_errors.write_count - AMDGPU_VF_ERROR_ENTRY_SIZE;
}
while (admgpu_vf_errors.read_count < admgpu_vf_errors.write_count) {
index = admgpu_vf_errors.read_count % AMDGPU_VF_ERROR_ENTRY_SIZE;
data1 = AMDGIM_ERROR_CODE_FLAGS_TO_MAILBOX (admgpu_vf_errors.code[index], admgpu_vf_errors.flags[index]);
data2 = admgpu_vf_errors.data[index] & 0xFFFFFFFF;
data3 = (admgpu_vf_errors.data[index] >> 32) & 0xFFFFFFFF;
adev->virt.ops->trans_msg(adev, IDH_LOG_VF_ERROR, data1, data2, data3);
admgpu_vf_errors.read_count ++;
}
}
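
The VF error log above is a fixed 16-entry ring addressed by free-running read/write counters; when the writer laps the reader, read_count is snapped forward so only the newest AMDGPU_VF_ERROR_ENTRY_SIZE records are forwarded to the host. A small userspace model of that indexing (4 entries instead of 16, arbitrary error codes):

#include <stdio.h>

#define ENTRIES 4                      /* 16 in the driver; smaller here for clarity */

static unsigned code[ENTRIES];
static unsigned read_count, write_count;

static void put(unsigned c)
{
	code[write_count % ENTRIES] = c;
	write_count++;
}

int main(void)
{
	for (unsigned c = 1; c <= 6; c++)  /* log 6 errors into a 4-entry ring */
		put(c);

	/* The writer overtook the reader: keep only the newest ENTRIES records. */
	if (write_count - read_count > ENTRIES)
		read_count = write_count - ENTRIES;

	while (read_count < write_count) {
		printf("report error %u\n", code[read_count % ENTRIES]);
		read_count++;
	}
	return 0;
}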


@ -0,0 +1,62 @@
/*
* Copyright 2017 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef __VF_ERROR_H__
#define __VF_ERROR_H__
#define AMDGIM_ERROR_CODE_FLAGS_TO_MAILBOX(c,f) (((c & 0xFFFF) << 16) | (f & 0xFFFF))
#define AMDGIM_ERROR_CODE(t,c) (((t&0xF)<<12)|(c&0xFFF))
/* Please keep enum same as AMD GIM driver */
enum AMDGIM_ERROR_VF {
AMDGIM_ERROR_VF_ATOMBIOS_INIT_FAIL = 0,
AMDGIM_ERROR_VF_NO_VBIOS,
AMDGIM_ERROR_VF_GPU_POST_ERROR,
AMDGIM_ERROR_VF_ATOMBIOS_GET_CLOCK_FAIL,
AMDGIM_ERROR_VF_FENCE_INIT_FAIL,
AMDGIM_ERROR_VF_AMDGPU_INIT_FAIL,
AMDGIM_ERROR_VF_IB_INIT_FAIL,
AMDGIM_ERROR_VF_AMDGPU_LATE_INIT_FAIL,
AMDGIM_ERROR_VF_ASIC_RESUME_FAIL,
AMDGIM_ERROR_VF_GPU_RESET_FAIL,
AMDGIM_ERROR_VF_TEST,
AMDGIM_ERROR_VF_MAX
};
enum AMDGIM_ERROR_CATEGORY {
AMDGIM_ERROR_CATEGORY_NON_USED = 0,
AMDGIM_ERROR_CATEGORY_GIM,
AMDGIM_ERROR_CATEGORY_PF,
AMDGIM_ERROR_CATEGORY_VF,
AMDGIM_ERROR_CATEGORY_VBIOS,
AMDGIM_ERROR_CATEGORY_MONITOR,
AMDGIM_ERROR_CATEGORY_MAX
};
void amdgpu_vf_error_put(uint16_t sub_error_code, uint16_t error_flags, uint64_t error_data);
void amdgpu_vf_error_trans_all (struct amdgpu_device *adev);
#endif /* __VF_ERROR_H__ */


@ -46,14 +46,14 @@ int amdgpu_allocate_static_csa(struct amdgpu_device *adev)
* address within META_DATA init package to support SRIOV gfx preemption.
*/
int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm)
int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_bo_va **bo_va)
{
int r;
struct amdgpu_bo_va *bo_va;
struct ww_acquire_ctx ticket;
struct list_head list;
struct amdgpu_bo_list_entry pd;
struct ttm_validate_buffer csa_tv;
int r;
INIT_LIST_HEAD(&list);
INIT_LIST_HEAD(&csa_tv.head);
@ -69,34 +69,33 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm)
return r;
}
bo_va = amdgpu_vm_bo_add(adev, vm, adev->virt.csa_obj);
if (!bo_va) {
*bo_va = amdgpu_vm_bo_add(adev, vm, adev->virt.csa_obj);
if (!*bo_va) {
ttm_eu_backoff_reservation(&ticket, &list);
DRM_ERROR("failed to create bo_va for static CSA\n");
return -ENOMEM;
}
r = amdgpu_vm_alloc_pts(adev, bo_va->vm, AMDGPU_CSA_VADDR,
AMDGPU_CSA_SIZE);
r = amdgpu_vm_alloc_pts(adev, (*bo_va)->base.vm, AMDGPU_CSA_VADDR,
AMDGPU_CSA_SIZE);
if (r) {
DRM_ERROR("failed to allocate pts for static CSA, err=%d\n", r);
amdgpu_vm_bo_rmv(adev, bo_va);
amdgpu_vm_bo_rmv(adev, *bo_va);
ttm_eu_backoff_reservation(&ticket, &list);
return r;
}
r = amdgpu_vm_bo_map(adev, bo_va, AMDGPU_CSA_VADDR, 0,AMDGPU_CSA_SIZE,
AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
AMDGPU_PTE_EXECUTABLE);
r = amdgpu_vm_bo_map(adev, *bo_va, AMDGPU_CSA_VADDR, 0, AMDGPU_CSA_SIZE,
AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
AMDGPU_PTE_EXECUTABLE);
if (r) {
DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
amdgpu_vm_bo_rmv(adev, bo_va);
amdgpu_vm_bo_rmv(adev, *bo_va);
ttm_eu_backoff_reservation(&ticket, &list);
return r;
}
vm->csa_bo_va = bo_va;
ttm_eu_backoff_reservation(&ticket, &list);
return 0;
}


@ -43,6 +43,7 @@ struct amdgpu_virt_ops {
int (*req_full_gpu)(struct amdgpu_device *adev, bool init);
int (*rel_full_gpu)(struct amdgpu_device *adev, bool init);
int (*reset_gpu)(struct amdgpu_device *adev);
void (*trans_msg)(struct amdgpu_device *adev, u32 req, u32 data1, u32 data2, u32 data3);
};
/* GPU virtualization */
@ -89,7 +90,8 @@ static inline bool is_virtual_machine(void)
struct amdgpu_vm;
int amdgpu_allocate_static_csa(struct amdgpu_device *adev);
int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm);
int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_bo_va **bo_va);
void amdgpu_virt_init_setting(struct amdgpu_device *adev);
uint32_t amdgpu_virt_kiq_rreg(struct amdgpu_device *adev, uint32_t reg);
void amdgpu_virt_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v);


@ -77,8 +77,6 @@ struct amdgpu_pte_update_params {
void (*func)(struct amdgpu_pte_update_params *params, uint64_t pe,
uint64_t addr, unsigned count, uint32_t incr,
uint64_t flags);
/* indicate update pt or its shadow */
bool shadow;
/* The next two are used during VM update by CPU
* DMA addresses to use for mapping
* Kernel pointer of PD/PT BO that needs to be updated
@ -161,11 +159,26 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
*/
static int amdgpu_vm_validate_level(struct amdgpu_vm_pt *parent,
int (*validate)(void *, struct amdgpu_bo *),
void *param)
void *param, bool use_cpu_for_update,
struct ttm_bo_global *glob)
{
unsigned i;
int r;
if (parent->bo->shadow) {
struct amdgpu_bo *shadow = parent->bo->shadow;
r = amdgpu_ttm_bind(&shadow->tbo, &shadow->tbo.mem);
if (r)
return r;
}
if (use_cpu_for_update) {
r = amdgpu_bo_kmap(parent->bo, NULL);
if (r)
return r;
}
if (!parent->entries)
return 0;
@ -179,11 +192,18 @@ static int amdgpu_vm_validate_level(struct amdgpu_vm_pt *parent,
if (r)
return r;
spin_lock(&glob->lru_lock);
ttm_bo_move_to_lru_tail(&entry->bo->tbo);
if (entry->bo->shadow)
ttm_bo_move_to_lru_tail(&entry->bo->shadow->tbo);
spin_unlock(&glob->lru_lock);
/*
* Recurse into the sub directory. This is harmless because we
* have only a maximum of 5 layers.
*/
r = amdgpu_vm_validate_level(entry, validate, param);
r = amdgpu_vm_validate_level(entry, validate, param,
use_cpu_for_update, glob);
if (r)
return r;
}
@ -214,54 +234,12 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
if (num_evictions == vm->last_eviction_counter)
return 0;
return amdgpu_vm_validate_level(&vm->root, validate, param);
return amdgpu_vm_validate_level(&vm->root, validate, param,
vm->use_cpu_for_update,
adev->mman.bdev.glob);
}
/**
* amdgpu_vm_move_level_in_lru - move one level of PT BOs to the LRU tail
*
* @adev: amdgpu device instance
* @vm: vm providing the BOs
*
* Move the PT BOs to the tail of the LRU.
*/
static void amdgpu_vm_move_level_in_lru(struct amdgpu_vm_pt *parent)
{
unsigned i;
if (!parent->entries)
return;
for (i = 0; i <= parent->last_entry_used; ++i) {
struct amdgpu_vm_pt *entry = &parent->entries[i];
if (!entry->bo)
continue;
ttm_bo_move_to_lru_tail(&entry->bo->tbo);
amdgpu_vm_move_level_in_lru(entry);
}
}
/**
* amdgpu_vm_move_pt_bos_in_lru - move the PT BOs to the LRU tail
*
* @adev: amdgpu device instance
* @vm: vm providing the BOs
*
* Move the PT BOs to the tail of the LRU.
*/
void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
struct amdgpu_vm *vm)
{
struct ttm_bo_global *glob = adev->mman.bdev.glob;
spin_lock(&glob->lru_lock);
amdgpu_vm_move_level_in_lru(&vm->root);
spin_unlock(&glob->lru_lock);
}
/**
* amdgpu_vm_alloc_levels - allocate the PD/PT levels
*
* @adev: amdgpu_device pointer
@ -282,6 +260,7 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device *adev,
unsigned pt_idx, from, to;
int r;
u64 flags;
uint64_t init_value = 0;
if (!parent->entries) {
unsigned num_entries = amdgpu_vm_num_entries(adev, level);
@ -315,6 +294,12 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device *adev,
flags |= (AMDGPU_GEM_CREATE_NO_CPU_ACCESS |
AMDGPU_GEM_CREATE_SHADOW);
if (vm->pte_support_ats) {
init_value = AMDGPU_PTE_SYSTEM;
if (level != adev->vm_manager.num_level - 1)
init_value |= AMDGPU_PDE_PTE;
}
/* walk over the address space and allocate the page tables */
for (pt_idx = from; pt_idx <= to; ++pt_idx) {
struct reservation_object *resv = vm->root.bo->tbo.resv;
@ -327,10 +312,18 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device *adev,
AMDGPU_GPU_PAGE_SIZE, true,
AMDGPU_GEM_DOMAIN_VRAM,
flags,
NULL, resv, &pt);
NULL, resv, init_value, &pt);
if (r)
return r;
if (vm->use_cpu_for_update) {
r = amdgpu_bo_kmap(pt, NULL);
if (r) {
amdgpu_bo_unref(&pt);
return r;
}
}
/* Keep a reference to the root directory to avoid
* freeing them up in the wrong order.
*/
@ -424,7 +417,7 @@ static int amdgpu_vm_grab_reserved_vmid_locked(struct amdgpu_vm *vm,
struct dma_fence *updates = sync->last_vm_update;
int r = 0;
struct dma_fence *flushed, *tmp;
bool needs_flush = false;
bool needs_flush = vm->use_cpu_for_update;
flushed = id->flushed_updates;
if ((amdgpu_vm_had_gpu_reset(adev, id)) ||
@ -545,11 +538,11 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
}
kfree(fences);
job->vm_needs_flush = false;
job->vm_needs_flush = vm->use_cpu_for_update;
/* Check if we can use a VMID already assigned to this VM */
list_for_each_entry_reverse(id, &id_mgr->ids_lru, list) {
struct dma_fence *flushed;
bool needs_flush = false;
bool needs_flush = vm->use_cpu_for_update;
/* Check all the prerequisites to using this VMID */
if (amdgpu_vm_had_gpu_reset(adev, id))
@ -745,7 +738,7 @@ static bool amdgpu_vm_is_large_bar(struct amdgpu_device *adev)
*
* Emit a VM flush when it is necessary.
*/
int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job, bool need_pipe_sync)
{
struct amdgpu_device *adev = ring->adev;
unsigned vmhub = ring->funcs->vmhub;
@ -767,12 +760,15 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
vm_flush_needed = true;
}
if (!vm_flush_needed && !gds_switch_needed)
if (!vm_flush_needed && !gds_switch_needed && !need_pipe_sync)
return 0;
if (ring->funcs->init_cond_exec)
patch_offset = amdgpu_ring_init_cond_exec(ring);
if (need_pipe_sync)
amdgpu_ring_emit_pipeline_sync(ring);
if (ring->funcs->emit_vm_flush && vm_flush_needed) {
struct dma_fence *fence;
@ -874,8 +870,8 @@ struct amdgpu_bo_va *amdgpu_vm_bo_find(struct amdgpu_vm *vm,
{
struct amdgpu_bo_va *bo_va;
list_for_each_entry(bo_va, &bo->va, bo_list) {
if (bo_va->vm == vm) {
list_for_each_entry(bo_va, &bo->va, base.bo_list) {
if (bo_va->base.vm == vm) {
return bo_va;
}
}
@ -981,6 +977,8 @@ static void amdgpu_vm_cpu_set_ptes(struct amdgpu_pte_update_params *params,
unsigned int i;
uint64_t value;
trace_amdgpu_vm_set_ptes(pe, addr, count, incr, flags);
for (i = 0; i < count; i++) {
value = params->pages_addr ?
amdgpu_vm_map_gart(params->pages_addr, addr) :
@ -989,19 +987,16 @@ static void amdgpu_vm_cpu_set_ptes(struct amdgpu_pte_update_params *params,
i, value, flags);
addr += incr;
}
/* Flush HDP */
mb();
amdgpu_gart_flush_gpu_tlb(params->adev, 0);
}
static int amdgpu_vm_bo_wait(struct amdgpu_device *adev, struct amdgpu_bo *bo)
static int amdgpu_vm_wait_pd(struct amdgpu_device *adev, struct amdgpu_vm *vm,
void *owner)
{
struct amdgpu_sync sync;
int r;
amdgpu_sync_create(&sync);
amdgpu_sync_resv(adev, &sync, bo->tbo.resv, AMDGPU_FENCE_OWNER_VM);
amdgpu_sync_resv(adev, &sync, vm->root.bo->tbo.resv, owner);
r = amdgpu_sync_wait(&sync, true);
amdgpu_sync_free(&sync);
@ -1042,23 +1037,14 @@ static int amdgpu_vm_update_level(struct amdgpu_device *adev,
params.adev = adev;
shadow = parent->bo->shadow;
WARN_ON(vm->use_cpu_for_update && shadow);
if (vm->use_cpu_for_update && !shadow) {
r = amdgpu_bo_kmap(parent->bo, (void **)&pd_addr);
if (r)
if (vm->use_cpu_for_update) {
pd_addr = (unsigned long)amdgpu_bo_kptr(parent->bo);
r = amdgpu_vm_wait_pd(adev, vm, AMDGPU_FENCE_OWNER_VM);
if (unlikely(r))
return r;
r = amdgpu_vm_bo_wait(adev, parent->bo);
if (unlikely(r)) {
amdgpu_bo_kunmap(parent->bo);
return r;
}
params.func = amdgpu_vm_cpu_set_ptes;
} else {
if (shadow) {
r = amdgpu_ttm_bind(&shadow->tbo, &shadow->tbo.mem);
if (r)
return r;
}
ring = container_of(vm->entity.sched, struct amdgpu_ring,
sched);
@ -1094,21 +1080,14 @@ static int amdgpu_vm_update_level(struct amdgpu_device *adev,
if (bo == NULL)
continue;
if (bo->shadow) {
struct amdgpu_bo *pt_shadow = bo->shadow;
r = amdgpu_ttm_bind(&pt_shadow->tbo,
&pt_shadow->tbo.mem);
if (r)
return r;
}
pt = amdgpu_bo_gpu_offset(bo);
pt = amdgpu_gart_get_vm_pde(adev, pt);
if (parent->entries[pt_idx].addr == pt)
/* Don't update huge pages here */
if ((parent->entries[pt_idx].addr & AMDGPU_PDE_PTE) ||
parent->entries[pt_idx].addr == (pt | AMDGPU_PTE_VALID))
continue;
parent->entries[pt_idx].addr = pt;
parent->entries[pt_idx].addr = pt | AMDGPU_PTE_VALID;
pde = pd_addr + pt_idx * 8;
if (((last_pde + 8 * count) != pde) ||
@ -1146,28 +1125,29 @@ static int amdgpu_vm_update_level(struct amdgpu_device *adev,
count, incr, AMDGPU_PTE_VALID);
}
if (params.func == amdgpu_vm_cpu_set_ptes)
amdgpu_bo_kunmap(parent->bo);
else if (params.ib->length_dw == 0) {
amdgpu_job_free(job);
} else {
amdgpu_ring_pad_ib(ring, params.ib);
amdgpu_sync_resv(adev, &job->sync, parent->bo->tbo.resv,
AMDGPU_FENCE_OWNER_VM);
if (shadow)
amdgpu_sync_resv(adev, &job->sync, shadow->tbo.resv,
if (!vm->use_cpu_for_update) {
if (params.ib->length_dw == 0) {
amdgpu_job_free(job);
} else {
amdgpu_ring_pad_ib(ring, params.ib);
amdgpu_sync_resv(adev, &job->sync, parent->bo->tbo.resv,
AMDGPU_FENCE_OWNER_VM);
if (shadow)
amdgpu_sync_resv(adev, &job->sync,
shadow->tbo.resv,
AMDGPU_FENCE_OWNER_VM);
WARN_ON(params.ib->length_dw > ndw);
r = amdgpu_job_submit(job, ring, &vm->entity,
AMDGPU_FENCE_OWNER_VM, &fence);
if (r)
goto error_free;
WARN_ON(params.ib->length_dw > ndw);
r = amdgpu_job_submit(job, ring, &vm->entity,
AMDGPU_FENCE_OWNER_VM, &fence);
if (r)
goto error_free;
amdgpu_bo_fence(parent->bo, fence, true);
dma_fence_put(vm->last_dir_update);
vm->last_dir_update = dma_fence_get(fence);
dma_fence_put(fence);
amdgpu_bo_fence(parent->bo, fence, true);
dma_fence_put(vm->last_dir_update);
vm->last_dir_update = dma_fence_get(fence);
dma_fence_put(fence);
}
}
/*
* Recurse into the subdirectories. This recursion is harmless because
@ -1235,33 +1215,98 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
if (r)
amdgpu_vm_invalidate_level(&vm->root);
if (vm->use_cpu_for_update) {
/* Flush HDP */
mb();
amdgpu_gart_flush_gpu_tlb(adev, 0);
}
return r;
}
/**
* amdgpu_vm_find_pt - find the page table for an address
* amdgpu_vm_find_entry - find the entry for an address
*
* @p: see amdgpu_pte_update_params definition
* @addr: virtual address in question
* @entry: resulting entry or NULL
* @parent: parent entry
*
* Find the page table BO for a virtual address, return NULL when none found.
* Find the vm_pt entry and its parent for the given address.
*/
static struct amdgpu_bo *amdgpu_vm_get_pt(struct amdgpu_pte_update_params *p,
uint64_t addr)
void amdgpu_vm_get_entry(struct amdgpu_pte_update_params *p, uint64_t addr,
struct amdgpu_vm_pt **entry,
struct amdgpu_vm_pt **parent)
{
struct amdgpu_vm_pt *entry = &p->vm->root;
unsigned idx, level = p->adev->vm_manager.num_level;
while (entry->entries) {
*parent = NULL;
*entry = &p->vm->root;
while ((*entry)->entries) {
idx = addr >> (p->adev->vm_manager.block_size * level--);
idx %= amdgpu_bo_size(entry->bo) / 8;
entry = &entry->entries[idx];
idx %= amdgpu_bo_size((*entry)->bo) / 8;
*parent = *entry;
*entry = &(*entry)->entries[idx];
}
if (level)
return NULL;
*entry = NULL;
}
return entry->bo;
/**
* amdgpu_vm_handle_huge_pages - handle updating the PD with huge pages
*
* @p: see amdgpu_pte_update_params definition
* @entry: vm_pt entry to check
* @parent: parent entry
* @nptes: number of PTEs updated with this operation
* @dst: destination address where the PTEs should point to
* @flags: access flags for the PTEs
*
* Check if we can update the PD with a huge page.
*/
static void amdgpu_vm_handle_huge_pages(struct amdgpu_pte_update_params *p,
struct amdgpu_vm_pt *entry,
struct amdgpu_vm_pt *parent,
unsigned nptes, uint64_t dst,
uint64_t flags)
{
bool use_cpu_update = (p->func == amdgpu_vm_cpu_set_ptes);
uint64_t pd_addr, pde;
/* In the case of a mixed PT the PDE must point to it */
if (p->adev->asic_type < CHIP_VEGA10 ||
nptes != AMDGPU_VM_PTE_COUNT(p->adev) ||
p->func == amdgpu_vm_do_copy_ptes ||
!(flags & AMDGPU_PTE_VALID)) {
dst = amdgpu_bo_gpu_offset(entry->bo);
dst = amdgpu_gart_get_vm_pde(p->adev, dst);
flags = AMDGPU_PTE_VALID;
} else {
/* Set the huge page flag to stop scanning at this PDE */
flags |= AMDGPU_PDE_PTE;
}
if (entry->addr == (dst | flags))
return;
entry->addr = (dst | flags);
if (use_cpu_update) {
pd_addr = (unsigned long)amdgpu_bo_kptr(parent->bo);
pde = pd_addr + (entry - parent->entries) * 8;
amdgpu_vm_cpu_set_ptes(p, pde, dst, 1, 0, flags);
} else {
if (parent->bo->shadow) {
pd_addr = amdgpu_bo_gpu_offset(parent->bo->shadow);
pde = pd_addr + (entry - parent->entries) * 8;
amdgpu_vm_do_set_ptes(p, pde, dst, 1, 0, flags);
}
pd_addr = amdgpu_bo_gpu_offset(parent->bo);
pde = pd_addr + (entry - parent->entries) * 8;
amdgpu_vm_do_set_ptes(p, pde, dst, 1, 0, flags);
}
}
/**
@ -1287,49 +1332,44 @@ static int amdgpu_vm_update_ptes(struct amdgpu_pte_update_params *params,
uint64_t addr, pe_start;
struct amdgpu_bo *pt;
unsigned nptes;
int r;
bool use_cpu_update = (params->func == amdgpu_vm_cpu_set_ptes);
/* walk over the address space and update the page tables */
for (addr = start; addr < end; addr += nptes) {
pt = amdgpu_vm_get_pt(params, addr);
if (!pt) {
pr_err("PT not found, aborting update_ptes\n");
return -EINVAL;
}
for (addr = start; addr < end; addr += nptes,
dst += nptes * AMDGPU_GPU_PAGE_SIZE) {
struct amdgpu_vm_pt *entry, *parent;
if (params->shadow) {
if (WARN_ONCE(use_cpu_update,
"CPU VM update doesn't suuport shadow pages"))
return 0;
if (!pt->shadow)
return 0;
pt = pt->shadow;
}
amdgpu_vm_get_entry(params, addr, &entry, &parent);
if (!entry)
return -ENOENT;
if ((addr & ~mask) == (end & ~mask))
nptes = end - addr;
else
nptes = AMDGPU_VM_PTE_COUNT(adev) - (addr & mask);
amdgpu_vm_handle_huge_pages(params, entry, parent,
nptes, dst, flags);
/* We don't need to update PTEs for huge pages */
if (entry->addr & AMDGPU_PDE_PTE)
continue;
pt = entry->bo;
if (use_cpu_update) {
r = amdgpu_bo_kmap(pt, (void *)&pe_start);
if (r)
return r;
} else
pe_start = (unsigned long)amdgpu_bo_kptr(pt);
} else {
if (pt->shadow) {
pe_start = amdgpu_bo_gpu_offset(pt->shadow);
pe_start += (addr & mask) * 8;
params->func(params, pe_start, dst, nptes,
AMDGPU_GPU_PAGE_SIZE, flags);
}
pe_start = amdgpu_bo_gpu_offset(pt);
}
pe_start += (addr & mask) * 8;
params->func(params, pe_start, dst, nptes,
AMDGPU_GPU_PAGE_SIZE, flags);
dst += nptes * AMDGPU_GPU_PAGE_SIZE;
if (use_cpu_update)
amdgpu_bo_kunmap(pt);
}
return 0;
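Worked example for the huge-page path above (numbers assumed): on Vega10 and later with the default 9-bit block size, AMDGPU_VM_PTE_COUNT is 512, so when a single walk step covers exactly 512 PTEs whose destination is valid and not sourced from a copy, amdgpu_vm_handle_huge_pages() writes the destination straight into the PDE with AMDGPU_PDE_PTE set and the per-PTE writes for that range are skipped by the continue above.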
@ -1370,10 +1410,9 @@ static int amdgpu_vm_frag_ptes(struct amdgpu_pte_update_params *params,
* Userspace can support this by aligning virtual base address and
* allocation size to the fragment size.
*/
/* SI and newer are optimized for 64KB */
uint64_t frag_flags = AMDGPU_PTE_FRAG(AMDGPU_LOG2_PAGES_PER_FRAG);
uint64_t frag_align = 1 << AMDGPU_LOG2_PAGES_PER_FRAG;
unsigned pages_per_frag = params->adev->vm_manager.fragment_size;
uint64_t frag_flags = AMDGPU_PTE_FRAG(pages_per_frag);
uint64_t frag_align = 1 << pages_per_frag;
uint64_t frag_start = ALIGN(start, frag_align);
uint64_t frag_end = end & ~(frag_align - 1);
@ -1445,6 +1484,10 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
params.vm = vm;
params.src = src;
/* sync to everything on unmapping */
if (!(flags & AMDGPU_PTE_VALID))
owner = AMDGPU_FENCE_OWNER_UNDEFINED;
if (vm->use_cpu_for_update) {
/* params.src is used as a flag to indicate system memory */
if (pages_addr)
@ -1453,23 +1496,18 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
/* Wait for PT BOs to be free. PTs share the same resv. object
* as the root PD BO
*/
r = amdgpu_vm_bo_wait(adev, vm->root.bo);
r = amdgpu_vm_wait_pd(adev, vm, owner);
if (unlikely(r))
return r;
params.func = amdgpu_vm_cpu_set_ptes;
params.pages_addr = pages_addr;
params.shadow = false;
return amdgpu_vm_frag_ptes(&params, start, last + 1,
addr, flags);
}
ring = container_of(vm->entity.sched, struct amdgpu_ring, sched);
/* sync to everything on unmapping */
if (!(flags & AMDGPU_PTE_VALID))
owner = AMDGPU_FENCE_OWNER_UNDEFINED;
nptes = last - start + 1;
/*
@ -1481,6 +1519,9 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
/* padding, etc. */
ndw = 64;
/* one PDE write for each huge page */
ndw += ((nptes >> adev->vm_manager.block_size) + 1) * 6;
if (src) {
/* only copy commands needed */
ndw += ncmds * 7;
@ -1542,11 +1583,6 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
if (r)
goto error_free;
params.shadow = true;
r = amdgpu_vm_frag_ptes(&params, start, last + 1, addr, flags);
if (r)
goto error_free;
params.shadow = false;
r = amdgpu_vm_frag_ptes(&params, start, last + 1, addr, flags);
if (r)
goto error_free;
@ -1565,6 +1601,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
error_free:
amdgpu_job_free(job);
amdgpu_vm_invalidate_level(&vm->root);
return r;
}
@ -1687,7 +1724,8 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
struct amdgpu_bo_va *bo_va,
bool clear)
{
struct amdgpu_vm *vm = bo_va->vm;
struct amdgpu_bo *bo = bo_va->base.bo;
struct amdgpu_vm *vm = bo_va->base.vm;
struct amdgpu_bo_va_mapping *mapping;
dma_addr_t *pages_addr = NULL;
uint64_t gtt_flags, flags;
@ -1696,27 +1734,27 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
struct dma_fence *exclusive;
int r;
if (clear || !bo_va->bo) {
if (clear || !bo_va->base.bo) {
mem = NULL;
nodes = NULL;
exclusive = NULL;
} else {
struct ttm_dma_tt *ttm;
mem = &bo_va->bo->tbo.mem;
mem = &bo_va->base.bo->tbo.mem;
nodes = mem->mm_node;
if (mem->mem_type == TTM_PL_TT) {
ttm = container_of(bo_va->bo->tbo.ttm, struct
ttm_dma_tt, ttm);
ttm = container_of(bo_va->base.bo->tbo.ttm,
struct ttm_dma_tt, ttm);
pages_addr = ttm->dma_address;
}
exclusive = reservation_object_get_excl(bo_va->bo->tbo.resv);
exclusive = reservation_object_get_excl(bo->tbo.resv);
}
if (bo_va->bo) {
flags = amdgpu_ttm_tt_pte_flags(adev, bo_va->bo->tbo.ttm, mem);
gtt_flags = (amdgpu_ttm_is_bound(bo_va->bo->tbo.ttm) &&
adev == amdgpu_ttm_adev(bo_va->bo->tbo.bdev)) ?
if (bo) {
flags = amdgpu_ttm_tt_pte_flags(adev, bo->tbo.ttm, mem);
gtt_flags = (amdgpu_ttm_is_bound(bo->tbo.ttm) &&
adev == amdgpu_ttm_adev(bo->tbo.bdev)) ?
flags : 0;
} else {
flags = 0x0;
@ -1724,7 +1762,7 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
}
spin_lock(&vm->status_lock);
if (!list_empty(&bo_va->vm_status))
if (!list_empty(&bo_va->base.vm_status))
list_splice_init(&bo_va->valids, &bo_va->invalids);
spin_unlock(&vm->status_lock);
@ -1747,11 +1785,17 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
spin_lock(&vm->status_lock);
list_splice_init(&bo_va->invalids, &bo_va->valids);
list_del_init(&bo_va->vm_status);
list_del_init(&bo_va->base.vm_status);
if (clear)
list_add(&bo_va->vm_status, &vm->cleared);
list_add(&bo_va->base.vm_status, &vm->cleared);
spin_unlock(&vm->status_lock);
if (vm->use_cpu_for_update) {
/* Flush HDP */
mb();
amdgpu_gart_flush_gpu_tlb(adev, 0);
}
return 0;
}
@ -1905,15 +1949,19 @@ int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
struct amdgpu_bo_va_mapping *mapping;
struct dma_fence *f = NULL;
int r;
uint64_t init_pte_value = 0;
while (!list_empty(&vm->freed)) {
mapping = list_first_entry(&vm->freed,
struct amdgpu_bo_va_mapping, list);
list_del(&mapping->list);
if (vm->pte_support_ats)
init_pte_value = AMDGPU_PTE_SYSTEM;
r = amdgpu_vm_bo_update_mapping(adev, NULL, 0, NULL, vm,
mapping->start, mapping->last,
0, 0, &f);
init_pte_value, 0, &f);
amdgpu_vm_free_mapping(adev, vm, mapping, f);
if (r) {
dma_fence_put(f);
@ -1933,26 +1981,26 @@ int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
}
/**
* amdgpu_vm_clear_invalids - clear invalidated BOs in the PT
* amdgpu_vm_clear_moved - clear moved BOs in the PT
*
* @adev: amdgpu_device pointer
* @vm: requested vm
*
* Make sure all invalidated BOs are cleared in the PT.
* Make sure all moved BOs are cleared in the PT.
* Returns 0 for success.
*
* PTs have to be reserved and mutex must be locked!
*/
int amdgpu_vm_clear_invalids(struct amdgpu_device *adev,
struct amdgpu_vm *vm, struct amdgpu_sync *sync)
int amdgpu_vm_clear_moved(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_sync *sync)
{
struct amdgpu_bo_va *bo_va = NULL;
int r = 0;
spin_lock(&vm->status_lock);
while (!list_empty(&vm->invalidated)) {
bo_va = list_first_entry(&vm->invalidated,
struct amdgpu_bo_va, vm_status);
while (!list_empty(&vm->moved)) {
bo_va = list_first_entry(&vm->moved,
struct amdgpu_bo_va, base.vm_status);
spin_unlock(&vm->status_lock);
r = amdgpu_vm_bo_update(adev, bo_va, true);
@ -1992,16 +2040,17 @@ struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev,
if (bo_va == NULL) {
return NULL;
}
bo_va->vm = vm;
bo_va->bo = bo;
bo_va->base.vm = vm;
bo_va->base.bo = bo;
INIT_LIST_HEAD(&bo_va->base.bo_list);
INIT_LIST_HEAD(&bo_va->base.vm_status);
bo_va->ref_count = 1;
INIT_LIST_HEAD(&bo_va->bo_list);
INIT_LIST_HEAD(&bo_va->valids);
INIT_LIST_HEAD(&bo_va->invalids);
INIT_LIST_HEAD(&bo_va->vm_status);
if (bo)
list_add_tail(&bo_va->bo_list, &bo->va);
list_add_tail(&bo_va->base.bo_list, &bo->va);
return bo_va;
}
@ -2026,7 +2075,8 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
uint64_t size, uint64_t flags)
{
struct amdgpu_bo_va_mapping *mapping, *tmp;
struct amdgpu_vm *vm = bo_va->vm;
struct amdgpu_bo *bo = bo_va->base.bo;
struct amdgpu_vm *vm = bo_va->base.vm;
uint64_t eaddr;
/* validate the parameters */
@ -2037,7 +2087,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
/* make sure object fit at this offset */
eaddr = saddr + size - 1;
if (saddr >= eaddr ||
(bo_va->bo && offset + size > amdgpu_bo_size(bo_va->bo)))
(bo && offset + size > amdgpu_bo_size(bo)))
return -EINVAL;
saddr /= AMDGPU_GPU_PAGE_SIZE;
@ -2047,7 +2097,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
if (tmp) {
/* bo and tmp overlap, invalid addr */
dev_err(adev->dev, "bo %p va 0x%010Lx-0x%010Lx conflict with "
"0x%010Lx-0x%010Lx\n", bo_va->bo, saddr, eaddr,
"0x%010Lx-0x%010Lx\n", bo, saddr, eaddr,
tmp->start, tmp->last + 1);
return -EINVAL;
}
@ -2092,7 +2142,8 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
uint64_t size, uint64_t flags)
{
struct amdgpu_bo_va_mapping *mapping;
struct amdgpu_vm *vm = bo_va->vm;
struct amdgpu_bo *bo = bo_va->base.bo;
struct amdgpu_vm *vm = bo_va->base.vm;
uint64_t eaddr;
int r;
@ -2104,7 +2155,7 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
/* make sure object fit at this offset */
eaddr = saddr + size - 1;
if (saddr >= eaddr ||
(bo_va->bo && offset + size > amdgpu_bo_size(bo_va->bo)))
(bo && offset + size > amdgpu_bo_size(bo)))
return -EINVAL;
/* Allocate all the needed memory */
@ -2112,7 +2163,7 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
if (!mapping)
return -ENOMEM;
r = amdgpu_vm_bo_clear_mappings(adev, bo_va->vm, saddr, size);
r = amdgpu_vm_bo_clear_mappings(adev, bo_va->base.vm, saddr, size);
if (r) {
kfree(mapping);
return r;
@ -2152,7 +2203,7 @@ int amdgpu_vm_bo_unmap(struct amdgpu_device *adev,
uint64_t saddr)
{
struct amdgpu_bo_va_mapping *mapping;
struct amdgpu_vm *vm = bo_va->vm;
struct amdgpu_vm *vm = bo_va->base.vm;
bool valid = true;
saddr /= AMDGPU_GPU_PAGE_SIZE;
@ -2300,12 +2351,12 @@ void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
struct amdgpu_bo_va *bo_va)
{
struct amdgpu_bo_va_mapping *mapping, *next;
struct amdgpu_vm *vm = bo_va->vm;
struct amdgpu_vm *vm = bo_va->base.vm;
list_del(&bo_va->bo_list);
list_del(&bo_va->base.bo_list);
spin_lock(&vm->status_lock);
list_del(&bo_va->vm_status);
list_del(&bo_va->base.vm_status);
spin_unlock(&vm->status_lock);
list_for_each_entry_safe(mapping, next, &bo_va->valids, list) {
@ -2337,13 +2388,14 @@ void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
void amdgpu_vm_bo_invalidate(struct amdgpu_device *adev,
struct amdgpu_bo *bo)
{
struct amdgpu_bo_va *bo_va;
struct amdgpu_vm_bo_base *bo_base;
list_for_each_entry(bo_va, &bo->va, bo_list) {
spin_lock(&bo_va->vm->status_lock);
if (list_empty(&bo_va->vm_status))
list_add(&bo_va->vm_status, &bo_va->vm->invalidated);
spin_unlock(&bo_va->vm->status_lock);
list_for_each_entry(bo_base, &bo->va, bo_list) {
spin_lock(&bo_base->vm->status_lock);
if (list_empty(&bo_base->vm_status))
list_add(&bo_base->vm_status,
&bo_base->vm->moved);
spin_unlock(&bo_base->vm->status_lock);
}
}
@ -2361,12 +2413,26 @@ static uint32_t amdgpu_vm_get_block_size(uint64_t vm_size)
}
/**
* amdgpu_vm_adjust_size - adjust vm size and block size
* amdgpu_vm_set_fragment_size - adjust fragment size in PTE
*
* @adev: amdgpu_device pointer
* @fragment_size_default: the default fragment size if it's set auto
*/
void amdgpu_vm_set_fragment_size(struct amdgpu_device *adev, uint32_t fragment_size_default)
{
if (amdgpu_vm_fragment_size == -1)
adev->vm_manager.fragment_size = fragment_size_default;
else
adev->vm_manager.fragment_size = amdgpu_vm_fragment_size;
}
/**
* amdgpu_vm_adjust_size - adjust vm size, block size and fragment size
*
* @adev: amdgpu_device pointer
* @vm_size: the default vm size if it's set auto
*/
void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size)
void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size, uint32_t fragment_size_default)
{
/* adjust vm size firstly */
if (amdgpu_vm_size == -1)
@ -2381,8 +2447,11 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size)
else
adev->vm_manager.block_size = amdgpu_vm_block_size;
DRM_INFO("vm size is %llu GB, block size is %u-bit\n",
adev->vm_manager.vm_size, adev->vm_manager.block_size);
amdgpu_vm_set_fragment_size(adev, fragment_size_default);
DRM_INFO("vm size is %llu GB, block size is %u-bit, fragment size is %u-bit\n",
adev->vm_manager.vm_size, adev->vm_manager.block_size,
adev->vm_manager.fragment_size);
}
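Worked example (value assumed): if fragment_size resolves to 9, amdgpu_vm_frag_ptes() computes frag_align = 1 << 9 = 512 GPU pages, i.e. 2 MiB with 4 KiB pages, so only mappings whose virtual start and end are 2 MiB aligned get the AMDGPU_PTE_FRAG(9) optimisation.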
/**
@ -2404,13 +2473,14 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amd_sched_rq *rq;
int r, i;
u64 flags;
uint64_t init_pde_value = 0;
vm->va = RB_ROOT;
vm->client_id = atomic64_inc_return(&adev->vm_manager.client_counter);
for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
vm->reserved_vmid[i] = NULL;
spin_lock_init(&vm->status_lock);
INIT_LIST_HEAD(&vm->invalidated);
INIT_LIST_HEAD(&vm->moved);
INIT_LIST_HEAD(&vm->cleared);
INIT_LIST_HEAD(&vm->freed);
@ -2425,10 +2495,17 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
if (r)
return r;
if (vm_context == AMDGPU_VM_CONTEXT_COMPUTE)
vm->pte_support_ats = false;
if (vm_context == AMDGPU_VM_CONTEXT_COMPUTE) {
vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
AMDGPU_VM_USE_CPU_FOR_COMPUTE);
else
if (adev->asic_type == CHIP_RAVEN) {
vm->pte_support_ats = true;
init_pde_value = AMDGPU_PTE_SYSTEM | AMDGPU_PDE_PTE;
}
} else
vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
AMDGPU_VM_USE_CPU_FOR_GFX);
DRM_DEBUG_DRIVER("VM update mode is %s\n",
@ -2448,7 +2525,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
r = amdgpu_bo_create(adev, amdgpu_vm_bo_size(adev, 0), align, true,
AMDGPU_GEM_DOMAIN_VRAM,
flags,
NULL, NULL, &vm->root.bo);
NULL, NULL, init_pde_value, &vm->root.bo);
if (r)
goto error_free_sched_entity;
@ -2457,6 +2534,13 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
goto error_free_root;
vm->last_eviction_counter = atomic64_read(&adev->num_evictions);
if (vm->use_cpu_for_update) {
r = amdgpu_bo_kmap(vm->root.bo, NULL);
if (r)
goto error_free_root;
}
amdgpu_bo_unreserve(vm->root.bo);
return 0;


@ -50,9 +50,6 @@ struct amdgpu_bo_list_entry;
/* PTBs (Page Table Blocks) need to be aligned to 32K */
#define AMDGPU_VM_PTB_ALIGN_SIZE 32768
/* LOG2 number of continuous pages for the fragment field */
#define AMDGPU_LOG2_PAGES_PER_FRAG 4
#define AMDGPU_PTE_VALID (1ULL << 0)
#define AMDGPU_PTE_SYSTEM (1ULL << 1)
#define AMDGPU_PTE_SNOOPED (1ULL << 2)
@ -68,6 +65,9 @@ struct amdgpu_bo_list_entry;
/* TILED for VEGA10, reserved for older ASICs */
#define AMDGPU_PTE_PRT (1ULL << 51)
/* PDE is handled as PTE for VEGA10 */
#define AMDGPU_PDE_PTE (1ULL << 54)
/* VEGA10 only */
#define AMDGPU_PTE_MTYPE(a) ((uint64_t)a << 57)
#define AMDGPU_PTE_MTYPE_MASK AMDGPU_PTE_MTYPE(3ULL)
@ -94,6 +94,18 @@ struct amdgpu_bo_list_entry;
#define AMDGPU_VM_USE_CPU_FOR_GFX (1 << 0)
#define AMDGPU_VM_USE_CPU_FOR_COMPUTE (1 << 1)
/* base structure for tracking BO usage in a VM */
struct amdgpu_vm_bo_base {
/* constant after initialization */
struct amdgpu_vm *vm;
struct amdgpu_bo *bo;
/* protected by bo being reserved */
struct list_head bo_list;
/* protected by spinlock */
struct list_head vm_status;
};
struct amdgpu_vm_pt {
struct amdgpu_bo *bo;
@ -112,7 +124,7 @@ struct amdgpu_vm {
spinlock_t status_lock;
/* BOs moved, but not yet updated in the PT */
struct list_head invalidated;
struct list_head moved;
/* BOs cleared in the PT because of a move */
struct list_head cleared;
@ -135,11 +147,12 @@ struct amdgpu_vm {
u64 client_id;
/* dedicated to vm */
struct amdgpu_vm_id *reserved_vmid[AMDGPU_MAX_VMHUBS];
/* each VM will map on CSA */
struct amdgpu_bo_va *csa_bo_va;
/* Flag to indicate if VM tables are updated by CPU or GPU (SDMA) */
bool use_cpu_for_update;
/* Flag to indicate ATS support from PTE for GFX9 */
bool pte_support_ats;
};
struct amdgpu_vm_id {
@ -182,6 +195,7 @@ struct amdgpu_vm_manager {
uint32_t num_level;
uint64_t vm_size;
uint32_t block_size;
uint32_t fragment_size;
/* vram base address for page table entry */
u64 vram_base_offset;
/* vm pte handling */
@ -214,15 +228,13 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
int (*callback)(void *p, struct amdgpu_bo *bo),
void *param);
void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
struct amdgpu_vm *vm);
int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
struct amdgpu_vm *vm,
uint64_t saddr, uint64_t size);
int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
struct amdgpu_sync *sync, struct dma_fence *fence,
struct amdgpu_job *job);
int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job);
int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job, bool need_pipe_sync);
void amdgpu_vm_reset_id(struct amdgpu_device *adev, unsigned vmhub,
unsigned vmid);
void amdgpu_vm_reset_all_ids(struct amdgpu_device *adev);
@ -231,8 +243,8 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
struct amdgpu_vm *vm,
struct dma_fence **fence);
int amdgpu_vm_clear_invalids(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_sync *sync);
int amdgpu_vm_clear_moved(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_sync *sync);
int amdgpu_vm_bo_update(struct amdgpu_device *adev,
struct amdgpu_bo_va *bo_va,
bool clear);
@ -259,7 +271,10 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
uint64_t saddr, uint64_t size);
void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
struct amdgpu_bo_va *bo_va);
void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size);
void amdgpu_vm_set_fragment_size(struct amdgpu_device *adev,
uint32_t fragment_size_default);
void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint64_t vm_size,
uint32_t fragment_size_default);
int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp);
bool amdgpu_vm_need_pipeline_sync(struct amdgpu_ring *ring,
struct amdgpu_job *job);
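A minimal call-site sketch for the changed amdgpu_vm_flush() signature; the wrapper below is an assumption, only amdgpu_vm_flush() and amdgpu_vm_need_pipeline_sync() are declared above:

/* Sketch: the caller now decides whether an explicit pipeline sync is
 * needed before the flush; my_submit_prep() is an assumed helper name. */
static int my_submit_prep(struct amdgpu_ring *ring, struct amdgpu_job *job)
{
	bool need_pipe_sync = amdgpu_vm_need_pipeline_sync(ring, job);

	return amdgpu_vm_flush(ring, job, need_pipe_sync);
}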


@ -28,6 +28,8 @@
struct amdgpu_vram_mgr {
struct drm_mm mm;
spinlock_t lock;
atomic64_t usage;
atomic64_t vis_usage;
};
/**
@ -78,6 +80,27 @@ static int amdgpu_vram_mgr_fini(struct ttm_mem_type_manager *man)
return 0;
}
/**
* amdgpu_vram_mgr_vis_size - Calculate visible node size
*
* @adev: amdgpu device structure
* @node: MM node structure
*
* Calculate how many bytes of the MM node are inside visible VRAM
*/
static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
struct drm_mm_node *node)
{
uint64_t start = node->start << PAGE_SHIFT;
uint64_t end = (node->size + node->start) << PAGE_SHIFT;
if (start >= adev->mc.visible_vram_size)
return 0;
return (end > adev->mc.visible_vram_size ?
adev->mc.visible_vram_size : end) - start;
}
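Worked example (sizes assumed): with visible_vram_size = 256 MiB, a node spanning 192 MiB to 320 MiB contributes 64 MiB to the visible usage, a node entirely below 256 MiB contributes its full size, and a node starting at or above 256 MiB contributes nothing.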
/**
* amdgpu_vram_mgr_new - allocate new ranges
*
@ -93,11 +116,13 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
const struct ttm_place *place,
struct ttm_mem_reg *mem)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
struct amdgpu_vram_mgr *mgr = man->priv;
struct drm_mm *mm = &mgr->mm;
struct drm_mm_node *nodes;
enum drm_mm_insert_mode mode;
unsigned long lpfn, num_nodes, pages_per_node, pages_left;
uint64_t usage = 0, vis_usage = 0;
unsigned i;
int r;
@ -142,6 +167,9 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
if (unlikely(r))
goto error;
usage += nodes[i].size << PAGE_SHIFT;
vis_usage += amdgpu_vram_mgr_vis_size(adev, &nodes[i]);
/* Calculate a virtual BO start address to easily check if
* everything is CPU accessible.
*/
@ -155,6 +183,9 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
}
spin_unlock(&mgr->lock);
atomic64_add(usage, &mgr->usage);
atomic64_add(vis_usage, &mgr->vis_usage);
mem->mm_node = nodes;
return 0;
@ -181,8 +212,10 @@ error:
static void amdgpu_vram_mgr_del(struct ttm_mem_type_manager *man,
struct ttm_mem_reg *mem)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
struct amdgpu_vram_mgr *mgr = man->priv;
struct drm_mm_node *nodes = mem->mm_node;
uint64_t usage = 0, vis_usage = 0;
unsigned pages = mem->num_pages;
if (!mem->mm_node)
@ -192,31 +225,67 @@ static void amdgpu_vram_mgr_del(struct ttm_mem_type_manager *man,
while (pages) {
pages -= nodes->size;
drm_mm_remove_node(nodes);
usage += nodes->size << PAGE_SHIFT;
vis_usage += amdgpu_vram_mgr_vis_size(adev, nodes);
++nodes;
}
spin_unlock(&mgr->lock);
atomic64_sub(usage, &mgr->usage);
atomic64_sub(vis_usage, &mgr->vis_usage);
kfree(mem->mm_node);
mem->mm_node = NULL;
}
/**
* amdgpu_vram_mgr_usage - how many bytes are used in this domain
*
* @man: TTM memory type manager
*
* Returns how many bytes are used in this domain.
*/
uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man)
{
struct amdgpu_vram_mgr *mgr = man->priv;
return atomic64_read(&mgr->usage);
}
/**
* amdgpu_vram_mgr_vis_usage - how many bytes are used in the visible part
*
* @man: TTM memory type manager
*
* Returns how many bytes are used in the visible part of VRAM
*/
uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man)
{
struct amdgpu_vram_mgr *mgr = man->priv;
return atomic64_read(&mgr->vis_usage);
}
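A hedged sketch of how these accessors could be read back elsewhere in the driver, for example when answering a memory-usage query; the wrapper is an assumption, only the two accessors come from this patch:

/* Sketch: pull the counters from the TTM manager backing the VRAM domain.
 * my_fill_vram_stats() is an assumed helper name. */
static void my_fill_vram_stats(struct amdgpu_device *adev,
			       uint64_t *used, uint64_t *vis_used)
{
	struct ttm_mem_type_manager *man = &adev->mman.bdev.man[TTM_PL_VRAM];

	*used = amdgpu_vram_mgr_usage(man);
	*vis_used = amdgpu_vram_mgr_vis_usage(man);
}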
/**
* amdgpu_vram_mgr_debug - dump VRAM table
*
* @man: TTM memory type manager
* @prefix: text prefix
* @printer: DRM printer to use
*
* Dump the table content using printk.
*/
static void amdgpu_vram_mgr_debug(struct ttm_mem_type_manager *man,
const char *prefix)
struct drm_printer *printer)
{
struct amdgpu_vram_mgr *mgr = man->priv;
struct drm_printer p = drm_debug_printer(prefix);
spin_lock(&mgr->lock);
drm_mm_print(&mgr->mm, &p);
drm_mm_print(&mgr->mm, printer);
spin_unlock(&mgr->lock);
drm_printf(printer, "man size:%llu pages, ram usage:%lluMB, vis usage:%lluMB\n",
man->size, amdgpu_vram_mgr_usage(man) >> 20,
amdgpu_vram_mgr_vis_usage(man) >> 20);
}
const struct ttm_mem_type_manager_func amdgpu_vram_mgr_func = {


@ -1824,21 +1824,14 @@ static int cik_common_suspend(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
amdgpu_amdkfd_suspend(adev);
return cik_common_hw_fini(adev);
}
static int cik_common_resume(void *handle)
{
int r;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
r = cik_common_hw_init(adev);
if (r)
return r;
return amdgpu_amdkfd_resume(adev);
return cik_common_hw_init(adev);
}
static bool cik_common_is_idle(void *handle)


@ -341,6 +341,63 @@ static void cik_sdma_rlc_stop(struct amdgpu_device *adev)
/* XXX todo */
}
/**
* cik_ctx_switch_enable - enable/disable the async dma engines context switch
*
* @adev: amdgpu_device pointer
* @enable: enable/disable the DMA MEs context switch.
*
* Halt or unhalt the async dma engines context switch (CIK).
*/
static void cik_ctx_switch_enable(struct amdgpu_device *adev, bool enable)
{
u32 f32_cntl, phase_quantum = 0;
int i;
if (amdgpu_sdma_phase_quantum) {
unsigned value = amdgpu_sdma_phase_quantum;
unsigned unit = 0;
while (value > (SDMA0_PHASE0_QUANTUM__VALUE_MASK >>
SDMA0_PHASE0_QUANTUM__VALUE__SHIFT)) {
value = (value + 1) >> 1;
unit++;
}
if (unit > (SDMA0_PHASE0_QUANTUM__UNIT_MASK >>
SDMA0_PHASE0_QUANTUM__UNIT__SHIFT)) {
value = (SDMA0_PHASE0_QUANTUM__VALUE_MASK >>
SDMA0_PHASE0_QUANTUM__VALUE__SHIFT);
unit = (SDMA0_PHASE0_QUANTUM__UNIT_MASK >>
SDMA0_PHASE0_QUANTUM__UNIT__SHIFT);
WARN_ONCE(1,
"clamping sdma_phase_quantum to %uK clock cycles\n",
value << unit);
}
phase_quantum =
value << SDMA0_PHASE0_QUANTUM__VALUE__SHIFT |
unit << SDMA0_PHASE0_QUANTUM__UNIT__SHIFT;
}
for (i = 0; i < adev->sdma.num_instances; i++) {
f32_cntl = RREG32(mmSDMA0_CNTL + sdma_offsets[i]);
if (enable) {
f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
AUTO_CTXSW_ENABLE, 1);
if (amdgpu_sdma_phase_quantum) {
WREG32(mmSDMA0_PHASE0_QUANTUM + sdma_offsets[i],
phase_quantum);
WREG32(mmSDMA0_PHASE1_QUANTUM + sdma_offsets[i],
phase_quantum);
}
} else {
f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
AUTO_CTXSW_ENABLE, 0);
}
WREG32(mmSDMA0_CNTL + sdma_offsets[i], f32_cntl);
}
}
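Worked example (field width assumed for illustration): with amdgpu_sdma_phase_quantum = 1000 and an 8-bit VALUE field, the loop halves the value twice (1000 -> 500 -> 250) while bumping unit to 2, so the register encodes 250 << VALUE__SHIFT | 2 << UNIT__SHIFT, i.e. 250 * 2^2 = 1000 clock cycles; a request too large for even the widest unit is clamped to the field maxima and a one-time warning is printed.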
/**
* cik_sdma_enable - stop the async dma engines
*
@ -537,6 +594,8 @@ static int cik_sdma_start(struct amdgpu_device *adev)
/* halt the engine before programming */
cik_sdma_enable(adev, false);
/* enable sdma ring preemption */
cik_ctx_switch_enable(adev, true);
/* start the gfx rings and rlc compute queues */
r = cik_sdma_gfx_resume(adev);
@ -984,6 +1043,7 @@ static int cik_sdma_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
cik_ctx_switch_enable(adev, false);
cik_sdma_enable(adev, false);
return 0;


@ -484,134 +484,6 @@ static bool dce_v10_0_is_display_hung(struct amdgpu_device *adev)
return true;
}
static void dce_v10_0_stop_mc_access(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save)
{
u32 crtc_enabled, tmp;
int i;
save->vga_render_control = RREG32(mmVGA_RENDER_CONTROL);
save->vga_hdp_control = RREG32(mmVGA_HDP_CONTROL);
/* disable VGA render */
tmp = RREG32(mmVGA_RENDER_CONTROL);
tmp = REG_SET_FIELD(tmp, VGA_RENDER_CONTROL, VGA_VSTATUS_CNTL, 0);
WREG32(mmVGA_RENDER_CONTROL, tmp);
/* blank the display controllers */
for (i = 0; i < adev->mode_info.num_crtc; i++) {
crtc_enabled = REG_GET_FIELD(RREG32(mmCRTC_CONTROL + crtc_offsets[i]),
CRTC_CONTROL, CRTC_MASTER_EN);
if (crtc_enabled) {
#if 0
u32 frame_count;
int j;
save->crtc_enabled[i] = true;
tmp = RREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN) == 0) {
amdgpu_display_vblank_wait(adev, i);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 1);
tmp = REG_SET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN, 1);
WREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
/* wait for the next frame */
frame_count = amdgpu_display_vblank_get_counter(adev, i);
for (j = 0; j < adev->usec_timeout; j++) {
if (amdgpu_display_vblank_get_counter(adev, i) != frame_count)
break;
udelay(1);
}
tmp = RREG32(mmGRPH_UPDATE + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, GRPH_UPDATE, GRPH_UPDATE_LOCK) == 0) {
tmp = REG_SET_FIELD(tmp, GRPH_UPDATE, GRPH_UPDATE_LOCK, 1);
WREG32(mmGRPH_UPDATE + crtc_offsets[i], tmp);
}
tmp = RREG32(mmMASTER_UPDATE_LOCK + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, MASTER_UPDATE_LOCK, MASTER_UPDATE_LOCK) == 0) {
tmp = REG_SET_FIELD(tmp, MASTER_UPDATE_LOCK, MASTER_UPDATE_LOCK, 1);
WREG32(mmMASTER_UPDATE_LOCK + crtc_offsets[i], tmp);
}
#else
/* XXX this is a hack to avoid strange behavior with EFI on certain systems */
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 1);
tmp = RREG32(mmCRTC_CONTROL + crtc_offsets[i]);
tmp = REG_SET_FIELD(tmp, CRTC_CONTROL, CRTC_MASTER_EN, 0);
WREG32(mmCRTC_CONTROL + crtc_offsets[i], tmp);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 0);
save->crtc_enabled[i] = false;
/* ***** */
#endif
} else {
save->crtc_enabled[i] = false;
}
}
}
static void dce_v10_0_resume_mc_access(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save)
{
u32 tmp, frame_count;
int i, j;
/* update crtc base addresses */
for (i = 0; i < adev->mode_info.num_crtc; i++) {
WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(adev->mc.vram_start));
WREG32(mmGRPH_SECONDARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(adev->mc.vram_start));
WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)adev->mc.vram_start);
WREG32(mmGRPH_SECONDARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)adev->mc.vram_start);
if (save->crtc_enabled[i]) {
tmp = RREG32(mmMASTER_UPDATE_MODE + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, MASTER_UPDATE_MODE, MASTER_UPDATE_MODE) != 0) {
tmp = REG_SET_FIELD(tmp, MASTER_UPDATE_MODE, MASTER_UPDATE_MODE, 0);
WREG32(mmMASTER_UPDATE_MODE + crtc_offsets[i], tmp);
}
tmp = RREG32(mmGRPH_UPDATE + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, GRPH_UPDATE, GRPH_UPDATE_LOCK)) {
tmp = REG_SET_FIELD(tmp, GRPH_UPDATE, GRPH_UPDATE_LOCK, 0);
WREG32(mmGRPH_UPDATE + crtc_offsets[i], tmp);
}
tmp = RREG32(mmMASTER_UPDATE_LOCK + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, MASTER_UPDATE_LOCK, MASTER_UPDATE_LOCK)) {
tmp = REG_SET_FIELD(tmp, MASTER_UPDATE_LOCK, MASTER_UPDATE_LOCK, 0);
WREG32(mmMASTER_UPDATE_LOCK + crtc_offsets[i], tmp);
}
for (j = 0; j < adev->usec_timeout; j++) {
tmp = RREG32(mmGRPH_UPDATE + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, GRPH_UPDATE, GRPH_SURFACE_UPDATE_PENDING) == 0)
break;
udelay(1);
}
tmp = RREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i]);
tmp = REG_SET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN, 0);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 0);
/* wait for the next frame */
frame_count = amdgpu_display_vblank_get_counter(adev, i);
for (j = 0; j < adev->usec_timeout; j++) {
if (amdgpu_display_vblank_get_counter(adev, i) != frame_count)
break;
udelay(1);
}
}
}
WREG32(mmVGA_MEMORY_BASE_ADDRESS_HIGH, upper_32_bits(adev->mc.vram_start));
WREG32(mmVGA_MEMORY_BASE_ADDRESS, lower_32_bits(adev->mc.vram_start));
/* Unlock vga access */
WREG32(mmVGA_HDP_CONTROL, save->vga_hdp_control);
mdelay(1);
WREG32(mmVGA_RENDER_CONTROL, save->vga_render_control);
}
static void dce_v10_0_set_vga_render_state(struct amdgpu_device *adev,
bool render)
{
@ -1867,7 +1739,7 @@ static void dce_v10_0_afmt_setmode(struct drm_encoder *encoder,
dce_v10_0_audio_write_sad_regs(encoder);
dce_v10_0_audio_write_latency_fields(encoder, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
return;
@ -2267,6 +2139,7 @@ static void dce_v10_0_crtc_load_lut(struct drm_crtc *crtc)
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct amdgpu_device *adev = dev->dev_private;
u16 *r, *g, *b;
int i;
u32 tmp;
@ -2304,11 +2177,14 @@ static void dce_v10_0_crtc_load_lut(struct drm_crtc *crtc)
WREG32(mmDC_LUT_WRITE_EN_MASK + amdgpu_crtc->crtc_offset, 0x00000007);
WREG32(mmDC_LUT_RW_INDEX + amdgpu_crtc->crtc_offset, 0);
r = crtc->gamma_store;
g = r + crtc->gamma_size;
b = g + crtc->gamma_size;
for (i = 0; i < 256; i++) {
WREG32(mmDC_LUT_30_COLOR + amdgpu_crtc->crtc_offset,
(amdgpu_crtc->lut_r[i] << 20) |
(amdgpu_crtc->lut_g[i] << 10) |
(amdgpu_crtc->lut_b[i] << 0));
((*r++ & 0xffc0) << 14) |
((*g++ & 0xffc0) << 4) |
(*b++ >> 6));
}
tmp = RREG32(mmDEGAMMA_CONTROL + amdgpu_crtc->crtc_offset);
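The replacement code reads the 16-bit gamma values straight from crtc->gamma_store and packs the top 10 bits of each channel into the 30-bit LUT word: red in bits 29:20, green in 19:10, blue in 9:0. Worked example: a full-scale red of 0xffff yields (0xffff & 0xffc0) << 14 = 0x3ff00000, i.e. the 10-bit value 0x3ff in the red field.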
@ -2555,7 +2431,7 @@ static int dce_v10_0_crtc_cursor_set2(struct drm_crtc *crtc,
aobj = gem_to_amdgpu_bo(obj);
ret = amdgpu_bo_reserve(aobj, false);
if (ret != 0) {
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ret;
}
@ -2563,7 +2439,7 @@ static int dce_v10_0_crtc_cursor_set2(struct drm_crtc *crtc,
amdgpu_bo_unreserve(aobj);
if (ret) {
DRM_ERROR("Failed to pin new cursor BO (%d)\n", ret);
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ret;
}
@ -2597,7 +2473,7 @@ unpin:
amdgpu_bo_unpin(aobj);
amdgpu_bo_unreserve(aobj);
}
drm_gem_object_unreference_unlocked(amdgpu_crtc->cursor_bo);
drm_gem_object_put_unlocked(amdgpu_crtc->cursor_bo);
}
amdgpu_crtc->cursor_bo = obj;
@ -2624,15 +2500,6 @@ static int dce_v10_0_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, uint32_t size,
struct drm_modeset_acquire_ctx *ctx)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
int i;
/* userspace palettes are always correct as is */
for (i = 0; i < size; i++) {
amdgpu_crtc->lut_r[i] = red[i] >> 6;
amdgpu_crtc->lut_g[i] = green[i] >> 6;
amdgpu_crtc->lut_b[i] = blue[i] >> 6;
}
dce_v10_0_crtc_load_lut(crtc);
return 0;
@ -2844,14 +2711,12 @@ static const struct drm_crtc_helper_funcs dce_v10_0_crtc_helper_funcs = {
.mode_set_base_atomic = dce_v10_0_crtc_set_base_atomic,
.prepare = dce_v10_0_crtc_prepare,
.commit = dce_v10_0_crtc_commit,
.load_lut = dce_v10_0_crtc_load_lut,
.disable = dce_v10_0_crtc_disable,
};
static int dce_v10_0_crtc_init(struct amdgpu_device *adev, int index)
{
struct amdgpu_crtc *amdgpu_crtc;
int i;
amdgpu_crtc = kzalloc(sizeof(struct amdgpu_crtc) +
(AMDGPUFB_CONN_LIMIT * sizeof(struct drm_connector *)), GFP_KERNEL);
@ -2869,12 +2734,6 @@ static int dce_v10_0_crtc_init(struct amdgpu_device *adev, int index)
adev->ddev->mode_config.cursor_width = amdgpu_crtc->max_cursor_width;
adev->ddev->mode_config.cursor_height = amdgpu_crtc->max_cursor_height;
for (i = 0; i < 256; i++) {
amdgpu_crtc->lut_r[i] = i << 2;
amdgpu_crtc->lut_g[i] = i << 2;
amdgpu_crtc->lut_b[i] = i << 2;
}
switch (amdgpu_crtc->crtc_id) {
case 0:
default:
@ -3025,6 +2884,8 @@ static int dce_v10_0_hw_init(void *handle)
dce_v10_0_init_golden_registers(adev);
/* disable vga render */
dce_v10_0_set_vga_render_state(adev, false);
/* init dig PHYs, disp eng pll */
amdgpu_atombios_encoder_init_dig(adev);
amdgpu_atombios_crtc_set_disp_eng_pll(adev, adev->clock.default_dispclk);
@ -3737,7 +3598,6 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev,
}
static const struct amdgpu_display_funcs dce_v10_0_display_funcs = {
.set_vga_render_state = &dce_v10_0_set_vga_render_state,
.bandwidth_update = &dce_v10_0_bandwidth_update,
.vblank_get_counter = &dce_v10_0_vblank_get_counter,
.vblank_wait = &dce_v10_0_vblank_wait,
@ -3750,8 +3610,6 @@ static const struct amdgpu_display_funcs dce_v10_0_display_funcs = {
.page_flip_get_scanoutpos = &dce_v10_0_crtc_get_scanoutpos,
.add_encoder = &dce_v10_0_encoder_add,
.add_connector = &amdgpu_connector_add,
.stop_mc_access = &dce_v10_0_stop_mc_access,
.resume_mc_access = &dce_v10_0_resume_mc_access,
};
static void dce_v10_0_set_display_funcs(struct amdgpu_device *adev)


@ -499,79 +499,6 @@ static bool dce_v11_0_is_display_hung(struct amdgpu_device *adev)
return true;
}
static void dce_v11_0_stop_mc_access(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save)
{
u32 crtc_enabled, tmp;
int i;
save->vga_render_control = RREG32(mmVGA_RENDER_CONTROL);
save->vga_hdp_control = RREG32(mmVGA_HDP_CONTROL);
/* disable VGA render */
tmp = RREG32(mmVGA_RENDER_CONTROL);
tmp = REG_SET_FIELD(tmp, VGA_RENDER_CONTROL, VGA_VSTATUS_CNTL, 0);
WREG32(mmVGA_RENDER_CONTROL, tmp);
/* blank the display controllers */
for (i = 0; i < adev->mode_info.num_crtc; i++) {
crtc_enabled = REG_GET_FIELD(RREG32(mmCRTC_CONTROL + crtc_offsets[i]),
CRTC_CONTROL, CRTC_MASTER_EN);
if (crtc_enabled) {
#if 1
save->crtc_enabled[i] = true;
tmp = RREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i]);
if (REG_GET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN) == 0) {
/* it is correct only for RGB; black is 0 */
WREG32(mmCRTC_BLANK_DATA_COLOR + crtc_offsets[i], 0);
tmp = REG_SET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN, 1);
WREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
}
#else
/* XXX this is a hack to avoid strange behavior with EFI on certain systems */
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 1);
tmp = RREG32(mmCRTC_CONTROL + crtc_offsets[i]);
tmp = REG_SET_FIELD(tmp, CRTC_CONTROL, CRTC_MASTER_EN, 0);
WREG32(mmCRTC_CONTROL + crtc_offsets[i], tmp);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 0);
save->crtc_enabled[i] = false;
/* ***** */
#endif
} else {
save->crtc_enabled[i] = false;
}
}
}
static void dce_v11_0_resume_mc_access(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save)
{
u32 tmp;
int i;
/* update crtc base addresses */
for (i = 0; i < adev->mode_info.num_crtc; i++) {
WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(adev->mc.vram_start));
WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)adev->mc.vram_start);
if (save->crtc_enabled[i]) {
tmp = RREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i]);
tmp = REG_SET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN, 0);
WREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
}
}
WREG32(mmVGA_MEMORY_BASE_ADDRESS_HIGH, upper_32_bits(adev->mc.vram_start));
WREG32(mmVGA_MEMORY_BASE_ADDRESS, lower_32_bits(adev->mc.vram_start));
/* Unlock vga access */
WREG32(mmVGA_HDP_CONTROL, save->vga_hdp_control);
mdelay(1);
WREG32(mmVGA_RENDER_CONTROL, save->vga_render_control);
}
static void dce_v11_0_set_vga_render_state(struct amdgpu_device *adev,
bool render)
{
@ -1851,7 +1778,7 @@ static void dce_v11_0_afmt_setmode(struct drm_encoder *encoder,
dce_v11_0_audio_write_sad_regs(encoder);
dce_v11_0_audio_write_latency_fields(encoder, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
return;
@ -2251,6 +2178,7 @@ static void dce_v11_0_crtc_load_lut(struct drm_crtc *crtc)
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct amdgpu_device *adev = dev->dev_private;
u16 *r, *g, *b;
int i;
u32 tmp;
@ -2282,11 +2210,14 @@ static void dce_v11_0_crtc_load_lut(struct drm_crtc *crtc)
WREG32(mmDC_LUT_WRITE_EN_MASK + amdgpu_crtc->crtc_offset, 0x00000007);
WREG32(mmDC_LUT_RW_INDEX + amdgpu_crtc->crtc_offset, 0);
r = crtc->gamma_store;
g = r + crtc->gamma_size;
b = g + crtc->gamma_size;
for (i = 0; i < 256; i++) {
WREG32(mmDC_LUT_30_COLOR + amdgpu_crtc->crtc_offset,
(amdgpu_crtc->lut_r[i] << 20) |
(amdgpu_crtc->lut_g[i] << 10) |
(amdgpu_crtc->lut_b[i] << 0));
((*r++ & 0xffc0) << 14) |
((*g++ & 0xffc0) << 4) |
(*b++ >> 6));
}
tmp = RREG32(mmDEGAMMA_CONTROL + amdgpu_crtc->crtc_offset);
@ -2575,7 +2506,7 @@ static int dce_v11_0_crtc_cursor_set2(struct drm_crtc *crtc,
aobj = gem_to_amdgpu_bo(obj);
ret = amdgpu_bo_reserve(aobj, false);
if (ret != 0) {
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ret;
}
@ -2583,7 +2514,7 @@ static int dce_v11_0_crtc_cursor_set2(struct drm_crtc *crtc,
amdgpu_bo_unreserve(aobj);
if (ret) {
DRM_ERROR("Failed to pin new cursor BO (%d)\n", ret);
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ret;
}
@ -2617,7 +2548,7 @@ unpin:
amdgpu_bo_unpin(aobj);
amdgpu_bo_unreserve(aobj);
}
drm_gem_object_unreference_unlocked(amdgpu_crtc->cursor_bo);
drm_gem_object_put_unlocked(amdgpu_crtc->cursor_bo);
}
amdgpu_crtc->cursor_bo = obj;
@ -2644,15 +2575,6 @@ static int dce_v11_0_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, uint32_t size,
struct drm_modeset_acquire_ctx *ctx)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
int i;
/* userspace palettes are always correct as is */
for (i = 0; i < size; i++) {
amdgpu_crtc->lut_r[i] = red[i] >> 6;
amdgpu_crtc->lut_g[i] = green[i] >> 6;
amdgpu_crtc->lut_b[i] = blue[i] >> 6;
}
dce_v11_0_crtc_load_lut(crtc);
return 0;
@ -2892,14 +2814,12 @@ static const struct drm_crtc_helper_funcs dce_v11_0_crtc_helper_funcs = {
.mode_set_base_atomic = dce_v11_0_crtc_set_base_atomic,
.prepare = dce_v11_0_crtc_prepare,
.commit = dce_v11_0_crtc_commit,
.load_lut = dce_v11_0_crtc_load_lut,
.disable = dce_v11_0_crtc_disable,
};
static int dce_v11_0_crtc_init(struct amdgpu_device *adev, int index)
{
struct amdgpu_crtc *amdgpu_crtc;
int i;
amdgpu_crtc = kzalloc(sizeof(struct amdgpu_crtc) +
(AMDGPUFB_CONN_LIMIT * sizeof(struct drm_connector *)), GFP_KERNEL);
@ -2917,12 +2837,6 @@ static int dce_v11_0_crtc_init(struct amdgpu_device *adev, int index)
adev->ddev->mode_config.cursor_width = amdgpu_crtc->max_cursor_width;
adev->ddev->mode_config.cursor_height = amdgpu_crtc->max_cursor_height;
for (i = 0; i < 256; i++) {
amdgpu_crtc->lut_r[i] = i << 2;
amdgpu_crtc->lut_g[i] = i << 2;
amdgpu_crtc->lut_b[i] = i << 2;
}
switch (amdgpu_crtc->crtc_id) {
case 0:
default:
@ -3086,6 +3000,8 @@ static int dce_v11_0_hw_init(void *handle)
dce_v11_0_init_golden_registers(adev);
/* disable vga render */
dce_v11_0_set_vga_render_state(adev, false);
/* init dig PHYs, disp eng pll */
amdgpu_atombios_crtc_powergate_init(adev);
amdgpu_atombios_encoder_init_dig(adev);
@ -3806,7 +3722,6 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
}
static const struct amdgpu_display_funcs dce_v11_0_display_funcs = {
.set_vga_render_state = &dce_v11_0_set_vga_render_state,
.bandwidth_update = &dce_v11_0_bandwidth_update,
.vblank_get_counter = &dce_v11_0_vblank_get_counter,
.vblank_wait = &dce_v11_0_vblank_wait,
@ -3819,8 +3734,6 @@ static const struct amdgpu_display_funcs dce_v11_0_display_funcs = {
.page_flip_get_scanoutpos = &dce_v11_0_crtc_get_scanoutpos,
.add_encoder = &dce_v11_0_encoder_add,
.add_connector = &amdgpu_connector_add,
.stop_mc_access = &dce_v11_0_stop_mc_access,
.resume_mc_access = &dce_v11_0_resume_mc_access,
};
static void dce_v11_0_set_display_funcs(struct amdgpu_device *adev)


@ -42,6 +42,7 @@
#include "dce/dce_6_0_d.h"
#include "dce/dce_6_0_sh_mask.h"
#include "gca/gfx_7_2_enum.h"
#include "dce_v6_0.h"
#include "si_enums.h"
static void dce_v6_0_set_display_funcs(struct amdgpu_device *adev);
@ -392,117 +393,6 @@ static u32 dce_v6_0_hpd_get_gpio_reg(struct amdgpu_device *adev)
return mmDC_GPIO_HPD_A;
}
static u32 evergreen_get_vblank_counter(struct amdgpu_device* adev, int crtc)
{
if (crtc >= adev->mode_info.num_crtc)
return 0;
else
return RREG32(mmCRTC_STATUS_FRAME_COUNT + crtc_offsets[crtc]);
}
static void dce_v6_0_stop_mc_access(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save)
{
u32 crtc_enabled, tmp, frame_count;
int i, j;
save->vga_render_control = RREG32(mmVGA_RENDER_CONTROL);
save->vga_hdp_control = RREG32(mmVGA_HDP_CONTROL);
/* disable VGA render */
WREG32(mmVGA_RENDER_CONTROL, 0);
/* blank the display controllers */
for (i = 0; i < adev->mode_info.num_crtc; i++) {
crtc_enabled = RREG32(mmCRTC_CONTROL + crtc_offsets[i]) & CRTC_CONTROL__CRTC_MASTER_EN_MASK;
if (crtc_enabled) {
save->crtc_enabled[i] = true;
tmp = RREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i]);
if (!(tmp & CRTC_BLANK_CONTROL__CRTC_BLANK_DATA_EN_MASK)) {
dce_v6_0_vblank_wait(adev, i);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 1);
tmp |= CRTC_BLANK_CONTROL__CRTC_BLANK_DATA_EN_MASK;
WREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
/* wait for the next frame */
frame_count = evergreen_get_vblank_counter(adev, i);
for (j = 0; j < adev->usec_timeout; j++) {
if (evergreen_get_vblank_counter(adev, i) != frame_count)
break;
udelay(1);
}
/* XXX this is a hack to avoid strange behavior with EFI on certain systems */
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 1);
tmp = RREG32(mmCRTC_CONTROL + crtc_offsets[i]);
tmp &= ~CRTC_CONTROL__CRTC_MASTER_EN_MASK;
WREG32(mmCRTC_CONTROL + crtc_offsets[i], tmp);
WREG32(mmCRTC_UPDATE_LOCK + crtc_offsets[i], 0);
save->crtc_enabled[i] = false;
/* ***** */
} else {
save->crtc_enabled[i] = false;
}
}
}
static void dce_v6_0_resume_mc_access(struct amdgpu_device *adev,
struct amdgpu_mode_mc_save *save)
{
u32 tmp;
int i, j;
/* update crtc base addresses */
for (i = 0; i < adev->mode_info.num_crtc; i++) {
WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(adev->mc.vram_start));
WREG32(mmGRPH_SECONDARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(adev->mc.vram_start));
WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)adev->mc.vram_start);
WREG32(mmGRPH_SECONDARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)adev->mc.vram_start);
}
WREG32(mmVGA_MEMORY_BASE_ADDRESS_HIGH, upper_32_bits(adev->mc.vram_start));
WREG32(mmVGA_MEMORY_BASE_ADDRESS, (u32)adev->mc.vram_start);
/* unlock regs and wait for update */
for (i = 0; i < adev->mode_info.num_crtc; i++) {
if (save->crtc_enabled[i]) {
tmp = RREG32(mmMASTER_UPDATE_MODE + crtc_offsets[i]);
if ((tmp & 0x7) != 0) {
tmp &= ~0x7;
WREG32(mmMASTER_UPDATE_MODE + crtc_offsets[i], tmp);
}
tmp = RREG32(mmGRPH_UPDATE + crtc_offsets[i]);
if (tmp & GRPH_UPDATE__GRPH_UPDATE_LOCK_MASK) {
tmp &= ~GRPH_UPDATE__GRPH_UPDATE_LOCK_MASK;
WREG32(mmGRPH_UPDATE + crtc_offsets[i], tmp);
}
tmp = RREG32(mmMASTER_UPDATE_LOCK + crtc_offsets[i]);
if (tmp & 1) {
tmp &= ~1;
WREG32(mmMASTER_UPDATE_LOCK + crtc_offsets[i], tmp);
}
for (j = 0; j < adev->usec_timeout; j++) {
tmp = RREG32(mmGRPH_UPDATE + crtc_offsets[i]);
if ((tmp & GRPH_UPDATE__GRPH_SURFACE_UPDATE_PENDING_MASK) == 0)
break;
udelay(1);
}
}
}
/* Unlock vga access */
WREG32(mmVGA_HDP_CONTROL, save->vga_hdp_control);
mdelay(1);
WREG32(mmVGA_RENDER_CONTROL, save->vga_render_control);
}
static void dce_v6_0_set_vga_render_state(struct amdgpu_device *adev,
bool render)
{
@ -1597,7 +1487,7 @@ static void dce_v6_0_audio_set_avi_infoframe(struct drm_encoder *encoder,
ssize_t err;
u32 tmp;
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
return;
@ -2182,6 +2072,7 @@ static void dce_v6_0_crtc_load_lut(struct drm_crtc *crtc)
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct amdgpu_device *adev = dev->dev_private;
u16 *r, *g, *b;
int i;
DRM_DEBUG_KMS("%d\n", amdgpu_crtc->crtc_id);
@ -2211,11 +2102,14 @@ static void dce_v6_0_crtc_load_lut(struct drm_crtc *crtc)
WREG32(mmDC_LUT_WRITE_EN_MASK + amdgpu_crtc->crtc_offset, 0x00000007);
WREG32(mmDC_LUT_RW_INDEX + amdgpu_crtc->crtc_offset, 0);
r = crtc->gamma_store;
g = r + crtc->gamma_size;
b = g + crtc->gamma_size;
for (i = 0; i < 256; i++) {
WREG32(mmDC_LUT_30_COLOR + amdgpu_crtc->crtc_offset,
(amdgpu_crtc->lut_r[i] << 20) |
(amdgpu_crtc->lut_g[i] << 10) |
(amdgpu_crtc->lut_b[i] << 0));
((*r++ & 0xffc0) << 14) |
((*g++ & 0xffc0) << 4) |
(*b++ >> 6));
}
WREG32(mmDEGAMMA_CONTROL + amdgpu_crtc->crtc_offset,
@ -2428,7 +2322,7 @@ static int dce_v6_0_crtc_cursor_set2(struct drm_crtc *crtc,
aobj = gem_to_amdgpu_bo(obj);
ret = amdgpu_bo_reserve(aobj, false);
if (ret != 0) {
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ret;
}
@ -2436,7 +2330,7 @@ static int dce_v6_0_crtc_cursor_set2(struct drm_crtc *crtc,
amdgpu_bo_unreserve(aobj);
if (ret) {
DRM_ERROR("Failed to pin new cursor BO (%d)\n", ret);
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ret;
}
@ -2470,7 +2364,7 @@ unpin:
amdgpu_bo_unpin(aobj);
amdgpu_bo_unreserve(aobj);
}
drm_gem_object_unreference_unlocked(amdgpu_crtc->cursor_bo);
drm_gem_object_put_unlocked(amdgpu_crtc->cursor_bo);
}
amdgpu_crtc->cursor_bo = obj;
@ -2496,15 +2390,6 @@ static int dce_v6_0_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, uint32_t size,
struct drm_modeset_acquire_ctx *ctx)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
int i;
/* userspace palettes are always correct as is */
for (i = 0; i < size; i++) {
amdgpu_crtc->lut_r[i] = red[i] >> 6;
amdgpu_crtc->lut_g[i] = green[i] >> 6;
amdgpu_crtc->lut_b[i] = blue[i] >> 6;
}
dce_v6_0_crtc_load_lut(crtc);
return 0;
@ -2712,14 +2597,12 @@ static const struct drm_crtc_helper_funcs dce_v6_0_crtc_helper_funcs = {
.mode_set_base_atomic = dce_v6_0_crtc_set_base_atomic,
.prepare = dce_v6_0_crtc_prepare,
.commit = dce_v6_0_crtc_commit,
.load_lut = dce_v6_0_crtc_load_lut,
.disable = dce_v6_0_crtc_disable,
};
static int dce_v6_0_crtc_init(struct amdgpu_device *adev, int index)
{
struct amdgpu_crtc *amdgpu_crtc;
int i;
amdgpu_crtc = kzalloc(sizeof(struct amdgpu_crtc) +
(AMDGPUFB_CONN_LIMIT * sizeof(struct drm_connector *)), GFP_KERNEL);
@ -2737,12 +2620,6 @@ static int dce_v6_0_crtc_init(struct amdgpu_device *adev, int index)
adev->ddev->mode_config.cursor_width = amdgpu_crtc->max_cursor_width;
adev->ddev->mode_config.cursor_height = amdgpu_crtc->max_cursor_height;
for (i = 0; i < 256; i++) {
amdgpu_crtc->lut_r[i] = i << 2;
amdgpu_crtc->lut_g[i] = i << 2;
amdgpu_crtc->lut_b[i] = i << 2;
}
amdgpu_crtc->crtc_offset = crtc_offsets[amdgpu_crtc->crtc_id];
amdgpu_crtc->pll_id = ATOM_PPLL_INVALID;
@ -2873,6 +2750,8 @@ static int dce_v6_0_hw_init(void *handle)
int i;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
/* disable vga render */
dce_v6_0_set_vga_render_state(adev, false);
/* init dig PHYs, disp eng pll */
amdgpu_atombios_encoder_init_dig(adev);
amdgpu_atombios_crtc_set_disp_eng_pll(adev, adev->clock.default_dispclk);
@ -3525,7 +3404,6 @@ static void dce_v6_0_encoder_add(struct amdgpu_device *adev,
}
static const struct amdgpu_display_funcs dce_v6_0_display_funcs = {
.set_vga_render_state = &dce_v6_0_set_vga_render_state,
.bandwidth_update = &dce_v6_0_bandwidth_update,
.vblank_get_counter = &dce_v6_0_vblank_get_counter,
.vblank_wait = &dce_v6_0_vblank_wait,
@ -3538,8 +3416,6 @@ static const struct amdgpu_display_funcs dce_v6_0_display_funcs = {
.page_flip_get_scanoutpos = &dce_v6_0_crtc_get_scanoutpos,
.add_encoder = &dce_v6_0_encoder_add,
.add_connector = &amdgpu_connector_add,
.stop_mc_access = &dce_v6_0_stop_mc_access,
.resume_mc_access = &dce_v6_0_resume_mc_access,
};
static void dce_v6_0_set_display_funcs(struct amdgpu_device *adev)

Some files were not shown because too many files have changed in this diff.