Merge tag 'drm-intel-next-2019-05-24' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Features:
- Engine discovery query (Tvrtko)
- Support for DP YCbCr4:2:0 outputs (Gwan-gyeong)
- HDCP revocation support, refactoring (Ramalingam)
- Remove DRM_AUTH from IOCTLs which also have DRM_RENDER_ALLOW (Christian König)
- Asynchronous display power disabling (Imre)
- Perma-pin uC firmware and re-enable global reset (Fernando)
- GTT remapping for display, for bigger fb size and stride (Ville)
- Enable pipe HDR mode on ICL if only HDR planes are used (Ville)
- Kconfig to tweak the busyspin durations for i915_wait_request (Chris)
- Allow multiple user handles to the same VM (Chris)
- GT/GEM runtime pm improvements using wakerefs (Chris)
- Gen 4&5 render context support (Chris)
- Allow userspace to clone contexts on creation (Chris)
- SINGLE_TIMELINE flags for context creation (Chris)
- Allow specification of parallel execbuf (Chris)

Refactoring:
- Header refactoring (Jani)
- Move GraphicsTechnology files under gt/ (Chris)
- Sideband code refactoring (Chris)

Fixes:
- ICL DSI state readout and checker fixes (Vandita)
- GLK DSI picture corruption fix (Stanislav)
- HDMI deep color fixes (Clinton, Aditya)
- Fix driver unbinding from a device in use (Janusz)
- Fix clock gating with pipe scaling (Radhakrishna)
- Disable broken FBC on GLK (Daniel Drake)
- Miscellaneous GuC fixes (Michal)
- Fix MG PHY DP register programming (Imre)
- Add missing combo PHY lane power setup (Imre)
- Workarounds for early ICL VBT issues (Imre)
- Fix fastset vs. pfit on/off on HSW EDP transcoder (Ville)
- Add readout and state check for pch_pfit.force_thru (Ville)
- Miscellaneous display fixes and refactoring (Ville)
- Display workaround fixes (Ville)
- Enable audio even if ELD is bogus (Ville)
- Fix use-after-free in reporting create.size (Chris)
- Sideband fixes to avoid BYT hard lockups (Chris)
- Workaround fixes and improvements (Chris)

Maintainer shortcomings:
- Failure to adequately describe and give credit for all changes (Jani)

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/87sgt3n45z.fsf@intel.com
Dave Airlie 2019-05-28 09:03:58 +10:00
commit 14ee642c2a
218 changed files with 11522 additions and 5061 deletions


@ -181,6 +181,12 @@ Panel Helper Reference
.. kernel-doc:: drivers/gpu/drm/drm_panel_orientation_quirks.c
:export:
HDCP Helper Functions Reference
===============================
.. kernel-doc:: drivers/gpu/drm/drm_hdcp.c
:export:
Display Port Helper Functions Reference
=======================================


@ -17,7 +17,7 @@ drm-y := drm_auth.o drm_cache.o \
drm_plane.o drm_color_mgmt.o drm_print.o \
drm_dumb_buffers.o drm_mode_config.o drm_vblank.o \
drm_syncobj.o drm_lease.o drm_writeback.o drm_client.o \
drm_atomic_uapi.o
drm_atomic_uapi.o drm_hdcp.o
drm-$(CONFIG_DRM_LEGACY) += drm_legacy_misc.o drm_bufs.o drm_context.o drm_dma.o drm_scatter.o drm_lock.o
drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o


@ -115,7 +115,7 @@ EXPORT_SYMBOL_GPL(analogix_dp_psr_enabled);
int analogix_dp_enable_psr(struct analogix_dp_device *dp)
{
struct edp_vsc_psr psr_vsc;
struct dp_sdp psr_vsc;
if (!dp->psr_enable)
return 0;
@ -127,8 +127,8 @@ int analogix_dp_enable_psr(struct analogix_dp_device *dp)
psr_vsc.sdp_header.HB2 = 0x2;
psr_vsc.sdp_header.HB3 = 0x8;
psr_vsc.DB0 = 0;
psr_vsc.DB1 = EDP_VSC_PSR_STATE_ACTIVE | EDP_VSC_PSR_CRC_VALUES_VALID;
psr_vsc.db[0] = 0;
psr_vsc.db[1] = EDP_VSC_PSR_STATE_ACTIVE | EDP_VSC_PSR_CRC_VALUES_VALID;
return analogix_dp_send_psr_spd(dp, &psr_vsc, true);
}
@ -136,7 +136,7 @@ EXPORT_SYMBOL_GPL(analogix_dp_enable_psr);
int analogix_dp_disable_psr(struct analogix_dp_device *dp)
{
struct edp_vsc_psr psr_vsc;
struct dp_sdp psr_vsc;
int ret;
if (!dp->psr_enable)
@ -149,8 +149,8 @@ int analogix_dp_disable_psr(struct analogix_dp_device *dp)
psr_vsc.sdp_header.HB2 = 0x2;
psr_vsc.sdp_header.HB3 = 0x8;
psr_vsc.DB0 = 0;
psr_vsc.DB1 = 0;
psr_vsc.db[0] = 0;
psr_vsc.db[1] = 0;
ret = drm_dp_dpcd_writeb(&dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
if (ret != 1) {


@ -254,7 +254,7 @@ void analogix_dp_enable_scrambling(struct analogix_dp_device *dp);
void analogix_dp_disable_scrambling(struct analogix_dp_device *dp);
void analogix_dp_enable_psr_crc(struct analogix_dp_device *dp);
int analogix_dp_send_psr_spd(struct analogix_dp_device *dp,
struct edp_vsc_psr *vsc, bool blocking);
struct dp_sdp *vsc, bool blocking);
ssize_t analogix_dp_transfer(struct analogix_dp_device *dp,
struct drm_dp_aux_msg *msg);


@ -1041,7 +1041,7 @@ static ssize_t analogix_dp_get_psr_status(struct analogix_dp_device *dp)
}
int analogix_dp_send_psr_spd(struct analogix_dp_device *dp,
struct edp_vsc_psr *vsc, bool blocking)
struct dp_sdp *vsc, bool blocking)
{
unsigned int val;
int ret;
@ -1069,8 +1069,8 @@ int analogix_dp_send_psr_spd(struct analogix_dp_device *dp,
writel(0x5D, dp->reg_base + ANALOGIX_DP_SPD_PB3);
/* configure DB0 / DB1 values */
writel(vsc->DB0, dp->reg_base + ANALOGIX_DP_VSC_SHADOW_DB0);
writel(vsc->DB1, dp->reg_base + ANALOGIX_DP_VSC_SHADOW_DB1);
writel(vsc->db[0], dp->reg_base + ANALOGIX_DP_VSC_SHADOW_DB0);
writel(vsc->db[1], dp->reg_base + ANALOGIX_DP_VSC_SHADOW_DB1);
/* set reuse spd infoframe */
val = readl(dp->reg_base + ANALOGIX_DP_VIDEO_CTL_3);
@ -1092,8 +1092,8 @@ int analogix_dp_send_psr_spd(struct analogix_dp_device *dp,
ret = readx_poll_timeout(analogix_dp_get_psr_status, dp, psr_status,
psr_status >= 0 &&
((vsc->DB1 && psr_status == DP_PSR_SINK_ACTIVE_RFB) ||
(!vsc->DB1 && psr_status == DP_PSR_SINK_INACTIVE)), 1500,
((vsc->db[1] && psr_status == DP_PSR_SINK_ACTIVE_RFB) ||
(!vsc->db[1] && psr_status == DP_PSR_SINK_INACTIVE)), 1500,
DP_TIMEOUT_PSR_LOOP_MS * 1000);
if (ret) {
dev_warn(dp->dev, "Failed to apply PSR %d\n", ret);


@ -741,7 +741,7 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
state->content_type = val;
} else if (property == connector->scaling_mode_property) {
state->scaling_mode = val;
} else if (property == connector->content_protection_property) {
} else if (property == config->content_protection_property) {
if (val == DRM_MODE_CONTENT_PROTECTION_ENABLED) {
DRM_DEBUG_KMS("only drivers can set CP Enabled\n");
return -EINVAL;
@ -826,7 +826,7 @@ drm_atomic_connector_get_property(struct drm_connector *connector,
} else if (property == config->hdr_output_metadata_property) {
*val = state->hdr_output_metadata ?
state->hdr_output_metadata->base.id : 0;
} else if (property == connector->content_protection_property) {
} else if (property == config->content_protection_property) {
*val = state->content_protection;
} else if (property == config->writeback_fb_id_property) {
/* Writeback framebuffer is one-shot, write and forget */


@ -823,13 +823,6 @@ static const struct drm_prop_enum_list drm_tv_subconnector_enum_list[] = {
DRM_ENUM_NAME_FN(drm_get_tv_subconnector_name,
drm_tv_subconnector_enum_list)
static struct drm_prop_enum_list drm_cp_enum_list[] = {
{ DRM_MODE_CONTENT_PROTECTION_UNDESIRED, "Undesired" },
{ DRM_MODE_CONTENT_PROTECTION_DESIRED, "Desired" },
{ DRM_MODE_CONTENT_PROTECTION_ENABLED, "Enabled" },
};
DRM_ENUM_NAME_FN(drm_get_content_protection_name, drm_cp_enum_list)
static const struct drm_prop_enum_list hdmi_colorspaces[] = {
/* For Default case, driver will set the colorspace */
{ DRM_MODE_COLORIMETRY_DEFAULT, "Default" },
@ -1515,42 +1508,6 @@ int drm_connector_attach_scaling_mode_property(struct drm_connector *connector,
}
EXPORT_SYMBOL(drm_connector_attach_scaling_mode_property);
/**
* drm_connector_attach_content_protection_property - attach content protection
* property
*
* @connector: connector to attach CP property on.
*
* This is used to add support for content protection on select connectors.
* Content Protection is intentionally vague to allow for different underlying
* technologies, however it is most implemented by HDCP.
*
* The content protection will be set to &drm_connector_state.content_protection
*
* Returns:
* Zero on success, negative errno on failure.
*/
int drm_connector_attach_content_protection_property(
struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_property *prop;
prop = drm_property_create_enum(dev, 0, "Content Protection",
drm_cp_enum_list,
ARRAY_SIZE(drm_cp_enum_list));
if (!prop)
return -ENOMEM;
drm_object_attach_property(&connector->base, prop,
DRM_MODE_CONTENT_PROTECTION_UNDESIRED);
connector->content_protection_property = prop;
return 0;
}
EXPORT_SYMBOL(drm_connector_attach_content_protection_property);
/**
* drm_mode_create_aspect_ratio_property - create aspect ratio property
* @dev: DRM device


@ -0,0 +1,382 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2019 Intel Corporation.
*
* Authors:
* Ramalingam C <ramalingam.c@intel.com>
*/
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/export.h>
#include <linux/slab.h>
#include <linux/firmware.h>
#include <drm/drm_hdcp.h>
#include <drm/drm_sysfs.h>
#include <drm/drm_print.h>
#include <drm/drm_device.h>
#include <drm/drm_property.h>
#include <drm/drm_mode_object.h>
#include <drm/drm_connector.h>
#include "drm_internal.h"
static struct hdcp_srm {
u32 revoked_ksv_cnt;
u8 *revoked_ksv_list;
/* Mutex to protect above struct member */
struct mutex mutex;
} *srm_data;
static inline void drm_hdcp_print_ksv(const u8 *ksv)
{
DRM_DEBUG("\t%#02x, %#02x, %#02x, %#02x, %#02x\n",
ksv[0], ksv[1], ksv[2], ksv[3], ksv[4]);
}
static u32 drm_hdcp_get_revoked_ksv_count(const u8 *buf, u32 vrls_length)
{
u32 parsed_bytes = 0, ksv_count = 0, vrl_ksv_cnt, vrl_sz;
while (parsed_bytes < vrls_length) {
vrl_ksv_cnt = *buf;
ksv_count += vrl_ksv_cnt;
vrl_sz = (vrl_ksv_cnt * DRM_HDCP_KSV_LEN) + 1;
buf += vrl_sz;
parsed_bytes += vrl_sz;
}
/*
* When vrls are not valid, ksvs are not considered.
* Hence SRM will be discarded.
*/
if (parsed_bytes != vrls_length)
ksv_count = 0;
return ksv_count;
}
static u32 drm_hdcp_get_revoked_ksvs(const u8 *buf, u8 *revoked_ksv_list,
u32 vrls_length)
{
u32 parsed_bytes = 0, ksv_count = 0;
u32 vrl_ksv_cnt, vrl_ksv_sz, vrl_idx = 0;
do {
vrl_ksv_cnt = *buf;
vrl_ksv_sz = vrl_ksv_cnt * DRM_HDCP_KSV_LEN;
buf++;
DRM_DEBUG("vrl: %d, Revoked KSVs: %d\n", vrl_idx++,
vrl_ksv_cnt);
memcpy(revoked_ksv_list, buf, vrl_ksv_sz);
ksv_count += vrl_ksv_cnt;
revoked_ksv_list += vrl_ksv_sz;
buf += vrl_ksv_sz;
parsed_bytes += (vrl_ksv_sz + 1);
} while (parsed_bytes < vrls_length);
return ksv_count;
}
static inline u32 get_vrl_length(const u8 *buf)
{
return drm_hdcp_be24_to_cpu(buf);
}
static int drm_hdcp_parse_hdcp1_srm(const u8 *buf, size_t count)
{
struct hdcp_srm_header *header;
u32 vrl_length, ksv_count;
if (count < (sizeof(struct hdcp_srm_header) +
DRM_HDCP_1_4_VRL_LENGTH_SIZE + DRM_HDCP_1_4_DCP_SIG_SIZE)) {
DRM_ERROR("Invalid blob length\n");
return -EINVAL;
}
header = (struct hdcp_srm_header *)buf;
DRM_DEBUG("SRM ID: 0x%x, SRM Ver: 0x%x, SRM Gen No: 0x%x\n",
header->srm_id,
be16_to_cpu(header->srm_version), header->srm_gen_no);
WARN_ON(header->reserved);
buf = buf + sizeof(*header);
vrl_length = get_vrl_length(buf);
if (count < (sizeof(struct hdcp_srm_header) + vrl_length) ||
vrl_length < (DRM_HDCP_1_4_VRL_LENGTH_SIZE +
DRM_HDCP_1_4_DCP_SIG_SIZE)) {
DRM_ERROR("Invalid blob length or vrl length\n");
return -EINVAL;
}
/* Length of all the vrls combined */
vrl_length -= (DRM_HDCP_1_4_VRL_LENGTH_SIZE +
DRM_HDCP_1_4_DCP_SIG_SIZE);
if (!vrl_length) {
DRM_ERROR("No vrl found\n");
return -EINVAL;
}
buf += DRM_HDCP_1_4_VRL_LENGTH_SIZE;
ksv_count = drm_hdcp_get_revoked_ksv_count(buf, vrl_length);
if (!ksv_count) {
DRM_DEBUG("Revoked KSV count is 0\n");
return count;
}
kfree(srm_data->revoked_ksv_list);
srm_data->revoked_ksv_list = kcalloc(ksv_count, DRM_HDCP_KSV_LEN,
GFP_KERNEL);
if (!srm_data->revoked_ksv_list) {
DRM_ERROR("Out of Memory\n");
return -ENOMEM;
}
if (drm_hdcp_get_revoked_ksvs(buf, srm_data->revoked_ksv_list,
vrl_length) != ksv_count) {
srm_data->revoked_ksv_cnt = 0;
kfree(srm_data->revoked_ksv_list);
return -EINVAL;
}
srm_data->revoked_ksv_cnt = ksv_count;
return count;
}
static int drm_hdcp_parse_hdcp2_srm(const u8 *buf, size_t count)
{
struct hdcp_srm_header *header;
u32 vrl_length, ksv_count, ksv_sz;
if (count < (sizeof(struct hdcp_srm_header) +
DRM_HDCP_2_VRL_LENGTH_SIZE + DRM_HDCP_2_DCP_SIG_SIZE)) {
DRM_ERROR("Invalid blob length\n");
return -EINVAL;
}
header = (struct hdcp_srm_header *)buf;
DRM_DEBUG("SRM ID: 0x%x, SRM Ver: 0x%x, SRM Gen No: 0x%x\n",
header->srm_id & DRM_HDCP_SRM_ID_MASK,
be16_to_cpu(header->srm_version), header->srm_gen_no);
if (header->reserved)
return -EINVAL;
buf = buf + sizeof(*header);
vrl_length = get_vrl_length(buf);
if (count < (sizeof(struct hdcp_srm_header) + vrl_length) ||
vrl_length < (DRM_HDCP_2_VRL_LENGTH_SIZE +
DRM_HDCP_2_DCP_SIG_SIZE)) {
DRM_ERROR("Invalid blob length or vrl length\n");
return -EINVAL;
}
/* Length of all the vrls combined */
vrl_length -= (DRM_HDCP_2_VRL_LENGTH_SIZE +
DRM_HDCP_2_DCP_SIG_SIZE);
if (!vrl_length) {
DRM_ERROR("No vrl found\n");
return -EINVAL;
}
buf += DRM_HDCP_2_VRL_LENGTH_SIZE;
ksv_count = (*buf << 2) | DRM_HDCP_2_KSV_COUNT_2_LSBITS(*(buf + 1));
if (!ksv_count) {
DRM_DEBUG("Revoked KSV count is 0\n");
return count;
}
kfree(srm_data->revoked_ksv_list);
srm_data->revoked_ksv_list = kcalloc(ksv_count, DRM_HDCP_KSV_LEN,
GFP_KERNEL);
if (!srm_data->revoked_ksv_list) {
DRM_ERROR("Out of Memory\n");
return -ENOMEM;
}
ksv_sz = ksv_count * DRM_HDCP_KSV_LEN;
buf += DRM_HDCP_2_NO_OF_DEV_PLUS_RESERVED_SZ;
DRM_DEBUG("Revoked KSVs: %d\n", ksv_count);
memcpy(srm_data->revoked_ksv_list, buf, ksv_sz);
srm_data->revoked_ksv_cnt = ksv_count;
return count;
}
static inline bool is_srm_version_hdcp1(const u8 *buf)
{
return *buf == (u8)(DRM_HDCP_1_4_SRM_ID << 4);
}
static inline bool is_srm_version_hdcp2(const u8 *buf)
{
return *buf == (u8)(DRM_HDCP_2_SRM_ID << 4 | DRM_HDCP_2_INDICATOR);
}
static void drm_hdcp_srm_update(const u8 *buf, size_t count)
{
if (count < sizeof(struct hdcp_srm_header))
return;
if (is_srm_version_hdcp1(buf))
drm_hdcp_parse_hdcp1_srm(buf, count);
else if (is_srm_version_hdcp2(buf))
drm_hdcp_parse_hdcp2_srm(buf, count);
}
static void drm_hdcp_request_srm(struct drm_device *drm_dev)
{
char fw_name[36] = "display_hdcp_srm.bin";
const struct firmware *fw;
int ret;
ret = request_firmware_direct(&fw, (const char *)fw_name,
drm_dev->dev);
if (ret < 0)
goto exit;
if (fw->size && fw->data)
drm_hdcp_srm_update(fw->data, fw->size);
exit:
release_firmware(fw);
}
/**
* drm_hdcp_check_ksvs_revoked - Check the revoked status of the IDs
*
* @drm_dev: drm_device for which HDCP revocation check is requested
* @ksvs: List of KSVs (HDCP receiver IDs)
* @ksv_count: KSV count passed in through @ksvs
*
* This function reads the HDCP System Renewability Message (SRM table)
* from userspace as firmware and parses it for the revoked HDCP
* KSVs (receiver IDs) identified by DCP LLC. Once the revoked KSVs are
* known, the revoked state of each KSV in the list passed in by the
* display driver is determined and reported back.
*
* The SRM must be provided under the name "display_hdcp_srm.bin".
*
* Returns:
* TRUE if any of the KSVs is revoked, else FALSE.
*/
bool drm_hdcp_check_ksvs_revoked(struct drm_device *drm_dev, u8 *ksvs,
u32 ksv_count)
{
u32 rev_ksv_cnt, cnt, i, j;
u8 *rev_ksv_list;
if (!srm_data)
return false;
mutex_lock(&srm_data->mutex);
drm_hdcp_request_srm(drm_dev);
rev_ksv_cnt = srm_data->revoked_ksv_cnt;
rev_ksv_list = srm_data->revoked_ksv_list;
/* If the Revoked ksv list is empty */
if (!rev_ksv_cnt || !rev_ksv_list) {
mutex_unlock(&srm_data->mutex);
return false;
}
for (cnt = 0; cnt < ksv_count; cnt++) {
rev_ksv_list = srm_data->revoked_ksv_list;
for (i = 0; i < rev_ksv_cnt; i++) {
for (j = 0; j < DRM_HDCP_KSV_LEN; j++)
if (ksvs[j] != rev_ksv_list[j]) {
break;
} else if (j == (DRM_HDCP_KSV_LEN - 1)) {
DRM_DEBUG("Revoked KSV is ");
drm_hdcp_print_ksv(ksvs);
mutex_unlock(&srm_data->mutex);
return true;
}
/* Move the offset to next KSV in the revoked list */
rev_ksv_list += DRM_HDCP_KSV_LEN;
}
/* Iterate to next ksv_offset */
ksvs += DRM_HDCP_KSV_LEN;
}
mutex_unlock(&srm_data->mutex);
return false;
}
EXPORT_SYMBOL_GPL(drm_hdcp_check_ksvs_revoked);
int drm_setup_hdcp_srm(struct class *drm_class)
{
srm_data = kzalloc(sizeof(*srm_data), GFP_KERNEL);
if (!srm_data)
return -ENOMEM;
mutex_init(&srm_data->mutex);
return 0;
}
void drm_teardown_hdcp_srm(struct class *drm_class)
{
if (srm_data) {
kfree(srm_data->revoked_ksv_list);
kfree(srm_data);
}
}
static struct drm_prop_enum_list drm_cp_enum_list[] = {
{ DRM_MODE_CONTENT_PROTECTION_UNDESIRED, "Undesired" },
{ DRM_MODE_CONTENT_PROTECTION_DESIRED, "Desired" },
{ DRM_MODE_CONTENT_PROTECTION_ENABLED, "Enabled" },
};
DRM_ENUM_NAME_FN(drm_get_content_protection_name, drm_cp_enum_list)
/**
* drm_connector_attach_content_protection_property - attach content protection
* property
*
* @connector: connector to attach CP property on.
*
* This is used to add support for content protection on select connectors.
* Content Protection is intentionally vague to allow for different underlying
* technologies, however it is most commonly implemented by HDCP.
*
* The content protection will be set to &drm_connector_state.content_protection
*
* Returns:
* Zero on success, negative errno on failure.
*/
int drm_connector_attach_content_protection_property(
struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_property *prop =
dev->mode_config.content_protection_property;
if (!prop)
prop = drm_property_create_enum(dev, 0, "Content Protection",
drm_cp_enum_list,
ARRAY_SIZE(drm_cp_enum_list));
if (!prop)
return -ENOMEM;
drm_object_attach_property(&connector->base, prop,
DRM_MODE_CONTENT_PROTECTION_UNDESIRED);
dev->mode_config.content_protection_property = prop;
return 0;
}
EXPORT_SYMBOL(drm_connector_attach_content_protection_property);
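For context, a minimal driver-side sketch of how the two exported helpers above are meant to be used together; the example_* functions, the -EPERM choice and the call sites are illustrative assumptions, not part of this patch:

/* Hypothetical driver code, for illustration only. */
#include <drm/drm_connector.h>
#include <drm/drm_hdcp.h>

static int example_hdcp_init(struct drm_connector *connector)
{
	/* Expose the Undesired/Desired/Enabled enum to userspace. */
	return drm_connector_attach_content_protection_property(connector);
}

static int example_hdcp_check_link(struct drm_device *dev,
				   u8 *ksv_fifo, u32 num_downstream)
{
	/*
	 * ksv_fifo holds num_downstream * DRM_HDCP_KSV_LEN bytes read from
	 * the HDCP repeater; refuse to authenticate if any KSV is revoked.
	 */
	if (drm_hdcp_check_ksvs_revoked(dev, ksv_fifo, num_downstream))
		return -EPERM;

	return 0;
}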


@ -108,6 +108,7 @@ void drm_sysfs_connector_remove(struct drm_connector *connector);
void drm_sysfs_lease_event(struct drm_device *dev);
/* drm_gem.c */
struct drm_gem_object;
int drm_gem_init(struct drm_device *dev);
void drm_gem_destroy(struct drm_device *dev);
int drm_gem_handle_create_tail(struct drm_file *file_priv,
@ -203,3 +204,7 @@ int drm_syncobj_query_ioctl(struct drm_device *dev, void *data,
void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_framebuffer *fb);
int drm_framebuffer_debugfs_init(struct drm_minor *minor);
/* drm_hdcp.c */
int drm_setup_hdcp_srm(struct class *drm_class);
void drm_teardown_hdcp_srm(struct class *drm_class);


@ -78,6 +78,7 @@ int drm_sysfs_init(void)
}
drm_class->devnode = drm_devnode;
drm_setup_hdcp_srm(drm_class);
return 0;
}
@ -90,6 +91,7 @@ void drm_sysfs_destroy(void)
{
if (IS_ERR_OR_NULL(drm_class))
return;
drm_teardown_hdcp_srm(drm_class);
class_remove_file(drm_class, &class_attr_version.attr);
class_destroy(drm_class);
drm_class = NULL;


@ -133,3 +133,9 @@ depends on DRM_I915
depends on EXPERT
source "drivers/gpu/drm/i915/Kconfig.debug"
endmenu
menu "drm/i915 Profile Guided Optimisation"
visible if EXPERT
depends on DRM_I915
source "drivers/gpu/drm/i915/Kconfig.profile"
endmenu


@ -0,0 +1,13 @@
config DRM_I915_SPIN_REQUEST
int
default 5 # microseconds
help
Before sleeping waiting for a request (GPU operation) to complete,
we may spend some time polling for its completion. As the IRQ may
take a non-negligible time to setup, we do a short spin first to
check if the request will complete in the time it would have taken
us to enable the interrupt.
May be 0 to disable the initial spin. In practice, we estimate
the cost of enabling the interrupt (if currently disabled) to be
a few microseconds.
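
To illustrate the trade-off this option tunes, a small userspace-style sketch of the spin-then-sleep idiom (the function names and the blocking_wait callback are invented for the example; the real i915_request_wait() path is more involved):

#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Poll for up to spin_us microseconds before falling back to a blocking wait. */
static void wait_for_done(atomic_bool *done, unsigned int spin_us,
			  void (*blocking_wait)(atomic_bool *done))
{
	uint64_t deadline = now_ns() + (uint64_t)spin_us * 1000;

	/* Busy-wait briefly: cheaper than arming an interrupt if the
	 * request retires almost immediately. */
	while (now_ns() < deadline)
		if (atomic_load(done))
			return;

	/* Did not complete within the spin budget; sleep until signalled. */
	blocking_wait(done);
}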


@ -35,32 +35,56 @@ subdir-ccflags-y += \
# Extra header tests
include $(src)/Makefile.header-test
subdir-ccflags-y += -I$(src)
# Please keep these build lists sorted!
# core driver code
i915-y += i915_drv.o \
i915_irq.o \
i915_memcpy.o \
i915_mm.o \
i915_params.o \
i915_pci.o \
i915_reset.o \
i915_suspend.o \
i915_sw_fence.o \
i915_syncmap.o \
i915_sysfs.o \
i915_user_extensions.o \
intel_csr.o \
intel_device_info.o \
intel_pm.o \
intel_runtime_pm.o \
intel_workarounds.o
intel_wakeref.o \
intel_uncore.o
# core library code
i915-y += \
i915_memcpy.o \
i915_mm.o \
i915_sw_fence.o \
i915_syncmap.o \
i915_user_extensions.o
i915-$(CONFIG_COMPAT) += i915_ioc32.o
i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o intel_pipe_crc.o
i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o
# GEM code
# "Graphics Technology" (aka we talk to the gpu)
obj-y += gt/
gt-y += \
gt/intel_breadcrumbs.o \
gt/intel_context.o \
gt/intel_engine_cs.o \
gt/intel_engine_pm.o \
gt/intel_gt_pm.o \
gt/intel_hangcheck.o \
gt/intel_lrc.o \
gt/intel_reset.o \
gt/intel_ringbuffer.o \
gt/intel_mocs.o \
gt/intel_sseu.o \
gt/intel_workarounds.o
gt-$(CONFIG_DRM_I915_SELFTEST) += \
gt/mock_engine.o
i915-y += $(gt-y)
# GEM (Graphics Execution Management) code
i915-y += \
i915_active.o \
i915_cmd_parser.o \
@ -75,6 +99,7 @@ i915-y += \
i915_gem_internal.o \
i915_gem.o \
i915_gem_object.o \
i915_gem_pm.o \
i915_gem_render_state.o \
i915_gem_shrinker.o \
i915_gem_stolen.o \
@ -88,14 +113,6 @@ i915-y += \
i915_timeline.o \
i915_trace_points.o \
i915_vma.o \
intel_breadcrumbs.o \
intel_context.o \
intel_engine_cs.o \
intel_hangcheck.o \
intel_lrc.o \
intel_mocs.o \
intel_ringbuffer.o \
intel_uncore.o \
intel_wopcm.o
# general-purpose microcontroller (GuC) support
@ -159,8 +176,8 @@ i915-y += dvo_ch7017.o \
intel_dsi_dcs_backlight.o \
intel_dsi_vbt.o \
intel_dvo.o \
intel_gmbus.o \
intel_hdmi.o \
intel_i2c.o \
intel_lspcon.o \
intel_lvds.o \
intel_panel.o \
@ -176,6 +193,7 @@ i915-$(CONFIG_DRM_I915_SELFTEST) += \
selftests/i915_random.o \
selftests/i915_selftest.o \
selftests/igt_flush_test.o \
selftests/igt_gem_utils.o \
selftests/igt_live_test.o \
selftests/igt_reset.o \
selftests/igt_spinner.o


@ -4,37 +4,65 @@
# Test the headers are compilable as standalone units
header_test := \
i915_active_types.h \
i915_debugfs.h \
i915_drv.h \
i915_gem_context_types.h \
i915_gem_pm.h \
i915_irq.h \
i915_params.h \
i915_priolist_types.h \
i915_reg.h \
i915_scheduler_types.h \
i915_timeline_types.h \
i915_utils.h \
intel_acpi.h \
intel_atomic.h \
intel_atomic_plane.h \
intel_audio.h \
intel_bios.h \
intel_cdclk.h \
intel_color.h \
intel_combo_phy.h \
intel_connector.h \
intel_context_types.h \
intel_crt.h \
intel_csr.h \
intel_ddi.h \
intel_dp.h \
intel_dp_aux_backlight.h \
intel_dp_link_training.h \
intel_dp_mst.h \
intel_dpio_phy.h \
intel_dpll_mgr.h \
intel_drv.h \
intel_dsi.h \
intel_dsi_dcs_backlight.h \
intel_dvo.h \
intel_engine_types.h \
intel_dvo_dev.h \
intel_fbc.h \
intel_fbdev.h \
intel_fifo_underrun.h \
intel_frontbuffer.h \
intel_gmbus.h \
intel_hdcp.h \
intel_hdmi.h \
intel_hotplug.h \
intel_lpe_audio.h \
intel_lspcon.h \
intel_lvds.h \
intel_overlay.h \
intel_panel.h \
intel_pipe_crc.h \
intel_pm.h \
intel_psr.h \
intel_quirks.h \
intel_runtime_pm.h \
intel_sdvo.h \
intel_sideband.h \
intel_sprite.h \
intel_tv.h \
intel_workarounds_types.h
intel_uncore.h \
intel_vdsc.h \
intel_wakeref.h
quiet_cmd_header_test = HDRTEST $@
cmd_header_test = echo "\#include \"$(<F)\"" > $@


@ -25,7 +25,8 @@
*
*/
#include "dvo.h"
#include "intel_drv.h"
#include "intel_dvo_dev.h"
#define CH7017_TV_DISPLAY_MODE 0x00
#define CH7017_FLICKER_FILTER 0x01


@ -26,7 +26,8 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
**************************************************************************/
#include "dvo.h"
#include "intel_drv.h"
#include "intel_dvo_dev.h"
#define CH7xxx_REG_VID 0x4a
#define CH7xxx_REG_DID 0x4b


@ -29,7 +29,8 @@
*
*/
#include "dvo.h"
#include "intel_drv.h"
#include "intel_dvo_dev.h"
/*
* register definitions for the i82807aa.


@ -26,9 +26,10 @@
*
*/
#include "dvo.h"
#include "i915_reg.h"
#include "i915_drv.h"
#include "i915_reg.h"
#include "intel_drv.h"
#include "intel_dvo_dev.h"
#define NS2501_VID 0x1305
#define NS2501_DID 0x6726


@ -26,7 +26,8 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
**************************************************************************/
#include "dvo.h"
#include "intel_drv.h"
#include "intel_dvo_dev.h"
#define SIL164_VID 0x0001
#define SIL164_DID 0x0006


@ -25,7 +25,8 @@
*
*/
#include "dvo.h"
#include "intel_drv.h"
#include "intel_dvo_dev.h"
/* register definitions according to the TFP410 data sheet */
#define TFP410_VID 0x014C


@ -0,0 +1,2 @@
# Extra header tests
include $(src)/Makefile.header-test


@ -0,0 +1,16 @@
# SPDX-License-Identifier: MIT
# Copyright © 2019 Intel Corporation
# Test the headers are compilable as standalone units
header_test := $(notdir $(wildcard $(src)/*.h))
quiet_cmd_header_test = HDRTEST $@
cmd_header_test = echo "\#include \"$(<F)\"" > $@
header_test_%.c: %.h
$(call cmd,header_test)
extra-$(CONFIG_DRM_I915_WERROR) += \
$(foreach h,$(header_test),$(patsubst %.h,header_test_%.o,$(h)))
clean-files += $(foreach h,$(header_test),$(patsubst %.h,header_test_%.c,$(h)))
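
For reference, each header_test_%.c file that cmd_header_test generates is a one-line translation unit, built as an extra object when CONFIG_DRM_I915_WERROR is set; taking gt/intel_context.h as an example, the generated file would simply be:

/* header_test_intel_context.c, generated by the rule above */
#include "intel_context.h"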


@ -81,6 +81,22 @@ static inline bool __request_completed(const struct i915_request *rq)
return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno);
}
__maybe_unused static bool
check_signal_order(struct intel_context *ce, struct i915_request *rq)
{
if (!list_is_last(&rq->signal_link, &ce->signals) &&
i915_seqno_passed(rq->fence.seqno,
list_next_entry(rq, signal_link)->fence.seqno))
return false;
if (!list_is_first(&rq->signal_link, &ce->signals) &&
i915_seqno_passed(list_prev_entry(rq, signal_link)->fence.seqno,
rq->fence.seqno))
return false;
return true;
}
static bool
__dma_fence_signal(struct dma_fence *fence)
{
@ -130,6 +146,8 @@ void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
struct i915_request *rq =
list_entry(pos, typeof(*rq), signal_link);
GEM_BUG_ON(!check_signal_order(ce, rq));
if (!__request_completed(rq))
break;
@ -312,6 +330,7 @@ bool i915_request_enable_breadcrumb(struct i915_request *rq)
list_add(&rq->signal_link, pos);
if (pos == &ce->signals) /* catch transitions from empty list */
list_move_tail(&ce->signal_link, &b->signalers);
GEM_BUG_ON(!check_signal_order(ce, rq));
set_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);
spin_unlock(&b->irq_lock);


@ -0,0 +1,179 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#include "i915_drv.h"
#include "i915_gem_context.h"
#include "i915_globals.h"
#include "intel_context.h"
#include "intel_engine.h"
#include "intel_engine_pm.h"
static struct i915_global_context {
struct i915_global base;
struct kmem_cache *slab_ce;
} global;
static struct intel_context *intel_context_alloc(void)
{
return kmem_cache_zalloc(global.slab_ce, GFP_KERNEL);
}
void intel_context_free(struct intel_context *ce)
{
kmem_cache_free(global.slab_ce, ce);
}
struct intel_context *
intel_context_create(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
{
struct intel_context *ce;
ce = intel_context_alloc();
if (!ce)
return ERR_PTR(-ENOMEM);
intel_context_init(ce, ctx, engine);
return ce;
}
int __intel_context_do_pin(struct intel_context *ce)
{
int err;
if (mutex_lock_interruptible(&ce->pin_mutex))
return -EINTR;
if (likely(!atomic_read(&ce->pin_count))) {
intel_wakeref_t wakeref;
err = 0;
with_intel_runtime_pm(ce->engine->i915, wakeref)
err = ce->ops->pin(ce);
if (err)
goto err;
i915_gem_context_get(ce->gem_context); /* for ctx->ppgtt */
intel_context_get(ce);
smp_mb__before_atomic(); /* flush pin before it is visible */
}
atomic_inc(&ce->pin_count);
GEM_BUG_ON(!intel_context_is_pinned(ce)); /* no overflow! */
mutex_unlock(&ce->pin_mutex);
return 0;
err:
mutex_unlock(&ce->pin_mutex);
return err;
}
void intel_context_unpin(struct intel_context *ce)
{
if (likely(atomic_add_unless(&ce->pin_count, -1, 1)))
return;
/* We may be called from inside intel_context_pin() to evict another */
intel_context_get(ce);
mutex_lock_nested(&ce->pin_mutex, SINGLE_DEPTH_NESTING);
if (likely(atomic_dec_and_test(&ce->pin_count))) {
ce->ops->unpin(ce);
i915_gem_context_put(ce->gem_context);
intel_context_put(ce);
}
mutex_unlock(&ce->pin_mutex);
intel_context_put(ce);
}
static void intel_context_retire(struct i915_active_request *active,
struct i915_request *rq)
{
struct intel_context *ce =
container_of(active, typeof(*ce), active_tracker);
intel_context_unpin(ce);
}
void
intel_context_init(struct intel_context *ce,
struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
{
GEM_BUG_ON(!engine->cops);
kref_init(&ce->ref);
ce->gem_context = ctx;
ce->engine = engine;
ce->ops = engine->cops;
ce->sseu = engine->sseu;
ce->saturated = 0;
INIT_LIST_HEAD(&ce->signal_link);
INIT_LIST_HEAD(&ce->signals);
mutex_init(&ce->pin_mutex);
i915_active_request_init(&ce->active_tracker,
NULL, intel_context_retire);
}
static void i915_global_context_shrink(void)
{
kmem_cache_shrink(global.slab_ce);
}
static void i915_global_context_exit(void)
{
kmem_cache_destroy(global.slab_ce);
}
static struct i915_global_context global = { {
.shrink = i915_global_context_shrink,
.exit = i915_global_context_exit,
} };
int __init i915_global_context_init(void)
{
global.slab_ce = KMEM_CACHE(intel_context, SLAB_HWCACHE_ALIGN);
if (!global.slab_ce)
return -ENOMEM;
i915_global_register(&global.base);
return 0;
}
void intel_context_enter_engine(struct intel_context *ce)
{
intel_engine_pm_get(ce->engine);
}
void intel_context_exit_engine(struct intel_context *ce)
{
ce->saturated = 0;
intel_engine_pm_put(ce->engine);
}
struct i915_request *intel_context_create_request(struct intel_context *ce)
{
struct i915_request *rq;
int err;
err = intel_context_pin(ce);
if (unlikely(err))
return ERR_PTR(err);
rq = i915_request_create(ce);
intel_context_unpin(ce);
return rq;
}


@ -0,0 +1,130 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#ifndef __INTEL_CONTEXT_H__
#define __INTEL_CONTEXT_H__
#include <linux/lockdep.h>
#include "intel_context_types.h"
#include "intel_engine_types.h"
void intel_context_init(struct intel_context *ce,
struct i915_gem_context *ctx,
struct intel_engine_cs *engine);
struct intel_context *
intel_context_create(struct i915_gem_context *ctx,
struct intel_engine_cs *engine);
void intel_context_free(struct intel_context *ce);
/**
* intel_context_lock_pinned - Stabilises the 'pinned' status of the HW context
* @ce - the context
*
* Acquire a lock on the pinned status of the HW context, such that the context
* can neither be bound to the GPU nor unbound whilst the lock is held, i.e.
* intel_context_is_pinned() remains stable.
*/
static inline int intel_context_lock_pinned(struct intel_context *ce)
__acquires(ce->pin_mutex)
{
return mutex_lock_interruptible(&ce->pin_mutex);
}
/**
* intel_context_is_pinned - Reports the 'pinned' status
* @ce - the context
*
* While in use by the GPU, the context, along with its ring and page
* tables is pinned into memory and the GTT.
*
* Returns: true if the context is currently pinned for use by the GPU.
*/
static inline bool
intel_context_is_pinned(struct intel_context *ce)
{
return atomic_read(&ce->pin_count);
}
/**
* intel_context_unlock_pinned - Releases the earlier locking of 'pinned' status
* @ce - the context
*
* Releases the lock earlier acquired by intel_context_lock_pinned().
*/
static inline void intel_context_unlock_pinned(struct intel_context *ce)
__releases(ce->pin_mutex)
{
mutex_unlock(&ce->pin_mutex);
}
int __intel_context_do_pin(struct intel_context *ce);
static inline int intel_context_pin(struct intel_context *ce)
{
if (likely(atomic_inc_not_zero(&ce->pin_count)))
return 0;
return __intel_context_do_pin(ce);
}
static inline void __intel_context_pin(struct intel_context *ce)
{
GEM_BUG_ON(!intel_context_is_pinned(ce));
atomic_inc(&ce->pin_count);
}
void intel_context_unpin(struct intel_context *ce);
void intel_context_enter_engine(struct intel_context *ce);
void intel_context_exit_engine(struct intel_context *ce);
static inline void intel_context_enter(struct intel_context *ce)
{
if (!ce->active_count++)
ce->ops->enter(ce);
}
static inline void intel_context_mark_active(struct intel_context *ce)
{
++ce->active_count;
}
static inline void intel_context_exit(struct intel_context *ce)
{
GEM_BUG_ON(!ce->active_count);
if (!--ce->active_count)
ce->ops->exit(ce);
}
static inline struct intel_context *intel_context_get(struct intel_context *ce)
{
kref_get(&ce->ref);
return ce;
}
static inline void intel_context_put(struct intel_context *ce)
{
kref_put(&ce->ref, ce->ops->destroy);
}
static inline void intel_context_timeline_lock(struct intel_context *ce)
__acquires(&ce->ring->timeline->mutex)
{
mutex_lock(&ce->ring->timeline->mutex);
}
static inline void intel_context_timeline_unlock(struct intel_context *ce)
__releases(&ce->ring->timeline->mutex)
{
mutex_unlock(&ce->ring->timeline->mutex);
}
struct i915_request *intel_context_create_request(struct intel_context *ce);
#endif /* __INTEL_CONTEXT_H__ */
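
As a usage illustration of the pin interface declared above (a sketch only: example_submit, the includes and the i915_request_add() placement are assumptions, though intel_context_create_request() in intel_context.c follows the same pin/create/unpin shape):

#include <linux/err.h>

#include "i915_request.h"
#include "intel_context.h"

static int example_submit(struct intel_context *ce)
{
	struct i915_request *rq;
	int err;

	err = intel_context_pin(ce);	/* bind the context backing state */
	if (err)
		return err;

	rq = i915_request_create(ce);	/* build work against the pinned context */
	if (IS_ERR(rq)) {
		intel_context_unpin(ce);
		return PTR_ERR(rq);
	}

	i915_request_add(rq);		/* assumed submission call */
	intel_context_unpin(ce);	/* the request holds its own references */
	return 0;
}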


@ -10,11 +10,11 @@
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/types.h>
#include "i915_active_types.h"
#include "intel_engine_types.h"
#include "intel_sseu.h"
struct i915_gem_context;
struct i915_vma;
@ -25,20 +25,13 @@ struct intel_context_ops {
int (*pin)(struct intel_context *ce);
void (*unpin)(struct intel_context *ce);
void (*enter)(struct intel_context *ce);
void (*exit)(struct intel_context *ce);
void (*reset)(struct intel_context *ce);
void (*destroy)(struct kref *kref);
};
/*
* Powergating configuration for a particular (context,engine).
*/
struct intel_sseu {
u8 slice_mask;
u8 subslice_mask;
u8 min_eus_per_subslice;
u8 max_eus_per_subslice;
};
struct intel_context {
struct kref ref;
@ -46,7 +39,6 @@ struct intel_context {
struct intel_engine_cs *engine;
struct intel_engine_cs *active;
struct list_head active_link;
struct list_head signal_link;
struct list_head signals;
@ -56,6 +48,8 @@ struct intel_context {
u32 *lrc_reg_state;
u64 lrc_desc;
unsigned int active_count; /* notionally protected by timeline->mutex */
atomic_t pin_count;
struct mutex pin_mutex; /* guards pinning and associated on-gpuing */
@ -68,7 +62,6 @@ struct intel_context {
struct i915_active_request active_tracker;
const struct intel_context_ops *ops;
struct rb_node node;
/** sseu: Control eu/slice partitioning */
struct intel_sseu sseu;


@ -106,24 +106,6 @@ hangcheck_action_to_str(const enum intel_engine_hangcheck_action a)
void intel_engines_set_scheduler_caps(struct drm_i915_private *i915);
static inline bool __execlists_need_preempt(int prio, int last)
{
/*
* Allow preemption of low -> normal -> high, but we do
* not allow low priority tasks to preempt other low priority
* tasks under the impression that latency for low priority
* tasks does not matter (as much as background throughput),
* so kiss.
*
* More naturally we would write
* prio >= max(0, last);
* except that we wish to prevent triggering preemption at the same
* priority level: the task that is running should remain running
* to preserve FIFO ordering of dependencies.
*/
return prio > max(I915_PRIORITY_NORMAL - 1, last);
}
static inline void
execlists_set_active(struct intel_engine_execlists *execlists,
unsigned int bit)
@ -233,8 +215,6 @@ intel_write_status_page(struct intel_engine_cs *engine, int reg, u32 value)
*/
#define I915_GEM_HWS_PREEMPT 0x32
#define I915_GEM_HWS_PREEMPT_ADDR (I915_GEM_HWS_PREEMPT * sizeof(u32))
#define I915_GEM_HWS_HANGCHECK 0x34
#define I915_GEM_HWS_HANGCHECK_ADDR (I915_GEM_HWS_HANGCHECK * sizeof(u32))
#define I915_GEM_HWS_SEQNO 0x40
#define I915_GEM_HWS_SEQNO_ADDR (I915_GEM_HWS_SEQNO * sizeof(u32))
#define I915_GEM_HWS_SCRATCH 0x80
@ -362,14 +342,16 @@ __intel_ring_space(unsigned int head, unsigned int tail, unsigned int size)
return (head - tail - CACHELINE_BYTES) & (size - 1);
}
int intel_engine_setup_common(struct intel_engine_cs *engine);
int intel_engines_init_mmio(struct drm_i915_private *i915);
int intel_engines_setup(struct drm_i915_private *i915);
int intel_engines_init(struct drm_i915_private *i915);
void intel_engines_cleanup(struct drm_i915_private *i915);
int intel_engine_init_common(struct intel_engine_cs *engine);
void intel_engine_cleanup_common(struct intel_engine_cs *engine);
int intel_init_render_ring_buffer(struct intel_engine_cs *engine);
int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine);
int intel_init_blt_ring_buffer(struct intel_engine_cs *engine);
int intel_init_vebox_ring_buffer(struct intel_engine_cs *engine);
int intel_ring_submission_setup(struct intel_engine_cs *engine);
int intel_ring_submission_init(struct intel_engine_cs *engine);
int intel_engine_stop_cs(struct intel_engine_cs *engine);
void intel_engine_cancel_stop_cs(struct intel_engine_cs *engine);
@ -382,6 +364,8 @@ u64 intel_engine_get_last_batch_head(const struct intel_engine_cs *engine);
void intel_engine_get_instdone(struct intel_engine_cs *engine,
struct intel_instdone *instdone);
void intel_engine_init_execlists(struct intel_engine_cs *engine);
void intel_engine_init_breadcrumbs(struct intel_engine_cs *engine);
void intel_engine_fini_breadcrumbs(struct intel_engine_cs *engine);
@ -458,19 +442,14 @@ static inline void intel_engine_reset(struct intel_engine_cs *engine,
{
if (engine->reset.reset)
engine->reset.reset(engine, stalled);
engine->serial++; /* contexts lost */
}
void intel_engines_sanitize(struct drm_i915_private *i915, bool force);
void intel_gt_resume(struct drm_i915_private *i915);
bool intel_engine_is_idle(struct intel_engine_cs *engine);
bool intel_engines_are_idle(struct drm_i915_private *dev_priv);
void intel_engine_lost_context(struct intel_engine_cs *engine);
void intel_engines_park(struct drm_i915_private *i915);
void intel_engines_unpark(struct drm_i915_private *i915);
void intel_engines_reset_default_submission(struct drm_i915_private *i915);
unsigned int intel_engines_has_context_isolation(struct drm_i915_private *i915);
@ -567,17 +546,4 @@ static inline bool inject_preempt_hang(struct intel_engine_execlists *execlists)
#endif
static inline u32
intel_engine_next_hangcheck_seqno(struct intel_engine_cs *engine)
{
return engine->hangcheck.next_seqno =
next_pseudo_random32(engine->hangcheck.next_seqno);
}
static inline u32
intel_engine_get_hangcheck_seqno(struct intel_engine_cs *engine)
{
return intel_read_status_page(engine, I915_GEM_HWS_HANGCHECK);
}
#endif /* _INTEL_RINGBUFFER_H_ */


@ -25,9 +25,11 @@
#include <drm/drm_print.h>
#include "i915_drv.h"
#include "i915_reset.h"
#include "intel_ringbuffer.h"
#include "intel_engine.h"
#include "intel_engine_pm.h"
#include "intel_lrc.h"
#include "intel_reset.h"
/* Haswell does have the CXT_SIZE register however it does not appear to be
* valid. Now, docs explain in dwords what is in the context object. The full
@ -48,35 +50,24 @@
struct engine_class_info {
const char *name;
int (*init_legacy)(struct intel_engine_cs *engine);
int (*init_execlists)(struct intel_engine_cs *engine);
u8 uabi_class;
};
static const struct engine_class_info intel_engine_classes[] = {
[RENDER_CLASS] = {
.name = "rcs",
.init_execlists = logical_render_ring_init,
.init_legacy = intel_init_render_ring_buffer,
.uabi_class = I915_ENGINE_CLASS_RENDER,
},
[COPY_ENGINE_CLASS] = {
.name = "bcs",
.init_execlists = logical_xcs_ring_init,
.init_legacy = intel_init_blt_ring_buffer,
.uabi_class = I915_ENGINE_CLASS_COPY,
},
[VIDEO_DECODE_CLASS] = {
.name = "vcs",
.init_execlists = logical_xcs_ring_init,
.init_legacy = intel_init_bsd_ring_buffer,
.uabi_class = I915_ENGINE_CLASS_VIDEO,
},
[VIDEO_ENHANCEMENT_CLASS] = {
.name = "vecs",
.init_execlists = logical_xcs_ring_init,
.init_legacy = intel_init_vebox_ring_buffer,
.uabi_class = I915_ENGINE_CLASS_VIDEO_ENHANCE,
},
};
@ -212,6 +203,22 @@ __intel_engine_context_size(struct drm_i915_private *dev_priv, u8 class)
PAGE_SIZE);
case 5:
case 4:
/*
* There is a discrepancy here between the size reported
* by the register and the size of the context layout
* in the docs. Both are described as authoritative!
*
* The discrepancy is on the order of a few cachelines,
* but the total is under one page (4k), which is our
* minimum allocation anyway so it should all come
* out in the wash.
*/
cxt_size = I915_READ(CXT_SIZE) + 1;
DRM_DEBUG_DRIVER("gen%d CXT_SIZE = %d bytes [0x%08x]\n",
INTEL_GEN(dev_priv),
cxt_size * 64,
cxt_size - 1);
return round_up(cxt_size * 64, PAGE_SIZE);
case 3:
case 2:
/* For the special day when i810 gets merged. */
@ -312,6 +319,12 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
engine->class = info->class;
engine->instance = info->instance;
/*
* To be overridden by the backend on setup. However to facilitate
* cleanup on error during setup, we always provide the destroy vfunc.
*/
engine->destroy = (typeof(engine->destroy))kfree;
engine->uabi_class = intel_engine_classes[info->class].uabi_class;
engine->context_size = __intel_engine_context_size(dev_priv,
@ -336,18 +349,70 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
return 0;
}
static void __setup_engine_capabilities(struct intel_engine_cs *engine)
{
struct drm_i915_private *i915 = engine->i915;
if (engine->class == VIDEO_DECODE_CLASS) {
/*
* HEVC support is present on first engine instance
* before Gen11 and on all instances afterwards.
*/
if (INTEL_GEN(i915) >= 11 ||
(INTEL_GEN(i915) >= 9 && engine->instance == 0))
engine->uabi_capabilities |=
I915_VIDEO_CLASS_CAPABILITY_HEVC;
/*
* SFC block is present only on even logical engine
* instances.
*/
if ((INTEL_GEN(i915) >= 11 &&
RUNTIME_INFO(i915)->vdbox_sfc_access & engine->mask) ||
(INTEL_GEN(i915) >= 9 && engine->instance == 0))
engine->uabi_capabilities |=
I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC;
} else if (engine->class == VIDEO_ENHANCEMENT_CLASS) {
if (INTEL_GEN(i915) >= 9)
engine->uabi_capabilities |=
I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC;
}
}
static void intel_setup_engine_capabilities(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, i915, id)
__setup_engine_capabilities(engine);
}
/**
* intel_engines_cleanup() - free the resources allocated for Command Streamers
* @i915: the i915 device
*/
void intel_engines_cleanup(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, i915, id) {
engine->destroy(engine);
i915->engine[id] = NULL;
}
}
/**
* intel_engines_init_mmio() - allocate and prepare the Engine Command Streamers
* @dev_priv: i915 device private
* @i915: the i915 device
*
* Return: non-zero if the initialization failed.
*/
int intel_engines_init_mmio(struct drm_i915_private *dev_priv)
int intel_engines_init_mmio(struct drm_i915_private *i915)
{
struct intel_device_info *device_info = mkwrite_device_info(dev_priv);
const unsigned int engine_mask = INTEL_INFO(dev_priv)->engine_mask;
struct intel_engine_cs *engine;
enum intel_engine_id id;
struct intel_device_info *device_info = mkwrite_device_info(i915);
const unsigned int engine_mask = INTEL_INFO(i915)->engine_mask;
unsigned int mask = 0;
unsigned int i;
int err;
@ -360,10 +425,10 @@ int intel_engines_init_mmio(struct drm_i915_private *dev_priv)
return -ENODEV;
for (i = 0; i < ARRAY_SIZE(intel_engines); i++) {
if (!HAS_ENGINE(dev_priv, i))
if (!HAS_ENGINE(i915, i))
continue;
err = intel_engine_setup(dev_priv, i);
err = intel_engine_setup(i915, i);
if (err)
goto cleanup;
@ -379,69 +444,52 @@ int intel_engines_init_mmio(struct drm_i915_private *dev_priv)
device_info->engine_mask = mask;
/* We always presume we have at least RCS available for later probing */
if (WARN_ON(!HAS_ENGINE(dev_priv, RCS0))) {
if (WARN_ON(!HAS_ENGINE(i915, RCS0))) {
err = -ENODEV;
goto cleanup;
}
RUNTIME_INFO(dev_priv)->num_engines = hweight32(mask);
RUNTIME_INFO(i915)->num_engines = hweight32(mask);
i915_check_and_clear_faults(dev_priv);
i915_check_and_clear_faults(i915);
intel_setup_engine_capabilities(i915);
return 0;
cleanup:
for_each_engine(engine, dev_priv, id)
kfree(engine);
intel_engines_cleanup(i915);
return err;
}
/**
* intel_engines_init() - init the Engine Command Streamers
* @dev_priv: i915 device private
* @i915: i915 device private
*
* Return: non-zero if the initialization failed.
*/
int intel_engines_init(struct drm_i915_private *dev_priv)
int intel_engines_init(struct drm_i915_private *i915)
{
int (*init)(struct intel_engine_cs *engine);
struct intel_engine_cs *engine;
enum intel_engine_id id, err_id;
enum intel_engine_id id;
int err;
for_each_engine(engine, dev_priv, id) {
const struct engine_class_info *class_info =
&intel_engine_classes[engine->class];
int (*init)(struct intel_engine_cs *engine);
if (HAS_EXECLISTS(dev_priv))
init = class_info->init_execlists;
else
init = class_info->init_legacy;
err = -EINVAL;
err_id = id;
if (GEM_DEBUG_WARN_ON(!init))
goto cleanup;
if (HAS_EXECLISTS(i915))
init = intel_execlists_submission_init;
else
init = intel_ring_submission_init;
for_each_engine(engine, i915, id) {
err = init(engine);
if (err)
goto cleanup;
GEM_BUG_ON(!engine->submit_request);
}
return 0;
cleanup:
for_each_engine(engine, dev_priv, id) {
if (id >= err_id) {
kfree(engine);
dev_priv->engine[id] = NULL;
} else {
dev_priv->gt.cleanup_engine(engine);
}
}
intel_engines_cleanup(i915);
return err;
}
@ -450,7 +498,7 @@ static void intel_engine_init_batch_pool(struct intel_engine_cs *engine)
i915_gem_batch_pool_init(&engine->batch_pool, engine);
}
static void intel_engine_init_execlist(struct intel_engine_cs *engine)
void intel_engine_init_execlists(struct intel_engine_cs *engine)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
@ -557,16 +605,7 @@ err:
return ret;
}
/**
* intel_engines_setup_common - setup engine state not requiring hw access
* @engine: Engine to setup.
*
* Initializes @engine@ structure members shared between legacy and execlists
* submission modes which do not require hardware access.
*
* Typically done early in the submission mode specific engine setup stage.
*/
int intel_engine_setup_common(struct intel_engine_cs *engine)
static int intel_engine_setup_common(struct intel_engine_cs *engine)
{
int err;
@ -583,10 +622,15 @@ int intel_engine_setup_common(struct intel_engine_cs *engine)
i915_timeline_set_subclass(&engine->timeline, TIMELINE_ENGINE);
intel_engine_init_breadcrumbs(engine);
intel_engine_init_execlist(engine);
intel_engine_init_execlists(engine);
intel_engine_init_hangcheck(engine);
intel_engine_init_batch_pool(engine);
intel_engine_init_cmd_parser(engine);
intel_engine_init__pm(engine);
/* Use the whole device by default */
engine->sseu =
intel_sseu_from_device_info(&RUNTIME_INFO(engine->i915)->sseu);
return 0;
@ -595,6 +639,49 @@ err_hwsp:
return err;
}
/**
* intel_engines_setup - setup engine state not requiring hw access
* @i915: Device to setup.
*
* Initializes engine structure members shared between legacy and execlists
* submission modes which do not require hardware access.
*
* Typically done early in the submission mode specific engine setup stage.
*/
int intel_engines_setup(struct drm_i915_private *i915)
{
int (*setup)(struct intel_engine_cs *engine);
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err;
if (HAS_EXECLISTS(i915))
setup = intel_execlists_submission_setup;
else
setup = intel_ring_submission_setup;
for_each_engine(engine, i915, id) {
err = intel_engine_setup_common(engine);
if (err)
goto cleanup;
err = setup(engine);
if (err)
goto cleanup;
/* We expect the backend to take control over its state */
GEM_BUG_ON(engine->destroy == (typeof(engine->destroy))kfree);
GEM_BUG_ON(!engine->cops);
}
return 0;
cleanup:
intel_engines_cleanup(i915);
return err;
}
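
Taken together, the kernel-docs above describe a split probe sequence; a condensed sketch of the intended ordering (the example_engine_probe wrapper is illustrative, and in the real driver the mmio probe runs at an earlier phase than the setup/init steps):

static int example_engine_probe(struct drm_i915_private *i915)
{
	int err;

	err = intel_engines_init_mmio(i915);	/* discover engines, allocate structs */
	if (err)
		return err;

	err = intel_engines_setup(i915);	/* software state, no hw access */
	if (err)
		return err;	/* setup/init clean up after themselves on failure */

	return intel_engines_init(i915);	/* backend submission init (execlists or ring) */
}
/* On teardown, intel_engines_cleanup(i915) releases everything via engine->destroy(). */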
void intel_engines_set_scheduler_caps(struct drm_i915_private *i915)
{
static const struct {
@ -675,6 +762,7 @@ static int measure_breadcrumb_dw(struct intel_engine_cs *engine)
goto out_timeline;
dw = engine->emit_fini_breadcrumb(&frame->rq, frame->cs) - frame->cs;
GEM_BUG_ON(dw & 1); /* RING_TAIL must be qword aligned */
i915_timeline_unpin(&frame->timeline);
@ -690,11 +778,17 @@ static int pin_context(struct i915_gem_context *ctx,
struct intel_context **out)
{
struct intel_context *ce;
int err;
ce = intel_context_pin(ctx, engine);
ce = i915_gem_context_get_engine(ctx, engine->id);
if (IS_ERR(ce))
return PTR_ERR(ce);
err = intel_context_pin(ce);
intel_context_put(ce);
if (err)
return err;
*out = ce;
return 0;
}
@ -753,30 +847,6 @@ err_unpin:
return ret;
}
void intel_gt_resume(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
/*
* After resume, we may need to poke into the pinned kernel
* contexts to paper over any damage caused by the sudden suspend.
* Only the kernel contexts should remain pinned over suspend,
* allowing us to fixup the user contexts on their first pin.
*/
for_each_engine(engine, i915, id) {
struct intel_context *ce;
ce = engine->kernel_context;
if (ce)
ce->ops->reset(ce);
ce = engine->preempt_context;
if (ce)
ce->ops->reset(ce);
}
}
/**
* intel_engines_cleanup_common - cleans up the engine state created by
* the common initializers.
@ -1062,10 +1132,15 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
if (i915_reset_failed(engine->i915))
return true;
if (!intel_wakeref_active(&engine->wakeref))
return true;
/* Waiting to drain ELSP? */
if (READ_ONCE(engine->execlists.active)) {
struct tasklet_struct *t = &engine->execlists.tasklet;
synchronize_hardirq(engine->i915->drm.irq);
local_bh_disable();
if (tasklet_trylock(t)) {
/* Must wait for any GPU reset in progress. */
@ -1123,117 +1198,6 @@ void intel_engines_reset_default_submission(struct drm_i915_private *i915)
engine->set_default_submission(engine);
}
static bool reset_engines(struct drm_i915_private *i915)
{
if (INTEL_INFO(i915)->gpu_reset_clobbers_display)
return false;
return intel_gpu_reset(i915, ALL_ENGINES) == 0;
}
/**
* intel_engines_sanitize: called after the GPU has lost power
* @i915: the i915 device
* @force: ignore a failed reset and sanitize engine state anyway
*
* Anytime we reset the GPU, either with an explicit GPU reset or through a
* PCI power cycle, the GPU loses state and we must reset our state tracking
* to match. Note that calling intel_engines_sanitize() if the GPU has not
* been reset results in much confusion!
*/
void intel_engines_sanitize(struct drm_i915_private *i915, bool force)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
GEM_TRACE("\n");
if (!reset_engines(i915) && !force)
return;
for_each_engine(engine, i915, id)
intel_engine_reset(engine, false);
}
/**
* intel_engines_park: called when the GT is transitioning from busy->idle
* @i915: the i915 device
*
* The GT is now idle and about to go to sleep (maybe never to wake again?).
* Time for us to tidy and put away our toys (release resources back to the
* system).
*/
void intel_engines_park(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, i915, id) {
/* Flush the residual irq tasklets first. */
intel_engine_disarm_breadcrumbs(engine);
tasklet_kill(&engine->execlists.tasklet);
/*
* We are committed now to parking the engines, make sure there
* will be no more interrupts arriving later and the engines
* are truly idle.
*/
if (wait_for(intel_engine_is_idle(engine), 10)) {
struct drm_printer p = drm_debug_printer(__func__);
dev_err(i915->drm.dev,
"%s is not idle before parking\n",
engine->name);
intel_engine_dump(engine, &p, NULL);
}
/* Must be reset upon idling, or we may miss the busy wakeup. */
GEM_BUG_ON(engine->execlists.queue_priority_hint != INT_MIN);
if (engine->park)
engine->park(engine);
if (engine->pinned_default_state) {
i915_gem_object_unpin_map(engine->default_state);
engine->pinned_default_state = NULL;
}
i915_gem_batch_pool_fini(&engine->batch_pool);
engine->execlists.no_priolist = false;
}
i915->gt.active_engines = 0;
}
/**
* intel_engines_unpark: called when the GT is transitioning from idle->busy
* @i915: the i915 device
*
* The GT was idle and now about to fire up with some new user requests.
*/
void intel_engines_unpark(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, i915, id) {
void *map;
/* Pin the default state for fast resets from atomic context. */
map = NULL;
if (engine->default_state)
map = i915_gem_object_pin_map(engine->default_state,
I915_MAP_WB);
if (!IS_ERR_OR_NULL(map))
engine->pinned_default_state = map;
if (engine->unpark)
engine->unpark(engine);
intel_engine_init_hangcheck(engine);
}
}
/**
* intel_engine_lost_context: called when the GPU is reset into unknown state
* @engine: the engine
@ -1312,8 +1276,11 @@ static void print_request(struct drm_printer *m,
i915_request_completed(rq) ? "!" :
i915_request_started(rq) ? "*" :
"",
test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
&rq->fence.flags) ? "+" :
test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
&rq->fence.flags) ? "+" : "",
&rq->fence.flags) ? "-" :
"",
buf,
jiffies_to_msecs(jiffies - rq->emitted_jiffies),
name);
@ -1518,9 +1485,8 @@ void intel_engine_dump(struct intel_engine_cs *engine,
if (i915_reset_failed(engine->i915))
drm_printf(m, "*** WEDGED ***\n");
drm_printf(m, "\tHangcheck %x:%x [%d ms]\n",
engine->hangcheck.last_seqno,
engine->hangcheck.next_seqno,
drm_printf(m, "\tAwake? %d\n", atomic_read(&engine->wakeref.count));
drm_printf(m, "\tHangcheck: %d ms ago\n",
jiffies_to_msecs(jiffies - engine->hangcheck.action_timestamp));
drm_printf(m, "\tReset count: %d (global %d)\n",
i915_reset_engine_count(error, engine),
@ -1752,6 +1718,5 @@ intel_engine_find_active_request(struct intel_engine_cs *engine)
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftests/mock_engine.c"
#include "selftests/intel_engine_cs.c"
#include "selftest_engine_cs.c"
#endif


@ -0,0 +1,164 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#include "i915_drv.h"
#include "intel_engine.h"
#include "intel_engine_pm.h"
#include "intel_gt_pm.h"
static int __engine_unpark(struct intel_wakeref *wf)
{
struct intel_engine_cs *engine =
container_of(wf, typeof(*engine), wakeref);
void *map;
GEM_TRACE("%s\n", engine->name);
intel_gt_pm_get(engine->i915);
/* Pin the default state for fast resets from atomic context. */
map = NULL;
if (engine->default_state)
map = i915_gem_object_pin_map(engine->default_state,
I915_MAP_WB);
if (!IS_ERR_OR_NULL(map))
engine->pinned_default_state = map;
if (engine->unpark)
engine->unpark(engine);
intel_engine_init_hangcheck(engine);
return 0;
}
void intel_engine_pm_get(struct intel_engine_cs *engine)
{
intel_wakeref_get(engine->i915, &engine->wakeref, __engine_unpark);
}
void intel_engine_park(struct intel_engine_cs *engine)
{
/*
* We are now committed to parking this engine, so make sure there
* will be no more interrupts arriving later and the engine
* is truly idle.
*/
if (wait_for(intel_engine_is_idle(engine), 10)) {
struct drm_printer p = drm_debug_printer(__func__);
dev_err(engine->i915->drm.dev,
"%s is not idle before parking\n",
engine->name);
intel_engine_dump(engine, &p, NULL);
}
}
static bool switch_to_kernel_context(struct intel_engine_cs *engine)
{
struct i915_request *rq;
/* Already inside the kernel context, safe to power down. */
if (engine->wakeref_serial == engine->serial)
return true;
/* GPU is pointing to the void, as good as in the kernel context. */
if (i915_reset_failed(engine->i915))
return true;
/*
* Note, we do this without taking the timeline->mutex. We cannot
* as we may be called while retiring the kernel context and so
* already underneath the timeline->mutex. Instead we rely on the
* exclusive property of the __engine_park that prevents anyone
* else from creating a request on this engine. This also requires
* that the ring is empty and we avoid any waits while constructing
* the context, as they assume protection by the timeline->mutex.
* This should hold true as we can only park the engine after
* retiring the last request, thus all rings should be empty and
* all timelines idle.
*/
rq = __i915_request_create(engine->kernel_context, GFP_NOWAIT);
if (IS_ERR(rq))
/* Context switch failed, hope for the best! Maybe reset? */
return true;
/* Check again on the next retirement. */
engine->wakeref_serial = engine->serial + 1;
__i915_request_commit(rq);
return false;
}
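The wakeref_serial/serial comparison above is easier to follow with concrete numbers: every submitted request ticks engine->serial, and parking only proceeds once the sole request completed since the last attempt is the kernel-context barrier itself. The sketch below is a hypothetical standalone model (engine_model, submit_user_request and try_park are made-up names), not the driver code.
/*
 * A standalone model of the serial bookkeeping above, assuming only that
 * engine->serial is bumped once per request submitted on the engine. The
 * struct and function names here are illustrative, not the driver's.
 */
#include <stdbool.h>
#include <stdio.h>

struct engine_model {
	unsigned long serial;		/* one tick per submitted request */
	unsigned long wakeref_serial;	/* serial expected at the next park */
};

static void submit_user_request(struct engine_model *e)
{
	e->serial++;
}

static bool try_park(struct engine_model *e)
{
	/* Nothing ran since the barrier: we are in the kernel context. */
	if (e->wakeref_serial == e->serial)
		return true;

	/* Submit the kernel-context barrier and re-check on the next idle. */
	e->serial++;
	e->wakeref_serial = e->serial;
	return false;
}

int main(void)
{
	struct engine_model e = { 0, 0 };

	submit_user_request(&e);
	printf("park after user work: %s\n", try_park(&e) ? "yes" : "no");
	printf("park after barrier:   %s\n", try_park(&e) ? "yes" : "no");
	return 0;
}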
static int __engine_park(struct intel_wakeref *wf)
{
struct intel_engine_cs *engine =
container_of(wf, typeof(*engine), wakeref);
/*
* If one and only one request is completed between pm events,
* we know that we are inside the kernel context and it is
* safe to power down. (We are paranoid in case that runtime
* suspend causes corruption to the active context image, and
* want to avoid that impacting userspace.)
*/
if (!switch_to_kernel_context(engine))
return -EBUSY;
GEM_TRACE("%s\n", engine->name);
intel_engine_disarm_breadcrumbs(engine);
/* Must be reset upon idling, or we may miss the busy wakeup. */
GEM_BUG_ON(engine->execlists.queue_priority_hint != INT_MIN);
if (engine->park)
engine->park(engine);
if (engine->pinned_default_state) {
i915_gem_object_unpin_map(engine->default_state);
engine->pinned_default_state = NULL;
}
engine->execlists.no_priolist = false;
intel_gt_pm_put(engine->i915);
return 0;
}
void intel_engine_pm_put(struct intel_engine_cs *engine)
{
intel_wakeref_put(engine->i915, &engine->wakeref, __engine_park);
}
void intel_engine_init__pm(struct intel_engine_cs *engine)
{
intel_wakeref_init(&engine->wakeref);
}
int intel_engines_resume(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err = 0;
intel_gt_pm_get(i915);
for_each_engine(engine, i915, id) {
intel_engine_pm_get(engine);
engine->serial++; /* kernel context lost */
err = engine->resume(engine);
intel_engine_pm_put(engine);
if (err) {
dev_err(i915->drm.dev,
"Failed to restart %s (%d)\n",
engine->name, err);
break;
}
}
intel_gt_pm_put(i915);
return err;
}
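The park/unpark hooks in this file are driven by a reference-counted wakeref: the first intel_engine_pm_get() runs __engine_unpark(), the last intel_engine_pm_put() runs __engine_park(). What follows is a rough standalone sketch of that acquire/release shape only, with a plain counter, no locking and illustrative names; the real intel_wakeref also serialises the transitions and lets the park callback refuse with -EBUSY.
/*
 * A standalone sketch of the first-get/last-put pattern: the 0 -> 1
 * transition runs the unpark callback, the 1 -> 0 transition runs the
 * park callback. All names below are illustrative only.
 */
#include <stdio.h>

struct wakeref_model {
	int count;
	void (*unpark)(void);
	void (*park)(void);
};

static void wakeref_get(struct wakeref_model *wf)
{
	if (wf->count++ == 0)
		wf->unpark();		/* first holder: power up */
}

static void wakeref_put(struct wakeref_model *wf)
{
	if (--wf->count == 0)
		wf->park();		/* last holder: power down */
}

static void demo_unpark(void)
{
	puts("unpark");
}

static void demo_park(void)
{
	puts("park");
}

int main(void)
{
	struct wakeref_model wf = { 0, demo_unpark, demo_park };

	wakeref_get(&wf);	/* prints "unpark" */
	wakeref_get(&wf);	/* already awake, nothing printed */
	wakeref_put(&wf);	/* still one holder, nothing printed */
	wakeref_put(&wf);	/* prints "park" */
	return 0;
}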

View File

@ -0,0 +1,22 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#ifndef INTEL_ENGINE_PM_H
#define INTEL_ENGINE_PM_H
struct drm_i915_private;
struct intel_engine_cs;
void intel_engine_pm_get(struct intel_engine_cs *engine);
void intel_engine_pm_put(struct intel_engine_cs *engine);
void intel_engine_park(struct intel_engine_cs *engine);
void intel_engine_init__pm(struct intel_engine_cs *engine);
int intel_engines_resume(struct drm_i915_private *i915);
#endif /* INTEL_ENGINE_PM_H */

View File

@ -14,14 +14,15 @@
#include <linux/types.h>
#include "i915_gem.h"
#include "i915_gem_batch_pool.h"
#include "i915_pmu.h"
#include "i915_priolist_types.h"
#include "i915_selftest.h"
#include "i915_timeline_types.h"
#include "intel_sseu.h"
#include "intel_wakeref.h"
#include "intel_workarounds_types.h"
#include "i915_gem_batch_pool.h"
#include "i915_pmu.h"
#define I915_MAX_SLICES 3
#define I915_MAX_SUBSLICES 8
@ -52,8 +53,8 @@ struct intel_instdone {
struct intel_engine_hangcheck {
u64 acthd;
u32 last_seqno;
u32 next_seqno;
u32 last_ring;
u32 last_head;
unsigned long action_timestamp;
struct intel_instdone instdone;
};
@ -226,6 +227,7 @@ struct intel_engine_execlists {
* @queue: queue of requests, in priority lists
*/
struct rb_root_cached queue;
struct rb_root_cached virtual;
/**
* @csb_write: control register for Context Switch buffer
@ -278,6 +280,10 @@ struct intel_engine_cs {
u32 context_size;
u32 mmio_base;
u32 uabi_capabilities;
struct intel_sseu sseu;
struct intel_ring *buffer;
struct i915_timeline timeline;
@ -285,6 +291,10 @@ struct intel_engine_cs {
struct intel_context *kernel_context; /* pinned */
struct intel_context *preempt_context; /* pinned; optional */
unsigned long serial;
unsigned long wakeref_serial;
struct intel_wakeref wakeref;
struct drm_i915_gem_object *default_state;
void *pinned_default_state;
@ -357,7 +367,7 @@ struct intel_engine_cs {
void (*irq_enable)(struct intel_engine_cs *engine);
void (*irq_disable)(struct intel_engine_cs *engine);
int (*init_hw)(struct intel_engine_cs *engine);
int (*resume)(struct intel_engine_cs *engine);
struct {
void (*prepare)(struct intel_engine_cs *engine);
@ -397,6 +407,13 @@ struct intel_engine_cs {
*/
void (*submit_request)(struct i915_request *rq);
/*
* Called on signaling of a SUBMIT_FENCE, passing along the signaling
* request down to the bonded pairs.
*/
void (*bond_execute)(struct i915_request *rq,
struct dma_fence *signal);
/*
* Call when the priority on a request has changed and it and its
* dependencies may need rescheduling. Note the request itself may
@ -413,7 +430,7 @@ struct intel_engine_cs {
*/
void (*cancel_requests)(struct intel_engine_cs *engine);
void (*cleanup)(struct intel_engine_cs *engine);
void (*destroy)(struct intel_engine_cs *engine);
struct intel_engine_execlists execlists;
@ -438,6 +455,7 @@ struct intel_engine_cs {
#define I915_ENGINE_HAS_PREEMPTION BIT(2)
#define I915_ENGINE_HAS_SEMAPHORES BIT(3)
#define I915_ENGINE_NEEDS_BREADCRUMB_TASKLET BIT(4)
#define I915_ENGINE_IS_VIRTUAL BIT(5)
unsigned int flags;
/*
@ -527,6 +545,12 @@ intel_engine_needs_breadcrumb_tasklet(const struct intel_engine_cs *engine)
return engine->flags & I915_ENGINE_NEEDS_BREADCRUMB_TASKLET;
}
static inline bool
intel_engine_is_virtual(const struct intel_engine_cs *engine)
{
return engine->flags & I915_ENGINE_IS_VIRTUAL;
}
#define instdone_slice_mask(dev_priv__) \
(IS_GEN(dev_priv__, 7) ? \
1 : RUNTIME_INFO(dev_priv__)->sseu.slice_mask)

View File

@ -0,0 +1,143 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#include "i915_drv.h"
#include "intel_gt_pm.h"
#include "intel_pm.h"
#include "intel_wakeref.h"
static void pm_notify(struct drm_i915_private *i915, int state)
{
blocking_notifier_call_chain(&i915->gt.pm_notifications, state, i915);
}
static int intel_gt_unpark(struct intel_wakeref *wf)
{
struct drm_i915_private *i915 =
container_of(wf, typeof(*i915), gt.wakeref);
GEM_TRACE("\n");
/*
* It seems that the DMC likes to transition between the DC states a lot
* when there are no connected displays (no active power domains) during
* command submission.
*
* This activity has a negative impact on the performance of the chip, with
* huge latencies observed in the interrupt handler and elsewhere.
*
* Work around it by grabbing a GT IRQ power domain whilst there is any
* GT activity, preventing any DC state transitions.
*/
i915->gt.awake = intel_display_power_get(i915, POWER_DOMAIN_GT_IRQ);
GEM_BUG_ON(!i915->gt.awake);
intel_enable_gt_powersave(i915);
i915_update_gfx_val(i915);
if (INTEL_GEN(i915) >= 6)
gen6_rps_busy(i915);
i915_pmu_gt_unparked(i915);
i915_queue_hangcheck(i915);
pm_notify(i915, INTEL_GT_UNPARK);
return 0;
}
void intel_gt_pm_get(struct drm_i915_private *i915)
{
intel_wakeref_get(i915, &i915->gt.wakeref, intel_gt_unpark);
}
static int intel_gt_park(struct intel_wakeref *wf)
{
struct drm_i915_private *i915 =
container_of(wf, typeof(*i915), gt.wakeref);
intel_wakeref_t wakeref = fetch_and_zero(&i915->gt.awake);
GEM_TRACE("\n");
pm_notify(i915, INTEL_GT_PARK);
i915_pmu_gt_parked(i915);
if (INTEL_GEN(i915) >= 6)
gen6_rps_idle(i915);
GEM_BUG_ON(!wakeref);
intel_display_power_put(i915, POWER_DOMAIN_GT_IRQ, wakeref);
return 0;
}
void intel_gt_pm_put(struct drm_i915_private *i915)
{
intel_wakeref_put(i915, &i915->gt.wakeref, intel_gt_park);
}
void intel_gt_pm_init(struct drm_i915_private *i915)
{
intel_wakeref_init(&i915->gt.wakeref);
BLOCKING_INIT_NOTIFIER_HEAD(&i915->gt.pm_notifications);
}
static bool reset_engines(struct drm_i915_private *i915)
{
if (INTEL_INFO(i915)->gpu_reset_clobbers_display)
return false;
return intel_gpu_reset(i915, ALL_ENGINES) == 0;
}
/**
* intel_gt_sanitize: called after the GPU has lost power
* @i915: the i915 device
* @force: ignore a failed reset and sanitize engine state anyway
*
* Anytime we reset the GPU, either with an explicit GPU reset or through a
* PCI power cycle, the GPU loses state and we must reset our state tracking
* to match. Note that calling intel_gt_sanitize() if the GPU has not
* been reset results in much confusion!
*/
void intel_gt_sanitize(struct drm_i915_private *i915, bool force)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
GEM_TRACE("\n");
if (!reset_engines(i915) && !force)
return;
for_each_engine(engine, i915, id)
intel_engine_reset(engine, false);
}
void intel_gt_resume(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
/*
* After resume, we may need to poke into the pinned kernel
* contexts to paper over any damage caused by the sudden suspend.
* Only the kernel contexts should remain pinned over suspend,
* allowing us to fixup the user contexts on their first pin.
*/
for_each_engine(engine, i915, id) {
struct intel_context *ce;
ce = engine->kernel_context;
if (ce)
ce->ops->reset(ce);
ce = engine->preempt_context;
if (ce)
ce->ops->reset(ce);
}
}

View File

@ -0,0 +1,27 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#ifndef INTEL_GT_PM_H
#define INTEL_GT_PM_H
#include <linux/types.h>
struct drm_i915_private;
enum {
INTEL_GT_UNPARK,
INTEL_GT_PARK,
};
void intel_gt_pm_get(struct drm_i915_private *i915);
void intel_gt_pm_put(struct drm_i915_private *i915);
void intel_gt_pm_init(struct drm_i915_private *i915);
void intel_gt_sanitize(struct drm_i915_private *i915, bool force);
void intel_gt_resume(struct drm_i915_private *i915);
#endif /* INTEL_GT_PM_H */

View File

@ -22,12 +22,13 @@
*
*/
#include "intel_reset.h"
#include "i915_drv.h"
#include "i915_reset.h"
struct hangcheck {
u64 acthd;
u32 seqno;
u32 ring;
u32 head;
enum intel_engine_hangcheck_action action;
unsigned long action_timestamp;
int deadlock;
@ -133,26 +134,31 @@ static void hangcheck_load_sample(struct intel_engine_cs *engine,
struct hangcheck *hc)
{
hc->acthd = intel_engine_get_active_head(engine);
hc->seqno = intel_engine_get_hangcheck_seqno(engine);
hc->ring = ENGINE_READ(engine, RING_START);
hc->head = ENGINE_READ(engine, RING_HEAD);
}
static void hangcheck_store_sample(struct intel_engine_cs *engine,
const struct hangcheck *hc)
{
engine->hangcheck.acthd = hc->acthd;
engine->hangcheck.last_seqno = hc->seqno;
engine->hangcheck.last_ring = hc->ring;
engine->hangcheck.last_head = hc->head;
}
static enum intel_engine_hangcheck_action
hangcheck_get_action(struct intel_engine_cs *engine,
const struct hangcheck *hc)
{
if (engine->hangcheck.last_seqno != hc->seqno)
return ENGINE_ACTIVE_SEQNO;
if (intel_engine_is_idle(engine))
return ENGINE_IDLE;
if (engine->hangcheck.last_ring != hc->ring)
return ENGINE_ACTIVE_SEQNO;
if (engine->hangcheck.last_head != hc->head)
return ENGINE_ACTIVE_SEQNO;
return engine_stuck(engine, hc->acthd);
}
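With the hangcheck seqno gone, progress is now inferred from whether the sampled RING_START and RING_HEAD values changed between two hangcheck passes (the ACTHD-based engine_stuck() check remains). A minimal standalone illustration of that comparison, where ring_sample and engine_progressed are made-up names:
/*
 * Progress detection by comparing successive ring samples, as
 * hangcheck_get_action() does above. Standalone model, not driver code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct ring_sample {
	uint32_t ring;	/* RING_START in the driver */
	uint32_t head;	/* RING_HEAD in the driver */
};

/* The engine made progress if either sampled register moved. */
static bool engine_progressed(const struct ring_sample *last,
			      const struct ring_sample *now)
{
	return last->ring != now->ring || last->head != now->head;
}

int main(void)
{
	struct ring_sample a = { 0x1000, 0x40 };
	struct ring_sample b = { 0x1000, 0x80 };	/* head advanced */

	printf("progressed: %s\n", engine_progressed(&a, &b) ? "yes" : "no");
	return 0;
}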
@ -256,6 +262,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
struct intel_engine_cs *engine;
enum intel_engine_id id;
unsigned int hung = 0, stuck = 0, wedged = 0;
intel_wakeref_t wakeref;
if (!i915_modparams.enable_hangcheck)
return;
@ -266,6 +273,10 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
if (i915_terminally_wedged(dev_priv))
return;
wakeref = intel_runtime_pm_get_if_in_use(dev_priv);
if (!wakeref)
return;
/* As enabling the GPU requires fairly extensive mmio access,
* periodically arm the mmio checker to see if we are triggering
* any invalid access.
@ -313,6 +324,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
if (hung)
hangcheck_declare_hang(dev_priv, hung, stuck);
intel_runtime_pm_put(dev_priv, wakeref);
/* Reset timer in case GPU hangs without another request being added */
i915_queue_hangcheck(dev_priv);
}
@ -330,5 +343,5 @@ void intel_hangcheck_init(struct drm_i915_private *i915)
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftests/intel_hangcheck.c"
#include "selftest_hangcheck.c"
#endif

View File

@ -24,8 +24,7 @@
#ifndef _INTEL_LRC_H_
#define _INTEL_LRC_H_
#include "intel_ringbuffer.h"
#include "i915_gem_context.h"
#include "intel_engine.h"
/* Execlists regs */
#define RING_ELSP(base) _MMIO((base) + 0x230)
@ -67,8 +66,9 @@ enum {
/* Logical Rings */
void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
int logical_render_ring_init(struct intel_engine_cs *engine);
int logical_xcs_ring_init(struct intel_engine_cs *engine);
int intel_execlists_submission_setup(struct intel_engine_cs *engine);
int intel_execlists_submission_init(struct intel_engine_cs *engine);
/* Logical Ring Contexts */
@ -99,7 +99,6 @@ int logical_xcs_ring_init(struct intel_engine_cs *engine);
struct drm_printer;
struct drm_i915_private;
struct i915_gem_context;
void intel_execlists_set_default_submission(struct intel_engine_cs *engine);
@ -115,6 +114,17 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine,
const char *prefix),
unsigned int max);
u32 gen8_make_rpcs(struct drm_i915_private *i915, struct intel_sseu *ctx_sseu);
struct intel_context *
intel_execlists_create_virtual(struct i915_gem_context *ctx,
struct intel_engine_cs **siblings,
unsigned int count);
struct intel_context *
intel_execlists_clone_virtual(struct i915_gem_context *ctx,
struct intel_engine_cs *src);
int intel_virtual_engine_attach_bond(struct intel_engine_cs *engine,
const struct intel_engine_cs *master,
const struct intel_engine_cs *sibling);
#endif /* _INTEL_LRC_H_ */

View File

@ -20,9 +20,11 @@
* SOFTWARE.
*/
#include "i915_drv.h"
#include "intel_engine.h"
#include "intel_mocs.h"
#include "intel_lrc.h"
#include "intel_ringbuffer.h"
/* structures required */
struct drm_i915_mocs_entry {

View File

@ -49,7 +49,9 @@
* context handling keep the MOCS in step.
*/
#include "i915_drv.h"
struct drm_i915_private;
struct i915_request;
struct intel_engine_cs;
int intel_rcs_context_init_mocs(struct i915_request *rq);
void intel_mocs_init_l3cc_table(struct drm_i915_private *dev_priv);

View File

@ -9,9 +9,13 @@
#include "i915_drv.h"
#include "i915_gpu_error.h"
#include "i915_reset.h"
#include "i915_irq.h"
#include "intel_engine_pm.h"
#include "intel_gt_pm.h"
#include "intel_reset.h"
#include "intel_guc.h"
#include "intel_overlay.h"
#define RESET_MAX_RETRIES 3
@ -641,9 +645,6 @@ int intel_gpu_reset(struct drm_i915_private *i915,
bool intel_has_gpu_reset(struct drm_i915_private *i915)
{
if (USES_GUC(i915))
return false;
if (!i915_modparams.reset)
return NULL;
@ -683,6 +684,7 @@ static void reset_prepare_engine(struct intel_engine_cs *engine)
* written to the powercontext is undefined and so we may lose
* GPU state upon resume, i.e. fail to restart after a reset.
*/
intel_engine_pm_get(engine);
intel_uncore_forcewake_get(engine->uncore, FORCEWAKE_ALL);
engine->reset.prepare(engine);
}
@ -718,6 +720,7 @@ static void reset_prepare(struct drm_i915_private *i915)
struct intel_engine_cs *engine;
enum intel_engine_id id;
intel_gt_pm_get(i915);
for_each_engine(engine, i915, id)
reset_prepare_engine(engine);
@ -755,48 +758,10 @@ static int gt_reset(struct drm_i915_private *i915,
static void reset_finish_engine(struct intel_engine_cs *engine)
{
engine->reset.finish(engine);
intel_engine_pm_put(engine);
intel_uncore_forcewake_put(engine->uncore, FORCEWAKE_ALL);
}
struct i915_gpu_restart {
struct work_struct work;
struct drm_i915_private *i915;
};
static void restart_work(struct work_struct *work)
{
struct i915_gpu_restart *arg = container_of(work, typeof(*arg), work);
struct drm_i915_private *i915 = arg->i915;
struct intel_engine_cs *engine;
enum intel_engine_id id;
intel_wakeref_t wakeref;
wakeref = intel_runtime_pm_get(i915);
mutex_lock(&i915->drm.struct_mutex);
WRITE_ONCE(i915->gpu_error.restart, NULL);
for_each_engine(engine, i915, id) {
struct i915_request *rq;
/*
* Ostensibly, we always want a context loaded for powersaving,
* so if the engine is idle after the reset, send a request
* to load our scratch kernel_context.
*/
if (!intel_engine_is_idle(engine))
continue;
rq = i915_request_alloc(engine, i915->kernel_context);
if (!IS_ERR(rq))
i915_request_add(rq);
}
mutex_unlock(&i915->drm.struct_mutex);
intel_runtime_pm_put(i915, wakeref);
kfree(arg);
}
static void reset_finish(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
@ -806,29 +771,7 @@ static void reset_finish(struct drm_i915_private *i915)
reset_finish_engine(engine);
intel_engine_signal_breadcrumbs(engine);
}
}
static void reset_restart(struct drm_i915_private *i915)
{
struct i915_gpu_restart *arg;
/*
* Following the reset, ensure that we always reload context for
* powersaving, and to correct engine->last_retired_context. Since
* this requires us to submit a request, queue a worker to do that
* task for us to evade any locking here.
*/
if (READ_ONCE(i915->gpu_error.restart))
return;
arg = kmalloc(sizeof(*arg), GFP_KERNEL);
if (arg) {
arg->i915 = i915;
INIT_WORK(&arg->work, restart_work);
WRITE_ONCE(i915->gpu_error.restart, arg);
queue_work(i915->wq, &arg->work);
}
intel_gt_pm_put(i915);
}
static void nop_submit_request(struct i915_request *request)
@ -889,6 +832,7 @@ static void __i915_gem_set_wedged(struct drm_i915_private *i915)
* in nop_submit_request.
*/
synchronize_rcu_expedited();
set_bit(I915_WEDGED, &error->flags);
/* Mark all executing requests as skipped */
for_each_engine(engine, i915, id)
@ -896,9 +840,6 @@ static void __i915_gem_set_wedged(struct drm_i915_private *i915)
reset_finish(i915);
smp_mb__before_atomic();
set_bit(I915_WEDGED, &error->flags);
GEM_TRACE("end\n");
}
@ -956,7 +897,7 @@ static bool __i915_gem_unset_wedged(struct drm_i915_private *i915)
}
mutex_unlock(&i915->gt.timelines.mutex);
intel_engines_sanitize(i915, false);
intel_gt_sanitize(i915, false);
/*
* Undo nop_submit_request. We prevent all new i915 requests from
@ -1034,7 +975,6 @@ void i915_reset(struct drm_i915_private *i915,
GEM_TRACE("flags=%lx\n", error->flags);
might_sleep();
assert_rpm_wakelock_held(i915);
GEM_BUG_ON(!test_bit(I915_RESET_BACKOFF, &error->flags));
/* Clear any previous failed attempts at recovery. Time to try again. */
@ -1087,8 +1027,6 @@ void i915_reset(struct drm_i915_private *i915,
finish:
reset_finish(i915);
if (!__i915_wedged(error))
reset_restart(i915);
return;
taint:
@ -1104,7 +1042,7 @@ taint:
* rather than continue on into oblivion. For everyone else,
* the system should still plod along, but they have been warned!
*/
add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
add_taint_for_CI(TAINT_WARN);
error:
__i915_gem_set_wedged(i915);
goto finish;
@ -1137,6 +1075,9 @@ int i915_reset_engine(struct intel_engine_cs *engine, const char *msg)
GEM_TRACE("%s flags=%lx\n", engine->name, error->flags);
GEM_BUG_ON(!test_bit(I915_RESET_ENGINE + engine->id, &error->flags));
if (!intel_wakeref_active(&engine->wakeref))
return 0;
reset_prepare_engine(engine);
if (msg)
@ -1168,7 +1109,7 @@ int i915_reset_engine(struct intel_engine_cs *engine, const char *msg)
* have been reset to their default values. Follow the init_ring
* process to program RING_MODE, HWSP and re-enable submission.
*/
ret = engine->init_hw(engine);
ret = engine->resume(engine);
if (ret)
goto out;
@ -1425,25 +1366,6 @@ int i915_terminally_wedged(struct drm_i915_private *i915)
return __i915_wedged(error) ? -EIO : 0;
}
bool i915_reset_flush(struct drm_i915_private *i915)
{
int err;
cancel_delayed_work_sync(&i915->gpu_error.hangcheck_work);
flush_workqueue(i915->wq);
GEM_BUG_ON(READ_ONCE(i915->gpu_error.restart));
mutex_lock(&i915->drm.struct_mutex);
err = i915_gem_wait_for_idle(i915,
I915_WAIT_LOCKED |
I915_WAIT_FOR_IDLE_BOOST,
MAX_SCHEDULE_TIMEOUT);
mutex_unlock(&i915->drm.struct_mutex);
return !err;
}
static void i915_wedge_me(struct work_struct *work)
{
struct i915_wedge_me *w = container_of(work, typeof(*w), work.work);
@ -1472,3 +1394,7 @@ void __i915_fini_wedge(struct i915_wedge_me *w)
destroy_delayed_work_on_stack(&w->work);
w->i915 = NULL;
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftest_reset.c"
#endif

View File

@ -11,7 +11,7 @@
#include <linux/types.h>
#include <linux/srcu.h>
#include "intel_engine_types.h"
#include "gt/intel_engine_types.h"
struct drm_i915_private;
struct i915_request;
@ -34,7 +34,6 @@ int i915_reset_engine(struct intel_engine_cs *engine,
const char *reason);
void i915_reset_request(struct i915_request *rq, bool guilty);
bool i915_reset_flush(struct drm_i915_private *i915);
int __must_check i915_reset_trylock(struct drm_i915_private *i915);
void i915_reset_unlock(struct drm_i915_private *i915, int tag);

View File

@ -33,9 +33,8 @@
#include "i915_drv.h"
#include "i915_gem_render_state.h"
#include "i915_reset.h"
#include "i915_trace.h"
#include "intel_drv.h"
#include "intel_reset.h"
#include "intel_workarounds.h"
/* Rough estimate of the typical request size, performing a flush,
@ -310,11 +309,6 @@ static u32 *gen6_rcs_emit_breadcrumb(struct i915_request *rq, u32 *cs)
*cs++ = rq->timeline->hwsp_offset | PIPE_CONTROL_GLOBAL_GTT;
*cs++ = rq->fence.seqno;
*cs++ = GFX_OP_PIPE_CONTROL(4);
*cs++ = PIPE_CONTROL_QW_WRITE | PIPE_CONTROL_STORE_DATA_INDEX;
*cs++ = I915_GEM_HWS_HANGCHECK_ADDR | PIPE_CONTROL_GLOBAL_GTT;
*cs++ = intel_engine_next_hangcheck_seqno(rq->engine);
*cs++ = MI_USER_INTERRUPT;
*cs++ = MI_NOOP;
@ -416,13 +410,6 @@ static u32 *gen7_rcs_emit_breadcrumb(struct i915_request *rq, u32 *cs)
*cs++ = rq->timeline->hwsp_offset;
*cs++ = rq->fence.seqno;
*cs++ = GFX_OP_PIPE_CONTROL(4);
*cs++ = (PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_STORE_DATA_INDEX |
PIPE_CONTROL_GLOBAL_GTT_IVB);
*cs++ = I915_GEM_HWS_HANGCHECK_ADDR;
*cs++ = intel_engine_next_hangcheck_seqno(rq->engine);
*cs++ = MI_USER_INTERRUPT;
*cs++ = MI_NOOP;
@ -441,12 +428,7 @@ static u32 *gen6_xcs_emit_breadcrumb(struct i915_request *rq, u32 *cs)
*cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
*cs++ = rq->fence.seqno;
*cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
*cs++ = I915_GEM_HWS_HANGCHECK_ADDR | MI_FLUSH_DW_USE_GTT;
*cs++ = intel_engine_next_hangcheck_seqno(rq->engine);
*cs++ = MI_USER_INTERRUPT;
*cs++ = MI_NOOP;
rq->tail = intel_ring_offset(rq, cs);
assert_ring_tail_valid(rq->ring, rq->tail);
@ -466,10 +448,6 @@ static u32 *gen7_xcs_emit_breadcrumb(struct i915_request *rq, u32 *cs)
*cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
*cs++ = rq->fence.seqno;
*cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
*cs++ = I915_GEM_HWS_HANGCHECK_ADDR | MI_FLUSH_DW_USE_GTT;
*cs++ = intel_engine_next_hangcheck_seqno(rq->engine);
for (i = 0; i < GEN7_XCS_WA; i++) {
*cs++ = MI_STORE_DWORD_INDEX;
*cs++ = I915_GEM_HWS_SEQNO_ADDR;
@ -481,6 +459,7 @@ static u32 *gen7_xcs_emit_breadcrumb(struct i915_request *rq, u32 *cs)
*cs++ = 0;
*cs++ = MI_USER_INTERRUPT;
*cs++ = MI_NOOP;
rq->tail = intel_ring_offset(rq, cs);
assert_ring_tail_valid(rq->ring, rq->tail);
@ -638,12 +617,15 @@ static bool stop_ring(struct intel_engine_cs *engine)
return (ENGINE_READ(engine, RING_HEAD) & HEAD_ADDR) == 0;
}
static int init_ring_common(struct intel_engine_cs *engine)
static int xcs_resume(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
struct intel_ring *ring = engine->buffer;
int ret = 0;
GEM_TRACE("%s: ring:{HEAD:%04x, TAIL:%04x}\n",
engine->name, ring->head, ring->tail);
intel_uncore_forcewake_get(engine->uncore, FORCEWAKE_ALL);
if (!stop_ring(engine)) {
@ -828,12 +810,23 @@ static int intel_rcs_ctx_init(struct i915_request *rq)
return 0;
}
static int init_render_ring(struct intel_engine_cs *engine)
static int rcs_resume(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
int ret = init_ring_common(engine);
if (ret)
return ret;
/*
* Disable CONSTANT_BUFFER before it is loaded from the context
* image. As soon as it is loaded, it is executed and the stored
* address may no longer be valid, leading to a GPU hang.
*
* This imposes the requirement that userspace reload their
* CONSTANT_BUFFER on every batch, fortunately a requirement
* they are already accustomed to from before contexts were
* enabled.
*/
if (IS_GEN(dev_priv, 4))
I915_WRITE(ECOSKPD,
_MASKED_BIT_ENABLE(ECO_CONSTANT_BUFFER_SR_DISABLE));
/* WaTimedSingleVertexDispatch:cl,bw,ctg,elk,ilk,snb */
if (IS_GEN_RANGE(dev_priv, 4, 6))
@ -873,10 +866,7 @@ static int init_render_ring(struct intel_engine_cs *engine)
if (IS_GEN_RANGE(dev_priv, 6, 7))
I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));
if (INTEL_GEN(dev_priv) >= 6)
ENGINE_WRITE(engine, RING_IMR, ~engine->irq_keep_mask);
return 0;
return xcs_resume(engine);
}
static void cancel_requests(struct intel_engine_cs *engine)
@ -918,11 +908,8 @@ static u32 *i9xx_emit_breadcrumb(struct i915_request *rq, u32 *cs)
*cs++ = I915_GEM_HWS_SEQNO_ADDR;
*cs++ = rq->fence.seqno;
*cs++ = MI_STORE_DWORD_INDEX;
*cs++ = I915_GEM_HWS_HANGCHECK_ADDR;
*cs++ = intel_engine_next_hangcheck_seqno(rq->engine);
*cs++ = MI_USER_INTERRUPT;
*cs++ = MI_NOOP;
rq->tail = intel_ring_offset(rq, cs);
assert_ring_tail_valid(rq->ring, rq->tail);
@ -940,10 +927,6 @@ static u32 *gen5_emit_breadcrumb(struct i915_request *rq, u32 *cs)
*cs++ = MI_FLUSH;
*cs++ = MI_STORE_DWORD_INDEX;
*cs++ = I915_GEM_HWS_HANGCHECK_ADDR;
*cs++ = intel_engine_next_hangcheck_seqno(rq->engine);
BUILD_BUG_ON(GEN5_WA_STORES < 1);
for (i = 0; i < GEN5_WA_STORES; i++) {
*cs++ = MI_STORE_DWORD_INDEX;
@ -952,7 +935,6 @@ static u32 *gen5_emit_breadcrumb(struct i915_request *rq, u32 *cs)
}
*cs++ = MI_USER_INTERRUPT;
*cs++ = MI_NOOP;
rq->tail = intel_ring_offset(rq, cs);
assert_ring_tail_valid(rq->ring, rq->tail);
@ -1517,77 +1499,13 @@ static const struct intel_context_ops ring_context_ops = {
.pin = ring_context_pin,
.unpin = ring_context_unpin,
.enter = intel_context_enter_engine,
.exit = intel_context_exit_engine,
.reset = ring_context_reset,
.destroy = ring_context_destroy,
};
static int intel_init_ring_buffer(struct intel_engine_cs *engine)
{
struct i915_timeline *timeline;
struct intel_ring *ring;
int err;
err = intel_engine_setup_common(engine);
if (err)
return err;
timeline = i915_timeline_create(engine->i915, engine->status_page.vma);
if (IS_ERR(timeline)) {
err = PTR_ERR(timeline);
goto err;
}
GEM_BUG_ON(timeline->has_initial_breadcrumb);
ring = intel_engine_create_ring(engine, timeline, 32 * PAGE_SIZE);
i915_timeline_put(timeline);
if (IS_ERR(ring)) {
err = PTR_ERR(ring);
goto err;
}
err = intel_ring_pin(ring);
if (err)
goto err_ring;
GEM_BUG_ON(engine->buffer);
engine->buffer = ring;
err = intel_engine_init_common(engine);
if (err)
goto err_unpin;
GEM_BUG_ON(ring->timeline->hwsp_ggtt != engine->status_page.vma);
return 0;
err_unpin:
intel_ring_unpin(ring);
err_ring:
intel_ring_put(ring);
err:
intel_engine_cleanup_common(engine);
return err;
}
void intel_engine_cleanup(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
WARN_ON(INTEL_GEN(dev_priv) > 2 &&
(ENGINE_READ(engine, RING_MI_MODE) & MODE_IDLE) == 0);
intel_ring_unpin(engine->buffer);
intel_ring_put(engine->buffer);
if (engine->cleanup)
engine->cleanup(engine);
intel_engine_cleanup_common(engine);
dev_priv->engine[engine->id] = NULL;
kfree(engine);
}
static int load_pd_dir(struct i915_request *rq,
const struct i915_hw_ppgtt *ppgtt)
{
@ -1646,11 +1564,14 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
/* These flags are for resource streamer on HSW+ */
flags |= HSW_MI_RS_SAVE_STATE_EN | HSW_MI_RS_RESTORE_STATE_EN;
else
/* We need to save the extended state for powersaving modes */
flags |= MI_SAVE_EXT_STATE_EN | MI_RESTORE_EXT_STATE_EN;
len = 4;
if (IS_GEN(i915, 7))
len += 2 + (num_engines ? 4 * num_engines + 6 : 0);
else if (IS_GEN(i915, 5))
len += 2;
if (flags & MI_FORCE_RESTORE) {
GEM_BUG_ON(flags & MI_RESTORE_INHIBIT);
flags &= ~MI_FORCE_RESTORE;
@ -1679,6 +1600,14 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
GEN6_PSMI_SLEEP_MSG_DISABLE);
}
}
} else if (IS_GEN(i915, 5)) {
/*
* This w/a is only listed for pre-production ilk a/b steppings,
* but is also mentioned for programming the powerctx. To be
* safe, just apply the workaround; we do not use SyncFlush so
* this should never take effect and so be a no-op!
*/
*cs++ = MI_SUSPEND_FLUSH | MI_SUSPEND_FLUSH_EN;
}
if (force_restore) {
@ -1732,6 +1661,8 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
*cs++ = MI_NOOP;
}
*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
} else if (IS_GEN(i915, 5)) {
*cs++ = MI_SUSPEND_FLUSH;
}
intel_ring_advance(rq, cs);
@ -1776,7 +1707,6 @@ static int switch_context(struct i915_request *rq)
u32 hw_flags = 0;
int ret, i;
lockdep_assert_held(&rq->i915->drm.struct_mutex);
GEM_BUG_ON(HAS_EXECLISTS(rq->i915));
if (ppgtt) {
@ -1888,12 +1818,12 @@ static int ring_request_alloc(struct i915_request *request)
*/
request->reserved_space += LEGACY_REQUEST_SIZE;
ret = switch_context(request);
/* Unconditionally invalidate GPU caches and TLBs. */
ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
if (ret)
return ret;
/* Unconditionally invalidate GPU caches and TLBs. */
ret = request->engine->emit_flush(request, EMIT_INVALIDATE);
ret = switch_context(request);
if (ret)
return ret;
@ -1906,8 +1836,6 @@ static noinline int wait_for_space(struct intel_ring *ring, unsigned int bytes)
struct i915_request *target;
long timeout;
lockdep_assert_held(&ring->vma->vm->i915->drm.struct_mutex);
if (intel_ring_update_space(ring) >= bytes)
return 0;
@ -2167,24 +2095,6 @@ static int gen6_ring_flush(struct i915_request *rq, u32 mode)
return gen6_flush_dw(rq, mode, MI_INVALIDATE_TLB);
}
static void intel_ring_init_irq(struct drm_i915_private *dev_priv,
struct intel_engine_cs *engine)
{
if (INTEL_GEN(dev_priv) >= 6) {
engine->irq_enable = gen6_irq_enable;
engine->irq_disable = gen6_irq_disable;
} else if (INTEL_GEN(dev_priv) >= 5) {
engine->irq_enable = gen5_irq_enable;
engine->irq_disable = gen5_irq_disable;
} else if (INTEL_GEN(dev_priv) >= 3) {
engine->irq_enable = i9xx_irq_enable;
engine->irq_disable = i9xx_irq_disable;
} else {
engine->irq_enable = i8xx_irq_enable;
engine->irq_disable = i8xx_irq_disable;
}
}
static void i9xx_set_default_submission(struct intel_engine_cs *engine)
{
engine->submit_request = i9xx_submit_request;
@ -2200,15 +2110,51 @@ static void gen6_bsd_set_default_submission(struct intel_engine_cs *engine)
engine->submit_request = gen6_bsd_submit_request;
}
static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv,
struct intel_engine_cs *engine)
static void ring_destroy(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
WARN_ON(INTEL_GEN(dev_priv) > 2 &&
(ENGINE_READ(engine, RING_MI_MODE) & MODE_IDLE) == 0);
intel_ring_unpin(engine->buffer);
intel_ring_put(engine->buffer);
intel_engine_cleanup_common(engine);
kfree(engine);
}
static void setup_irq(struct intel_engine_cs *engine)
{
struct drm_i915_private *i915 = engine->i915;
if (INTEL_GEN(i915) >= 6) {
engine->irq_enable = gen6_irq_enable;
engine->irq_disable = gen6_irq_disable;
} else if (INTEL_GEN(i915) >= 5) {
engine->irq_enable = gen5_irq_enable;
engine->irq_disable = gen5_irq_disable;
} else if (INTEL_GEN(i915) >= 3) {
engine->irq_enable = i9xx_irq_enable;
engine->irq_disable = i9xx_irq_disable;
} else {
engine->irq_enable = i8xx_irq_enable;
engine->irq_disable = i8xx_irq_disable;
}
}
static void setup_common(struct intel_engine_cs *engine)
{
struct drm_i915_private *i915 = engine->i915;
/* gen8+ are only supported with execlists */
GEM_BUG_ON(INTEL_GEN(dev_priv) >= 8);
GEM_BUG_ON(INTEL_GEN(i915) >= 8);
intel_ring_init_irq(dev_priv, engine);
setup_irq(engine);
engine->init_hw = init_ring_common;
engine->destroy = ring_destroy;
engine->resume = xcs_resume;
engine->reset.prepare = reset_prepare;
engine->reset.reset = reset_ring;
engine->reset.finish = reset_finish;
@ -2222,117 +2168,96 @@ static void intel_ring_default_vfuncs(struct drm_i915_private *dev_priv,
* engine->emit_init_breadcrumb().
*/
engine->emit_fini_breadcrumb = i9xx_emit_breadcrumb;
if (IS_GEN(dev_priv, 5))
if (IS_GEN(i915, 5))
engine->emit_fini_breadcrumb = gen5_emit_breadcrumb;
engine->set_default_submission = i9xx_set_default_submission;
if (INTEL_GEN(dev_priv) >= 6)
if (INTEL_GEN(i915) >= 6)
engine->emit_bb_start = gen6_emit_bb_start;
else if (INTEL_GEN(dev_priv) >= 4)
else if (INTEL_GEN(i915) >= 4)
engine->emit_bb_start = i965_emit_bb_start;
else if (IS_I830(dev_priv) || IS_I845G(dev_priv))
else if (IS_I830(i915) || IS_I845G(i915))
engine->emit_bb_start = i830_emit_bb_start;
else
engine->emit_bb_start = i915_emit_bb_start;
}
int intel_init_render_ring_buffer(struct intel_engine_cs *engine)
static void setup_rcs(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
int ret;
struct drm_i915_private *i915 = engine->i915;
intel_ring_default_vfuncs(dev_priv, engine);
if (HAS_L3_DPF(dev_priv))
if (HAS_L3_DPF(i915))
engine->irq_keep_mask = GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
if (INTEL_GEN(dev_priv) >= 7) {
if (INTEL_GEN(i915) >= 7) {
engine->init_context = intel_rcs_ctx_init;
engine->emit_flush = gen7_render_ring_flush;
engine->emit_fini_breadcrumb = gen7_rcs_emit_breadcrumb;
} else if (IS_GEN(dev_priv, 6)) {
} else if (IS_GEN(i915, 6)) {
engine->init_context = intel_rcs_ctx_init;
engine->emit_flush = gen6_render_ring_flush;
engine->emit_fini_breadcrumb = gen6_rcs_emit_breadcrumb;
} else if (IS_GEN(dev_priv, 5)) {
} else if (IS_GEN(i915, 5)) {
engine->emit_flush = gen4_render_ring_flush;
} else {
if (INTEL_GEN(dev_priv) < 4)
if (INTEL_GEN(i915) < 4)
engine->emit_flush = gen2_render_ring_flush;
else
engine->emit_flush = gen4_render_ring_flush;
engine->irq_enable_mask = I915_USER_INTERRUPT;
}
if (IS_HASWELL(dev_priv))
if (IS_HASWELL(i915))
engine->emit_bb_start = hsw_emit_bb_start;
engine->init_hw = init_render_ring;
ret = intel_init_ring_buffer(engine);
if (ret)
return ret;
return 0;
engine->resume = rcs_resume;
}
int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine)
static void setup_vcs(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
struct drm_i915_private *i915 = engine->i915;
intel_ring_default_vfuncs(dev_priv, engine);
if (INTEL_GEN(dev_priv) >= 6) {
if (INTEL_GEN(i915) >= 6) {
/* gen6 bsd needs a special wa for tail updates */
if (IS_GEN(dev_priv, 6))
if (IS_GEN(i915, 6))
engine->set_default_submission = gen6_bsd_set_default_submission;
engine->emit_flush = gen6_bsd_ring_flush;
engine->irq_enable_mask = GT_BSD_USER_INTERRUPT;
if (IS_GEN(dev_priv, 6))
if (IS_GEN(i915, 6))
engine->emit_fini_breadcrumb = gen6_xcs_emit_breadcrumb;
else
engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb;
} else {
engine->emit_flush = bsd_ring_flush;
if (IS_GEN(dev_priv, 5))
if (IS_GEN(i915, 5))
engine->irq_enable_mask = ILK_BSD_USER_INTERRUPT;
else
engine->irq_enable_mask = I915_BSD_USER_INTERRUPT;
}
return intel_init_ring_buffer(engine);
}
int intel_init_blt_ring_buffer(struct intel_engine_cs *engine)
static void setup_bcs(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
GEM_BUG_ON(INTEL_GEN(dev_priv) < 6);
intel_ring_default_vfuncs(dev_priv, engine);
struct drm_i915_private *i915 = engine->i915;
engine->emit_flush = gen6_ring_flush;
engine->irq_enable_mask = GT_BLT_USER_INTERRUPT;
if (IS_GEN(dev_priv, 6))
if (IS_GEN(i915, 6))
engine->emit_fini_breadcrumb = gen6_xcs_emit_breadcrumb;
else
engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb;
return intel_init_ring_buffer(engine);
}
int intel_init_vebox_ring_buffer(struct intel_engine_cs *engine)
static void setup_vecs(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
struct drm_i915_private *i915 = engine->i915;
GEM_BUG_ON(INTEL_GEN(dev_priv) < 7);
intel_ring_default_vfuncs(dev_priv, engine);
GEM_BUG_ON(INTEL_GEN(i915) < 7);
engine->emit_flush = gen6_ring_flush;
engine->irq_enable_mask = PM_VEBOX_USER_INTERRUPT;
@ -2340,6 +2265,73 @@ int intel_init_vebox_ring_buffer(struct intel_engine_cs *engine)
engine->irq_disable = hsw_vebox_irq_disable;
engine->emit_fini_breadcrumb = gen7_xcs_emit_breadcrumb;
return intel_init_ring_buffer(engine);
}
int intel_ring_submission_setup(struct intel_engine_cs *engine)
{
setup_common(engine);
switch (engine->class) {
case RENDER_CLASS:
setup_rcs(engine);
break;
case VIDEO_DECODE_CLASS:
setup_vcs(engine);
break;
case COPY_ENGINE_CLASS:
setup_bcs(engine);
break;
case VIDEO_ENHANCEMENT_CLASS:
setup_vecs(engine);
break;
default:
MISSING_CASE(engine->class);
return -ENODEV;
}
return 0;
}
int intel_ring_submission_init(struct intel_engine_cs *engine)
{
struct i915_timeline *timeline;
struct intel_ring *ring;
int err;
timeline = i915_timeline_create(engine->i915, engine->status_page.vma);
if (IS_ERR(timeline)) {
err = PTR_ERR(timeline);
goto err;
}
GEM_BUG_ON(timeline->has_initial_breadcrumb);
ring = intel_engine_create_ring(engine, timeline, 32 * PAGE_SIZE);
i915_timeline_put(timeline);
if (IS_ERR(ring)) {
err = PTR_ERR(ring);
goto err;
}
err = intel_ring_pin(ring);
if (err)
goto err_ring;
GEM_BUG_ON(engine->buffer);
engine->buffer = ring;
err = intel_engine_init_common(engine);
if (err)
goto err_unpin;
GEM_BUG_ON(ring->timeline->hwsp_ggtt != engine->status_page.vma);
return 0;
err_unpin:
intel_ring_unpin(ring);
err_ring:
intel_ring_put(ring);
err:
intel_engine_cleanup_common(engine);
return err;
}

View File

@ -0,0 +1,142 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#include "i915_drv.h"
#include "intel_lrc_reg.h"
#include "intel_sseu.h"
u32 intel_sseu_make_rpcs(struct drm_i915_private *i915,
const struct intel_sseu *req_sseu)
{
const struct sseu_dev_info *sseu = &RUNTIME_INFO(i915)->sseu;
bool subslice_pg = sseu->has_subslice_pg;
struct intel_sseu ctx_sseu;
u8 slices, subslices;
u32 rpcs = 0;
/*
* No explicit RPCS request is needed to ensure full
* slice/subslice/EU enablement prior to Gen9.
*/
if (INTEL_GEN(i915) < 9)
return 0;
/*
* If i915/perf is active, we want a stable powergating configuration
* on the system.
*
* We could choose full enablement, but on ICL we know there are use
* cases which disable slices for functional reasons, apart from
* performance reasons. So in this case we select a known stable subset.
*/
if (!i915->perf.oa.exclusive_stream) {
ctx_sseu = *req_sseu;
} else {
ctx_sseu = intel_sseu_from_device_info(sseu);
if (IS_GEN(i915, 11)) {
/*
* We only need the subslice count, so it doesn't matter
* which ones we select - just keep the low bits amounting
* to half of all available subslices per slice.
*/
ctx_sseu.subslice_mask =
~(~0 << (hweight8(ctx_sseu.subslice_mask) / 2));
ctx_sseu.slice_mask = 0x1;
}
}
slices = hweight8(ctx_sseu.slice_mask);
subslices = hweight8(ctx_sseu.subslice_mask);
/*
* Since the SScount bitfield in GEN8_R_PWR_CLK_STATE is only three bits
* wide and Icelake has up to eight subslices, special programming is
* needed in order to correctly enable all subslices.
*
* According to the documentation, software must consider the configuration
* as 2x4x8 and hardware will translate this to 1x8x8.
*
* Furthermore, even though SScount is three bits, the maximum documented
* value for it is four. From this some rules/restrictions follow:
*
* 1.
* If the enabled subslice count is greater than four, two whole slices
* must be enabled instead.
*
* 2.
* When more than one slice is enabled, hardware ignores the subslice
* count altogether.
*
* From these restrictions it follows that it is not possible to enable
* a subslice count between the SScount maximum of four and the maximum
* number available on a particular SKU. Either all subslices are
* enabled, or a count between one and four on the first slice.
*/
if (IS_GEN(i915, 11) &&
slices == 1 &&
subslices > min_t(u8, 4, hweight8(sseu->subslice_mask[0]) / 2)) {
GEM_BUG_ON(subslices & 1);
subslice_pg = false;
slices *= 2;
}
/*
* Starting in Gen9, render power gating can leave
* slice/subslice/EU in a partially enabled state. We
* must make an explicit request through RPCS for full
* enablement.
*/
if (sseu->has_slice_pg) {
u32 mask, val = slices;
if (INTEL_GEN(i915) >= 11) {
mask = GEN11_RPCS_S_CNT_MASK;
val <<= GEN11_RPCS_S_CNT_SHIFT;
} else {
mask = GEN8_RPCS_S_CNT_MASK;
val <<= GEN8_RPCS_S_CNT_SHIFT;
}
GEM_BUG_ON(val & ~mask);
val &= mask;
rpcs |= GEN8_RPCS_ENABLE | GEN8_RPCS_S_CNT_ENABLE | val;
}
if (subslice_pg) {
u32 val = subslices;
val <<= GEN8_RPCS_SS_CNT_SHIFT;
GEM_BUG_ON(val & ~GEN8_RPCS_SS_CNT_MASK);
val &= GEN8_RPCS_SS_CNT_MASK;
rpcs |= GEN8_RPCS_ENABLE | GEN8_RPCS_SS_CNT_ENABLE | val;
}
if (sseu->has_eu_pg) {
u32 val;
val = ctx_sseu.min_eus_per_subslice << GEN8_RPCS_EU_MIN_SHIFT;
GEM_BUG_ON(val & ~GEN8_RPCS_EU_MIN_MASK);
val &= GEN8_RPCS_EU_MIN_MASK;
rpcs |= val;
val = ctx_sseu.max_eus_per_subslice << GEN8_RPCS_EU_MAX_SHIFT;
GEM_BUG_ON(val & ~GEN8_RPCS_EU_MAX_MASK);
val &= GEN8_RPCS_EU_MAX_MASK;
rpcs |= val;
rpcs |= GEN8_RPCS_ENABLE;
}
return rpcs;
}
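The two ICL-specific adjustments above are easier to see with concrete numbers: the perf/OA path keeps only the low half of the available subslices, and, roughly, a request for more than four subslices on a single slice is programmed as two whole slices with subslice powergating dropped. The following is a standalone arithmetic sketch only, using no register constants and illustrative names; it is not the driver code.
/*
 * A worked example of the ICL adjustments in intel_sseu_make_rpcs(),
 * reduced to the arithmetic. Helper and variable names are illustrative.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static unsigned int hweight8(uint8_t v)
{
	unsigned int n = 0;

	while (v) {
		n += v & 1;
		v >>= 1;
	}
	return n;
}

int main(void)
{
	/* 1) Stable subset while perf/OA is active: keep half the subslices. */
	uint8_t mask = 0xff;				/* 8 subslices available */

	mask = ~(~0u << (hweight8(mask) / 2));		/* low half only */
	printf("stable mask = 0x%02x (%u subslices)\n",
	       (unsigned int)mask, hweight8(mask));

	/*
	 * 2) SScount only encodes up to four subslices, so a 1x8 request is
	 *    programmed as two whole slices and subslice powergating is
	 *    dropped (the 2x4x8 -> 1x8x8 note above).
	 */
	unsigned int slices = 1, subslices = 8;
	bool subslice_pg = true;

	if (slices == 1 && subslices > 4) {
		subslice_pg = false;
		slices *= 2;
	}
	printf("programmed: %u slice(s), subslice powergating %s\n",
	       slices, subslice_pg ? "on" : "off");
	return 0;
}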

View File

@ -0,0 +1,67 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#ifndef __INTEL_SSEU_H__
#define __INTEL_SSEU_H__
#include <linux/types.h>
struct drm_i915_private;
#define GEN_MAX_SLICES (6) /* CNL upper bound */
#define GEN_MAX_SUBSLICES (8) /* ICL upper bound */
struct sseu_dev_info {
u8 slice_mask;
u8 subslice_mask[GEN_MAX_SLICES];
u16 eu_total;
u8 eu_per_subslice;
u8 min_eu_in_pool;
/* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
u8 subslice_7eu[3];
u8 has_slice_pg:1;
u8 has_subslice_pg:1;
u8 has_eu_pg:1;
/* Topology fields */
u8 max_slices;
u8 max_subslices;
u8 max_eus_per_subslice;
/* We don't have more than 8 eus per subslice at the moment and as we
* store eus enabled using bits, no need to multiply by eus per
* subslice.
*/
u8 eu_mask[GEN_MAX_SLICES * GEN_MAX_SUBSLICES];
};
/*
* Powergating configuration for a particular (context,engine).
*/
struct intel_sseu {
u8 slice_mask;
u8 subslice_mask;
u8 min_eus_per_subslice;
u8 max_eus_per_subslice;
};
static inline struct intel_sseu
intel_sseu_from_device_info(const struct sseu_dev_info *sseu)
{
struct intel_sseu value = {
.slice_mask = sseu->slice_mask,
.subslice_mask = sseu->subslice_mask[0],
.min_eus_per_subslice = sseu->max_eus_per_subslice,
.max_eus_per_subslice = sseu->max_eus_per_subslice,
};
return value;
}
u32 intel_sseu_make_rpcs(struct drm_i915_private *i915,
const struct intel_sseu *req_sseu);
#endif /* __INTEL_SSEU_H__ */

View File

@ -122,6 +122,7 @@ static void _wa_add(struct i915_wa_list *wal, const struct i915_wa *wa)
wal->wa_count++;
wa_->val |= wa->val;
wa_->mask |= wa->mask;
wa_->read |= wa->read;
return;
}
}
@ -146,9 +147,10 @@ wa_write_masked_or(struct i915_wa_list *wal, i915_reg_t reg, u32 mask,
u32 val)
{
struct i915_wa wa = {
.reg = reg,
.reg = reg,
.mask = mask,
.val = val
.val = val,
.read = mask,
};
_wa_add(wal, &wa);
@ -172,6 +174,19 @@ wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
wa_write_masked_or(wal, reg, val, val);
}
static void
ignore_wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 mask, u32 val)
{
struct i915_wa wa = {
.reg = reg,
.mask = mask,
.val = val,
/* Bonkers HW, skip verifying */
};
_wa_add(wal, &wa);
}
#define WA_SET_BIT_MASKED(addr, mask) \
wa_write_masked_or(wal, (addr), (mask), _MASKED_BIT_ENABLE(mask))
@ -181,10 +196,9 @@ wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
#define WA_SET_FIELD_MASKED(addr, mask, value) \
wa_write_masked_or(wal, (addr), (mask), _MASKED_FIELD((mask), (value)))
static void gen8_ctx_workarounds_init(struct intel_engine_cs *engine)
static void gen8_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct i915_wa_list *wal = &engine->ctx_wa_list;
WA_SET_BIT_MASKED(INSTPM, INSTPM_FORCE_ORDERING);
/* WaDisableAsyncFlipPerfMode:bdw,chv */
@ -230,12 +244,12 @@ static void gen8_ctx_workarounds_init(struct intel_engine_cs *engine)
GEN6_WIZ_HASHING_16x4);
}
static void bdw_ctx_workarounds_init(struct intel_engine_cs *engine)
static void bdw_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *wal = &engine->ctx_wa_list;
gen8_ctx_workarounds_init(engine);
gen8_ctx_workarounds_init(engine, wal);
/* WaDisableThreadStallDopClockGating:bdw (pre-production) */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, STALL_DOP_GATING_DISABLE);
@ -258,11 +272,10 @@ static void bdw_ctx_workarounds_init(struct intel_engine_cs *engine)
(IS_BDW_GT3(i915) ? HDC_FENCE_DEST_SLM_DISABLE : 0));
}
static void chv_ctx_workarounds_init(struct intel_engine_cs *engine)
static void chv_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct i915_wa_list *wal = &engine->ctx_wa_list;
gen8_ctx_workarounds_init(engine);
gen8_ctx_workarounds_init(engine, wal);
/* WaDisableThreadStallDopClockGating:chv */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, STALL_DOP_GATING_DISABLE);
@ -271,10 +284,10 @@ static void chv_ctx_workarounds_init(struct intel_engine_cs *engine)
WA_SET_BIT_MASKED(HIZ_CHICKEN, CHV_HZ_8X8_MODE_IN_1X);
}
static void gen9_ctx_workarounds_init(struct intel_engine_cs *engine)
static void gen9_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *wal = &engine->ctx_wa_list;
if (HAS_LLC(i915)) {
/* WaCompressedResourceSamplerPbeMediaNewHashMode:skl,kbl
@ -369,10 +382,10 @@ static void gen9_ctx_workarounds_init(struct intel_engine_cs *engine)
WA_SET_BIT_MASKED(GEN9_WM_CHICKEN3, GEN9_FACTOR_IN_CLR_VAL_HIZ);
}
static void skl_tune_iz_hashing(struct intel_engine_cs *engine)
static void skl_tune_iz_hashing(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *wal = &engine->ctx_wa_list;
u8 vals[3] = { 0, 0, 0 };
unsigned int i;
@ -409,17 +422,17 @@ static void skl_tune_iz_hashing(struct intel_engine_cs *engine)
GEN9_IZ_HASHING(0, vals[0]));
}
static void skl_ctx_workarounds_init(struct intel_engine_cs *engine)
static void skl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
gen9_ctx_workarounds_init(engine);
skl_tune_iz_hashing(engine);
gen9_ctx_workarounds_init(engine, wal);
skl_tune_iz_hashing(engine, wal);
}
static void bxt_ctx_workarounds_init(struct intel_engine_cs *engine)
static void bxt_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct i915_wa_list *wal = &engine->ctx_wa_list;
gen9_ctx_workarounds_init(engine);
gen9_ctx_workarounds_init(engine, wal);
/* WaDisableThreadStallDopClockGating:bxt */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN,
@ -430,12 +443,12 @@ static void bxt_ctx_workarounds_init(struct intel_engine_cs *engine)
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
}
static void kbl_ctx_workarounds_init(struct intel_engine_cs *engine)
static void kbl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *wal = &engine->ctx_wa_list;
gen9_ctx_workarounds_init(engine);
gen9_ctx_workarounds_init(engine, wal);
/* WaToEnableHwFixForPushConstHWBug:kbl */
if (IS_KBL_REVID(i915, KBL_REVID_C0, REVID_FOREVER))
@ -447,22 +460,20 @@ static void kbl_ctx_workarounds_init(struct intel_engine_cs *engine)
GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE);
}
static void glk_ctx_workarounds_init(struct intel_engine_cs *engine)
static void glk_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct i915_wa_list *wal = &engine->ctx_wa_list;
gen9_ctx_workarounds_init(engine);
gen9_ctx_workarounds_init(engine, wal);
/* WaToEnableHwFixForPushConstHWBug:glk */
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
}
static void cfl_ctx_workarounds_init(struct intel_engine_cs *engine)
static void cfl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct i915_wa_list *wal = &engine->ctx_wa_list;
gen9_ctx_workarounds_init(engine);
gen9_ctx_workarounds_init(engine, wal);
/* WaToEnableHwFixForPushConstHWBug:cfl */
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
@ -473,10 +484,10 @@ static void cfl_ctx_workarounds_init(struct intel_engine_cs *engine)
GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE);
}
static void cnl_ctx_workarounds_init(struct intel_engine_cs *engine)
static void cnl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *wal = &engine->ctx_wa_list;
/* WaForceContextSaveRestoreNonCoherent:cnl */
WA_SET_BIT_MASKED(CNL_HDC_CHICKEN0,
@ -513,10 +524,16 @@ static void cnl_ctx_workarounds_init(struct intel_engine_cs *engine)
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, DISABLE_EARLY_EOT);
}
static void icl_ctx_workarounds_init(struct intel_engine_cs *engine)
static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *wal = &engine->ctx_wa_list;
/* WaDisableBankHangMode:icl */
wa_write(wal,
GEN8_L3CNTLREG,
intel_uncore_read(engine->uncore, GEN8_L3CNTLREG) |
GEN8_ERRDETBCTRL);
/* Wa_1604370585:icl (pre-prod)
* Formerly known as WaPushConstantDereferenceHoldDisable
@ -556,33 +573,42 @@ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine)
WA_SET_FIELD_MASKED(GEN8_CS_CHICKEN1,
GEN9_PREEMPT_GPGPU_LEVEL_MASK,
GEN9_PREEMPT_GPGPU_THREAD_GROUP_LEVEL);
/* allow headerless messages for preemptible GPGPU context */
WA_SET_BIT_MASKED(GEN10_SAMPLER_MODE,
GEN11_SAMPLER_ENABLE_HEADLESS_MSG);
}
void intel_engine_init_ctx_wa(struct intel_engine_cs *engine)
static void
__intel_engine_init_ctx_wa(struct intel_engine_cs *engine,
struct i915_wa_list *wal,
const char *name)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *wal = &engine->ctx_wa_list;
wa_init_start(wal, "context");
if (engine->class != RENDER_CLASS)
return;
wa_init_start(wal, name);
if (IS_GEN(i915, 11))
icl_ctx_workarounds_init(engine);
icl_ctx_workarounds_init(engine, wal);
else if (IS_CANNONLAKE(i915))
cnl_ctx_workarounds_init(engine);
cnl_ctx_workarounds_init(engine, wal);
else if (IS_COFFEELAKE(i915))
cfl_ctx_workarounds_init(engine);
cfl_ctx_workarounds_init(engine, wal);
else if (IS_GEMINILAKE(i915))
glk_ctx_workarounds_init(engine);
glk_ctx_workarounds_init(engine, wal);
else if (IS_KABYLAKE(i915))
kbl_ctx_workarounds_init(engine);
kbl_ctx_workarounds_init(engine, wal);
else if (IS_BROXTON(i915))
bxt_ctx_workarounds_init(engine);
bxt_ctx_workarounds_init(engine, wal);
else if (IS_SKYLAKE(i915))
skl_ctx_workarounds_init(engine);
skl_ctx_workarounds_init(engine, wal);
else if (IS_CHERRYVIEW(i915))
chv_ctx_workarounds_init(engine);
chv_ctx_workarounds_init(engine, wal);
else if (IS_BROADWELL(i915))
bdw_ctx_workarounds_init(engine);
bdw_ctx_workarounds_init(engine, wal);
else if (INTEL_GEN(i915) < 8)
return;
else
@ -591,6 +617,11 @@ void intel_engine_init_ctx_wa(struct intel_engine_cs *engine)
wa_init_finish(wal);
}
void intel_engine_init_ctx_wa(struct intel_engine_cs *engine)
{
__intel_engine_init_ctx_wa(engine, &engine->ctx_wa_list, "context");
}
int intel_engine_emit_ctx_wa(struct i915_request *rq)
{
struct i915_wa_list *wal = &rq->engine->ctx_wa_list;
@ -909,6 +940,21 @@ wal_get_fw_for_rmw(struct intel_uncore *uncore, const struct i915_wa_list *wal)
return fw;
}
static bool
wa_verify(const struct i915_wa *wa, u32 cur, const char *name, const char *from)
{
if ((cur ^ wa->val) & wa->read) {
DRM_ERROR("%s workaround lost on %s! (%x=%x/%x, expected %x, mask=%x)\n",
name, from, i915_mmio_reg_offset(wa->reg),
cur, cur & wa->read,
wa->val, wa->mask);
return false;
}
return true;
}
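The comparison above masks the readback with wa->read rather than wa->mask, which is what allows ignore_wa_write_or() to register workarounds whose bits cannot be read back: with an empty read mask the verification can never fail. A standalone sketch of that check, where wa_model and wa_check are made-up names standing in for the driver structures:
/*
 * How the separate read mask changes verification: bits outside wa->read
 * are ignored when comparing the register readback. Standalone model only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct wa_model {
	uint32_t mask;	/* bits the workaround writes */
	uint32_t val;	/* value expected in those bits */
	uint32_t read;	/* bits that can be meaningfully read back */
};

static bool wa_check(const struct wa_model *wa, uint32_t cur)
{
	return ((cur ^ wa->val) & wa->read) == 0;
}

int main(void)
{
	/* Normal workaround: read mask mirrors the write mask. */
	struct wa_model normal = { .mask = 0x3, .val = 0x3, .read = 0x3 };
	/* "Bonkers HW": value is not readable, so verification is skipped. */
	struct wa_model ignored = { .mask = 0x3, .val = 0x3, .read = 0x0 };

	printf("normal,  hw kept bits: %d\n", wa_check(&normal, 0x3));
	printf("normal,  hw lost bits: %d\n", wa_check(&normal, 0x0));
	printf("ignored, hw lost bits: %d\n", wa_check(&ignored, 0x0));
	return 0;
}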
static void
wa_list_apply(struct intel_uncore *uncore, const struct i915_wa_list *wal)
{
@ -927,6 +973,10 @@ wa_list_apply(struct intel_uncore *uncore, const struct i915_wa_list *wal)
for (i = 0, wa = wal->list; i < wal->count; i++, wa++) {
intel_uncore_rmw_fw(uncore, wa->reg, wa->mask, wa->val);
if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
wa_verify(wa,
intel_uncore_read_fw(uncore, wa->reg),
wal->name, "application");
}
intel_uncore_forcewake_put__locked(uncore, fw);
@ -938,20 +988,6 @@ void intel_gt_apply_workarounds(struct drm_i915_private *i915)
wa_list_apply(&i915->uncore, &i915->gt_wa_list);
}
static bool
wa_verify(const struct i915_wa *wa, u32 cur, const char *name, const char *from)
{
if ((cur ^ wa->val) & wa->mask) {
DRM_ERROR("%s workaround lost on %s! (%x=%x/%x, expected %x, mask=%x)\n",
name, from, i915_mmio_reg_offset(wa->reg), cur,
cur & wa->mask, wa->val, wa->mask);
return false;
}
return true;
}
static bool wa_list_verify(struct intel_uncore *uncore,
const struct i915_wa_list *wal,
const char *from)
@ -1056,7 +1092,8 @@ void intel_engine_init_whitelist(struct intel_engine_cs *engine)
struct drm_i915_private *i915 = engine->i915;
struct i915_wa_list *w = &engine->whitelist;
GEM_BUG_ON(engine->id != RCS0);
if (engine->class != RENDER_CLASS)
return;
wa_init_start(w, "whitelist");
@ -1117,9 +1154,10 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
_3D_CHICKEN3_AA_LINE_QUALITY_FIX_ENABLE);
/* WaPipelineFlushCoherentLines:icl */
wa_write_or(wal,
GEN8_L3SQCREG4,
GEN8_LQSC_FLUSH_COHERENT_LINES);
ignore_wa_write_or(wal,
GEN8_L3SQCREG4,
GEN8_LQSC_FLUSH_COHERENT_LINES,
GEN8_LQSC_FLUSH_COHERENT_LINES);
/*
* Wa_1405543622:icl
@ -1146,9 +1184,10 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
* Wa_1405733216:icl
* Formerly known as WaDisableCleanEvicts
*/
wa_write_or(wal,
GEN8_L3SQCREG4,
GEN11_LQSC_CLEAN_EVICT_DISABLE);
ignore_wa_write_or(wal,
GEN8_L3SQCREG4,
GEN11_LQSC_CLEAN_EVICT_DISABLE,
GEN11_LQSC_CLEAN_EVICT_DISABLE);
/* WaForwardProgressSoftReset:icl */
wa_write_or(wal,
@ -1254,6 +1293,128 @@ void intel_engine_apply_workarounds(struct intel_engine_cs *engine)
wa_list_apply(engine->uncore, &engine->wa_list);
}
static struct i915_vma *
create_scratch(struct i915_address_space *vm, int count)
{
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
unsigned int size;
int err;
size = round_up(count * sizeof(u32), PAGE_SIZE);
obj = i915_gem_object_create_internal(vm->i915, size);
if (IS_ERR(obj))
return ERR_CAST(obj);
i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
vma = i915_vma_instance(obj, vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_obj;
}
err = i915_vma_pin(vma, 0, 0,
i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
if (err)
goto err_obj;
return vma;
err_obj:
i915_gem_object_put(obj);
return ERR_PTR(err);
}
static int
wa_list_srm(struct i915_request *rq,
const struct i915_wa_list *wal,
struct i915_vma *vma)
{
const struct i915_wa *wa;
unsigned int i;
u32 srm, *cs;
srm = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
if (INTEL_GEN(rq->i915) >= 8)
srm++;
cs = intel_ring_begin(rq, 4 * wal->count);
if (IS_ERR(cs))
return PTR_ERR(cs);
for (i = 0, wa = wal->list; i < wal->count; i++, wa++) {
*cs++ = srm;
*cs++ = i915_mmio_reg_offset(wa->reg);
*cs++ = i915_ggtt_offset(vma) + sizeof(u32) * i;
*cs++ = 0;
}
intel_ring_advance(rq, cs);
return 0;
}
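The loop above emits one four-dword store-register-to-memory packet per workaround, each landing at its own u32 slot in the scratch buffer so the readback can later be compared entry by entry. A standalone sketch of that layout; the opcode, register offsets and GGTT address below are placeholders, not the real hardware encoding:
/*
 * Command-stream layout sketch for wa_list_srm(): {opcode, register,
 * destination address, upper address dword} per workaround. Placeholder
 * values throughout.
 */
#include <stdint.h>
#include <stdio.h>

#define FAKE_SRM_OPCODE	0x24000001u	/* placeholder, not the MI encoding */

int main(void)
{
	const uint32_t regs[] = { 0x7004, 0x7300, 0xb118 };	/* example offsets */
	const uint32_t scratch_base = 0x10000;			/* example GGTT offset */
	uint32_t cs[4 * (sizeof(regs) / sizeof(regs[0]))];
	unsigned int i, n = 0;

	for (i = 0; i < sizeof(regs) / sizeof(regs[0]); i++) {
		cs[n++] = FAKE_SRM_OPCODE;
		cs[n++] = regs[i];				/* register to sample */
		cs[n++] = scratch_base + sizeof(uint32_t) * i;	/* destination slot */
		cs[n++] = 0;					/* upper address dword */
	}

	for (i = 0; i < n; i += 4)
		printf("srm reg %#x -> %#x\n", cs[i + 1], cs[i + 2]);
	return 0;
}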
static int engine_wa_list_verify(struct intel_context *ce,
const struct i915_wa_list * const wal,
const char *from)
{
const struct i915_wa *wa;
struct i915_request *rq;
struct i915_vma *vma;
unsigned int i;
u32 *results;
int err;
if (!wal->count)
return 0;
vma = create_scratch(&ce->engine->i915->ggtt.vm, wal->count);
if (IS_ERR(vma))
return PTR_ERR(vma);
rq = intel_context_create_request(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto err_vma;
}
err = wa_list_srm(rq, wal, vma);
if (err)
goto err_vma;
i915_request_add(rq);
if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
err = -ETIME;
goto err_vma;
}
results = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
if (IS_ERR(results)) {
err = PTR_ERR(results);
goto err_vma;
}
err = 0;
for (i = 0, wa = wal->list; i < wal->count; i++, wa++)
if (!wa_verify(wa, results[i], wal->name, from))
err = -ENXIO;
i915_gem_object_unpin_map(vma->obj);
err_vma:
i915_vma_unpin(vma);
i915_vma_put(vma);
return err;
}
int intel_engine_verify_workarounds(struct intel_engine_cs *engine,
const char *from)
{
return engine_wa_list_verify(engine->kernel_context,
&engine->wa_list,
from);
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftests/intel_workarounds.c"
#include "selftest_workarounds.c"
#endif
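
For orientation only (not part of this series): a minimal sketch of how the new intel_engine_verify_workarounds() entry point might be called, assuming the for_each_engine() iterator and the struct_mutex/locking conventions of this era of the driver. The helper name check_workarounds() is hypothetical.

static void check_workarounds(struct drm_i915_private *i915, const char *from)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

	/* Re-read each engine's workaround list from the GPU and compare. */
	for_each_engine(engine, i915, id) {
		if (intel_engine_verify_workarounds(engine, from))
			DRM_ERROR("%s: workarounds lost (%s)\n",
				  engine->name, from);
	}
}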


@ -4,13 +4,17 @@
* Copyright © 2014-2018 Intel Corporation
*/
#ifndef _I915_WORKAROUNDS_H_
#define _I915_WORKAROUNDS_H_
#ifndef _INTEL_WORKAROUNDS_H_
#define _INTEL_WORKAROUNDS_H_
#include <linux/slab.h>
#include "intel_workarounds_types.h"
struct drm_i915_private;
struct i915_request;
struct intel_engine_cs;
static inline void intel_wa_list_free(struct i915_wa_list *wal)
{
kfree(wal->list);
@ -30,5 +34,7 @@ void intel_engine_apply_whitelist(struct intel_engine_cs *engine);
void intel_engine_init_workarounds(struct intel_engine_cs *engine);
void intel_engine_apply_workarounds(struct intel_engine_cs *engine);
int intel_engine_verify_workarounds(struct intel_engine_cs *engine,
const char *from);
#endif


@ -12,9 +12,10 @@
#include "i915_reg.h"
struct i915_wa {
i915_reg_t reg;
u32 mask;
u32 val;
i915_reg_t reg;
u32 mask;
u32 val;
u32 read;
};
struct i915_wa_list {


@ -22,8 +22,13 @@
*
*/
#include "i915_drv.h"
#include "i915_gem_context.h"
#include "intel_context.h"
#include "intel_engine_pm.h"
#include "mock_engine.h"
#include "mock_request.h"
#include "selftests/mock_request.h"
struct mock_ring {
struct intel_ring base;
@ -154,6 +159,9 @@ static const struct intel_context_ops mock_context_ops = {
.pin = mock_context_pin,
.unpin = mock_context_unpin,
.enter = intel_context_enter_engine,
.exit = intel_context_exit_engine,
.destroy = mock_context_destroy,
};
@ -257,29 +265,44 @@ struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
engine->base.reset.finish = mock_reset_finish;
engine->base.cancel_requests = mock_cancel_requests;
if (i915_timeline_init(i915, &engine->base.timeline, NULL))
goto err_free;
i915_timeline_set_subclass(&engine->base.timeline, TIMELINE_ENGINE);
intel_engine_init_breadcrumbs(&engine->base);
/* fake hw queue */
spin_lock_init(&engine->hw_lock);
timer_setup(&engine->hw_delay, hw_delay_complete, 0);
INIT_LIST_HEAD(&engine->hw_queue);
if (pin_context(i915->kernel_context, &engine->base,
&engine->base.kernel_context))
goto err_breadcrumbs;
return &engine->base;
}
int mock_engine_init(struct intel_engine_cs *engine)
{
struct drm_i915_private *i915 = engine->i915;
int err;
intel_engine_init_breadcrumbs(engine);
intel_engine_init_execlists(engine);
intel_engine_init__pm(engine);
if (i915_timeline_init(i915, &engine->timeline, NULL))
goto err_breadcrumbs;
i915_timeline_set_subclass(&engine->timeline, TIMELINE_ENGINE);
engine->kernel_context =
i915_gem_context_get_engine(i915->kernel_context, engine->id);
if (IS_ERR(engine->kernel_context))
goto err_timeline;
err = intel_context_pin(engine->kernel_context);
intel_context_put(engine->kernel_context);
if (err)
goto err_timeline;
return 0;
err_timeline:
i915_timeline_fini(&engine->timeline);
err_breadcrumbs:
intel_engine_fini_breadcrumbs(&engine->base);
i915_timeline_fini(&engine->base.timeline);
err_free:
kfree(engine);
return NULL;
intel_engine_fini_breadcrumbs(engine);
return -ENOMEM;
}
void mock_engine_flush(struct intel_engine_cs *engine)


@ -29,7 +29,7 @@
#include <linux/spinlock.h>
#include <linux/timer.h>
#include "../intel_ringbuffer.h"
#include "gt/intel_engine.h"
struct mock_engine {
struct intel_engine_cs base;
@ -42,6 +42,8 @@ struct mock_engine {
struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
const char *name,
int id);
int mock_engine_init(struct intel_engine_cs *engine);
void mock_engine_flush(struct intel_engine_cs *engine);
void mock_engine_reset(struct intel_engine_cs *engine);
void mock_engine_free(struct intel_engine_cs *engine);


@ -24,14 +24,18 @@
#include <linux/kthread.h>
#include "../i915_selftest.h"
#include "i915_random.h"
#include "igt_flush_test.h"
#include "igt_reset.h"
#include "igt_wedge_me.h"
#include "intel_engine_pm.h"
#include "mock_context.h"
#include "mock_drm.h"
#include "i915_selftest.h"
#include "selftests/i915_random.h"
#include "selftests/igt_flush_test.h"
#include "selftests/igt_gem_utils.h"
#include "selftests/igt_reset.h"
#include "selftests/igt_wedge_me.h"
#include "selftests/igt_atomic.h"
#include "selftests/mock_context.h"
#include "selftests/mock_drm.h"
#define IGT_IDLE_TIMEOUT 50 /* ms; time to wait after flushing between tests */
@ -173,7 +177,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
if (err)
goto unpin_vma;
rq = i915_request_alloc(engine, h->ctx);
rq = igt_request_alloc(h->ctx, engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto unpin_hws;
@ -362,54 +366,6 @@ unlock:
return err;
}
static int igt_global_reset(void *arg)
{
struct drm_i915_private *i915 = arg;
unsigned int reset_count;
int err = 0;
/* Check that we can issue a global GPU reset */
igt_global_reset_lock(i915);
reset_count = i915_reset_count(&i915->gpu_error);
i915_reset(i915, ALL_ENGINES, NULL);
if (i915_reset_count(&i915->gpu_error) == reset_count) {
pr_err("No GPU reset recorded!\n");
err = -EINVAL;
}
igt_global_reset_unlock(i915);
if (i915_reset_failed(i915))
err = -EIO;
return err;
}
static int igt_wedged_reset(void *arg)
{
struct drm_i915_private *i915 = arg;
intel_wakeref_t wakeref;
/* Check that we can recover a wedged device with a GPU reset */
igt_global_reset_lock(i915);
wakeref = intel_runtime_pm_get(i915);
i915_gem_set_wedged(i915);
GEM_BUG_ON(!i915_reset_failed(i915));
i915_reset(i915, ALL_ENGINES, NULL);
intel_runtime_pm_put(i915, wakeref);
igt_global_reset_unlock(i915);
return i915_reset_failed(i915) ? -EIO : 0;
}
static bool wait_for_idle(struct intel_engine_cs *engine)
{
return wait_for(intel_engine_is_idle(engine), IGT_IDLE_TIMEOUT) == 0;
@ -453,7 +409,7 @@ static int igt_reset_nop(void *arg)
for (i = 0; i < 16; i++) {
struct i915_request *rq;
rq = i915_request_alloc(engine, ctx);
rq = igt_request_alloc(ctx, engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
break;
@ -479,19 +435,6 @@ static int igt_reset_nop(void *arg)
break;
}
if (!i915_reset_flush(i915)) {
struct drm_printer p =
drm_info_printer(i915->drm.dev);
pr_err("%s failed to idle after reset\n",
engine->name);
intel_engine_dump(engine, &p,
"%s\n", engine->name);
err = -EIO;
break;
}
err = igt_flush_test(i915, 0);
if (err)
break;
@ -565,7 +508,7 @@ static int igt_reset_nop_engine(void *arg)
for (i = 0; i < 16; i++) {
struct i915_request *rq;
rq = i915_request_alloc(engine, ctx);
rq = igt_request_alloc(ctx, engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
break;
@ -594,19 +537,6 @@ static int igt_reset_nop_engine(void *arg)
err = -EINVAL;
break;
}
if (!i915_reset_flush(i915)) {
struct drm_printer p =
drm_info_printer(i915->drm.dev);
pr_err("%s failed to idle after reset\n",
engine->name);
intel_engine_dump(engine, &p,
"%s\n", engine->name);
err = -EIO;
break;
}
} while (time_before(jiffies, end_time));
clear_bit(I915_RESET_ENGINE + id, &i915->gpu_error.flags);
pr_info("%s(%s): %d resets\n", __func__, engine->name, count);
@ -669,6 +599,7 @@ static int __igt_reset_engine(struct drm_i915_private *i915, bool active)
reset_engine_count = i915_reset_engine_count(&i915->gpu_error,
engine);
intel_engine_pm_get(engine);
set_bit(I915_RESET_ENGINE + id, &i915->gpu_error.flags);
do {
if (active) {
@ -721,21 +652,9 @@ static int __igt_reset_engine(struct drm_i915_private *i915, bool active)
err = -EINVAL;
break;
}
if (!i915_reset_flush(i915)) {
struct drm_printer p =
drm_info_printer(i915->drm.dev);
pr_err("%s failed to idle after reset\n",
engine->name);
intel_engine_dump(engine, &p,
"%s\n", engine->name);
err = -EIO;
break;
}
} while (time_before(jiffies, end_time));
clear_bit(I915_RESET_ENGINE + id, &i915->gpu_error.flags);
intel_engine_pm_put(engine);
if (err)
break;
@ -835,7 +754,7 @@ static int active_engine(void *data)
struct i915_request *new;
mutex_lock(&engine->i915->drm.struct_mutex);
new = i915_request_alloc(engine, ctx[idx]);
new = igt_request_alloc(ctx[idx], engine);
if (IS_ERR(new)) {
mutex_unlock(&engine->i915->drm.struct_mutex);
err = PTR_ERR(new);
@ -942,6 +861,7 @@ static int __igt_reset_engines(struct drm_i915_private *i915,
get_task_struct(tsk);
}
intel_engine_pm_get(engine);
set_bit(I915_RESET_ENGINE + id, &i915->gpu_error.flags);
do {
struct i915_request *rq = NULL;
@ -1018,6 +938,7 @@ static int __igt_reset_engines(struct drm_i915_private *i915,
}
} while (time_before(jiffies, end_time));
clear_bit(I915_RESET_ENGINE + id, &i915->gpu_error.flags);
intel_engine_pm_put(engine);
pr_info("i915_reset_engine(%s:%s): %lu resets\n",
engine->name, test_name, count);
@ -1069,7 +990,9 @@ unwind:
if (err)
break;
err = igt_flush_test(i915, 0);
mutex_lock(&i915->drm.struct_mutex);
err = igt_flush_test(i915, I915_WAIT_LOCKED);
mutex_unlock(&i915->drm.struct_mutex);
if (err)
break;
}
@ -1681,44 +1604,8 @@ err_unlock:
return err;
}
static void __preempt_begin(void)
{
preempt_disable();
}
static void __preempt_end(void)
{
preempt_enable();
}
static void __softirq_begin(void)
{
local_bh_disable();
}
static void __softirq_end(void)
{
local_bh_enable();
}
static void __hardirq_begin(void)
{
local_irq_disable();
}
static void __hardirq_end(void)
{
local_irq_enable();
}
struct atomic_section {
const char *name;
void (*critical_section_begin)(void);
void (*critical_section_end)(void);
};
static int __igt_atomic_reset_engine(struct intel_engine_cs *engine,
const struct atomic_section *p,
const struct igt_atomic_section *p,
const char *mode)
{
struct tasklet_struct * const t = &engine->execlists.tasklet;
@ -1743,7 +1630,7 @@ static int __igt_atomic_reset_engine(struct intel_engine_cs *engine,
}
static int igt_atomic_reset_engine(struct intel_engine_cs *engine,
const struct atomic_section *p)
const struct igt_atomic_section *p)
{
struct drm_i915_private *i915 = engine->i915;
struct i915_request *rq;
@ -1794,79 +1681,43 @@ out:
return err;
}
static void force_reset(struct drm_i915_private *i915)
static int igt_reset_engines_atomic(void *arg)
{
i915_gem_set_wedged(i915);
i915_reset(i915, 0, NULL);
}
static int igt_atomic_reset(void *arg)
{
static const struct atomic_section phases[] = {
{ "preempt", __preempt_begin, __preempt_end },
{ "softirq", __softirq_begin, __softirq_end },
{ "hardirq", __hardirq_begin, __hardirq_end },
{ }
};
struct drm_i915_private *i915 = arg;
intel_wakeref_t wakeref;
const typeof(*igt_atomic_phases) *p;
int err = 0;
/* Check that the resets are usable from atomic context */
/* Check that the engine resets are usable from atomic context */
if (!intel_has_reset_engine(i915))
return 0;
if (USES_GUC_SUBMISSION(i915))
return 0; /* guc is dead; long live the guc */
return 0;
igt_global_reset_lock(i915);
mutex_lock(&i915->drm.struct_mutex);
wakeref = intel_runtime_pm_get(i915);
/* Flush any requests before we get started and check basics */
force_reset(i915);
if (i915_reset_failed(i915))
if (!igt_force_reset(i915))
goto unlock;
if (intel_has_gpu_reset(i915)) {
const typeof(*phases) *p;
for (p = phases; p->name; p++) {
GEM_TRACE("intel_gpu_reset under %s\n", p->name);
p->critical_section_begin();
err = intel_gpu_reset(i915, ALL_ENGINES);
p->critical_section_end();
if (err) {
pr_err("intel_gpu_reset failed under %s\n",
p->name);
goto out;
}
}
force_reset(i915);
}
if (intel_has_reset_engine(i915)) {
for (p = igt_atomic_phases; p->name; p++) {
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, i915, id) {
const typeof(*phases) *p;
for (p = phases; p->name; p++) {
err = igt_atomic_reset_engine(engine, p);
if (err)
goto out;
}
err = igt_atomic_reset_engine(engine, p);
if (err)
goto out;
}
}
out:
/* As we poke around the guts, do a full reset before continuing. */
force_reset(i915);
igt_force_reset(i915);
unlock:
intel_runtime_pm_put(i915, wakeref);
mutex_unlock(&i915->drm.struct_mutex);
igt_global_reset_unlock(i915);
@ -1876,21 +1727,19 @@ unlock:
int intel_hangcheck_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_global_reset), /* attempt to recover GPU first */
SUBTEST(igt_wedged_reset),
SUBTEST(igt_hang_sanitycheck),
SUBTEST(igt_reset_nop),
SUBTEST(igt_reset_nop_engine),
SUBTEST(igt_reset_idle_engine),
SUBTEST(igt_reset_active_engine),
SUBTEST(igt_reset_engines),
SUBTEST(igt_reset_engines_atomic),
SUBTEST(igt_reset_queue),
SUBTEST(igt_reset_wait),
SUBTEST(igt_reset_evict_ggtt),
SUBTEST(igt_reset_evict_ppgtt),
SUBTEST(igt_reset_evict_fence),
SUBTEST(igt_handle_error),
SUBTEST(igt_atomic_reset),
};
intel_wakeref_t wakeref;
bool saved_hangcheck;


@ -6,15 +6,15 @@
#include <linux/prime_numbers.h>
#include "../i915_reset.h"
#include "../i915_selftest.h"
#include "igt_flush_test.h"
#include "igt_live_test.h"
#include "igt_spinner.h"
#include "i915_random.h"
#include "mock_context.h"
#include "gt/intel_reset.h"
#include "i915_selftest.h"
#include "selftests/i915_random.h"
#include "selftests/igt_flush_test.h"
#include "selftests/igt_gem_utils.h"
#include "selftests/igt_live_test.h"
#include "selftests/igt_spinner.h"
#include "selftests/lib_sw_fence.h"
#include "selftests/mock_context.h"
static int live_sanitycheck(void *arg)
{
@ -152,7 +152,7 @@ static int live_busywait_preempt(void *arg)
* fails, we hang instead.
*/
lo = i915_request_alloc(engine, ctx_lo);
lo = igt_request_alloc(ctx_lo, engine);
if (IS_ERR(lo)) {
err = PTR_ERR(lo);
goto err_vma;
@ -196,7 +196,7 @@ static int live_busywait_preempt(void *arg)
goto err_vma;
}
hi = i915_request_alloc(engine, ctx_hi);
hi = igt_request_alloc(ctx_hi, engine);
if (IS_ERR(hi)) {
err = PTR_ERR(hi);
goto err_vma;
@ -641,14 +641,19 @@ static struct i915_request *dummy_request(struct intel_engine_cs *engine)
GEM_BUG_ON(i915_request_completed(rq));
i915_sw_fence_init(&rq->submit, dummy_notify);
i915_sw_fence_commit(&rq->submit);
set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
return rq;
}
static void dummy_request_free(struct i915_request *dummy)
{
/* We have to fake the CS interrupt to kick the next request */
i915_sw_fence_commit(&dummy->submit);
i915_request_mark_complete(dummy);
dma_fence_signal(&dummy->fence);
i915_sched_node_fini(&dummy->sched);
i915_sw_fence_fini(&dummy->submit);
@ -861,13 +866,13 @@ static int live_chain_preempt(void *arg)
i915_request_add(rq);
for (i = 0; i < count; i++) {
rq = i915_request_alloc(engine, lo.ctx);
rq = igt_request_alloc(lo.ctx, engine);
if (IS_ERR(rq))
goto err_wedged;
i915_request_add(rq);
}
rq = i915_request_alloc(engine, hi.ctx);
rq = igt_request_alloc(hi.ctx, engine);
if (IS_ERR(rq))
goto err_wedged;
i915_request_add(rq);
@ -886,7 +891,7 @@ static int live_chain_preempt(void *arg)
}
igt_spinner_end(&lo.spin);
rq = i915_request_alloc(engine, lo.ctx);
rq = igt_request_alloc(lo.ctx, engine);
if (IS_ERR(rq))
goto err_wedged;
i915_request_add(rq);
@ -1093,7 +1098,7 @@ static int smoke_submit(struct preempt_smoke *smoke,
ctx->sched.priority = prio;
rq = i915_request_alloc(smoke->engine, ctx);
rq = igt_request_alloc(ctx, smoke->engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto unpin;
@ -1306,6 +1311,504 @@ err_unlock:
return err;
}
static int nop_virtual_engine(struct drm_i915_private *i915,
struct intel_engine_cs **siblings,
unsigned int nsibling,
unsigned int nctx,
unsigned int flags)
#define CHAIN BIT(0)
{
IGT_TIMEOUT(end_time);
struct i915_request *request[16];
struct i915_gem_context *ctx[16];
struct intel_context *ve[16];
unsigned long n, prime, nc;
struct igt_live_test t;
ktime_t times[2] = {};
int err;
GEM_BUG_ON(!nctx || nctx > ARRAY_SIZE(ctx));
for (n = 0; n < nctx; n++) {
ctx[n] = kernel_context(i915);
if (!ctx[n]) {
err = -ENOMEM;
nctx = n;
goto out;
}
ve[n] = intel_execlists_create_virtual(ctx[n],
siblings, nsibling);
if (IS_ERR(ve[n])) {
kernel_context_close(ctx[n]);
err = PTR_ERR(ve[n]);
nctx = n;
goto out;
}
err = intel_context_pin(ve[n]);
if (err) {
intel_context_put(ve[n]);
kernel_context_close(ctx[n]);
nctx = n;
goto out;
}
}
err = igt_live_test_begin(&t, i915, __func__, ve[0]->engine->name);
if (err)
goto out;
for_each_prime_number_from(prime, 1, 8192) {
times[1] = ktime_get_raw();
if (flags & CHAIN) {
for (nc = 0; nc < nctx; nc++) {
for (n = 0; n < prime; n++) {
request[nc] =
i915_request_create(ve[nc]);
if (IS_ERR(request[nc])) {
err = PTR_ERR(request[nc]);
goto out;
}
i915_request_add(request[nc]);
}
}
} else {
for (n = 0; n < prime; n++) {
for (nc = 0; nc < nctx; nc++) {
request[nc] =
i915_request_create(ve[nc]);
if (IS_ERR(request[nc])) {
err = PTR_ERR(request[nc]);
goto out;
}
i915_request_add(request[nc]);
}
}
}
for (nc = 0; nc < nctx; nc++) {
if (i915_request_wait(request[nc],
I915_WAIT_LOCKED,
HZ / 10) < 0) {
pr_err("%s(%s): wait for %llx:%lld timed out\n",
__func__, ve[0]->engine->name,
request[nc]->fence.context,
request[nc]->fence.seqno);
GEM_TRACE("%s(%s) failed at request %llx:%lld\n",
__func__, ve[0]->engine->name,
request[nc]->fence.context,
request[nc]->fence.seqno);
GEM_TRACE_DUMP();
i915_gem_set_wedged(i915);
break;
}
}
times[1] = ktime_sub(ktime_get_raw(), times[1]);
if (prime == 1)
times[0] = times[1];
if (__igt_timeout(end_time, NULL))
break;
}
err = igt_live_test_end(&t);
if (err)
goto out;
pr_info("Requestx%d latencies on %s: 1 = %lluns, %lu = %lluns\n",
nctx, ve[0]->engine->name, ktime_to_ns(times[0]),
prime, div64_u64(ktime_to_ns(times[1]), prime));
out:
if (igt_flush_test(i915, I915_WAIT_LOCKED))
err = -EIO;
for (nc = 0; nc < nctx; nc++) {
intel_context_unpin(ve[nc]);
intel_context_put(ve[nc]);
kernel_context_close(ctx[nc]);
}
return err;
}
static int live_virtual_engine(void *arg)
{
struct drm_i915_private *i915 = arg;
struct intel_engine_cs *siblings[MAX_ENGINE_INSTANCE + 1];
struct intel_engine_cs *engine;
enum intel_engine_id id;
unsigned int class, inst;
int err = -ENODEV;
if (USES_GUC_SUBMISSION(i915))
return 0;
mutex_lock(&i915->drm.struct_mutex);
for_each_engine(engine, i915, id) {
err = nop_virtual_engine(i915, &engine, 1, 1, 0);
if (err) {
pr_err("Failed to wrap engine %s: err=%d\n",
engine->name, err);
goto out_unlock;
}
}
for (class = 0; class <= MAX_ENGINE_CLASS; class++) {
int nsibling, n;
nsibling = 0;
for (inst = 0; inst <= MAX_ENGINE_INSTANCE; inst++) {
if (!i915->engine_class[class][inst])
continue;
siblings[nsibling++] = i915->engine_class[class][inst];
}
if (nsibling < 2)
continue;
for (n = 1; n <= nsibling + 1; n++) {
err = nop_virtual_engine(i915, siblings, nsibling,
n, 0);
if (err)
goto out_unlock;
}
err = nop_virtual_engine(i915, siblings, nsibling, n, CHAIN);
if (err)
goto out_unlock;
}
out_unlock:
mutex_unlock(&i915->drm.struct_mutex);
return err;
}
static int mask_virtual_engine(struct drm_i915_private *i915,
struct intel_engine_cs **siblings,
unsigned int nsibling)
{
struct i915_request *request[MAX_ENGINE_INSTANCE + 1];
struct i915_gem_context *ctx;
struct intel_context *ve;
struct igt_live_test t;
unsigned int n;
int err;
/*
* Check that by setting the execution mask on a request, we can
* restrict it to our desired engine within the virtual engine.
*/
ctx = kernel_context(i915);
if (!ctx)
return -ENOMEM;
ve = intel_execlists_create_virtual(ctx, siblings, nsibling);
if (IS_ERR(ve)) {
err = PTR_ERR(ve);
goto out_close;
}
err = intel_context_pin(ve);
if (err)
goto out_put;
err = igt_live_test_begin(&t, i915, __func__, ve->engine->name);
if (err)
goto out_unpin;
for (n = 0; n < nsibling; n++) {
request[n] = i915_request_create(ve);
if (IS_ERR(request[n])) {
err = PTR_ERR(request[n]);
nsibling = n;
goto out;
}
/* Reverse order as it's more likely to be unnatural */
request[n]->execution_mask = siblings[nsibling - n - 1]->mask;
i915_request_get(request[n]);
i915_request_add(request[n]);
}
for (n = 0; n < nsibling; n++) {
if (i915_request_wait(request[n], I915_WAIT_LOCKED, HZ / 10) < 0) {
pr_err("%s(%s): wait for %llx:%lld timed out\n",
__func__, ve->engine->name,
request[n]->fence.context,
request[n]->fence.seqno);
GEM_TRACE("%s(%s) failed at request %llx:%lld\n",
__func__, ve->engine->name,
request[n]->fence.context,
request[n]->fence.seqno);
GEM_TRACE_DUMP();
i915_gem_set_wedged(i915);
err = -EIO;
goto out;
}
if (request[n]->engine != siblings[nsibling - n - 1]) {
pr_err("Executed on wrong sibling '%s', expected '%s'\n",
request[n]->engine->name,
siblings[nsibling - n - 1]->name);
err = -EINVAL;
goto out;
}
}
err = igt_live_test_end(&t);
if (err)
goto out;
out:
if (igt_flush_test(i915, I915_WAIT_LOCKED))
err = -EIO;
for (n = 0; n < nsibling; n++)
i915_request_put(request[n]);
out_unpin:
intel_context_unpin(ve);
out_put:
intel_context_put(ve);
out_close:
kernel_context_close(ctx);
return err;
}
static int live_virtual_mask(void *arg)
{
struct drm_i915_private *i915 = arg;
struct intel_engine_cs *siblings[MAX_ENGINE_INSTANCE + 1];
unsigned int class, inst;
int err = 0;
if (USES_GUC_SUBMISSION(i915))
return 0;
mutex_lock(&i915->drm.struct_mutex);
for (class = 0; class <= MAX_ENGINE_CLASS; class++) {
unsigned int nsibling;
nsibling = 0;
for (inst = 0; inst <= MAX_ENGINE_INSTANCE; inst++) {
if (!i915->engine_class[class][inst])
break;
siblings[nsibling++] = i915->engine_class[class][inst];
}
if (nsibling < 2)
continue;
err = mask_virtual_engine(i915, siblings, nsibling);
if (err)
goto out_unlock;
}
out_unlock:
mutex_unlock(&i915->drm.struct_mutex);
return err;
}
static int bond_virtual_engine(struct drm_i915_private *i915,
unsigned int class,
struct intel_engine_cs **siblings,
unsigned int nsibling,
unsigned int flags)
#define BOND_SCHEDULE BIT(0)
{
struct intel_engine_cs *master;
struct i915_gem_context *ctx;
struct i915_request *rq[16];
enum intel_engine_id id;
unsigned long n;
int err;
GEM_BUG_ON(nsibling >= ARRAY_SIZE(rq) - 1);
ctx = kernel_context(i915);
if (!ctx)
return -ENOMEM;
err = 0;
rq[0] = ERR_PTR(-ENOMEM);
for_each_engine(master, i915, id) {
struct i915_sw_fence fence = {};
if (master->class == class)
continue;
memset_p((void *)rq, ERR_PTR(-EINVAL), ARRAY_SIZE(rq));
rq[0] = igt_request_alloc(ctx, master);
if (IS_ERR(rq[0])) {
err = PTR_ERR(rq[0]);
goto out;
}
i915_request_get(rq[0]);
if (flags & BOND_SCHEDULE) {
onstack_fence_init(&fence);
err = i915_sw_fence_await_sw_fence_gfp(&rq[0]->submit,
&fence,
GFP_KERNEL);
}
i915_request_add(rq[0]);
if (err < 0)
goto out;
for (n = 0; n < nsibling; n++) {
struct intel_context *ve;
ve = intel_execlists_create_virtual(ctx,
siblings,
nsibling);
if (IS_ERR(ve)) {
err = PTR_ERR(ve);
onstack_fence_fini(&fence);
goto out;
}
err = intel_virtual_engine_attach_bond(ve->engine,
master,
siblings[n]);
if (err) {
intel_context_put(ve);
onstack_fence_fini(&fence);
goto out;
}
err = intel_context_pin(ve);
intel_context_put(ve);
if (err) {
onstack_fence_fini(&fence);
goto out;
}
rq[n + 1] = i915_request_create(ve);
intel_context_unpin(ve);
if (IS_ERR(rq[n + 1])) {
err = PTR_ERR(rq[n + 1]);
onstack_fence_fini(&fence);
goto out;
}
i915_request_get(rq[n + 1]);
err = i915_request_await_execution(rq[n + 1],
&rq[0]->fence,
ve->engine->bond_execute);
i915_request_add(rq[n + 1]);
if (err < 0) {
onstack_fence_fini(&fence);
goto out;
}
}
onstack_fence_fini(&fence);
if (i915_request_wait(rq[0],
I915_WAIT_LOCKED,
HZ / 10) < 0) {
pr_err("Master request did not execute (on %s)!\n",
rq[0]->engine->name);
err = -EIO;
goto out;
}
for (n = 0; n < nsibling; n++) {
if (i915_request_wait(rq[n + 1],
I915_WAIT_LOCKED,
MAX_SCHEDULE_TIMEOUT) < 0) {
err = -EIO;
goto out;
}
if (rq[n + 1]->engine != siblings[n]) {
pr_err("Bonded request did not execute on target engine: expected %s, used %s; master was %s\n",
siblings[n]->name,
rq[n + 1]->engine->name,
rq[0]->engine->name);
err = -EINVAL;
goto out;
}
}
for (n = 0; !IS_ERR(rq[n]); n++)
i915_request_put(rq[n]);
rq[0] = ERR_PTR(-ENOMEM);
}
out:
for (n = 0; !IS_ERR(rq[n]); n++)
i915_request_put(rq[n]);
if (igt_flush_test(i915, I915_WAIT_LOCKED))
err = -EIO;
kernel_context_close(ctx);
return err;
}
static int live_virtual_bond(void *arg)
{
static const struct phase {
const char *name;
unsigned int flags;
} phases[] = {
{ "", 0 },
{ "schedule", BOND_SCHEDULE },
{ },
};
struct drm_i915_private *i915 = arg;
struct intel_engine_cs *siblings[MAX_ENGINE_INSTANCE + 1];
unsigned int class, inst;
int err = 0;
if (USES_GUC_SUBMISSION(i915))
return 0;
mutex_lock(&i915->drm.struct_mutex);
for (class = 0; class <= MAX_ENGINE_CLASS; class++) {
const struct phase *p;
int nsibling;
nsibling = 0;
for (inst = 0; inst <= MAX_ENGINE_INSTANCE; inst++) {
if (!i915->engine_class[class][inst])
break;
GEM_BUG_ON(nsibling == ARRAY_SIZE(siblings));
siblings[nsibling++] = i915->engine_class[class][inst];
}
if (nsibling < 2)
continue;
for (p = phases; p->name; p++) {
err = bond_virtual_engine(i915,
class, siblings, nsibling,
p->flags);
if (err) {
pr_err("%s(%s): failed class=%d, nsibling=%d, err=%d\n",
__func__, p->name, class, nsibling, err);
goto out_unlock;
}
}
}
out_unlock:
mutex_unlock(&i915->drm.struct_mutex);
return err;
}
int intel_execlists_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
@ -1318,6 +1821,9 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
SUBTEST(live_chain_preempt),
SUBTEST(live_preempt_hang),
SUBTEST(live_preempt_smoke),
SUBTEST(live_virtual_engine),
SUBTEST(live_virtual_mask),
SUBTEST(live_virtual_bond),
};
if (!HAS_EXECLISTS(i915))


@ -0,0 +1,118 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2018 Intel Corporation
*/
#include "i915_selftest.h"
#include "selftests/igt_reset.h"
#include "selftests/igt_atomic.h"
static int igt_global_reset(void *arg)
{
struct drm_i915_private *i915 = arg;
unsigned int reset_count;
int err = 0;
/* Check that we can issue a global GPU reset */
igt_global_reset_lock(i915);
reset_count = i915_reset_count(&i915->gpu_error);
i915_reset(i915, ALL_ENGINES, NULL);
if (i915_reset_count(&i915->gpu_error) == reset_count) {
pr_err("No GPU reset recorded!\n");
err = -EINVAL;
}
igt_global_reset_unlock(i915);
if (i915_reset_failed(i915))
err = -EIO;
return err;
}
static int igt_wedged_reset(void *arg)
{
struct drm_i915_private *i915 = arg;
intel_wakeref_t wakeref;
/* Check that we can recover a wedged device with a GPU reset */
igt_global_reset_lock(i915);
wakeref = intel_runtime_pm_get(i915);
i915_gem_set_wedged(i915);
GEM_BUG_ON(!i915_reset_failed(i915));
i915_reset(i915, ALL_ENGINES, NULL);
intel_runtime_pm_put(i915, wakeref);
igt_global_reset_unlock(i915);
return i915_reset_failed(i915) ? -EIO : 0;
}
static int igt_atomic_reset(void *arg)
{
struct drm_i915_private *i915 = arg;
const typeof(*igt_atomic_phases) *p;
int err = 0;
/* Check that the resets are usable from atomic context */
igt_global_reset_lock(i915);
mutex_lock(&i915->drm.struct_mutex);
/* Flush any requests before we get started and check basics */
if (!igt_force_reset(i915))
goto unlock;
for (p = igt_atomic_phases; p->name; p++) {
GEM_TRACE("intel_gpu_reset under %s\n", p->name);
p->critical_section_begin();
reset_prepare(i915);
err = intel_gpu_reset(i915, ALL_ENGINES);
reset_finish(i915);
p->critical_section_end();
if (err) {
pr_err("intel_gpu_reset failed under %s\n", p->name);
break;
}
}
/* As we poke around the guts, do a full reset before continuing. */
igt_force_reset(i915);
unlock:
mutex_unlock(&i915->drm.struct_mutex);
igt_global_reset_unlock(i915);
return err;
}
int intel_reset_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_global_reset), /* attempt to recover GPU first */
SUBTEST(igt_wedged_reset),
SUBTEST(igt_atomic_reset),
};
intel_wakeref_t wakeref;
int err = 0;
if (!intel_has_gpu_reset(i915))
return 0;
if (i915_terminally_wedged(i915))
return -EIO; /* we're long past hope of a successful reset */
with_intel_runtime_pm(i915, wakeref)
err = i915_subtests(tests, i915);
return err;
}
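
For reference, the atomic-context phases consumed by igt_atomic_reset() above were previously defined locally in the hangcheck selftest (see the removed __preempt/__softirq/__hardirq helpers earlier in this diff). The shared table in selftests/igt_atomic.h presumably follows the same shape; the following is a sketch reconstructed from those removed definitions, not the verbatim header:

struct igt_atomic_section {
	const char *name;
	void (*critical_section_begin)(void);
	void (*critical_section_end)(void);
};

static void __preempt_begin(void) { preempt_disable(); }
static void __preempt_end(void) { preempt_enable(); }
static void __softirq_begin(void) { local_bh_disable(); }
static void __softirq_end(void) { local_bh_enable(); }
static void __hardirq_begin(void) { local_irq_disable(); }
static void __hardirq_end(void) { local_irq_enable(); }

/* Terminated by an empty entry, as iterated via p->name above. */
static const struct igt_atomic_section igt_atomic_phases[] = {
	{ "preempt", __preempt_begin, __preempt_end },
	{ "softirq", __softirq_begin, __softirq_end },
	{ "hardirq", __hardirq_begin, __hardirq_end },
	{ }
};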


@ -4,15 +4,16 @@
* Copyright © 2018 Intel Corporation
*/
#include "../i915_selftest.h"
#include "../i915_reset.h"
#include "i915_selftest.h"
#include "intel_reset.h"
#include "igt_flush_test.h"
#include "igt_reset.h"
#include "igt_spinner.h"
#include "igt_wedge_me.h"
#include "mock_context.h"
#include "mock_drm.h"
#include "selftests/igt_flush_test.h"
#include "selftests/igt_gem_utils.h"
#include "selftests/igt_reset.h"
#include "selftests/igt_spinner.h"
#include "selftests/igt_wedge_me.h"
#include "selftests/mock_context.h"
#include "selftests/mock_drm.h"
static const struct wo_register {
enum intel_platform platform;
@ -21,12 +22,13 @@ static const struct wo_register {
{ INTEL_GEMINILAKE, 0x731c }
};
#define REF_NAME_MAX (INTEL_ENGINE_CS_MAX_NAME + 4)
#define REF_NAME_MAX (INTEL_ENGINE_CS_MAX_NAME + 8)
struct wa_lists {
struct i915_wa_list gt_wa_list;
struct {
char name[REF_NAME_MAX];
struct i915_wa_list wa_list;
struct i915_wa_list ctx_wa_list;
} engine[I915_NUM_ENGINES];
};
@ -51,6 +53,12 @@ reference_lists_init(struct drm_i915_private *i915, struct wa_lists *lists)
wa_init_start(wal, name);
engine_init_workarounds(engine, wal);
wa_init_finish(wal);
snprintf(name, REF_NAME_MAX, "%s_CTX_REF", engine->name);
__intel_engine_init_ctx_wa(engine,
&lists->engine[id].ctx_wa_list,
name);
}
}
@ -71,7 +79,6 @@ read_nonprivs(struct i915_gem_context *ctx, struct intel_engine_cs *engine)
{
const u32 base = engine->mmio_base;
struct drm_i915_gem_object *result;
intel_wakeref_t wakeref;
struct i915_request *rq;
struct i915_vma *vma;
u32 srm, *cs;
@ -103,9 +110,7 @@ read_nonprivs(struct i915_gem_context *ctx, struct intel_engine_cs *engine)
if (err)
goto err_obj;
rq = ERR_PTR(-ENODEV);
with_intel_runtime_pm(engine->i915, wakeref)
rq = i915_request_alloc(engine, ctx);
rq = igt_request_alloc(ctx, engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto err_pin;
@ -340,49 +345,6 @@ out:
return err;
}
static struct i915_vma *create_scratch(struct i915_gem_context *ctx)
{
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
void *ptr;
int err;
obj = i915_gem_object_create_internal(ctx->i915, PAGE_SIZE);
if (IS_ERR(obj))
return ERR_CAST(obj);
i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
ptr = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(ptr)) {
err = PTR_ERR(ptr);
goto err_obj;
}
memset(ptr, 0xc5, PAGE_SIZE);
i915_gem_object_flush_map(obj);
i915_gem_object_unpin_map(obj);
vma = i915_vma_instance(obj, &ctx->ppgtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_obj;
}
err = i915_vma_pin(vma, 0, 0, PIN_USER);
if (err)
goto err_obj;
err = i915_gem_object_set_to_cpu_domain(obj, false);
if (err)
goto err_obj;
return vma;
err_obj:
i915_gem_object_put(obj);
return ERR_PTR(err);
}
static struct i915_vma *create_batch(struct i915_gem_context *ctx)
{
struct drm_i915_gem_object *obj;
@ -475,7 +437,7 @@ static int check_dirty_whitelist(struct i915_gem_context *ctx,
int err = 0, i, v;
u32 *cs, *results;
scratch = create_scratch(ctx);
scratch = create_scratch(&ctx->ppgtt->vm, 2 * ARRAY_SIZE(values) + 1);
if (IS_ERR(scratch))
return PTR_ERR(scratch);
@ -557,7 +519,7 @@ static int check_dirty_whitelist(struct i915_gem_context *ctx,
i915_gem_object_unpin_map(batch->obj);
i915_gem_chipset_flush(ctx->i915);
rq = i915_request_alloc(engine, ctx);
rq = igt_request_alloc(ctx, engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto out_batch;
@ -743,26 +705,343 @@ out:
return err;
}
static bool verify_gt_engine_wa(struct drm_i915_private *i915,
struct wa_lists *lists, const char *str)
static int read_whitelisted_registers(struct i915_gem_context *ctx,
struct intel_engine_cs *engine,
struct i915_vma *results)
{
struct i915_request *rq;
int i, err = 0;
u32 srm, *cs;
rq = igt_request_alloc(ctx, engine);
if (IS_ERR(rq))
return PTR_ERR(rq);
srm = MI_STORE_REGISTER_MEM;
if (INTEL_GEN(ctx->i915) >= 8)
srm++;
cs = intel_ring_begin(rq, 4 * engine->whitelist.count);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_req;
}
for (i = 0; i < engine->whitelist.count; i++) {
u64 offset = results->node.start + sizeof(u32) * i;
*cs++ = srm;
*cs++ = i915_mmio_reg_offset(engine->whitelist.list[i].reg);
*cs++ = lower_32_bits(offset);
*cs++ = upper_32_bits(offset);
}
intel_ring_advance(rq, cs);
err_req:
i915_request_add(rq);
if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0)
err = -EIO;
return err;
}
static int scrub_whitelisted_registers(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
{
struct i915_request *rq;
struct i915_vma *batch;
int i, err = 0;
u32 *cs;
batch = create_batch(ctx);
if (IS_ERR(batch))
return PTR_ERR(batch);
cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_batch;
}
*cs++ = MI_LOAD_REGISTER_IMM(engine->whitelist.count);
for (i = 0; i < engine->whitelist.count; i++) {
*cs++ = i915_mmio_reg_offset(engine->whitelist.list[i].reg);
*cs++ = 0xffffffff;
}
*cs++ = MI_BATCH_BUFFER_END;
i915_gem_object_flush_map(batch->obj);
i915_gem_chipset_flush(ctx->i915);
rq = igt_request_alloc(ctx, engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto err_unpin;
}
if (engine->emit_init_breadcrumb) { /* Be nice if we hang */
err = engine->emit_init_breadcrumb(rq);
if (err)
goto err_request;
}
/* Perform the writes from an unprivileged "user" batch */
err = engine->emit_bb_start(rq, batch->node.start, 0, 0);
err_request:
i915_request_add(rq);
if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0)
err = -EIO;
err_unpin:
i915_gem_object_unpin_map(batch->obj);
err_batch:
i915_vma_unpin_and_release(&batch, 0);
return err;
}
struct regmask {
i915_reg_t reg;
unsigned long gen_mask;
};
static bool find_reg(struct drm_i915_private *i915,
i915_reg_t reg,
const struct regmask *tbl,
unsigned long count)
{
u32 offset = i915_mmio_reg_offset(reg);
while (count--) {
if (INTEL_INFO(i915)->gen_mask & tbl->gen_mask &&
i915_mmio_reg_offset(tbl->reg) == offset)
return true;
tbl++;
}
return false;
}
static bool pardon_reg(struct drm_i915_private *i915, i915_reg_t reg)
{
/* Alas, we must pardon some whitelists. Mistakes already made */
static const struct regmask pardon[] = {
{ GEN9_CTX_PREEMPT_REG, INTEL_GEN_MASK(9, 9) },
{ GEN8_L3SQCREG4, INTEL_GEN_MASK(9, 9) },
};
return find_reg(i915, reg, pardon, ARRAY_SIZE(pardon));
}
static bool result_eq(struct intel_engine_cs *engine,
u32 a, u32 b, i915_reg_t reg)
{
if (a != b && !pardon_reg(engine->i915, reg)) {
pr_err("Whitelisted register 0x%4x not context saved: A=%08x, B=%08x\n",
i915_mmio_reg_offset(reg), a, b);
return false;
}
return true;
}
static bool writeonly_reg(struct drm_i915_private *i915, i915_reg_t reg)
{
/* Some registers do not seem to behave, and our writes are unreadable */
static const struct regmask wo[] = {
{ GEN9_SLICE_COMMON_ECO_CHICKEN1, INTEL_GEN_MASK(9, 9) },
};
return find_reg(i915, reg, wo, ARRAY_SIZE(wo));
}
static bool result_neq(struct intel_engine_cs *engine,
u32 a, u32 b, i915_reg_t reg)
{
if (a == b && !writeonly_reg(engine->i915, reg)) {
pr_err("Whitelist register 0x%4x:%08x was unwritable\n",
i915_mmio_reg_offset(reg), a);
return false;
}
return true;
}
static int
check_whitelisted_registers(struct intel_engine_cs *engine,
struct i915_vma *A,
struct i915_vma *B,
bool (*fn)(struct intel_engine_cs *engine,
u32 a, u32 b,
i915_reg_t reg))
{
u32 *a, *b;
int i, err;
a = i915_gem_object_pin_map(A->obj, I915_MAP_WB);
if (IS_ERR(a))
return PTR_ERR(a);
b = i915_gem_object_pin_map(B->obj, I915_MAP_WB);
if (IS_ERR(b)) {
err = PTR_ERR(b);
goto err_a;
}
err = 0;
for (i = 0; i < engine->whitelist.count; i++) {
if (!fn(engine, a[i], b[i], engine->whitelist.list[i].reg))
err = -EINVAL;
}
i915_gem_object_unpin_map(B->obj);
err_a:
i915_gem_object_unpin_map(A->obj);
return err;
}
static int live_isolated_whitelist(void *arg)
{
struct drm_i915_private *i915 = arg;
struct {
struct i915_gem_context *ctx;
struct i915_vma *scratch[2];
} client[2] = {};
struct intel_engine_cs *engine;
enum intel_engine_id id;
int i, err = 0;
/*
* Check that a write into a whitelist register works, but
* invisible to a second context.
*/
if (!intel_engines_has_context_isolation(i915))
return 0;
if (!i915->kernel_context->ppgtt)
return 0;
for (i = 0; i < ARRAY_SIZE(client); i++) {
struct i915_gem_context *c;
c = kernel_context(i915);
if (IS_ERR(c)) {
err = PTR_ERR(c);
goto err;
}
client[i].scratch[0] = create_scratch(&c->ppgtt->vm, 1024);
if (IS_ERR(client[i].scratch[0])) {
err = PTR_ERR(client[i].scratch[0]);
kernel_context_close(c);
goto err;
}
client[i].scratch[1] = create_scratch(&c->ppgtt->vm, 1024);
if (IS_ERR(client[i].scratch[1])) {
err = PTR_ERR(client[i].scratch[1]);
i915_vma_unpin_and_release(&client[i].scratch[0], 0);
kernel_context_close(c);
goto err;
}
client[i].ctx = c;
}
for_each_engine(engine, i915, id) {
if (!engine->whitelist.count)
continue;
/* Read default values */
err = read_whitelisted_registers(client[0].ctx, engine,
client[0].scratch[0]);
if (err)
goto err;
/* Try to overwrite registers (should only affect ctx0) */
err = scrub_whitelisted_registers(client[0].ctx, engine);
if (err)
goto err;
/* Read values from ctx1, we expect these to be defaults */
err = read_whitelisted_registers(client[1].ctx, engine,
client[1].scratch[0]);
if (err)
goto err;
/* Verify that both reads return the same default values */
err = check_whitelisted_registers(engine,
client[0].scratch[0],
client[1].scratch[0],
result_eq);
if (err)
goto err;
/* Read back the updated values in ctx0 */
err = read_whitelisted_registers(client[0].ctx, engine,
client[0].scratch[1]);
if (err)
goto err;
/* User should be granted privilege to overwrite regs */
err = check_whitelisted_registers(engine,
client[0].scratch[0],
client[0].scratch[1],
result_neq);
if (err)
goto err;
}
err:
for (i = 0; i < ARRAY_SIZE(client); i++) {
if (!client[i].ctx)
break;
i915_vma_unpin_and_release(&client[i].scratch[1], 0);
i915_vma_unpin_and_release(&client[i].scratch[0], 0);
kernel_context_close(client[i].ctx);
}
if (igt_flush_test(i915, I915_WAIT_LOCKED))
err = -EIO;
return err;
}
static bool
verify_wa_lists(struct i915_gem_context *ctx, struct wa_lists *lists,
const char *str)
{
struct drm_i915_private *i915 = ctx->i915;
struct i915_gem_engines_iter it;
struct intel_context *ce;
bool ok = true;
ok &= wa_list_verify(&i915->uncore, &lists->gt_wa_list, str);
for_each_engine(engine, i915, id)
ok &= wa_list_verify(engine->uncore,
&lists->engine[id].wa_list, str);
for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
enum intel_engine_id id = ce->engine->id;
ok &= engine_wa_list_verify(ce,
&lists->engine[id].wa_list,
str) == 0;
ok &= engine_wa_list_verify(ce,
&lists->engine[id].ctx_wa_list,
str) == 0;
}
i915_gem_context_unlock_engines(ctx);
return ok;
}
static int
live_gpu_reset_gt_engine_workarounds(void *arg)
live_gpu_reset_workarounds(void *arg)
{
struct drm_i915_private *i915 = arg;
struct i915_gem_context *ctx;
intel_wakeref_t wakeref;
struct wa_lists lists;
bool ok;
@ -770,6 +1049,10 @@ live_gpu_reset_gt_engine_workarounds(void *arg)
if (!intel_has_gpu_reset(i915))
return 0;
ctx = kernel_context(i915);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
pr_info("Verifying after GPU reset...\n");
igt_global_reset_lock(i915);
@ -777,15 +1060,16 @@ live_gpu_reset_gt_engine_workarounds(void *arg)
reference_lists_init(i915, &lists);
ok = verify_gt_engine_wa(i915, &lists, "before reset");
ok = verify_wa_lists(ctx, &lists, "before reset");
if (!ok)
goto out;
i915_reset(i915, ALL_ENGINES, "live_workarounds");
ok = verify_gt_engine_wa(i915, &lists, "after reset");
ok = verify_wa_lists(ctx, &lists, "after reset");
out:
kernel_context_close(ctx);
reference_lists_fini(i915, &lists);
intel_runtime_pm_put(i915, wakeref);
igt_global_reset_unlock(i915);
@ -794,7 +1078,7 @@ out:
}
static int
live_engine_reset_gt_engine_workarounds(void *arg)
live_engine_reset_workarounds(void *arg)
{
struct drm_i915_private *i915 = arg;
struct intel_engine_cs *engine;
@ -823,7 +1107,7 @@ live_engine_reset_gt_engine_workarounds(void *arg)
pr_info("Verifying after %s reset...\n", engine->name);
ok = verify_gt_engine_wa(i915, &lists, "before reset");
ok = verify_wa_lists(ctx, &lists, "before reset");
if (!ok) {
ret = -ESRCH;
goto err;
@ -831,7 +1115,7 @@ live_engine_reset_gt_engine_workarounds(void *arg)
i915_reset_engine(engine, "live_workarounds");
ok = verify_gt_engine_wa(i915, &lists, "after idle reset");
ok = verify_wa_lists(ctx, &lists, "after idle reset");
if (!ok) {
ret = -ESRCH;
goto err;
@ -862,7 +1146,7 @@ live_engine_reset_gt_engine_workarounds(void *arg)
igt_spinner_end(&spin);
igt_spinner_fini(&spin);
ok = verify_gt_engine_wa(i915, &lists, "after busy reset");
ok = verify_wa_lists(ctx, &lists, "after busy reset");
if (!ok) {
ret = -ESRCH;
goto err;
@ -885,8 +1169,9 @@ int intel_workarounds_live_selftests(struct drm_i915_private *i915)
static const struct i915_subtest tests[] = {
SUBTEST(live_dirty_whitelist),
SUBTEST(live_reset_whitelist),
SUBTEST(live_gpu_reset_gt_engine_workarounds),
SUBTEST(live_engine_reset_gt_engine_workarounds),
SUBTEST(live_isolated_whitelist),
SUBTEST(live_gpu_reset_workarounds),
SUBTEST(live_engine_reset_workarounds),
};
int err;


@ -149,9 +149,9 @@ struct intel_vgpu_submission_ops {
struct intel_vgpu_submission {
struct intel_vgpu_execlist execlist[I915_NUM_ENGINES];
struct list_head workload_q_head[I915_NUM_ENGINES];
struct intel_context *shadow[I915_NUM_ENGINES];
struct kmem_cache *workloads;
atomic_t running_workload_num;
struct i915_gem_context *shadow_ctx;
union {
u64 i915_context_pml4;
u64 i915_context_pdps[GEN8_3LVL_PDPES];


@ -1576,7 +1576,7 @@ hw_id_show(struct device *dev, struct device_attribute *attr,
struct intel_vgpu *vgpu = (struct intel_vgpu *)
mdev_get_drvdata(mdev);
return sprintf(buf, "%u\n",
vgpu->submission.shadow_ctx->hw_id);
vgpu->submission.shadow[0]->gem_context->hw_id);
}
return sprintf(buf, "\n");
}


@ -493,8 +493,7 @@ static void switch_mmio(struct intel_vgpu *pre,
* itself.
*/
if (mmio->in_context &&
!is_inhibit_context(intel_context_lookup(s->shadow_ctx,
dev_priv->engine[ring_id])))
!is_inhibit_context(s->shadow[ring_id]))
continue;
if (mmio->mask)


@ -36,6 +36,7 @@
#include <linux/kthread.h>
#include "i915_drv.h"
#include "i915_gem_pm.h"
#include "gvt.h"
#define RING_CTX_OFF(x) \
@ -277,18 +278,23 @@ static int shadow_context_status_change(struct notifier_block *nb,
return NOTIFY_OK;
}
static void shadow_context_descriptor_update(struct intel_context *ce)
static void
shadow_context_descriptor_update(struct intel_context *ce,
struct intel_vgpu_workload *workload)
{
u64 desc = 0;
u64 desc = ce->lrc_desc;
desc = ce->lrc_desc;
/* Update bits 0-11 of the context descriptor which includes flags
/*
* Update bits 0-11 of the context descriptor which includes flags
* like GEN8_CTX_* cached in desc_template
*/
desc &= U64_MAX << 12;
desc |= ce->gem_context->desc_template & ((1ULL << 12) - 1);
desc &= ~(0x3 << GEN8_CTX_ADDRESSING_MODE_SHIFT);
desc |= workload->ctx_desc.addressing_mode <<
GEN8_CTX_ADDRESSING_MODE_SHIFT;
ce->lrc_desc = desc;
}
@ -382,26 +388,22 @@ intel_gvt_workload_req_alloc(struct intel_vgpu_workload *workload)
{
struct intel_vgpu *vgpu = workload->vgpu;
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];
struct i915_request *rq;
int ret = 0;
lockdep_assert_held(&dev_priv->drm.struct_mutex);
if (workload->req)
goto out;
return 0;
rq = i915_request_alloc(engine, shadow_ctx);
rq = i915_request_create(s->shadow[workload->ring_id]);
if (IS_ERR(rq)) {
gvt_vgpu_err("fail to allocate gem request\n");
ret = PTR_ERR(rq);
goto out;
return PTR_ERR(rq);
}
workload->req = i915_request_get(rq);
out:
return ret;
return 0;
}
/**
@ -416,10 +418,7 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
{
struct intel_vgpu *vgpu = workload->vgpu;
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];
struct intel_context *ce;
int ret;
lockdep_assert_held(&dev_priv->drm.struct_mutex);
@ -427,29 +426,13 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
if (workload->shadow)
return 0;
/* pin shadow context by gvt even the shadow context will be pinned
* when i915 alloc request. That is because gvt will update the guest
* context from shadow context when workload is completed, and at that
* moment, i915 may already unpined the shadow context to make the
* shadow_ctx pages invalid. So gvt need to pin itself. After update
* the guest context, gvt can unpin the shadow_ctx safely.
*/
ce = intel_context_pin(shadow_ctx, engine);
if (IS_ERR(ce)) {
gvt_vgpu_err("fail to pin shadow context\n");
return PTR_ERR(ce);
}
shadow_ctx->desc_template &= ~(0x3 << GEN8_CTX_ADDRESSING_MODE_SHIFT);
shadow_ctx->desc_template |= workload->ctx_desc.addressing_mode <<
GEN8_CTX_ADDRESSING_MODE_SHIFT;
if (!test_and_set_bit(workload->ring_id, s->shadow_ctx_desc_updated))
shadow_context_descriptor_update(ce);
shadow_context_descriptor_update(s->shadow[workload->ring_id],
workload);
ret = intel_gvt_scan_and_shadow_ringbuffer(workload);
if (ret)
goto err_unpin;
return ret;
if (workload->ring_id == RCS0 && workload->wa_ctx.indirect_ctx.size) {
ret = intel_gvt_scan_and_shadow_wa_ctx(&workload->wa_ctx);
@ -461,8 +444,6 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
return 0;
err_shadow:
release_shadow_wa_ctx(&workload->wa_ctx);
err_unpin:
intel_context_unpin(ce);
return ret;
}
@ -689,7 +670,6 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
struct intel_vgpu *vgpu = workload->vgpu;
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
struct intel_vgpu_submission *s = &vgpu->submission;
struct i915_gem_context *shadow_ctx = s->shadow_ctx;
struct i915_request *rq;
int ring_id = workload->ring_id;
int ret;
@ -700,7 +680,8 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
mutex_lock(&vgpu->vgpu_lock);
mutex_lock(&dev_priv->drm.struct_mutex);
ret = set_context_ppgtt_from_shadow(workload, shadow_ctx);
ret = set_context_ppgtt_from_shadow(workload,
s->shadow[ring_id]->gem_context);
if (ret < 0) {
gvt_vgpu_err("workload shadow ppgtt isn't ready\n");
goto err_req;
@ -928,11 +909,6 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
intel_vgpu_trigger_virtual_event(vgpu, event);
}
/* unpin shadow ctx as the shadow_ctx update is done */
mutex_lock(&rq->i915->drm.struct_mutex);
intel_context_unpin(rq->hw_context);
mutex_unlock(&rq->i915->drm.struct_mutex);
i915_request_put(fetch_and_zero(&workload->req));
}
@ -1011,8 +987,6 @@ static int workload_thread(void *priv)
workload->ring_id, workload,
workload->vgpu->id);
intel_runtime_pm_get(gvt->dev_priv);
gvt_dbg_sched("ring id %d will dispatch workload %p\n",
workload->ring_id, workload);
@ -1042,7 +1016,6 @@ complete:
intel_uncore_forcewake_put(&gvt->dev_priv->uncore,
FORCEWAKE_ALL);
intel_runtime_pm_put_unchecked(gvt->dev_priv);
if (ret && (vgpu_is_vm_unhealthy(ret)))
enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
}
@ -1125,17 +1098,17 @@ err:
}
static void
i915_context_ppgtt_root_restore(struct intel_vgpu_submission *s)
i915_context_ppgtt_root_restore(struct intel_vgpu_submission *s,
struct i915_hw_ppgtt *ppgtt)
{
struct i915_hw_ppgtt *i915_ppgtt = s->shadow_ctx->ppgtt;
int i;
if (i915_vm_is_4lvl(&i915_ppgtt->vm)) {
px_dma(&i915_ppgtt->pml4) = s->i915_context_pml4;
if (i915_vm_is_4lvl(&ppgtt->vm)) {
px_dma(&ppgtt->pml4) = s->i915_context_pml4;
} else {
for (i = 0; i < GEN8_3LVL_PDPES; i++)
px_dma(i915_ppgtt->pdp.page_directory[i]) =
s->i915_context_pdps[i];
px_dma(ppgtt->pdp.page_directory[i]) =
s->i915_context_pdps[i];
}
}
@ -1149,10 +1122,15 @@ i915_context_ppgtt_root_restore(struct intel_vgpu_submission *s)
void intel_vgpu_clean_submission(struct intel_vgpu *vgpu)
{
struct intel_vgpu_submission *s = &vgpu->submission;
struct intel_engine_cs *engine;
enum intel_engine_id id;
intel_vgpu_select_submission_ops(vgpu, ALL_ENGINES, 0);
i915_context_ppgtt_root_restore(s);
i915_gem_context_put(s->shadow_ctx);
i915_context_ppgtt_root_restore(s, s->shadow[0]->gem_context->ppgtt);
for_each_engine(engine, vgpu->gvt->dev_priv, id)
intel_context_unpin(s->shadow[id]);
kmem_cache_destroy(s->workloads);
}
@ -1178,17 +1156,17 @@ void intel_vgpu_reset_submission(struct intel_vgpu *vgpu,
}
static void
i915_context_ppgtt_root_save(struct intel_vgpu_submission *s)
i915_context_ppgtt_root_save(struct intel_vgpu_submission *s,
struct i915_hw_ppgtt *ppgtt)
{
struct i915_hw_ppgtt *i915_ppgtt = s->shadow_ctx->ppgtt;
int i;
if (i915_vm_is_4lvl(&i915_ppgtt->vm))
s->i915_context_pml4 = px_dma(&i915_ppgtt->pml4);
else {
if (i915_vm_is_4lvl(&ppgtt->vm)) {
s->i915_context_pml4 = px_dma(&ppgtt->pml4);
} else {
for (i = 0; i < GEN8_3LVL_PDPES; i++)
s->i915_context_pdps[i] =
px_dma(i915_ppgtt->pdp.page_directory[i]);
px_dma(ppgtt->pdp.page_directory[i]);
}
}
@ -1205,16 +1183,36 @@ i915_context_ppgtt_root_save(struct intel_vgpu_submission *s)
int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
{
struct intel_vgpu_submission *s = &vgpu->submission;
enum intel_engine_id i;
struct intel_engine_cs *engine;
struct i915_gem_context *ctx;
enum intel_engine_id i;
int ret;
s->shadow_ctx = i915_gem_context_create_gvt(
&vgpu->gvt->dev_priv->drm);
if (IS_ERR(s->shadow_ctx))
return PTR_ERR(s->shadow_ctx);
ctx = i915_gem_context_create_gvt(&vgpu->gvt->dev_priv->drm);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
i915_context_ppgtt_root_save(s);
i915_context_ppgtt_root_save(s, ctx->ppgtt);
for_each_engine(engine, vgpu->gvt->dev_priv, i) {
struct intel_context *ce;
INIT_LIST_HEAD(&s->workload_q_head[i]);
s->shadow[i] = ERR_PTR(-EINVAL);
ce = i915_gem_context_get_engine(ctx, i);
if (IS_ERR(ce)) {
ret = PTR_ERR(ce);
goto out_shadow_ctx;
}
ret = intel_context_pin(ce);
intel_context_put(ce);
if (ret)
goto out_shadow_ctx;
s->shadow[i] = ce;
}
bitmap_zero(s->shadow_ctx_desc_updated, I915_NUM_ENGINES);
@ -1230,16 +1228,21 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
goto out_shadow_ctx;
}
for_each_engine(engine, vgpu->gvt->dev_priv, i)
INIT_LIST_HEAD(&s->workload_q_head[i]);
atomic_set(&s->running_workload_num, 0);
bitmap_zero(s->tlb_handle_pending, I915_NUM_ENGINES);
i915_gem_context_put(ctx);
return 0;
out_shadow_ctx:
i915_gem_context_put(s->shadow_ctx);
i915_context_ppgtt_root_restore(s, ctx->ppgtt);
for_each_engine(engine, vgpu->gvt->dev_priv, i) {
if (IS_ERR(s->shadow[i]))
break;
intel_context_unpin(s->shadow[i]);
}
i915_gem_context_put(ctx);
return ret;
}


@ -25,8 +25,9 @@
*
*/
#include "gt/intel_engine.h"
#include "i915_drv.h"
#include "intel_ringbuffer.h"
/**
* DOC: batch buffer command parser


@ -32,7 +32,12 @@
#include <drm/drm_debugfs.h>
#include <drm/drm_fourcc.h>
#include "i915_reset.h"
#include "gt/intel_reset.h"
#include "i915_debugfs.h"
#include "i915_gem_context.h"
#include "i915_irq.h"
#include "intel_csr.h"
#include "intel_dp.h"
#include "intel_drv.h"
#include "intel_fbc.h"
@ -41,6 +46,7 @@
#include "intel_hdmi.h"
#include "intel_pm.h"
#include "intel_psr.h"
#include "intel_sideband.h"
static inline struct drm_i915_private *node_to_i915(struct drm_info_node *node)
{
@ -206,6 +212,18 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
vma->ggtt_view.rotated.plane[1].offset);
break;
case I915_GGTT_VIEW_REMAPPED:
seq_printf(m, ", remapped [(%ux%u, stride=%u, offset=%u), (%ux%u, stride=%u, offset=%u)]",
vma->ggtt_view.remapped.plane[0].width,
vma->ggtt_view.remapped.plane[0].height,
vma->ggtt_view.remapped.plane[0].stride,
vma->ggtt_view.remapped.plane[0].offset,
vma->ggtt_view.remapped.plane[1].width,
vma->ggtt_view.remapped.plane[1].height,
vma->ggtt_view.remapped.plane[1].stride,
vma->ggtt_view.remapped.plane[1].offset);
break;
default:
MISSING_CASE(vma->ggtt_view.type);
break;
@ -395,14 +413,17 @@ static void print_context_stats(struct seq_file *m,
struct i915_gem_context *ctx;
list_for_each_entry(ctx, &i915->contexts.list, link) {
struct i915_gem_engines_iter it;
struct intel_context *ce;
list_for_each_entry(ce, &ctx->active_engines, active_link) {
for_each_gem_engine(ce,
i915_gem_context_lock_engines(ctx), it) {
if (ce->state)
per_file_stats(0, ce->state->obj, &kstats);
if (ce->ring)
per_file_stats(0, ce->ring->vma->obj, &kstats);
}
i915_gem_context_unlock_engines(ctx);
if (!IS_ERR_OR_NULL(ctx->file_priv)) {
struct file_stats stats = { .vm = &ctx->ppgtt->vm, };
@ -1045,8 +1066,6 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
u32 rpmodectl, freq_sts;
mutex_lock(&dev_priv->pcu_lock);
rpmodectl = I915_READ(GEN6_RP_CONTROL);
seq_printf(m, "Video Turbo Mode: %s\n",
yesno(rpmodectl & GEN6_RP_MEDIA_TURBO));
@ -1056,7 +1075,10 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
yesno((rpmodectl & GEN6_RP_MEDIA_MODE_MASK) ==
GEN6_RP_MEDIA_SW_MODE));
vlv_punit_get(dev_priv);
freq_sts = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
vlv_punit_put(dev_priv);
seq_printf(m, "PUNIT_REG_GPU_FREQ_STS: 0x%08x\n", freq_sts);
seq_printf(m, "DDR freq: %d MHz\n", dev_priv->mem_freq);
@ -1078,7 +1100,6 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
seq_printf(m,
"efficient (RPe) frequency: %d MHz\n",
intel_gpu_freq(dev_priv, rps->efficient_freq));
mutex_unlock(&dev_priv->pcu_lock);
} else if (INTEL_GEN(dev_priv) >= 6) {
u32 rp_state_limits;
u32 gt_perf_status;
@ -1279,7 +1300,6 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
struct drm_i915_private *dev_priv = node_to_i915(m->private);
struct intel_engine_cs *engine;
u64 acthd[I915_NUM_ENGINES];
u32 seqno[I915_NUM_ENGINES];
struct intel_instdone instdone;
intel_wakeref_t wakeref;
enum intel_engine_id id;
@ -1296,10 +1316,8 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
}
with_intel_runtime_pm(dev_priv, wakeref) {
for_each_engine(engine, dev_priv, id) {
for_each_engine(engine, dev_priv, id)
acthd[id] = intel_engine_get_active_head(engine);
seqno[id] = intel_engine_get_hangcheck_seqno(engine);
}
intel_engine_get_instdone(dev_priv->engine[RCS0], &instdone);
}
@ -1316,11 +1334,8 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
seq_printf(m, "GT active? %s\n", yesno(dev_priv->gt.awake));
for_each_engine(engine, dev_priv, id) {
seq_printf(m, "%s:\n", engine->name);
seq_printf(m, "\tseqno = %x [current %x, last %x], %dms ago\n",
engine->hangcheck.last_seqno,
seqno[id],
engine->hangcheck.next_seqno,
seq_printf(m, "%s: %d ms ago\n",
engine->name,
jiffies_to_msecs(jiffies -
engine->hangcheck.action_timestamp));
@ -1483,12 +1498,9 @@ static int gen6_drpc_info(struct seq_file *m)
gen9_powergate_status = I915_READ(GEN9_PWRGT_DOMAIN_STATUS);
}
if (INTEL_GEN(dev_priv) <= 7) {
mutex_lock(&dev_priv->pcu_lock);
if (INTEL_GEN(dev_priv) <= 7)
sandybridge_pcode_read(dev_priv, GEN6_PCODE_READ_RC6VIDS,
&rc6vids);
mutex_unlock(&dev_priv->pcu_lock);
}
seq_printf(m, "RC1e Enabled: %s\n",
yesno(rcctl1 & GEN6_RC_CTL_RC1e_ENABLE));
@ -1752,17 +1764,10 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
unsigned int max_gpu_freq, min_gpu_freq;
intel_wakeref_t wakeref;
int gpu_freq, ia_freq;
int ret;
if (!HAS_LLC(dev_priv))
return -ENODEV;
wakeref = intel_runtime_pm_get(dev_priv);
ret = mutex_lock_interruptible(&dev_priv->pcu_lock);
if (ret)
goto out;
min_gpu_freq = rps->min_freq;
max_gpu_freq = rps->max_freq;
if (IS_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) {
@ -1773,6 +1778,7 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
seq_puts(m, "GPU freq (MHz)\tEffective CPU freq (MHz)\tEffective Ring freq (MHz)\n");
wakeref = intel_runtime_pm_get(dev_priv);
for (gpu_freq = min_gpu_freq; gpu_freq <= max_gpu_freq; gpu_freq++) {
ia_freq = gpu_freq;
sandybridge_pcode_read(dev_priv,
@ -1786,12 +1792,9 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
((ia_freq >> 0) & 0xff) * 100,
((ia_freq >> 8) & 0xff) * 100);
}
mutex_unlock(&dev_priv->pcu_lock);
out:
intel_runtime_pm_put(dev_priv, wakeref);
return ret;
return 0;
}
static int i915_opregion(struct seq_file *m, void *unused)
@ -1892,6 +1895,7 @@ static int i915_context_status(struct seq_file *m, void *unused)
return ret;
list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
struct i915_gem_engines_iter it;
struct intel_context *ce;
seq_puts(m, "HW context ");
@ -1916,7 +1920,8 @@ static int i915_context_status(struct seq_file *m, void *unused)
seq_putc(m, ctx->remap_slice ? 'R' : 'r');
seq_putc(m, '\n');
list_for_each_entry(ce, &ctx->active_engines, active_link) {
for_each_gem_engine(ce,
i915_gem_context_lock_engines(ctx), it) {
seq_printf(m, "%s: ", ce->engine->name);
if (ce->state)
describe_obj(m, ce->state->obj);
@ -1924,6 +1929,7 @@ static int i915_context_status(struct seq_file *m, void *unused)
describe_ctx_ring(m, ce->ring);
seq_putc(m, '\n');
}
i915_gem_context_unlock_engines(ctx);
seq_putc(m, '\n');
}
@ -2028,11 +2034,11 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
with_intel_runtime_pm_if_in_use(dev_priv, wakeref) {
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
mutex_lock(&dev_priv->pcu_lock);
vlv_punit_get(dev_priv);
act_freq = vlv_punit_read(dev_priv,
PUNIT_REG_GPU_FREQ_STS);
vlv_punit_put(dev_priv);
act_freq = (act_freq >> 8) & 0xff;
mutex_unlock(&dev_priv->pcu_lock);
} else {
act_freq = intel_get_cagf(dev_priv,
I915_READ(GEN6_RPSTAT1));
@ -2040,8 +2046,7 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
}
seq_printf(m, "RPS enabled? %d\n", rps->enabled);
seq_printf(m, "GPU busy? %s [%d requests]\n",
yesno(dev_priv->gt.awake), dev_priv->gt.active_requests);
seq_printf(m, "GPU busy? %s\n", yesno(dev_priv->gt.awake));
seq_printf(m, "Boosts outstanding? %d\n",
atomic_read(&rps->num_waiters));
seq_printf(m, "Interactive? %d\n", READ_ONCE(rps->power.interactive));
@ -2060,9 +2065,7 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
seq_printf(m, "Wait boosts: %d\n", atomic_read(&rps->boosts));
if (INTEL_GEN(dev_priv) >= 6 &&
rps->enabled &&
dev_priv->gt.active_requests) {
if (INTEL_GEN(dev_priv) >= 6 && rps->enabled && dev_priv->gt.awake) {
u32 rpup, rpupei;
u32 rpdown, rpdownei;
@ -3091,9 +3094,9 @@ static int i915_engine_info(struct seq_file *m, void *unused)
wakeref = intel_runtime_pm_get(dev_priv);
seq_printf(m, "GT awake? %s\n", yesno(dev_priv->gt.awake));
seq_printf(m, "Global active requests: %d\n",
dev_priv->gt.active_requests);
seq_printf(m, "GT awake? %s [%d]\n",
yesno(dev_priv->gt.awake),
atomic_read(&dev_priv->gt.wakeref.count));
seq_printf(m, "CS timestamp frequency: %u kHz\n",
RUNTIME_INFO(dev_priv)->cs_timestamp_frequency_khz);
@ -3904,14 +3907,26 @@ i915_drop_caches_set(void *data, u64 val)
/* No need to check and wait for gpu resets, only libdrm auto-restarts
* on ioctls on -EAGAIN. */
if (val & (DROP_ACTIVE | DROP_RETIRE | DROP_RESET_SEQNO)) {
if (val & (DROP_ACTIVE | DROP_IDLE | DROP_RETIRE | DROP_RESET_SEQNO)) {
int ret;
ret = mutex_lock_interruptible(&i915->drm.struct_mutex);
if (ret)
return ret;
if (val & DROP_ACTIVE)
/*
* To finish the flush of the idle_worker, we must complete
* the switch-to-kernel-context, which requires a double
* pass through wait_for_idle: first queues the switch,
* second waits for the switch.
*/
if (ret == 0 && val & (DROP_IDLE | DROP_ACTIVE))
ret = i915_gem_wait_for_idle(i915,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_LOCKED,
MAX_SCHEDULE_TIMEOUT);
if (ret == 0 && val & DROP_IDLE)
ret = i915_gem_wait_for_idle(i915,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_LOCKED,
@ -3938,11 +3953,8 @@ i915_drop_caches_set(void *data, u64 val)
fs_reclaim_release(GFP_KERNEL);
if (val & DROP_IDLE) {
do {
if (READ_ONCE(i915->gt.active_requests))
flush_delayed_work(&i915->gt.retire_work);
drain_delayed_work(&i915->gt.idle_work);
} while (READ_ONCE(i915->gt.awake));
flush_delayed_work(&i915->gem.retire_work);
flush_work(&i915->gem.idle_work);
}
if (val & DROP_FREED)
@ -4757,6 +4769,7 @@ static int i915_hdcp_sink_capability_show(struct seq_file *m, void *data)
{
struct drm_connector *connector = m->private;
struct intel_connector *intel_connector = to_intel_connector(connector);
bool hdcp_cap, hdcp2_cap;
if (connector->status != connector_status_connected)
return -ENODEV;
@ -4767,8 +4780,16 @@ static int i915_hdcp_sink_capability_show(struct seq_file *m, void *data)
seq_printf(m, "%s:%d HDCP version: ", connector->name,
connector->base.id);
seq_printf(m, "%s ", !intel_hdcp_capable(intel_connector) ?
"None" : "HDCP1.4");
hdcp_cap = intel_hdcp_capable(intel_connector);
hdcp2_cap = intel_hdcp2_capable(intel_connector);
if (hdcp_cap)
seq_puts(m, "HDCP1.4 ");
if (hdcp2_cap)
seq_puts(m, "HDCP2.2 ");
if (!hdcp_cap && !hdcp2_cap)
seq_puts(m, "None");
seq_puts(m, "\n");
return 0;


@ -0,0 +1,20 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2019 Intel Corporation
*/
#ifndef __I915_DEBUGFS_H__
#define __I915_DEBUGFS_H__
struct drm_i915_private;
struct drm_connector;
#ifdef CONFIG_DEBUG_FS
int i915_debugfs_register(struct drm_i915_private *dev_priv);
int i915_debugfs_connector_add(struct drm_connector *connector);
#else
static inline int i915_debugfs_register(struct drm_i915_private *dev_priv) { return 0; }
static inline int i915_debugfs_connector_add(struct drm_connector *connector) { return 0; }
#endif
#endif /* __I915_DEBUGFS_H__ */
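The new i915_debugfs.h header uses the usual kernel stub idiom: when CONFIG_DEBUG_FS is off, the entry points collapse into inline no-ops so callers never need their own #ifdef. A minimal, self-contained sketch of that idiom follows; CONFIG_FEATURE_FOO and foo_register() are invented names, not part of the driver.

/* stub_pattern.c - illustrative only; build with and without -DCONFIG_FEATURE_FOO */
#include <stdio.h>

#ifdef CONFIG_FEATURE_FOO
static int foo_register(void)
{
	printf("foo: registered\n");
	return 0;
}
#else
/* The stub keeps every call site identical whether or not the feature is built in. */
static inline int foo_register(void) { return 0; }
#endif

int main(void)
{
	/* No #ifdef needed here - the same shape as calling i915_debugfs_register(). */
	return foo_register();
}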


@ -47,22 +47,31 @@
#include <drm/drm_probe_helper.h>
#include <drm/i915_drm.h>
#include "gt/intel_gt_pm.h"
#include "gt/intel_reset.h"
#include "gt/intel_workarounds.h"
#include "i915_debugfs.h"
#include "i915_drv.h"
#include "i915_irq.h"
#include "i915_pmu.h"
#include "i915_query.h"
#include "i915_reset.h"
#include "i915_trace.h"
#include "i915_vgpu.h"
#include "intel_acpi.h"
#include "intel_audio.h"
#include "intel_cdclk.h"
#include "intel_csr.h"
#include "intel_dp.h"
#include "intel_drv.h"
#include "intel_fbdev.h"
#include "intel_gmbus.h"
#include "intel_hotplug.h"
#include "intel_overlay.h"
#include "intel_pipe_crc.h"
#include "intel_pm.h"
#include "intel_sprite.h"
#include "intel_uc.h"
#include "intel_workarounds.h"
static struct drm_driver driver;
@ -186,7 +195,8 @@ intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
DRM_DEBUG_KMS("Found Kaby Lake PCH (KBP)\n");
WARN_ON(!IS_SKYLAKE(dev_priv) && !IS_KABYLAKE(dev_priv) &&
!IS_COFFEELAKE(dev_priv));
return PCH_KBP;
/* KBP is SPT compatible */
return PCH_SPT;
case INTEL_PCH_CNP_DEVICE_ID_TYPE:
DRM_DEBUG_KMS("Found Cannon Lake PCH (CNP)\n");
WARN_ON(!IS_CANNONLAKE(dev_priv) && !IS_COFFEELAKE(dev_priv));
@ -433,6 +443,7 @@ static int i915_getparam_ioctl(struct drm_device *dev, void *data,
case I915_PARAM_HAS_EXEC_CAPTURE:
case I915_PARAM_HAS_EXEC_BATCH_FIRST:
case I915_PARAM_HAS_EXEC_FENCE_ARRAY:
case I915_PARAM_HAS_EXEC_SUBMIT_FENCE:
/* For the time being all of these are always true;
* if some supported hardware does not have one of these
* features this value needs to be provided from
@ -697,7 +708,7 @@ static int i915_load_modeset_init(struct drm_device *dev)
if (ret)
goto cleanup_csr;
intel_setup_gmbus(dev_priv);
intel_gmbus_setup(dev_priv);
/* Important: The output setup functions called by modeset_init need
* working irqs for e.g. gmbus and dp aux transfers. */
@ -732,7 +743,7 @@ cleanup_modeset:
intel_modeset_cleanup(dev);
cleanup_irq:
drm_irq_uninstall(dev);
intel_teardown_gmbus(dev_priv);
intel_gmbus_teardown(dev_priv);
cleanup_csr:
intel_csr_ucode_fini(dev_priv);
intel_power_domains_fini_hw(dev_priv);
@ -884,6 +895,9 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv)
mutex_init(&dev_priv->backlight_lock);
mutex_init(&dev_priv->sb_lock);
pm_qos_add_request(&dev_priv->sb_qos,
PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
mutex_init(&dev_priv->av_mutex);
mutex_init(&dev_priv->wm.wm_mutex);
mutex_init(&dev_priv->pps_mutex);
@ -943,6 +957,9 @@ static void i915_driver_cleanup_early(struct drm_i915_private *dev_priv)
i915_gem_cleanup_early(dev_priv);
i915_workqueues_cleanup(dev_priv);
i915_engines_cleanup(dev_priv);
pm_qos_remove_request(&dev_priv->sb_qos);
mutex_destroy(&dev_priv->sb_lock);
}
/**
@ -1760,7 +1777,7 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)
i915_pmu_unregister(dev_priv);
i915_teardown_sysfs(dev_priv);
drm_dev_unregister(&dev_priv->drm);
drm_dev_unplug(&dev_priv->drm);
i915_gem_shrinker_unregister(dev_priv);
}
@ -2322,7 +2339,7 @@ static int i915_drm_resume_early(struct drm_device *dev)
intel_power_domains_resume(dev_priv);
intel_engines_sanitize(dev_priv, true);
intel_gt_sanitize(dev_priv, true);
enable_rpm_wakeref_asserts(dev_priv);
@ -2875,7 +2892,7 @@ static int intel_runtime_suspend(struct device *kdev)
*/
i915_gem_runtime_suspend(dev_priv);
intel_uc_suspend(dev_priv);
intel_uc_runtime_suspend(dev_priv);
intel_runtime_pm_disable_interrupts(dev_priv);
@ -3098,7 +3115,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_BATCHBUFFER, drm_noop, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_IRQ_EMIT, drm_noop, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_IRQ_WAIT, drm_noop, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_GETPARAM, i915_getparam_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GETPARAM, i915_getparam_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_SETPARAM, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_ALLOC, drm_noop, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_FREE, drm_noop, DRM_AUTH),
@ -3111,13 +3128,13 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_HWS_ADDR, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_INIT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER, i915_gem_execbuffer_ioctl, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER2_WR, i915_gem_execbuffer2_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER2_WR, i915_gem_execbuffer2_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_PIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_UNPIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_SET_CACHING, i915_gem_set_caching_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_GET_CACHING, i915_gem_get_caching_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_ENTERVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_LEAVEVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_CREATE, i915_gem_create_ioctl, DRM_RENDER_ALLOW),
@ -3136,7 +3153,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_OVERLAY_ATTRS, intel_overlay_attrs_ioctl, DRM_MASTER),
DRM_IOCTL_DEF_DRV(I915_SET_SPRITE_COLORKEY, intel_sprite_set_colorkey_ioctl, DRM_MASTER),
DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, drm_noop, DRM_MASTER),
DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE_EXT, i915_gem_context_create_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_RENDER_ALLOW),
@ -3148,6 +3165,8 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_PERF_ADD_CONFIG, i915_perf_add_config_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_PERF_REMOVE_CONFIG, i915_perf_remove_config_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_QUERY, i915_query_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_VM_CREATE, i915_gem_vm_create_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_VM_DESTROY, i915_gem_vm_destroy_ioctl, DRM_RENDER_ALLOW),
};
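One practical effect of dropping DRM_AUTH from the render-capable entries above is that a client holding only a render node can issue those ioctls without the legacy DRM authentication handshake. A hedged userspace sketch, assuming an i915 render node at /dev/dri/renderD128 (the device path and header location vary by system; error handling is trimmed):

/* getparam.c - query I915_PARAM_CHIPSET_ID over a render node */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/i915_drm.h>   /* may live under libdrm/ depending on the distro */

int main(void)
{
	int fd = open("/dev/dri/renderD128", O_RDWR);  /* assumed device path */
	int chipset = 0;
	struct drm_i915_getparam gp = {
		.param = I915_PARAM_CHIPSET_ID,
		.value = &chipset,
	};

	if (fd < 0)
		return 1;

	/* No master/authentication dance is required on a render node. */
	if (ioctl(fd, DRM_IOCTL_I915_GETPARAM, &gp) == 0)
		printf("chipset id: 0x%04x\n", chipset);
	else
		perror("DRM_IOCTL_I915_GETPARAM");

	close(fd);
	return 0;
}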
static struct drm_driver driver = {


@ -62,18 +62,21 @@
#include "i915_reg.h"
#include "i915_utils.h"
#include "gt/intel_lrc.h"
#include "gt/intel_engine.h"
#include "gt/intel_workarounds.h"
#include "intel_bios.h"
#include "intel_device_info.h"
#include "intel_display.h"
#include "intel_dpll_mgr.h"
#include "intel_frontbuffer.h"
#include "intel_lrc.h"
#include "intel_opregion.h"
#include "intel_ringbuffer.h"
#include "intel_runtime_pm.h"
#include "intel_uc.h"
#include "intel_uncore.h"
#include "intel_wakeref.h"
#include "intel_wopcm.h"
#include "intel_workarounds.h"
#include "i915_gem.h"
#include "i915_gem_context.h"
@ -93,8 +96,8 @@
#define DRIVER_NAME "i915"
#define DRIVER_DESC "Intel Graphics"
#define DRIVER_DATE "20190417"
#define DRIVER_TIMESTAMP 1555492067
#define DRIVER_DATE "20190524"
#define DRIVER_TIMESTAMP 1558719322
/* Use I915_STATE_WARN(x) and I915_STATE_WARN_ON() (rather than WARN() and
* WARN_ON()) for hw state sanity checks to check for unexpected conditions
@ -133,8 +136,6 @@ bool i915_error_injected(void);
__i915_printk(i915, i915_error_injected() ? KERN_DEBUG : KERN_ERR, \
fmt, ##__VA_ARGS__)
typedef depot_stack_handle_t intel_wakeref_t;
enum hpd_pin {
HPD_NONE = 0,
HPD_TV = HPD_NONE, /* TV is known to be unreliable */
@ -344,10 +345,6 @@ struct drm_i915_display_funcs {
void (*load_luts)(const struct intel_crtc_state *crtc_state);
};
#define CSR_VERSION(major, minor) ((major) << 16 | (minor))
#define CSR_VERSION_MAJOR(version) ((version) >> 16)
#define CSR_VERSION_MINOR(version) ((version) & 0xffff)
struct intel_csr {
struct work_struct work;
const char *fw_path;
@ -535,17 +532,11 @@ enum intel_pch {
PCH_IBX, /* Ibexpeak PCH */
PCH_CPT, /* Cougarpoint/Pantherpoint PCH */
PCH_LPT, /* Lynxpoint/Wildcatpoint PCH */
PCH_SPT, /* Sunrisepoint PCH */
PCH_KBP, /* Kaby Lake PCH */
PCH_SPT, /* Sunrisepoint/Kaby Lake PCH */
PCH_CNP, /* Cannon/Comet Lake PCH */
PCH_ICP, /* Ice Lake PCH */
};
enum intel_sbi_destination {
SBI_ICLK,
SBI_MPHY,
};
#define QUIRK_LVDS_SSC_DISABLE (1<<1)
#define QUIRK_INVERT_BRIGHTNESS (1<<2)
#define QUIRK_BACKLIGHT_PRESENT (1<<3)
@ -648,6 +639,8 @@ struct intel_rps_ei {
};
struct intel_rps {
struct mutex lock; /* protects enabling and the worker */
/*
* work, interrupts_enabled and pm_iir are protected by
* dev_priv->irq_lock
@ -841,6 +834,11 @@ struct i915_power_domains {
struct mutex lock;
int domain_use_count[POWER_DOMAIN_NUM];
struct delayed_work async_put_work;
intel_wakeref_t async_put_wakeref;
u64 async_put_domains[2];
struct i915_power_well *power_wells;
};
@ -1561,6 +1559,7 @@ struct drm_i915_private {
/* Sideband mailbox protection */
struct mutex sb_lock;
struct pm_qos_request sb_qos;
/** Cached value of IMR to avoid reads in updating the bitfield */
union {
@ -1709,14 +1708,6 @@ struct drm_i915_private {
*/
u32 edram_size_mb;
/*
* Protects RPS/RC6 register access and PCU communication.
* Must be taken after struct_mutex if nested. Note that
* this lock may be held for long periods of time when
* talking to hw - so only take it when talking to hw!
*/
struct mutex pcu_lock;
/* gen6+ GT PM state */
struct intel_gen6_power_mgmt gt_pm;
@ -1995,8 +1986,6 @@ struct drm_i915_private {
/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
struct {
void (*cleanup_engine)(struct intel_engine_cs *engine);
struct i915_gt_timelines {
struct mutex mutex; /* protects list, tainted by GPU */
struct list_head active_list;
@ -2006,10 +1995,10 @@ struct drm_i915_private {
struct list_head hwsp_free_list;
} timelines;
intel_engine_mask_t active_engines;
struct list_head active_rings;
struct list_head closed_vma;
u32 active_requests;
struct intel_wakeref wakeref;
/**
* Is the GPU currently considered idle, or busy executing
@ -2020,6 +2009,16 @@ struct drm_i915_private {
*/
intel_wakeref_t awake;
struct blocking_notifier_head pm_notifications;
ktime_t last_init_time;
struct i915_vma *scratch;
} gt;
struct {
struct notifier_block pm_notifier;
/**
* We leave the user IRQ off as much as possible,
* but this means that requests will finish and never
@ -2036,12 +2035,8 @@ struct drm_i915_private {
* arrive within a small period of time, we fire
* off the idle_work.
*/
struct delayed_work idle_work;
ktime_t last_init_time;
struct i915_vma *scratch;
} gt;
struct work_struct idle_work;
} gem;
/* For i945gm vblank irq vs. C3 workaround */
struct {
@ -2585,6 +2580,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define HAS_RC6p(dev_priv) (INTEL_INFO(dev_priv)->has_rc6p)
#define HAS_RC6pp(dev_priv) (false) /* HW was never validated */
#define HAS_RPS(dev_priv) (INTEL_INFO(dev_priv)->has_rps)
#define HAS_CSR(dev_priv) (INTEL_INFO(dev_priv)->display.has_csr)
#define HAS_RUNTIME_PM(dev_priv) (INTEL_INFO(dev_priv)->has_runtime_pm)
@ -2636,7 +2633,6 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define INTEL_PCH_ID(dev_priv) ((dev_priv)->pch_id)
#define HAS_PCH_ICP(dev_priv) (INTEL_PCH_TYPE(dev_priv) == PCH_ICP)
#define HAS_PCH_CNP(dev_priv) (INTEL_PCH_TYPE(dev_priv) == PCH_CNP)
#define HAS_PCH_KBP(dev_priv) (INTEL_PCH_TYPE(dev_priv) == PCH_KBP)
#define HAS_PCH_SPT(dev_priv) (INTEL_PCH_TYPE(dev_priv) == PCH_SPT)
#define HAS_PCH_LPT(dev_priv) (INTEL_PCH_TYPE(dev_priv) == PCH_LPT)
#define HAS_PCH_LPT_LP(dev_priv) \
@ -2714,23 +2710,8 @@ extern unsigned long i915_gfx_val(struct drm_i915_private *dev_priv);
extern void i915_update_gfx_val(struct drm_i915_private *dev_priv);
int vlv_force_gfx_clock(struct drm_i915_private *dev_priv, bool on);
int intel_engines_init_mmio(struct drm_i915_private *dev_priv);
int intel_engines_init(struct drm_i915_private *dev_priv);
u32 intel_calculate_mcr_s_ss_select(struct drm_i915_private *dev_priv);
/* intel_hotplug.c */
void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
u32 pin_mask, u32 long_mask);
void intel_hpd_init(struct drm_i915_private *dev_priv);
void intel_hpd_init_work(struct drm_i915_private *dev_priv);
void intel_hpd_cancel_work(struct drm_i915_private *dev_priv);
enum hpd_pin intel_hpd_pin_default(struct drm_i915_private *dev_priv,
enum port port);
bool intel_hpd_disable(struct drm_i915_private *dev_priv, enum hpd_pin pin);
void intel_hpd_enable(struct drm_i915_private *dev_priv, enum hpd_pin pin);
/* i915_irq.c */
static inline void i915_queue_hangcheck(struct drm_i915_private *dev_priv)
{
unsigned long delay;
@ -2748,11 +2729,6 @@ static inline void i915_queue_hangcheck(struct drm_i915_private *dev_priv)
&dev_priv->gpu_error.hangcheck_work, delay);
}
extern void intel_irq_init(struct drm_i915_private *dev_priv);
extern void intel_irq_fini(struct drm_i915_private *dev_priv);
int intel_irq_install(struct drm_i915_private *dev_priv);
void intel_irq_uninstall(struct drm_i915_private *dev_priv);
static inline bool intel_gvt_active(struct drm_i915_private *dev_priv)
{
return dev_priv->gvt;
@ -2763,62 +2739,6 @@ static inline bool intel_vgpu_active(struct drm_i915_private *dev_priv)
return dev_priv->vgpu.active;
}
u32 i915_pipestat_enable_mask(struct drm_i915_private *dev_priv,
enum pipe pipe);
void
i915_enable_pipestat(struct drm_i915_private *dev_priv, enum pipe pipe,
u32 status_mask);
void
i915_disable_pipestat(struct drm_i915_private *dev_priv, enum pipe pipe,
u32 status_mask);
void valleyview_enable_display_irqs(struct drm_i915_private *dev_priv);
void valleyview_disable_display_irqs(struct drm_i915_private *dev_priv);
void i915_hotplug_interrupt_update(struct drm_i915_private *dev_priv,
u32 mask,
u32 bits);
void ilk_update_display_irq(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void
ilk_enable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
ilk_update_display_irq(dev_priv, bits, bits);
}
static inline void
ilk_disable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
ilk_update_display_irq(dev_priv, bits, 0);
}
void bdw_update_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void bdw_enable_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(dev_priv, pipe, bits, bits);
}
static inline void bdw_disable_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(dev_priv, pipe, bits, 0);
}
void ibx_display_interrupt_update(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void
ibx_enable_display_interrupt(struct drm_i915_private *dev_priv, u32 bits)
{
ibx_display_interrupt_update(dev_priv, bits, bits);
}
static inline void
ibx_disable_display_interrupt(struct drm_i915_private *dev_priv, u32 bits)
{
ibx_display_interrupt_update(dev_priv, bits, 0);
}
/* i915_gem.c */
int i915_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
@ -2903,15 +2823,15 @@ static inline void i915_gem_drain_workqueue(struct drm_i915_private *i915)
* grace period so that we catch work queued via RCU from the first
* pass. As neither drain_workqueue() nor flush_workqueue() report
* a result, we make an assumption that we only don't require more
* than 2 passes to catch all recursive RCU delayed work.
* than 3 passes to catch all _recursive_ RCU delayed work.
*
*/
int pass = 2;
int pass = 3;
do {
rcu_barrier();
i915_gem_drain_freed_objects(i915);
drain_workqueue(i915->wq);
} while (--pass);
drain_workqueue(i915->wq);
}
struct i915_vma * __must_check
@ -2944,6 +2864,10 @@ i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj,
unsigned int n);
dma_addr_t
i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
unsigned long n,
unsigned int *len);
dma_addr_t
i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
unsigned long n);
@ -3005,7 +2929,7 @@ enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
enum i915_mm_subclass subclass);
void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj);
void __i915_gem_object_truncate(struct drm_i915_gem_object *obj);
enum i915_map_type {
I915_MAP_WB = 0,
@ -3124,7 +3048,6 @@ int __must_check i915_gem_init(struct drm_i915_private *dev_priv);
int __must_check i915_gem_init_hw(struct drm_i915_private *dev_priv);
void i915_gem_init_swizzling(struct drm_i915_private *dev_priv);
void i915_gem_fini(struct drm_i915_private *dev_priv);
void i915_gem_cleanup_engines(struct drm_i915_private *dev_priv);
int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
unsigned int flags, long timeout);
void i915_gem_suspend(struct drm_i915_private *dev_priv);
@ -3266,11 +3189,12 @@ unsigned long i915_gem_shrink(struct drm_i915_private *i915,
unsigned long target,
unsigned long *nr_scanned,
unsigned flags);
#define I915_SHRINK_PURGEABLE 0x1
#define I915_SHRINK_UNBOUND 0x2
#define I915_SHRINK_BOUND 0x4
#define I915_SHRINK_ACTIVE 0x8
#define I915_SHRINK_VMAPS 0x10
#define I915_SHRINK_PURGEABLE BIT(0)
#define I915_SHRINK_UNBOUND BIT(1)
#define I915_SHRINK_BOUND BIT(2)
#define I915_SHRINK_ACTIVE BIT(3)
#define I915_SHRINK_VMAPS BIT(4)
#define I915_SHRINK_WRITEBACK BIT(5)
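Switching the shrinker flags to BIT() keeps the mask definitions uniform as new bits such as I915_SHRINK_WRITEBACK are appended. A trivial, self-contained illustration of the equivalence (a userspace stand-in for the kernel's BIT() from linux/bits.h):

#include <stdio.h>

#define BIT(n) (1UL << (n))   /* same shape as the kernel macro */

int main(void)
{
	/* BIT(4) == 0x10, so the old hex spellings and the new BIT() spellings agree. */
	printf("BIT(4) = %#lx, BIT(5) = %#lx\n", BIT(4), BIT(5));
	printf("BOUND | ACTIVE = %#lx\n", BIT(2) | BIT(3));
	return 0;
}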
unsigned long i915_gem_shrink_all(struct drm_i915_private *i915);
void i915_gem_shrinker_register(struct drm_i915_private *i915);
void i915_gem_shrinker_unregister(struct drm_i915_private *i915);
@ -3291,18 +3215,6 @@ u32 i915_gem_fence_size(struct drm_i915_private *dev_priv, u32 size,
u32 i915_gem_fence_alignment(struct drm_i915_private *dev_priv, u32 size,
unsigned int tiling, unsigned int stride);
/* i915_debugfs.c */
#ifdef CONFIG_DEBUG_FS
int i915_debugfs_register(struct drm_i915_private *dev_priv);
int i915_debugfs_connector_add(struct drm_connector *connector);
void intel_display_crc_init(struct drm_i915_private *dev_priv);
#else
static inline int i915_debugfs_register(struct drm_i915_private *dev_priv) {return 0;}
static inline int i915_debugfs_connector_add(struct drm_connector *connector)
{ return 0; }
static inline void intel_display_crc_init(struct drm_i915_private *dev_priv) {}
#endif
const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
/* i915_cmd_parser.c */
@ -3330,56 +3242,6 @@ extern int i915_restore_state(struct drm_i915_private *dev_priv);
void i915_setup_sysfs(struct drm_i915_private *dev_priv);
void i915_teardown_sysfs(struct drm_i915_private *dev_priv);
/* intel_lpe_audio.c */
int intel_lpe_audio_init(struct drm_i915_private *dev_priv);
void intel_lpe_audio_teardown(struct drm_i915_private *dev_priv);
void intel_lpe_audio_irq_handler(struct drm_i915_private *dev_priv);
void intel_lpe_audio_notify(struct drm_i915_private *dev_priv,
enum pipe pipe, enum port port,
const void *eld, int ls_clock, bool dp_output);
/* intel_i2c.c */
extern int intel_setup_gmbus(struct drm_i915_private *dev_priv);
extern void intel_teardown_gmbus(struct drm_i915_private *dev_priv);
extern bool intel_gmbus_is_valid_pin(struct drm_i915_private *dev_priv,
unsigned int pin);
extern int intel_gmbus_output_aksv(struct i2c_adapter *adapter);
extern struct i2c_adapter *
intel_gmbus_get_adapter(struct drm_i915_private *dev_priv, unsigned int pin);
extern void intel_gmbus_set_speed(struct i2c_adapter *adapter, int speed);
extern void intel_gmbus_force_bit(struct i2c_adapter *adapter, bool force_bit);
static inline bool intel_gmbus_is_forced_bit(struct i2c_adapter *adapter)
{
return container_of(adapter, struct intel_gmbus, adapter)->force_bit;
}
extern void intel_i2c_reset(struct drm_i915_private *dev_priv);
/* intel_bios.c */
void intel_bios_init(struct drm_i915_private *dev_priv);
void intel_bios_cleanup(struct drm_i915_private *dev_priv);
bool intel_bios_is_valid_vbt(const void *buf, size_t size);
bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv);
bool intel_bios_is_lvds_present(struct drm_i915_private *dev_priv, u8 *i2c_pin);
bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port port);
bool intel_bios_is_port_edp(struct drm_i915_private *dev_priv, enum port port);
bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv, enum port port);
bool intel_bios_is_dsi_present(struct drm_i915_private *dev_priv, enum port *port);
bool intel_bios_is_port_hpd_inverted(struct drm_i915_private *dev_priv,
enum port port);
bool intel_bios_is_lspcon_present(struct drm_i915_private *dev_priv,
enum port port);
enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *dev_priv, enum port port);
/* intel_acpi.c */
#ifdef CONFIG_ACPI
extern void intel_register_dsm_handler(void);
extern void intel_unregister_dsm_handler(void);
#else
static inline void intel_register_dsm_handler(void) { return; }
static inline void intel_unregister_dsm_handler(void) { return; }
#endif /* CONFIG_ACPI */
/* intel_device_info.c */
static inline struct intel_device_info *
mkwrite_device_info(struct drm_i915_private *dev_priv)
@ -3387,20 +3249,6 @@ mkwrite_device_info(struct drm_i915_private *dev_priv)
return (struct intel_device_info *)INTEL_INFO(dev_priv);
}
static inline struct intel_sseu
intel_device_default_sseu(struct drm_i915_private *i915)
{
const struct sseu_dev_info *sseu = &RUNTIME_INFO(i915)->sseu;
struct intel_sseu value = {
.slice_mask = sseu->slice_mask,
.subslice_mask = sseu->subslice_mask[0],
.min_eus_per_subslice = sseu->max_eus_per_subslice,
.max_eus_per_subslice = sseu->max_eus_per_subslice,
};
return value;
}
/* modesetting */
extern void intel_modeset_init_hw(struct drm_device *dev);
extern int intel_modeset_init(struct drm_device *dev);
@ -3417,115 +3265,15 @@ extern void intel_rps_mark_interactive(struct drm_i915_private *i915,
bool interactive);
extern bool intel_set_memory_cxsr(struct drm_i915_private *dev_priv,
bool enable);
void intel_dsc_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void intel_dsc_disable(const struct intel_crtc_state *crtc_state);
int i915_reg_read_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
/* overlay */
extern struct intel_overlay_error_state *
intel_overlay_capture_error_state(struct drm_i915_private *dev_priv);
extern void intel_overlay_print_error_state(struct drm_i915_error_state_buf *e,
struct intel_overlay_error_state *error);
extern struct intel_display_error_state *
intel_display_capture_error_state(struct drm_i915_private *dev_priv);
extern void intel_display_print_error_state(struct drm_i915_error_state_buf *e,
struct intel_display_error_state *error);
int sandybridge_pcode_read(struct drm_i915_private *dev_priv, u32 mbox, u32 *val);
int sandybridge_pcode_write_timeout(struct drm_i915_private *dev_priv, u32 mbox,
u32 val, int fast_timeout_us,
int slow_timeout_ms);
#define sandybridge_pcode_write(dev_priv, mbox, val) \
sandybridge_pcode_write_timeout(dev_priv, mbox, val, 500, 0)
int skl_pcode_request(struct drm_i915_private *dev_priv, u32 mbox, u32 request,
u32 reply_mask, u32 reply, int timeout_base_ms);
/* intel_sideband.c */
u32 vlv_punit_read(struct drm_i915_private *dev_priv, u32 addr);
int vlv_punit_write(struct drm_i915_private *dev_priv, u32 addr, u32 val);
u32 vlv_nc_read(struct drm_i915_private *dev_priv, u8 addr);
u32 vlv_iosf_sb_read(struct drm_i915_private *dev_priv, u8 port, u32 reg);
void vlv_iosf_sb_write(struct drm_i915_private *dev_priv, u8 port, u32 reg, u32 val);
u32 vlv_cck_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_cck_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
u32 vlv_ccu_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_ccu_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
u32 vlv_bunit_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_bunit_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
u32 vlv_dpio_read(struct drm_i915_private *dev_priv, enum pipe pipe, int reg);
void vlv_dpio_write(struct drm_i915_private *dev_priv, enum pipe pipe, int reg, u32 val);
u32 intel_sbi_read(struct drm_i915_private *dev_priv, u16 reg,
enum intel_sbi_destination destination);
void intel_sbi_write(struct drm_i915_private *dev_priv, u16 reg, u32 value,
enum intel_sbi_destination destination);
u32 vlv_flisdsi_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_flisdsi_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
/* intel_dpio_phy.c */
void bxt_port_to_phy_channel(struct drm_i915_private *dev_priv, enum port port,
enum dpio_phy *phy, enum dpio_channel *ch);
void bxt_ddi_phy_set_signal_level(struct drm_i915_private *dev_priv,
enum port port, u32 margin, u32 scale,
u32 enable, u32 deemphasis);
void bxt_ddi_phy_init(struct drm_i915_private *dev_priv, enum dpio_phy phy);
void bxt_ddi_phy_uninit(struct drm_i915_private *dev_priv, enum dpio_phy phy);
bool bxt_ddi_phy_is_enabled(struct drm_i915_private *dev_priv,
enum dpio_phy phy);
bool bxt_ddi_phy_verify_state(struct drm_i915_private *dev_priv,
enum dpio_phy phy);
u8 bxt_ddi_phy_calc_lane_lat_optim_mask(u8 lane_count);
void bxt_ddi_phy_set_lane_optim_mask(struct intel_encoder *encoder,
u8 lane_lat_optim_mask);
u8 bxt_ddi_phy_get_lane_lat_optim_mask(struct intel_encoder *encoder);
void chv_set_phy_signal_level(struct intel_encoder *encoder,
u32 deemph_reg_value, u32 margin_reg_value,
bool uniq_trans_scale);
void chv_data_lane_soft_reset(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
bool reset);
void chv_phy_pre_pll_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void chv_phy_pre_encoder_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void chv_phy_release_cl2_override(struct intel_encoder *encoder);
void chv_phy_post_pll_disable(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state);
void vlv_set_phy_signal_level(struct intel_encoder *encoder,
u32 demph_reg_value, u32 preemph_reg_value,
u32 uniqtranscale_reg_value, u32 tx3_demph);
void vlv_phy_pre_pll_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void vlv_phy_pre_encoder_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void vlv_phy_reset_lanes(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state);
/* intel_combo_phy.c */
void icl_combo_phys_init(struct drm_i915_private *dev_priv);
void icl_combo_phys_uninit(struct drm_i915_private *dev_priv);
void cnl_combo_phys_init(struct drm_i915_private *dev_priv);
void cnl_combo_phys_uninit(struct drm_i915_private *dev_priv);
int intel_gpu_freq(struct drm_i915_private *dev_priv, int val);
int intel_freq_opcode(struct drm_i915_private *dev_priv, int val);
u64 intel_rc6_residency_ns(struct drm_i915_private *dev_priv,
const i915_reg_t reg);
u32 intel_get_cagf(struct drm_i915_private *dev_priv, u32 rpstat1);
static inline u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
const i915_reg_t reg)
{
return DIV_ROUND_UP_ULL(intel_rc6_residency_ns(dev_priv, reg), 1000);
}
#define __I915_REG_OP(op__, dev_priv__, ...) \
intel_uncore_##op__(&(dev_priv__)->uncore, __VA_ARGS__)
@ -3599,60 +3347,6 @@ static inline u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
#define INTEL_BROADCAST_RGB_FULL 1
#define INTEL_BROADCAST_RGB_LIMITED 2
static inline i915_reg_t i915_vgacntrl_reg(struct drm_i915_private *dev_priv)
{
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
return VLV_VGACNTRL;
else if (INTEL_GEN(dev_priv) >= 5)
return CPU_VGACNTRL;
else
return VGACNTRL;
}
static inline unsigned long msecs_to_jiffies_timeout(const unsigned int m)
{
unsigned long j = msecs_to_jiffies(m);
return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
}
static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
{
/* nsecs_to_jiffies64() does not guard against overflow */
if (NSEC_PER_SEC % HZ &&
div_u64(n, NSEC_PER_SEC) >= MAX_JIFFY_OFFSET / HZ)
return MAX_JIFFY_OFFSET;
return min_t(u64, MAX_JIFFY_OFFSET, nsecs_to_jiffies64(n) + 1);
}
/*
* If you need to wait X milliseconds between events A and B, but event B
* doesn't happen exactly after event A, you record the timestamp (jiffies) of
* when event A happened, then just before event B you call this function and
* pass the timestamp as the first argument, and X as the second argument.
*/
static inline void
wait_remaining_ms_from_jiffies(unsigned long timestamp_jiffies, int to_wait_ms)
{
unsigned long target_jiffies, tmp_jiffies, remaining_jiffies;
/*
* Don't re-read the value of "jiffies" every time since it may change
* behind our back and break the math.
*/
tmp_jiffies = jiffies;
target_jiffies = timestamp_jiffies +
msecs_to_jiffies_timeout(to_wait_ms);
if (time_after(target_jiffies, tmp_jiffies)) {
remaining_jiffies = target_jiffies - tmp_jiffies;
while (remaining_jiffies)
remaining_jiffies =
schedule_timeout_uninterruptible(remaining_jiffies);
}
}
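The comment above spells out the intended calling pattern: record the jiffies timestamp at event A, do other work, then let the helper sleep only for whatever part of the interval is still outstanding. A self-contained userspace analogue of the same idea, using clock_gettime() in place of jiffies (the function names below are invented for illustration):

/* wait_remaining.c - userspace analogue of wait_remaining_ms_from_jiffies() */
#include <stdio.h>
#include <time.h>

static long elapsed_ms(const struct timespec *since)
{
	struct timespec now;
	long long ns;

	clock_gettime(CLOCK_MONOTONIC, &now);
	ns = (now.tv_sec - since->tv_sec) * 1000000000LL +
	     (now.tv_nsec - since->tv_nsec);
	return ns / 1000000;
}

/* Sleep only for the part of @to_wait_ms that has not already elapsed. */
static void wait_remaining_ms_from(const struct timespec *stamp, long to_wait_ms)
{
	long remaining = to_wait_ms - elapsed_ms(stamp);

	if (remaining > 0) {
		struct timespec ts = {
			.tv_sec = remaining / 1000,
			.tv_nsec = (remaining % 1000) * 1000000L,
		};
		nanosleep(&ts, NULL);
	}
}

int main(void)
{
	struct timespec stamp;

	clock_gettime(CLOCK_MONOTONIC, &stamp);  /* event A */
	/* ... work that takes an unknown amount of time ... */
	wait_remaining_ms_from(&stamp, 10);      /* event B must be >= 10ms after A */
	printf("waited out the remainder of the 10ms window\n");
	return 0;
}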
void i915_memcpy_init_early(struct drm_i915_private *dev_priv);
bool i915_memcpy_from_wc(void *dst, const void *src, unsigned long len);
@ -3690,4 +3384,15 @@ static inline u32 i915_scratch_offset(const struct drm_i915_private *i915)
return i915_ggtt_offset(i915->gt.scratch);
}
static inline void add_taint_for_CI(unsigned int taint)
{
/*
* The system is "ok", just about surviving for the user, but
* CI results are now unreliable as the HW is very suspect.
* CI checks the taint state after every test and will reboot
* the machine if the kernel is tainted.
*/
add_taint(taint, LOCKDEP_STILL_OK);
}
#endif


@ -71,7 +71,7 @@ static inline u32 mul_round_up_u32_fixed16(u32 val, uint_fixed_16_16_t mul)
{
u64 tmp;
tmp = (u64)val * mul.val;
tmp = mul_u32_u32(val, mul.val);
tmp = DIV_ROUND_UP_ULL(tmp, 1 << 16);
WARN_ON(tmp > U32_MAX);
@ -83,7 +83,7 @@ static inline uint_fixed_16_16_t mul_fixed16(uint_fixed_16_16_t val,
{
u64 tmp;
tmp = (u64)val.val * mul.val;
tmp = mul_u32_u32(val.val, mul.val);
tmp = tmp >> 16;
return clamp_u64_to_fixed16(tmp);
@ -114,7 +114,7 @@ static inline uint_fixed_16_16_t mul_u32_fixed16(u32 val, uint_fixed_16_16_t mul
{
u64 tmp;
tmp = (u64)val * mul.val;
tmp = mul_u32_u32(val, mul.val);
return clamp_u64_to_fixed16(tmp);
}
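These helpers work on 16.16 fixed-point values: the product of two 32-bit operands needs a 64-bit intermediate before it is shifted back down by 16 bits, and the change simply routes that widening multiply through mul_u32_u32(). A self-contained sketch of the arithmetic in plain C (not the kernel's uint_fixed_16_16_t types):

/* fixed16.c - 16.16 fixed-point multiply with round-up */
#include <stdint.h>
#include <stdio.h>

static uint64_t mul_u32_u32(uint32_t a, uint32_t b)
{
	return (uint64_t)a * b;   /* widen before multiplying, as the kernel helper does */
}

static uint32_t mul_round_up_u32_fixed16(uint32_t val, uint32_t mul_fp)
{
	uint64_t tmp = mul_u32_u32(val, mul_fp);

	/* DIV_ROUND_UP(tmp, 1 << 16) */
	return (uint32_t)((tmp + ((1 << 16) - 1)) >> 16);
}

int main(void)
{
	uint32_t half = 1 << 15;  /* 0.5 in 16.16 */

	/* 7 * 0.5 = 3.5, which rounds up to 4 */
	printf("%u\n", mul_round_up_u32_fixed16(7, half));
	return 0;
}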


@ -39,19 +39,23 @@
#include <linux/dma-buf.h>
#include <linux/mman.h>
#include "gt/intel_engine_pm.h"
#include "gt/intel_gt_pm.h"
#include "gt/intel_mocs.h"
#include "gt/intel_reset.h"
#include "gt/intel_workarounds.h"
#include "i915_drv.h"
#include "i915_gem_clflush.h"
#include "i915_gemfs.h"
#include "i915_globals.h"
#include "i915_reset.h"
#include "i915_gem_pm.h"
#include "i915_trace.h"
#include "i915_vgpu.h"
#include "intel_display.h"
#include "intel_drv.h"
#include "intel_frontbuffer.h"
#include "intel_mocs.h"
#include "intel_pm.h"
#include "intel_workarounds.h"
static void i915_gem_flush_free_objects(struct drm_i915_private *i915);
@ -102,105 +106,6 @@ static void i915_gem_info_remove_obj(struct drm_i915_private *dev_priv,
spin_unlock(&dev_priv->mm.object_stat_lock);
}
static void __i915_gem_park(struct drm_i915_private *i915)
{
intel_wakeref_t wakeref;
GEM_TRACE("\n");
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(i915->gt.active_requests);
GEM_BUG_ON(!list_empty(&i915->gt.active_rings));
if (!i915->gt.awake)
return;
/*
* Be paranoid and flush a concurrent interrupt to make sure
* we don't reactivate any irq tasklets after parking.
*
* FIXME: Note that even though we have waited for execlists to be idle,
* there may still be an in-flight interrupt even though the CSB
* is now empty. synchronize_irq() makes sure that a residual interrupt
* is completed before we continue, but it doesn't prevent the HW from
* raising a spurious interrupt later. To complete the shield we should
* coordinate disabling the CS irq with flushing the interrupts.
*/
synchronize_irq(i915->drm.irq);
intel_engines_park(i915);
i915_timelines_park(i915);
i915_pmu_gt_parked(i915);
i915_vma_parked(i915);
wakeref = fetch_and_zero(&i915->gt.awake);
GEM_BUG_ON(!wakeref);
if (INTEL_GEN(i915) >= 6)
gen6_rps_idle(i915);
intel_display_power_put(i915, POWER_DOMAIN_GT_IRQ, wakeref);
i915_globals_park();
}
void i915_gem_park(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(i915->gt.active_requests);
if (!i915->gt.awake)
return;
/* Defer the actual call to __i915_gem_park() to prevent ping-pongs */
mod_delayed_work(i915->wq, &i915->gt.idle_work, msecs_to_jiffies(100));
}
void i915_gem_unpark(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
lockdep_assert_held(&i915->drm.struct_mutex);
GEM_BUG_ON(!i915->gt.active_requests);
assert_rpm_wakelock_held(i915);
if (i915->gt.awake)
return;
/*
* It seems that the DMC likes to transition between the DC states a lot
* when there are no connected displays (no active power domains) during
* command submission.
*
* This activity has negative impact on the performance of the chip with
* huge latencies observed in the interrupt handler and elsewhere.
*
* Work around it by grabbing a GT IRQ power domain whilst there is any
* GT activity, preventing any DC state transitions.
*/
i915->gt.awake = intel_display_power_get(i915, POWER_DOMAIN_GT_IRQ);
GEM_BUG_ON(!i915->gt.awake);
i915_globals_unpark();
intel_enable_gt_powersave(i915);
i915_update_gfx_val(i915);
if (INTEL_GEN(i915) >= 6)
gen6_rps_busy(i915);
i915_pmu_gt_unparked(i915);
intel_engines_unpark(i915);
i915_queue_hangcheck(i915);
queue_delayed_work(i915->wq,
&i915->gt.retire_work,
round_jiffies_up_relative(HZ));
}
int
i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
@ -656,8 +561,31 @@ i915_gem_dumb_create(struct drm_file *file,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
int cpp = DIV_ROUND_UP(args->bpp, 8);
u32 format;
switch (cpp) {
case 1:
format = DRM_FORMAT_C8;
break;
case 2:
format = DRM_FORMAT_RGB565;
break;
case 4:
format = DRM_FORMAT_XRGB8888;
break;
default:
return -EINVAL;
}
/* have to work out size/pitch and return them */
args->pitch = ALIGN(args->width * DIV_ROUND_UP(args->bpp, 8), 64);
args->pitch = ALIGN(args->width * cpp, 64);
/* align stride to page size so that we can remap */
if (args->pitch > intel_plane_fb_max_stride(to_i915(dev), format,
DRM_FORMAT_MOD_LINEAR))
args->pitch = ALIGN(args->pitch, 4096);
args->size = args->pitch * args->height;
return i915_gem_create(file, to_i915(dev),
&args->size, &args->handle);
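The dumb-buffer path now derives a cpp and DRM format from the requested bpp, keeps the existing 64-byte pitch alignment, and additionally rounds an over-wide pitch up to a 4 KiB multiple so the buffer can still be remapped for scanout. A quick self-contained check of that arithmetic; the max-stride cutoff below is a made-up placeholder, not the value intel_plane_fb_max_stride() would return:

/* dumb_pitch.c - illustrates the pitch/size computation only */
#include <stdint.h>
#include <stdio.h>

#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	uint32_t width = 5000, height = 2880, bpp = 32;
	uint32_t cpp = (bpp + 7) / 8;                           /* DIV_ROUND_UP(bpp, 8) -> 4 */
	uint64_t pitch = ALIGN_UP((uint64_t)width * cpp, 64);   /* 20000 -> 20032 */
	uint64_t max_stride = 16384;                            /* placeholder limit */

	if (pitch > max_stride)
		pitch = ALIGN_UP(pitch, 4096);                  /* 20032 -> 20480 */

	printf("cpp=%u pitch=%llu size=%llu\n", cpp,
	       (unsigned long long)pitch,
	       (unsigned long long)(pitch * height));
	return 0;
}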
@ -2087,7 +2015,7 @@ static int i915_gem_object_create_mmap_offset(struct drm_i915_gem_object *obj)
if (!err)
break;
} while (flush_delayed_work(&dev_priv->gt.retire_work));
} while (flush_delayed_work(&dev_priv->gem.retire_work));
return err;
}
@ -2143,8 +2071,7 @@ i915_gem_mmap_gtt_ioctl(struct drm_device *dev, void *data,
}
/* Immediately discard the backing storage */
static void
i915_gem_object_truncate(struct drm_i915_gem_object *obj)
void __i915_gem_object_truncate(struct drm_i915_gem_object *obj)
{
i915_gem_object_free_mmap_offset(obj);
@ -2161,28 +2088,6 @@ i915_gem_object_truncate(struct drm_i915_gem_object *obj)
obj->mm.pages = ERR_PTR(-EFAULT);
}
/* Try to discard unwanted pages */
void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
{
struct address_space *mapping;
lockdep_assert_held(&obj->mm.lock);
GEM_BUG_ON(i915_gem_object_has_pages(obj));
switch (obj->mm.madv) {
case I915_MADV_DONTNEED:
i915_gem_object_truncate(obj);
case __I915_MADV_PURGED:
return;
}
if (obj->base.filp == NULL)
return;
mapping = obj->base.filp->f_mapping,
invalidate_mapping_pages(mapping, 0, (loff_t)-1);
}
/*
* Move pages to appropriate lru and release the pagevec, decrementing the
* ref count of those pages.
@ -2870,132 +2775,6 @@ i915_gem_object_pwrite_gtt(struct drm_i915_gem_object *obj,
return 0;
}
static void
i915_gem_retire_work_handler(struct work_struct *work)
{
struct drm_i915_private *dev_priv =
container_of(work, typeof(*dev_priv), gt.retire_work.work);
struct drm_device *dev = &dev_priv->drm;
/* Come back later if the device is busy... */
if (mutex_trylock(&dev->struct_mutex)) {
i915_retire_requests(dev_priv);
mutex_unlock(&dev->struct_mutex);
}
/*
* Keep the retire handler running until we are finally idle.
* We do not need to do this test under locking as in the worst-case
* we queue the retire worker once too often.
*/
if (READ_ONCE(dev_priv->gt.awake))
queue_delayed_work(dev_priv->wq,
&dev_priv->gt.retire_work,
round_jiffies_up_relative(HZ));
}
static bool switch_to_kernel_context_sync(struct drm_i915_private *i915,
unsigned long mask)
{
bool result = true;
/*
* Even if we fail to switch, give whatever is running a small chance
* to save itself before we report the failure. Yes, this may be a
* false positive due to e.g. ENOMEM, caveat emptor!
*/
if (i915_gem_switch_to_kernel_context(i915, mask))
result = false;
if (i915_gem_wait_for_idle(i915,
I915_WAIT_LOCKED |
I915_WAIT_FOR_IDLE_BOOST,
I915_GEM_IDLE_TIMEOUT))
result = false;
if (!result) {
if (i915_modparams.reset) { /* XXX hide warning from gem_eio */
dev_err(i915->drm.dev,
"Failed to idle engines, declaring wedged!\n");
GEM_TRACE_DUMP();
}
/* Forcibly cancel outstanding work and leave the gpu quiet. */
i915_gem_set_wedged(i915);
}
i915_retire_requests(i915); /* ensure we flush after wedging */
return result;
}
static bool load_power_context(struct drm_i915_private *i915)
{
/* Force loading the kernel context on all engines */
if (!switch_to_kernel_context_sync(i915, ALL_ENGINES))
return false;
/*
* Immediately park the GPU so that we enable powersaving and
* treat it as idle. The next time we issue a request, we will
* unpark and start using the engine->pinned_default_state, otherwise
* it is in limbo and an early reset may fail.
*/
__i915_gem_park(i915);
return true;
}
static void
i915_gem_idle_work_handler(struct work_struct *work)
{
struct drm_i915_private *i915 =
container_of(work, typeof(*i915), gt.idle_work.work);
bool rearm_hangcheck;
if (!READ_ONCE(i915->gt.awake))
return;
if (READ_ONCE(i915->gt.active_requests))
return;
rearm_hangcheck =
cancel_delayed_work_sync(&i915->gpu_error.hangcheck_work);
if (!mutex_trylock(&i915->drm.struct_mutex)) {
/* Currently busy, come back later */
mod_delayed_work(i915->wq,
&i915->gt.idle_work,
msecs_to_jiffies(50));
goto out_rearm;
}
/*
* Flush out the last user context, leaving only the pinned
* kernel context resident. Should anything unfortunate happen
* while we are idle (such as the GPU being power cycled), no users
* will be harmed.
*/
if (!work_pending(&i915->gt.idle_work.work) &&
!i915->gt.active_requests) {
++i915->gt.active_requests; /* don't requeue idle */
switch_to_kernel_context_sync(i915, i915->gt.active_engines);
if (!--i915->gt.active_requests) {
__i915_gem_park(i915);
rearm_hangcheck = false;
}
}
mutex_unlock(&i915->drm.struct_mutex);
out_rearm:
if (rearm_hangcheck) {
GEM_BUG_ON(!i915->gt.awake);
i915_queue_hangcheck(i915);
}
}
void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
{
struct drm_i915_private *i915 = to_i915(gem->dev);
@ -3135,9 +2914,6 @@ wait_for_timelines(struct drm_i915_private *i915,
struct i915_gt_timelines *gt = &i915->gt.timelines;
struct i915_timeline *tl;
if (!READ_ONCE(i915->gt.active_requests))
return timeout;
mutex_lock(&gt->mutex);
list_for_each_entry(tl, &gt->active_list, link) {
struct i915_request *rq;
@ -3177,9 +2953,10 @@ wait_for_timelines(struct drm_i915_private *i915,
int i915_gem_wait_for_idle(struct drm_i915_private *i915,
unsigned int flags, long timeout)
{
GEM_TRACE("flags=%x (%s), timeout=%ld%s\n",
GEM_TRACE("flags=%x (%s), timeout=%ld%s, awake?=%s\n",
flags, flags & I915_WAIT_LOCKED ? "locked" : "unlocked",
timeout, timeout == MAX_SCHEDULE_TIMEOUT ? " (forever)" : "");
timeout, timeout == MAX_SCHEDULE_TIMEOUT ? " (forever)" : "",
yesno(i915->gt.awake));
/* If the device is asleep, we have no requests outstanding */
if (!READ_ONCE(i915->gt.awake))
@ -4023,7 +3800,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
/* if the object is no longer attached, discard its backing storage */
if (obj->mm.madv == I915_MADV_DONTNEED &&
!i915_gem_object_has_pages(obj))
i915_gem_object_truncate(obj);
__i915_gem_object_truncate(obj);
args->retained = obj->mm.madv != __I915_MADV_PURGED;
mutex_unlock(&obj->mm.lock);
@ -4401,7 +4178,7 @@ void i915_gem_sanitize(struct drm_i915_private *i915)
* it may impact the display and we are uncertain about the stability
* of the reset, so this could be applied to even earlier gen.
*/
intel_engines_sanitize(i915, false);
intel_gt_sanitize(i915, false);
intel_uncore_forcewake_put(&i915->uncore, FORCEWAKE_ALL);
intel_runtime_pm_put(i915, wakeref);
@ -4411,133 +4188,6 @@ void i915_gem_sanitize(struct drm_i915_private *i915)
mutex_unlock(&i915->drm.struct_mutex);
}
void i915_gem_suspend(struct drm_i915_private *i915)
{
intel_wakeref_t wakeref;
GEM_TRACE("\n");
wakeref = intel_runtime_pm_get(i915);
flush_workqueue(i915->wq);
mutex_lock(&i915->drm.struct_mutex);
/*
* We have to flush all the executing contexts to main memory so
* that they can be saved in the hibernation image. To ensure the last
* context image is coherent, we have to switch away from it. That
* leaves the i915->kernel_context still active when
* we actually suspend, and its image in memory may not match the GPU
* state. Fortunately, the kernel_context is disposable and we do
* not rely on its state.
*/
switch_to_kernel_context_sync(i915, i915->gt.active_engines);
mutex_unlock(&i915->drm.struct_mutex);
i915_reset_flush(i915);
drain_delayed_work(&i915->gt.retire_work);
/*
* As the idle_work is rearming if it detects a race, play safe and
* repeat the flush until it is definitely idle.
*/
drain_delayed_work(&i915->gt.idle_work);
/*
* Assert that we successfully flushed all the work and
* reset the GPU back to its idle, low power state.
*/
GEM_BUG_ON(i915->gt.awake);
intel_uc_suspend(i915);
intel_runtime_pm_put(i915, wakeref);
}
void i915_gem_suspend_late(struct drm_i915_private *i915)
{
struct drm_i915_gem_object *obj;
struct list_head *phases[] = {
&i915->mm.unbound_list,
&i915->mm.bound_list,
NULL
}, **phase;
/*
* Neither the BIOS, ourselves or any other kernel
* expects the system to be in execlists mode on startup,
* so we need to reset the GPU back to legacy mode. And the only
* known way to disable logical contexts is through a GPU reset.
*
* So in order to leave the system in a known default configuration,
* always reset the GPU upon unload and suspend. Afterwards we then
* clean up the GEM state tracking, flushing off the requests and
* leaving the system in a known idle state.
*
* Note that it is of the utmost importance that the GPU is idle and
* all stray writes are flushed *before* we dismantle the backing
* storage for the pinned objects.
*
* However, since we are uncertain that resetting the GPU on older
* machines is a good idea, we don't - just in case it leaves the
* machine in an unusable condition.
*/
mutex_lock(&i915->drm.struct_mutex);
for (phase = phases; *phase; phase++) {
list_for_each_entry(obj, *phase, mm.link)
WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
}
mutex_unlock(&i915->drm.struct_mutex);
intel_uc_sanitize(i915);
i915_gem_sanitize(i915);
}
void i915_gem_resume(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
WARN_ON(i915->gt.awake);
mutex_lock(&i915->drm.struct_mutex);
intel_uncore_forcewake_get(&i915->uncore, FORCEWAKE_ALL);
i915_gem_restore_gtt_mappings(i915);
i915_gem_restore_fences(i915);
/*
* As we didn't flush the kernel context before suspend, we cannot
* guarantee that the context image is complete. So let's just reset
* it and start again.
*/
intel_gt_resume(i915);
if (i915_gem_init_hw(i915))
goto err_wedged;
intel_uc_resume(i915);
/* Always reload a context for powersaving. */
if (!load_power_context(i915))
goto err_wedged;
out_unlock:
intel_uncore_forcewake_put(&i915->uncore, FORCEWAKE_ALL);
mutex_unlock(&i915->drm.struct_mutex);
return;
err_wedged:
if (!i915_reset_failed(i915)) {
dev_err(i915->drm.dev,
"Failed to re-initialize GPU, declaring it wedged!\n");
i915_gem_set_wedged(i915);
}
goto out_unlock;
}
void i915_gem_init_swizzling(struct drm_i915_private *dev_priv)
{
if (INTEL_GEN(dev_priv) < 5 ||
@ -4586,27 +4236,6 @@ static void init_unused_rings(struct drm_i915_private *dev_priv)
}
}
static int __i915_gem_restart_engines(void *data)
{
struct drm_i915_private *i915 = data;
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err;
for_each_engine(engine, i915, id) {
err = engine->init_hw(engine);
if (err) {
DRM_ERROR("Failed to restart %s (%d)\n",
engine->name, err);
return err;
}
}
intel_engines_set_scheduler_caps(i915);
return 0;
}
int i915_gem_init_hw(struct drm_i915_private *dev_priv)
{
int ret;
@ -4665,12 +4294,13 @@ int i915_gem_init_hw(struct drm_i915_private *dev_priv)
intel_mocs_init_l3cc_table(dev_priv);
/* Only when the HW is re-initialised, can we replay the requests */
ret = __i915_gem_restart_engines(dev_priv);
ret = intel_engines_resume(dev_priv);
if (ret)
goto cleanup_uc;
intel_uncore_forcewake_put(&dev_priv->uncore, FORCEWAKE_ALL);
intel_engines_set_scheduler_caps(dev_priv);
return 0;
cleanup_uc:
@ -4683,8 +4313,9 @@ out:
static int __intel_engines_record_defaults(struct drm_i915_private *i915)
{
struct i915_gem_context *ctx;
struct intel_engine_cs *engine;
struct i915_gem_context *ctx;
struct i915_gem_engines *e;
enum intel_engine_id id;
int err = 0;
@ -4701,18 +4332,21 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
if (IS_ERR(ctx))
return PTR_ERR(ctx);
e = i915_gem_context_lock_engines(ctx);
for_each_engine(engine, i915, id) {
struct intel_context *ce = e->engines[id];
struct i915_request *rq;
rq = i915_request_alloc(engine, ctx);
rq = intel_context_create_request(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto out_ctx;
goto err_active;
}
err = 0;
if (engine->init_context)
err = engine->init_context(rq);
if (rq->engine->init_context)
err = rq->engine->init_context(rq);
i915_request_add(rq);
if (err)
@ -4720,21 +4354,16 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
}
/* Flush the default context image to memory, and enable powersaving. */
if (!load_power_context(i915)) {
if (!i915_gem_load_power_context(i915)) {
err = -EIO;
goto err_active;
}
for_each_engine(engine, i915, id) {
struct intel_context *ce;
struct i915_vma *state;
struct intel_context *ce = e->engines[id];
struct i915_vma *state = ce->state;
void *vaddr;
ce = intel_context_lookup(ctx, engine);
if (!ce)
continue;
state = ce->state;
if (!state)
continue;
@ -4790,6 +4419,7 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
}
out_ctx:
i915_gem_context_unlock_engines(ctx);
i915_gem_context_set_closed(ctx);
i915_gem_context_put(ctx);
return err;
@ -4842,6 +4472,23 @@ static void i915_gem_fini_scratch(struct drm_i915_private *i915)
i915_vma_unpin_and_release(&i915->gt.scratch, 0);
}
static int intel_engines_verify_workarounds(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err = 0;
if (!IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
return 0;
for_each_engine(engine, i915, id) {
if (intel_engine_verify_workarounds(engine, "load"))
err = -EIO;
}
return err;
}
int i915_gem_init(struct drm_i915_private *dev_priv)
{
int ret;
@ -4853,11 +4500,6 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
dev_priv->mm.unordered_timeline = dma_fence_context_alloc(1);
if (HAS_LOGICAL_RING_CONTEXTS(dev_priv))
dev_priv->gt.cleanup_engine = intel_logical_ring_cleanup;
else
dev_priv->gt.cleanup_engine = intel_engine_cleanup;
i915_timelines_init(dev_priv);
ret = i915_gem_init_userptr(dev_priv);
@ -4894,6 +4536,12 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
goto err_ggtt;
}
ret = intel_engines_setup(dev_priv);
if (ret) {
GEM_BUG_ON(ret == -EIO);
goto err_unlock;
}
ret = i915_gem_contexts_init(dev_priv);
if (ret) {
GEM_BUG_ON(ret == -EIO);
@ -4927,6 +4575,10 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
*/
intel_init_clock_gating(dev_priv);
ret = intel_engines_verify_workarounds(dev_priv);
if (ret)
goto err_init_hw;
ret = __intel_engines_record_defaults(dev_priv);
if (ret)
goto err_init_hw;
@ -4955,6 +4607,7 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
err_init_hw:
mutex_unlock(&dev_priv->drm.struct_mutex);
i915_gem_set_wedged(dev_priv);
i915_gem_suspend(dev_priv);
i915_gem_suspend_late(dev_priv);
@ -4967,7 +4620,7 @@ err_uc_init:
err_pm:
if (ret != -EIO) {
intel_cleanup_gt_powersave(dev_priv);
i915_gem_cleanup_engines(dev_priv);
intel_engines_cleanup(dev_priv);
}
err_context:
if (ret != -EIO)
@ -5016,6 +4669,8 @@ err_uc_misc:
void i915_gem_fini(struct drm_i915_private *dev_priv)
{
GEM_BUG_ON(dev_priv->gt.awake);
i915_gem_suspend_late(dev_priv);
intel_disable_gt_powersave(dev_priv);
@ -5025,7 +4680,7 @@ void i915_gem_fini(struct drm_i915_private *dev_priv)
mutex_lock(&dev_priv->drm.struct_mutex);
intel_uc_fini_hw(dev_priv);
intel_uc_fini(dev_priv);
i915_gem_cleanup_engines(dev_priv);
intel_engines_cleanup(dev_priv);
i915_gem_contexts_fini(dev_priv);
i915_gem_fini_scratch(dev_priv);
mutex_unlock(&dev_priv->drm.struct_mutex);
@ -5048,16 +4703,6 @@ void i915_gem_init_mmio(struct drm_i915_private *i915)
i915_gem_sanitize(i915);
}
void
i915_gem_cleanup_engines(struct drm_i915_private *dev_priv)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, dev_priv, id)
dev_priv->gt.cleanup_engine(engine);
}
void
i915_gem_load_init_fences(struct drm_i915_private *dev_priv)
{
@ -5110,15 +4755,14 @@ int i915_gem_init_early(struct drm_i915_private *dev_priv)
{
int err;
intel_gt_pm_init(dev_priv);
INIT_LIST_HEAD(&dev_priv->gt.active_rings);
INIT_LIST_HEAD(&dev_priv->gt.closed_vma);
i915_gem_init__mm(dev_priv);
i915_gem_init__pm(dev_priv);
INIT_DELAYED_WORK(&dev_priv->gt.retire_work,
i915_gem_retire_work_handler);
INIT_DELAYED_WORK(&dev_priv->gt.idle_work,
i915_gem_idle_work_handler);
init_waitqueue_head(&dev_priv->gpu_error.wait_queue);
init_waitqueue_head(&dev_priv->gpu_error.reset_queue);
mutex_init(&dev_priv->gpu_error.wedge_mutex);
@ -5461,16 +5105,29 @@ i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj,
}
dma_addr_t
i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
unsigned long n)
i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
unsigned long n,
unsigned int *len)
{
struct scatterlist *sg;
unsigned int offset;
sg = i915_gem_object_get_sg(obj, n, &offset);
if (len)
*len = sg_dma_len(sg) - (offset << PAGE_SHIFT);
return sg_dma_address(sg) + (offset << PAGE_SHIFT);
}
dma_addr_t
i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj,
unsigned long n)
{
return i915_gem_object_get_dma_address_len(obj, n, NULL);
}
int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
{
struct sg_table *pages;


@ -75,9 +75,6 @@ struct drm_i915_private;
#define I915_GEM_IDLE_TIMEOUT (HZ / 5)
void i915_gem_park(struct drm_i915_private *i915);
void i915_gem_unpark(struct drm_i915_private *i915);
static inline void __tasklet_disable_sync_once(struct tasklet_struct *t)
{
if (!atomic_fetch_inc(&t->count))
@ -94,4 +91,9 @@ static inline bool __tasklet_enable(struct tasklet_struct *t)
return atomic_dec_and_test(&t->count);
}
static inline bool __tasklet_is_scheduled(struct tasklet_struct *t)
{
return test_bit(TASKLET_STATE_SCHED, &t->state);
}
#endif /* __I915_GEM_H__ */

File diff suppressed because it is too large

View File

@ -27,9 +27,10 @@
#include "i915_gem_context_types.h"
#include "gt/intel_context.h"
#include "i915_gem.h"
#include "i915_scheduler.h"
#include "intel_context.h"
#include "intel_device_info.h"
struct drm_device;
@ -111,6 +112,24 @@ static inline void i915_gem_context_set_force_single_submission(struct i915_gem_
__set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags);
}
static inline bool
i915_gem_context_user_engines(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_USER_ENGINES, &ctx->flags);
}
static inline void
i915_gem_context_set_user_engines(struct i915_gem_context *ctx)
{
set_bit(CONTEXT_USER_ENGINES, &ctx->flags);
}
static inline void
i915_gem_context_clear_user_engines(struct i915_gem_context *ctx)
{
clear_bit(CONTEXT_USER_ENGINES, &ctx->flags);
}
int __i915_gem_context_pin_hw_id(struct i915_gem_context *ctx);
static inline int i915_gem_context_pin_hw_id(struct i915_gem_context *ctx)
{
@ -140,10 +159,6 @@ int i915_gem_context_open(struct drm_i915_private *i915,
struct drm_file *file);
void i915_gem_context_close(struct drm_file *file);
int i915_switch_context(struct i915_request *rq);
int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915,
intel_engine_mask_t engine_mask);
void i915_gem_context_release(struct kref *ctx_ref);
struct i915_gem_context *
i915_gem_context_create_gvt(struct drm_device *dev);
@ -179,6 +194,64 @@ static inline void i915_gem_context_put(struct i915_gem_context *ctx)
kref_put(&ctx->ref, i915_gem_context_release);
}
static inline struct i915_gem_engines *
i915_gem_context_engines(struct i915_gem_context *ctx)
{
return rcu_dereference_protected(ctx->engines,
lockdep_is_held(&ctx->engines_mutex));
}
static inline struct i915_gem_engines *
i915_gem_context_lock_engines(struct i915_gem_context *ctx)
__acquires(&ctx->engines_mutex)
{
mutex_lock(&ctx->engines_mutex);
return i915_gem_context_engines(ctx);
}
static inline void
i915_gem_context_unlock_engines(struct i915_gem_context *ctx)
__releases(&ctx->engines_mutex)
{
mutex_unlock(&ctx->engines_mutex);
}
static inline struct intel_context *
i915_gem_context_lookup_engine(struct i915_gem_context *ctx, unsigned int idx)
{
return i915_gem_context_engines(ctx)->engines[idx];
}
static inline struct intel_context *
i915_gem_context_get_engine(struct i915_gem_context *ctx, unsigned int idx)
{
struct intel_context *ce = ERR_PTR(-EINVAL);
rcu_read_lock(); {
struct i915_gem_engines *e = rcu_dereference(ctx->engines);
if (likely(idx < e->num_engines && e->engines[idx]))
ce = intel_context_get(e->engines[idx]);
} rcu_read_unlock();
return ce;
}
static inline void
i915_gem_engines_iter_init(struct i915_gem_engines_iter *it,
struct i915_gem_engines *engines)
{
GEM_BUG_ON(!engines);
it->engines = engines;
it->idx = 0;
}
struct intel_context *
i915_gem_engines_iter_next(struct i915_gem_engines_iter *it);
#define for_each_gem_engine(ce, engines, it) \
for (i915_gem_engines_iter_init(&(it), (engines)); \
((ce) = i915_gem_engines_iter_next(&(it)));)
struct i915_lut_handle *i915_lut_handle_alloc(void);
void i915_lut_handle_free(struct i915_lut_handle *lut);
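
The lock/iterate/unlock pattern above is what later callers in this series (e.g. the i915_perf changes) follow. A minimal, hedged usage sketch, not part of the diff itself; the count_render_engines() helper name is invented for illustration:

static unsigned int count_render_engines(struct i915_gem_context *ctx)
{
	struct i915_gem_engines_iter it;
	struct intel_context *ce;
	unsigned int count = 0;

	/* Takes ctx->engines_mutex and hands back the engines array. */
	for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
		if (ce->engine->class == RENDER_CLASS)
			count++;
	}
	i915_gem_context_unlock_engines(ctx);

	return count;
}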

View File

@ -17,8 +17,9 @@
#include <linux/rcupdate.h>
#include <linux/types.h>
#include "gt/intel_context_types.h"
#include "i915_scheduler.h"
#include "intel_context_types.h"
struct pid;
@ -28,6 +29,18 @@ struct i915_hw_ppgtt;
struct i915_timeline;
struct intel_ring;
struct i915_gem_engines {
struct rcu_work rcu;
struct drm_i915_private *i915;
unsigned int num_engines;
struct intel_context *engines[];
};
struct i915_gem_engines_iter {
unsigned int idx;
const struct i915_gem_engines *engines;
};
/**
* struct i915_gem_context - client state
*
@ -41,6 +54,30 @@ struct i915_gem_context {
/** file_priv: owning file descriptor */
struct drm_i915_file_private *file_priv;
/**
* @engines: User defined engines for this context
*
* Various uAPI offer the ability to look up an
* index from this array to select an engine to operate on.
*
* Multiple logically distinct instances of the same engine
* may be defined in the array, as well as composite virtual
* engines.
*
* Execbuf uses the I915_EXEC_RING_MASK as an index into this
* array to select which HW context + engine to execute on. For
* the default array, the user_ring_map[] is used to translate
* the legacy uABI onto the appropriate index (e.g. both
* I915_EXEC_DEFAULT and I915_EXEC_RENDER select the same
* context, and I915_EXEC_BSD is weird). For a user-defined
* array, execbuf uses I915_EXEC_RING_MASK as a plain index.
*
* User defined by I915_CONTEXT_PARAM_ENGINE (when the
* CONTEXT_USER_ENGINES flag is set).
*/
struct i915_gem_engines __rcu *engines;
struct mutex engines_mutex; /* guards writes to engines */
struct i915_timeline *timeline;
/**
@ -109,6 +146,7 @@ struct i915_gem_context {
#define CONTEXT_BANNED 0
#define CONTEXT_CLOSED 1
#define CONTEXT_FORCE_SINGLE_SUBMISSION 2
#define CONTEXT_USER_ENGINES 3
/**
* @hw_id: - unique identifier for the context
@ -128,15 +166,10 @@ struct i915_gem_context {
atomic_t hw_id_pin_count;
struct list_head hw_id_link;
struct list_head active_engines;
struct mutex mutex;
struct i915_sched_attr sched;
/** hw_contexts: per-engine logical HW state */
struct rb_root hw_contexts;
spinlock_t hw_contexts_lock;
/** ring_size: size for allocating the per-engine ring buffer */
u32 ring_size;
/** desc_template: invariant fields for the HW context descriptor */
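
The CONTEXT_USER_ENGINES flag above is set once userspace supplies its own engine map for the context. A hedged userspace sketch of doing so through context setparam; I915_CONTEXT_PARAM_ENGINES, I915_DEFINE_CONTEXT_PARAM_ENGINES and the field layout come from the uAPI introduced around this series and should be treated as assumptions, not something shown in this hunk:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Bind an explicit two-entry engine map (render + first video engine) to
 * the context. Afterwards I915_EXEC_RING_MASK in execbuf is a plain index
 * into this array, as the @engines kerneldoc above describes.
 */
static int set_user_engines(int fd, __u32 ctx_id)
{
	I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, 2) = {
		.engines = {
			{ I915_ENGINE_CLASS_RENDER, 0 },
			{ I915_ENGINE_CLASS_VIDEO, 0 },
		},
	};
	struct drm_i915_gem_context_param arg = {
		.ctx_id = ctx_id,
		.param = I915_CONTEXT_PARAM_ENGINES,
		.size = sizeof(engines),
		.value = (uintptr_t)&engines,
	};

	return ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
}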

View File

@ -36,15 +36,8 @@ I915_SELFTEST_DECLARE(static struct igt_evict_ctl {
bool fail_if_busy:1;
} igt_evict_ctl;)
static bool ggtt_is_idle(struct drm_i915_private *i915)
{
return !i915->gt.active_requests;
}
static int ggtt_flush(struct drm_i915_private *i915)
{
int err;
/*
* Not everything in the GGTT is tracked via vma (otherwise we
* could evict as required with minimal stalling) so we are forced
@ -52,19 +45,10 @@ static int ggtt_flush(struct drm_i915_private *i915)
* the hopes that we can then remove contexts and the like only
* bound by their active reference.
*/
err = i915_gem_switch_to_kernel_context(i915, i915->gt.active_engines);
if (err)
return err;
err = i915_gem_wait_for_idle(i915,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_LOCKED,
MAX_SCHEDULE_TIMEOUT);
if (err)
return err;
GEM_BUG_ON(!ggtt_is_idle(i915));
return 0;
return i915_gem_wait_for_idle(i915,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_LOCKED,
MAX_SCHEDULE_TIMEOUT);
}
static bool
@ -222,24 +206,17 @@ search_again:
* us a termination condition, when the last retired context is
* the kernel's there is no more we can evict.
*/
if (!ggtt_is_idle(dev_priv)) {
if (I915_SELFTEST_ONLY(igt_evict_ctl.fail_if_busy))
return -EBUSY;
if (I915_SELFTEST_ONLY(igt_evict_ctl.fail_if_busy))
return -EBUSY;
ret = ggtt_flush(dev_priv);
if (ret)
return ret;
ret = ggtt_flush(dev_priv);
if (ret)
return ret;
cond_resched();
goto search_again;
}
cond_resched();
/*
* If we still have pending pageflip completions, drop
* back to userspace to give our workqueues time to
* acquire our locks and unpin the old scanouts.
*/
return intel_has_pending_fb_unpin(dev_priv) ? -EAGAIN : -ENOSPC;
flags |= PIN_NONBLOCK;
goto search_again;
found:
/* drm_mm doesn't allow any other operations while

View File

@ -34,6 +34,8 @@
#include <drm/drm_syncobj.h>
#include <drm/i915_drm.h>
#include "gt/intel_gt_pm.h"
#include "i915_drv.h"
#include "i915_gem_clflush.h"
#include "i915_trace.h"
@ -236,7 +238,8 @@ struct i915_execbuffer {
unsigned int *flags;
struct intel_engine_cs *engine; /** engine to queue the request to */
struct i915_gem_context *ctx; /** context for building the request */
struct intel_context *context; /* logical state for the request */
struct i915_gem_context *gem_context; /** caller's context */
struct i915_address_space *vm; /** GTT and vma for the request */
struct i915_request *request; /** our request to build */
@ -738,7 +741,7 @@ static int eb_select_context(struct i915_execbuffer *eb)
if (unlikely(!ctx))
return -ENOENT;
eb->ctx = ctx;
eb->gem_context = ctx;
if (ctx->ppgtt) {
eb->vm = &ctx->ppgtt->vm;
eb->invalid_flags |= EXEC_OBJECT_NEEDS_GTT;
@ -784,7 +787,6 @@ static struct i915_request *__eb_wait_for_ring(struct intel_ring *ring)
static int eb_wait_for_ring(const struct i915_execbuffer *eb)
{
const struct intel_context *ce;
struct i915_request *rq;
int ret = 0;
@ -794,11 +796,7 @@ static int eb_wait_for_ring(const struct i915_execbuffer *eb)
* keeping all of their resources pinned.
*/
ce = intel_context_lookup(eb->ctx, eb->engine);
if (!ce || !ce->ring) /* first use, assume empty! */
return 0;
rq = __eb_wait_for_ring(ce->ring);
rq = __eb_wait_for_ring(eb->context->ring);
if (rq) {
mutex_unlock(&eb->i915->drm.struct_mutex);
@ -817,15 +815,15 @@ static int eb_wait_for_ring(const struct i915_execbuffer *eb)
static int eb_lookup_vmas(struct i915_execbuffer *eb)
{
struct radix_tree_root *handles_vma = &eb->ctx->handles_vma;
struct radix_tree_root *handles_vma = &eb->gem_context->handles_vma;
struct drm_i915_gem_object *obj;
unsigned int i, batch;
int err;
if (unlikely(i915_gem_context_is_closed(eb->ctx)))
if (unlikely(i915_gem_context_is_closed(eb->gem_context)))
return -ENOENT;
if (unlikely(i915_gem_context_is_banned(eb->ctx)))
if (unlikely(i915_gem_context_is_banned(eb->gem_context)))
return -EIO;
INIT_LIST_HEAD(&eb->relocs);
@ -870,8 +868,8 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
if (!vma->open_count++)
i915_vma_reopen(vma);
list_add(&lut->obj_link, &obj->lut_list);
list_add(&lut->ctx_link, &eb->ctx->handles_list);
lut->ctx = eb->ctx;
list_add(&lut->ctx_link, &eb->gem_context->handles_list);
lut->ctx = eb->gem_context;
lut->handle = handle;
add_vma:
@ -1227,7 +1225,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
if (err)
goto err_unmap;
rq = i915_request_alloc(eb->engine, eb->ctx);
rq = i915_request_create(eb->context);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto err_unpin;
@ -2079,9 +2077,7 @@ gen8_dispatch_bsd_engine(struct drm_i915_private *dev_priv,
return file_priv->bsd_engine;
}
#define I915_USER_RINGS (4)
static const enum intel_engine_id user_ring_map[I915_USER_RINGS + 1] = {
static const enum intel_engine_id user_ring_map[] = {
[I915_EXEC_DEFAULT] = RCS0,
[I915_EXEC_RENDER] = RCS0,
[I915_EXEC_BLT] = BCS0,
@ -2089,31 +2085,57 @@ static const enum intel_engine_id user_ring_map[I915_USER_RINGS + 1] = {
[I915_EXEC_VEBOX] = VECS0
};
static struct intel_engine_cs *
eb_select_engine(struct drm_i915_private *dev_priv,
struct drm_file *file,
struct drm_i915_gem_execbuffer2 *args)
static int eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
{
int err;
/*
* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
* EIO if the GPU is already wedged.
*/
err = i915_terminally_wedged(eb->i915);
if (err)
return err;
/*
* Pinning the contexts may generate requests in order to acquire
* GGTT space, so do this first before we reserve a seqno for
* ourselves.
*/
err = intel_context_pin(ce);
if (err)
return err;
eb->engine = ce->engine;
eb->context = ce;
return 0;
}
static void eb_unpin_context(struct i915_execbuffer *eb)
{
intel_context_unpin(eb->context);
}
static unsigned int
eb_select_legacy_ring(struct i915_execbuffer *eb,
struct drm_file *file,
struct drm_i915_gem_execbuffer2 *args)
{
struct drm_i915_private *i915 = eb->i915;
unsigned int user_ring_id = args->flags & I915_EXEC_RING_MASK;
struct intel_engine_cs *engine;
if (user_ring_id > I915_USER_RINGS) {
DRM_DEBUG("execbuf with unknown ring: %u\n", user_ring_id);
return NULL;
}
if ((user_ring_id != I915_EXEC_BSD) &&
((args->flags & I915_EXEC_BSD_MASK) != 0)) {
if (user_ring_id != I915_EXEC_BSD &&
(args->flags & I915_EXEC_BSD_MASK)) {
DRM_DEBUG("execbuf with non bsd ring but with invalid "
"bsd dispatch flags: %d\n", (int)(args->flags));
return NULL;
return -1;
}
if (user_ring_id == I915_EXEC_BSD && HAS_ENGINE(dev_priv, VCS1)) {
if (user_ring_id == I915_EXEC_BSD && HAS_ENGINE(i915, VCS1)) {
unsigned int bsd_idx = args->flags & I915_EXEC_BSD_MASK;
if (bsd_idx == I915_EXEC_BSD_DEFAULT) {
bsd_idx = gen8_dispatch_bsd_engine(dev_priv, file);
bsd_idx = gen8_dispatch_bsd_engine(i915, file);
} else if (bsd_idx >= I915_EXEC_BSD_RING1 &&
bsd_idx <= I915_EXEC_BSD_RING2) {
bsd_idx >>= I915_EXEC_BSD_SHIFT;
@ -2121,20 +2143,42 @@ eb_select_engine(struct drm_i915_private *dev_priv,
} else {
DRM_DEBUG("execbuf with unknown bsd ring: %u\n",
bsd_idx);
return NULL;
return -1;
}
engine = dev_priv->engine[_VCS(bsd_idx)];
} else {
engine = dev_priv->engine[user_ring_map[user_ring_id]];
return _VCS(bsd_idx);
}
if (!engine) {
DRM_DEBUG("execbuf with invalid ring: %u\n", user_ring_id);
return NULL;
if (user_ring_id >= ARRAY_SIZE(user_ring_map)) {
DRM_DEBUG("execbuf with unknown ring: %u\n", user_ring_id);
return -1;
}
return engine;
return user_ring_map[user_ring_id];
}
static int
eb_select_engine(struct i915_execbuffer *eb,
struct drm_file *file,
struct drm_i915_gem_execbuffer2 *args)
{
struct intel_context *ce;
unsigned int idx;
int err;
if (i915_gem_context_user_engines(eb->gem_context))
idx = args->flags & I915_EXEC_RING_MASK;
else
idx = eb_select_legacy_ring(eb, file, args);
ce = i915_gem_context_get_engine(eb->gem_context, idx);
if (IS_ERR(ce))
return PTR_ERR(ce);
err = eb_pin_context(eb, ce);
intel_context_put(ce);
return err;
}
static void
@ -2275,8 +2319,8 @@ i915_gem_do_execbuffer(struct drm_device *dev,
{
struct i915_execbuffer eb;
struct dma_fence *in_fence = NULL;
struct dma_fence *exec_fence = NULL;
struct sync_file *out_fence = NULL;
intel_wakeref_t wakeref;
int out_fence_fd = -1;
int err;
@ -2318,11 +2362,24 @@ i915_gem_do_execbuffer(struct drm_device *dev,
return -EINVAL;
}
if (args->flags & I915_EXEC_FENCE_SUBMIT) {
if (in_fence) {
err = -EINVAL;
goto err_in_fence;
}
exec_fence = sync_file_get_fence(lower_32_bits(args->rsvd2));
if (!exec_fence) {
err = -EINVAL;
goto err_in_fence;
}
}
if (args->flags & I915_EXEC_FENCE_OUT) {
out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
if (out_fence_fd < 0) {
err = out_fence_fd;
goto err_in_fence;
goto err_exec_fence;
}
}
@ -2336,12 +2393,6 @@ i915_gem_do_execbuffer(struct drm_device *dev,
if (unlikely(err))
goto err_destroy;
eb.engine = eb_select_engine(eb.i915, file, args);
if (!eb.engine) {
err = -EINVAL;
goto err_engine;
}
/*
* Take a local wakeref for preparing to dispatch the execbuf as
* we expect to access the hardware fairly frequently in the
@ -2349,16 +2400,20 @@ i915_gem_do_execbuffer(struct drm_device *dev,
* wakeref that we hold until the GPU has been idle for at least
* 100ms.
*/
wakeref = intel_runtime_pm_get(eb.i915);
intel_gt_pm_get(eb.i915);
err = i915_mutex_lock_interruptible(dev);
if (err)
goto err_rpm;
err = eb_wait_for_ring(&eb); /* may temporarily drop struct_mutex */
err = eb_select_engine(&eb, file, args);
if (unlikely(err))
goto err_unlock;
err = eb_wait_for_ring(&eb); /* may temporarily drop struct_mutex */
if (unlikely(err))
goto err_engine;
err = eb_relocate(&eb);
if (err) {
/*
@ -2442,7 +2497,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
GEM_BUG_ON(eb.reloc_cache.rq);
/* Allocate a request for this batch buffer nice and early. */
eb.request = i915_request_alloc(eb.engine, eb.ctx);
eb.request = i915_request_create(eb.context);
if (IS_ERR(eb.request)) {
err = PTR_ERR(eb.request);
goto err_batch_unpin;
@ -2454,6 +2509,13 @@ i915_gem_do_execbuffer(struct drm_device *dev,
goto err_request;
}
if (exec_fence) {
err = i915_request_await_execution(eb.request, exec_fence,
eb.engine->bond_execute);
if (err < 0)
goto err_request;
}
if (fences) {
err = await_fence_array(&eb, fences);
if (err)
@ -2480,8 +2542,8 @@ i915_gem_do_execbuffer(struct drm_device *dev,
trace_i915_request_queue(eb.request, eb.batch_flags);
err = eb_submit(&eb);
err_request:
i915_request_add(eb.request);
add_to_client(eb.request, file);
i915_request_add(eb.request);
if (fences)
signal_fence_array(&eb, fences);
@ -2503,17 +2565,20 @@ err_batch_unpin:
err_vma:
if (eb.exec)
eb_release_vmas(&eb);
err_engine:
eb_unpin_context(&eb);
err_unlock:
mutex_unlock(&dev->struct_mutex);
err_rpm:
intel_runtime_pm_put(eb.i915, wakeref);
err_engine:
i915_gem_context_put(eb.ctx);
intel_gt_pm_put(eb.i915);
i915_gem_context_put(eb.gem_context);
err_destroy:
eb_destroy(&eb);
err_out_fence:
if (out_fence_fd != -1)
put_unused_fd(out_fence_fd);
err_exec_fence:
dma_fence_put(exec_fence);
err_in_fence:
dma_fence_put(in_fence);
return err;
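
A note on the I915_EXEC_FENCE_SUBMIT handling added above: it gates submission on another request starting to execute (rather than completing, as I915_EXEC_FENCE_IN does), and the two input fences are mutually exclusive. A hedged userspace sketch; execbuf is assumed to be an otherwise fully populated drm_i915_gem_execbuffer2 and submit_fd a sync_file fd exported by the paired submission:

#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int submit_bonded(int fd, struct drm_i915_gem_execbuffer2 *execbuf,
			 int submit_fd)
{
	/* The fd travels in the lower 32 bits of rsvd2, matching the
	 * sync_file_get_fence(lower_32_bits(args->rsvd2)) lookup above.
	 */
	execbuf->flags |= I915_EXEC_FENCE_SUBMIT;
	execbuf->rsvd2 = (__u32)submit_fd;

	return ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, execbuf);
}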

View File

@ -37,7 +37,6 @@
#include "i915_drv.h"
#include "i915_vgpu.h"
#include "i915_reset.h"
#include "i915_trace.h"
#include "intel_drv.h"
#include "intel_frontbuffer.h"
@ -1829,11 +1828,62 @@ static void gen6_ppgtt_free_pd(struct gen6_hw_ppgtt *ppgtt)
free_pt(&ppgtt->base.vm, pt);
}
struct gen6_ppgtt_cleanup_work {
struct work_struct base;
struct i915_vma *vma;
};
static void gen6_ppgtt_cleanup_work(struct work_struct *wrk)
{
struct gen6_ppgtt_cleanup_work *work =
container_of(wrk, typeof(*work), base);
/* Side note, vma->vm is the GGTT not the ppgtt we just destroyed! */
struct drm_i915_private *i915 = work->vma->vm->i915;
mutex_lock(&i915->drm.struct_mutex);
i915_vma_destroy(work->vma);
mutex_unlock(&i915->drm.struct_mutex);
kfree(work);
}
static int nop_set_pages(struct i915_vma *vma)
{
return -ENODEV;
}
static void nop_clear_pages(struct i915_vma *vma)
{
}
static int nop_bind(struct i915_vma *vma,
enum i915_cache_level cache_level,
u32 unused)
{
return -ENODEV;
}
static void nop_unbind(struct i915_vma *vma)
{
}
static const struct i915_vma_ops nop_vma_ops = {
.set_pages = nop_set_pages,
.clear_pages = nop_clear_pages,
.bind_vma = nop_bind,
.unbind_vma = nop_unbind,
};
static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
{
struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
struct gen6_ppgtt_cleanup_work *work = ppgtt->work;
i915_vma_destroy(ppgtt->vma);
/* FIXME remove the struct_mutex to bring the locking under control */
INIT_WORK(&work->base, gen6_ppgtt_cleanup_work);
work->vma = ppgtt->vma;
work->vma->ops = &nop_vma_ops;
schedule_work(&work->base);
gen6_ppgtt_free_pd(ppgtt);
gen6_ppgtt_free_scratch(vm);
@ -2012,9 +2062,13 @@ static struct i915_hw_ppgtt *gen6_ppgtt_create(struct drm_i915_private *i915)
ppgtt->base.vm.pte_encode = ggtt->vm.pte_encode;
ppgtt->work = kmalloc(sizeof(*ppgtt->work), GFP_KERNEL);
if (!ppgtt->work)
goto err_free;
err = gen6_ppgtt_init_scratch(ppgtt);
if (err)
goto err_free;
goto err_work;
ppgtt->vma = pd_vma_create(ppgtt, GEN6_PD_SIZE);
if (IS_ERR(ppgtt->vma)) {
@ -2026,6 +2080,8 @@ static struct i915_hw_ppgtt *gen6_ppgtt_create(struct drm_i915_private *i915)
err_scratch:
gen6_ppgtt_free_scratch(&ppgtt->base.vm);
err_work:
kfree(ppgtt->work);
err_free:
kfree(ppgtt);
return ERR_PTR(err);
@ -2752,6 +2808,12 @@ int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
if (ret)
return ret;
if (USES_GUC(dev_priv)) {
ret = intel_guc_reserve_ggtt_top(&dev_priv->guc);
if (ret)
goto err_reserve;
}
/* Clear any non-preallocated blocks */
drm_mm_for_each_hole(entry, &ggtt->vm.mm, hole_start, hole_end) {
DRM_DEBUG_KMS("clearing unused GTT space: [%lx, %lx]\n",
@ -2766,12 +2828,14 @@ int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
if (INTEL_PPGTT(dev_priv) == INTEL_PPGTT_ALIASING) {
ret = i915_gem_init_aliasing_ppgtt(dev_priv);
if (ret)
goto err;
goto err_appgtt;
}
return 0;
err:
err_appgtt:
intel_guc_release_ggtt_top(&dev_priv->guc);
err_reserve:
drm_mm_remove_node(&ggtt->error_capture);
return ret;
}
@ -2797,6 +2861,8 @@ void i915_ggtt_cleanup_hw(struct drm_i915_private *dev_priv)
if (drm_mm_node_allocated(&ggtt->error_capture))
drm_mm_remove_node(&ggtt->error_capture);
intel_guc_release_ggtt_top(&dev_priv->guc);
if (drm_mm_initialized(&ggtt->vm.mm)) {
intel_vgt_deballoon(dev_priv);
i915_address_space_fini(&ggtt->vm);
@ -3280,7 +3346,9 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
size = gen6_get_total_gtt_size(snb_gmch_ctl);
ggtt->vm.total = (size / sizeof(gen6_pte_t)) * I915_GTT_PAGE_SIZE;
ggtt->vm.clear_range = gen6_ggtt_clear_range;
ggtt->vm.clear_range = nop_clear_range;
if (!HAS_FULL_PPGTT(dev_priv) || intel_scanout_needs_vtd_wa(dev_priv))
ggtt->vm.clear_range = gen6_ggtt_clear_range;
ggtt->vm.insert_page = gen6_ggtt_insert_page;
ggtt->vm.insert_entries = gen6_ggtt_insert_entries;
ggtt->vm.cleanup = gen6_gmch_remove;
@ -3369,17 +3437,6 @@ int i915_ggtt_probe_hw(struct drm_i915_private *dev_priv)
if (ret)
return ret;
/* Trim the GGTT to fit the GuC mappable upper range (when enabled).
* This is easier than doing range restriction on the fly, as we
* currently don't have any bits spare to pass in this upper
* restriction!
*/
if (USES_GUC(dev_priv)) {
ggtt->vm.total = min_t(u64, ggtt->vm.total, GUC_GGTT_TOP);
ggtt->mappable_end =
min_t(u64, ggtt->mappable_end, ggtt->vm.total);
}
if ((ggtt->vm.total - 1) >> 32) {
DRM_ERROR("We never expected a Global GTT with more than 32bits"
" of address space! Found %lldM!\n",
@ -3608,6 +3665,89 @@ err_st_alloc:
return ERR_PTR(ret);
}
static struct scatterlist *
remap_pages(struct drm_i915_gem_object *obj, unsigned int offset,
unsigned int width, unsigned int height,
unsigned int stride,
struct sg_table *st, struct scatterlist *sg)
{
unsigned int row;
for (row = 0; row < height; row++) {
unsigned int left = width * I915_GTT_PAGE_SIZE;
while (left) {
dma_addr_t addr;
unsigned int length;
/* We don't need the pages, but need to initialize
* the entries so the sg list can be happily traversed.
* All we need are the DMA addresses.
*/
addr = i915_gem_object_get_dma_address_len(obj, offset, &length);
length = min(left, length);
st->nents++;
sg_set_page(sg, NULL, length, 0);
sg_dma_address(sg) = addr;
sg_dma_len(sg) = length;
sg = sg_next(sg);
offset += length / I915_GTT_PAGE_SIZE;
left -= length;
}
offset += stride - width;
}
return sg;
}
static noinline struct sg_table *
intel_remap_pages(struct intel_remapped_info *rem_info,
struct drm_i915_gem_object *obj)
{
unsigned int size = intel_remapped_info_size(rem_info);
struct sg_table *st;
struct scatterlist *sg;
int ret = -ENOMEM;
int i;
/* Allocate target SG list. */
st = kmalloc(sizeof(*st), GFP_KERNEL);
if (!st)
goto err_st_alloc;
ret = sg_alloc_table(st, size, GFP_KERNEL);
if (ret)
goto err_sg_alloc;
st->nents = 0;
sg = st->sgl;
for (i = 0 ; i < ARRAY_SIZE(rem_info->plane); i++) {
sg = remap_pages(obj, rem_info->plane[i].offset,
rem_info->plane[i].width, rem_info->plane[i].height,
rem_info->plane[i].stride, st, sg);
}
i915_sg_trim(st);
return st;
err_sg_alloc:
kfree(st);
err_st_alloc:
DRM_DEBUG_DRIVER("Failed to create remapped mapping for object size %zu! (%ux%u tiles, %u pages)\n",
obj->base.size, rem_info->plane[0].width, rem_info->plane[0].height, size);
return ERR_PTR(ret);
}
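
intel_remap_pages() above sizes its sg table with intel_remapped_info_size(), whose definition lies outside this hunk. A hedged sketch of what that helper is assumed to compute, given that remap_pages() walks width * height GTT pages per plane:

static unsigned int intel_remapped_info_size(const struct intel_remapped_info *rem_info)
{
	unsigned int size = 0;
	int i;

	/* Budget one sg entry per GTT page of each remapped plane. */
	for (i = 0; i < ARRAY_SIZE(rem_info->plane); i++)
		size += rem_info->plane[i].width * rem_info->plane[i].height;

	return size;
}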
static noinline struct sg_table *
intel_partial_pages(const struct i915_ggtt_view *view,
struct drm_i915_gem_object *obj)
@ -3686,6 +3826,11 @@ i915_get_ggtt_vma_pages(struct i915_vma *vma)
intel_rotate_pages(&vma->ggtt_view.rotated, vma->obj);
break;
case I915_GGTT_VIEW_REMAPPED:
vma->pages =
intel_remap_pages(&vma->ggtt_view.remapped, vma->obj);
break;
case I915_GGTT_VIEW_PARTIAL:
vma->pages = intel_partial_pages(&vma->ggtt_view, vma->obj);
break;

View File

@ -38,8 +38,8 @@
#include <linux/mm.h>
#include <linux/pagevec.h>
#include "gt/intel_reset.h"
#include "i915_request.h"
#include "i915_reset.h"
#include "i915_selftest.h"
#include "i915_timeline.h"
@ -163,11 +163,18 @@ typedef u64 gen8_ppgtt_pml4e_t;
struct sg_table;
struct intel_remapped_plane_info {
/* in gtt pages */
unsigned int width, height, stride, offset;
} __packed;
struct intel_remapped_info {
struct intel_remapped_plane_info plane[2];
unsigned int unused_mbz;
} __packed;
struct intel_rotation_info {
struct intel_rotation_plane_info {
/* tiles */
unsigned int width, height, stride, offset;
} plane[2];
struct intel_remapped_plane_info plane[2];
} __packed;
struct intel_partial_info {
@ -179,12 +186,20 @@ enum i915_ggtt_view_type {
I915_GGTT_VIEW_NORMAL = 0,
I915_GGTT_VIEW_ROTATED = sizeof(struct intel_rotation_info),
I915_GGTT_VIEW_PARTIAL = sizeof(struct intel_partial_info),
I915_GGTT_VIEW_REMAPPED = sizeof(struct intel_remapped_info),
};
static inline void assert_i915_gem_gtt_types(void)
{
BUILD_BUG_ON(sizeof(struct intel_rotation_info) != 8*sizeof(unsigned int));
BUILD_BUG_ON(sizeof(struct intel_partial_info) != sizeof(u64) + sizeof(unsigned int));
BUILD_BUG_ON(sizeof(struct intel_remapped_info) != 9*sizeof(unsigned int));
/* Check that rotation/remapped shares offsets for simplicity */
BUILD_BUG_ON(offsetof(struct intel_remapped_info, plane[0]) !=
offsetof(struct intel_rotation_info, plane[0]));
BUILD_BUG_ON(offsetofend(struct intel_remapped_info, plane[1]) !=
offsetofend(struct intel_rotation_info, plane[1]));
/* As we encode the size of each branch inside the union into its type,
* we have to be careful that each branch has a unique size.
@ -193,6 +208,7 @@ static inline void assert_i915_gem_gtt_types(void)
case I915_GGTT_VIEW_NORMAL:
case I915_GGTT_VIEW_PARTIAL:
case I915_GGTT_VIEW_ROTATED:
case I915_GGTT_VIEW_REMAPPED:
/* gcc complains if these are identical cases */
break;
}
@ -204,6 +220,7 @@ struct i915_ggtt_view {
/* Members need to contain no holes/padding */
struct intel_partial_info partial;
struct intel_rotation_info rotated;
struct intel_remapped_info remapped;
};
};
@ -384,6 +401,7 @@ struct i915_ggtt {
u32 pin_bias;
struct drm_mm_node error_capture;
struct drm_mm_node uc_fw;
};
struct i915_hw_ppgtt {
@ -396,8 +414,6 @@ struct i915_hw_ppgtt {
struct i915_page_directory_pointer pdp; /* GEN8+ */
struct i915_page_directory pd; /* GEN6-7 */
};
u32 user_handle;
};
struct gen6_hw_ppgtt {
@ -408,6 +424,8 @@ struct gen6_hw_ppgtt {
unsigned int pin_count;
bool scan_for_unused_pt;
struct gen6_ppgtt_cleanup_work *work;
};
#define __to_gen6_ppgtt(base) container_of(base, struct gen6_hw_ppgtt, base)

View File

@ -28,9 +28,6 @@
#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
/* convert swiotlb segment size into sensible units (pages)! */
#define IO_TLB_SEGPAGES (IO_TLB_SEGSIZE << IO_TLB_SHIFT >> PAGE_SHIFT)
static void internal_free_pages(struct sg_table *st)
{
struct scatterlist *sg;

View File

@ -0,0 +1,250 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#include "gt/intel_gt_pm.h"
#include "i915_drv.h"
#include "i915_gem_pm.h"
#include "i915_globals.h"
static void i915_gem_park(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
lockdep_assert_held(&i915->drm.struct_mutex);
for_each_engine(engine, i915, id)
i915_gem_batch_pool_fini(&engine->batch_pool);
i915_timelines_park(i915);
i915_vma_parked(i915);
i915_globals_park();
}
static void idle_work_handler(struct work_struct *work)
{
struct drm_i915_private *i915 =
container_of(work, typeof(*i915), gem.idle_work);
bool restart = true;
cancel_delayed_work(&i915->gem.retire_work);
mutex_lock(&i915->drm.struct_mutex);
intel_wakeref_lock(&i915->gt.wakeref);
if (!intel_wakeref_active(&i915->gt.wakeref) && !work_pending(work)) {
i915_gem_park(i915);
restart = false;
}
intel_wakeref_unlock(&i915->gt.wakeref);
mutex_unlock(&i915->drm.struct_mutex);
if (restart)
queue_delayed_work(i915->wq,
&i915->gem.retire_work,
round_jiffies_up_relative(HZ));
}
static void retire_work_handler(struct work_struct *work)
{
struct drm_i915_private *i915 =
container_of(work, typeof(*i915), gem.retire_work.work);
/* Come back later if the device is busy... */
if (mutex_trylock(&i915->drm.struct_mutex)) {
i915_retire_requests(i915);
mutex_unlock(&i915->drm.struct_mutex);
}
queue_delayed_work(i915->wq,
&i915->gem.retire_work,
round_jiffies_up_relative(HZ));
}
static int pm_notifier(struct notifier_block *nb,
unsigned long action,
void *data)
{
struct drm_i915_private *i915 =
container_of(nb, typeof(*i915), gem.pm_notifier);
switch (action) {
case INTEL_GT_UNPARK:
i915_globals_unpark();
queue_delayed_work(i915->wq,
&i915->gem.retire_work,
round_jiffies_up_relative(HZ));
break;
case INTEL_GT_PARK:
queue_work(i915->wq, &i915->gem.idle_work);
break;
}
return NOTIFY_OK;
}
static bool switch_to_kernel_context_sync(struct drm_i915_private *i915)
{
bool result = true;
do {
if (i915_gem_wait_for_idle(i915,
I915_WAIT_LOCKED |
I915_WAIT_FOR_IDLE_BOOST,
I915_GEM_IDLE_TIMEOUT) == -ETIME) {
/* XXX hide warning from gem_eio */
if (i915_modparams.reset) {
dev_err(i915->drm.dev,
"Failed to idle engines, declaring wedged!\n");
GEM_TRACE_DUMP();
}
/*
* Forcibly cancel outstanding work and leave
* the gpu quiet.
*/
i915_gem_set_wedged(i915);
result = false;
}
} while (i915_retire_requests(i915) && result);
GEM_BUG_ON(i915->gt.awake);
return result;
}
bool i915_gem_load_power_context(struct drm_i915_private *i915)
{
return switch_to_kernel_context_sync(i915);
}
void i915_gem_suspend(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
flush_workqueue(i915->wq);
mutex_lock(&i915->drm.struct_mutex);
/*
* We have to flush all the executing contexts to main memory so
* that they can be saved in the hibernation image. To ensure the last
* context image is coherent, we have to switch away from it. That
* leaves the i915->kernel_context still active when
* we actually suspend, and its image in memory may not match the GPU
* state. Fortunately, the kernel_context is disposable and we do
* not rely on its state.
*/
switch_to_kernel_context_sync(i915);
mutex_unlock(&i915->drm.struct_mutex);
/*
* Assert that we successfully flushed all the work and
* reset the GPU back to its idle, low power state.
*/
GEM_BUG_ON(i915->gt.awake);
flush_work(&i915->gem.idle_work);
cancel_delayed_work_sync(&i915->gpu_error.hangcheck_work);
i915_gem_drain_freed_objects(i915);
intel_uc_suspend(i915);
}
void i915_gem_suspend_late(struct drm_i915_private *i915)
{
struct drm_i915_gem_object *obj;
struct list_head *phases[] = {
&i915->mm.unbound_list,
&i915->mm.bound_list,
NULL
}, **phase;
/*
* Neither the BIOS, ourselves nor any other kernel
* expects the system to be in execlists mode on startup,
* so we need to reset the GPU back to legacy mode. And the only
* known way to disable logical contexts is through a GPU reset.
*
* So in order to leave the system in a known default configuration,
* always reset the GPU upon unload and suspend. Afterwards we then
* clean up the GEM state tracking, flushing off the requests and
* leaving the system in a known idle state.
*
* Note that it is of the utmost importance that the GPU is idle and
* all stray writes are flushed *before* we dismantle the backing
* storage for the pinned objects.
*
* However, since we are uncertain that resetting the GPU on older
* machines is a good idea, we don't - just in case it leaves the
* machine in an unusable condition.
*/
mutex_lock(&i915->drm.struct_mutex);
for (phase = phases; *phase; phase++) {
list_for_each_entry(obj, *phase, mm.link)
WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
}
mutex_unlock(&i915->drm.struct_mutex);
intel_uc_sanitize(i915);
i915_gem_sanitize(i915);
}
void i915_gem_resume(struct drm_i915_private *i915)
{
GEM_TRACE("\n");
WARN_ON(i915->gt.awake);
mutex_lock(&i915->drm.struct_mutex);
intel_uncore_forcewake_get(&i915->uncore, FORCEWAKE_ALL);
i915_gem_restore_gtt_mappings(i915);
i915_gem_restore_fences(i915);
/*
* As we didn't flush the kernel context before suspend, we cannot
* guarantee that the context image is complete. So let's just reset
* it and start again.
*/
intel_gt_resume(i915);
if (i915_gem_init_hw(i915))
goto err_wedged;
intel_uc_resume(i915);
/* Always reload a context for powersaving. */
if (!i915_gem_load_power_context(i915))
goto err_wedged;
out_unlock:
intel_uncore_forcewake_put(&i915->uncore, FORCEWAKE_ALL);
mutex_unlock(&i915->drm.struct_mutex);
return;
err_wedged:
if (!i915_reset_failed(i915)) {
dev_err(i915->drm.dev,
"Failed to re-initialize GPU, declaring it wedged!\n");
i915_gem_set_wedged(i915);
}
goto out_unlock;
}
void i915_gem_init__pm(struct drm_i915_private *i915)
{
INIT_WORK(&i915->gem.idle_work, idle_work_handler);
INIT_DELAYED_WORK(&i915->gem.retire_work, retire_work_handler);
i915->gem.pm_notifier.notifier_call = pm_notifier;
blocking_notifier_chain_register(&i915->gt.pm_notifications,
&i915->gem.pm_notifier);
}

View File

@ -0,0 +1,25 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#ifndef __I915_GEM_PM_H__
#define __I915_GEM_PM_H__
#include <linux/types.h>
struct drm_i915_private;
struct work_struct;
void i915_gem_init__pm(struct drm_i915_private *i915);
bool i915_gem_load_power_context(struct drm_i915_private *i915);
void i915_gem_resume(struct drm_i915_private *i915);
void i915_gem_idle_work_handler(struct work_struct *work);
void i915_gem_suspend(struct drm_i915_private *i915);
void i915_gem_suspend_late(struct drm_i915_private *i915);
#endif /* __I915_GEM_PM_H__ */

View File

@ -114,6 +114,67 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj)
return !i915_gem_object_has_pages(obj);
}
static void __start_writeback(struct drm_i915_gem_object *obj,
unsigned int flags)
{
struct address_space *mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_NONE,
.nr_to_write = SWAP_CLUSTER_MAX,
.range_start = 0,
.range_end = LLONG_MAX,
.for_reclaim = 1,
};
unsigned long i;
lockdep_assert_held(&obj->mm.lock);
GEM_BUG_ON(i915_gem_object_has_pages(obj));
switch (obj->mm.madv) {
case I915_MADV_DONTNEED:
__i915_gem_object_truncate(obj);
case __I915_MADV_PURGED:
return;
}
if (!obj->base.filp)
return;
if (!(flags & I915_SHRINK_WRITEBACK))
return;
/*
* Leave mmappings intact (GTT will have been revoked on unbinding,
* leaving only CPU mmappings around) and add those pages to the LRU
* instead of invoking writeback so they are aged and paged out
* as normal.
*/
mapping = obj->base.filp->f_mapping;
/* Begin writeback on each dirty page */
for (i = 0; i < obj->base.size >> PAGE_SHIFT; i++) {
struct page *page;
page = find_lock_entry(mapping, i);
if (!page || xa_is_value(page))
continue;
if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
int ret;
SetPageReclaim(page);
ret = mapping->a_ops->writepage(page, &wbc);
if (!PageWriteback(page))
ClearPageReclaim(page);
if (!ret)
goto put;
}
unlock_page(page);
put:
put_page(page);
}
}
/**
* i915_gem_shrink - Shrink buffer object caches
* @i915: i915 device
@ -254,7 +315,7 @@ i915_gem_shrink(struct drm_i915_private *i915,
mutex_lock_nested(&obj->mm.lock,
I915_MM_SHRINKER);
if (!i915_gem_object_has_pages(obj)) {
__i915_gem_object_invalidate(obj);
__start_writeback(obj, flags);
count += obj->base.size >> PAGE_SHIFT;
}
mutex_unlock(&obj->mm.lock);
@ -366,13 +427,15 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
&sc->nr_scanned,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_PURGEABLE);
I915_SHRINK_PURGEABLE |
I915_SHRINK_WRITEBACK);
if (sc->nr_scanned < sc->nr_to_scan)
freed += i915_gem_shrink(i915,
sc->nr_to_scan - sc->nr_scanned,
&sc->nr_scanned,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND);
I915_SHRINK_UNBOUND |
I915_SHRINK_WRITEBACK);
if (sc->nr_scanned < sc->nr_to_scan && current_is_kswapd()) {
intel_wakeref_t wakeref;
@ -382,7 +445,8 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
&sc->nr_scanned,
I915_SHRINK_ACTIVE |
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND);
I915_SHRINK_UNBOUND |
I915_SHRINK_WRITEBACK);
}
}
@ -404,7 +468,8 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
with_intel_runtime_pm(i915, wakeref)
freed_pages += i915_gem_shrink(i915, -1UL, NULL,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND);
I915_SHRINK_UNBOUND |
I915_SHRINK_WRITEBACK);
/* Because we may be allocating inside our own driver, we cannot
* assert that there are no objects with pinned pages that are not

View File

@ -36,8 +36,11 @@
#include <drm/drm_print.h>
#include "i915_gpu_error.h"
#include "i915_drv.h"
#include "i915_gpu_error.h"
#include "intel_atomic.h"
#include "intel_csr.h"
#include "intel_overlay.h"
static inline const struct intel_engine_cs *
engine_lookup(const struct drm_i915_private *i915, unsigned int id)

View File

@ -13,8 +13,9 @@
#include <drm/drm_mm.h>
#include "gt/intel_engine.h"
#include "intel_device_info.h"
#include "intel_ringbuffer.h"
#include "intel_uc_fw.h"
#include "i915_gem.h"
@ -178,8 +179,6 @@ struct i915_gpu_state {
struct scatterlist *sgl, *fit;
};
struct i915_gpu_restart;
struct i915_gpu_error {
/* For hangcheck timer */
#define DRM_I915_HANGCHECK_PERIOD 1500 /* in ms */
@ -240,8 +239,6 @@ struct i915_gpu_error {
wait_queue_head_t reset_queue;
struct srcu_struct reset_backoff_srcu;
struct i915_gpu_restart *restart;
};
struct drm_i915_error_state_buf {

View File

@ -38,8 +38,12 @@
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "i915_irq.h"
#include "i915_trace.h"
#include "intel_drv.h"
#include "intel_fifo_underrun.h"
#include "intel_hotplug.h"
#include "intel_lpe_audio.h"
#include "intel_psr.h"
/**
@ -1301,7 +1305,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
if ((pm_iir & dev_priv->pm_rps_events) == 0 && !client_boost)
goto out;
mutex_lock(&dev_priv->pcu_lock);
mutex_lock(&rps->lock);
pm_iir |= vlv_wa_c0_ei(dev_priv, pm_iir);
@ -1367,7 +1371,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
rps->last_adj = 0;
}
mutex_unlock(&dev_priv->pcu_lock);
mutex_unlock(&rps->lock);
out:
/* Make sure not to corrupt PMIMR state used by ringbuffer on GEN6 */

View File

@ -0,0 +1,114 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2019 Intel Corporation
*/
#ifndef __I915_IRQ_H__
#define __I915_IRQ_H__
#include <linux/types.h>
#include "i915_drv.h"
struct drm_i915_private;
struct intel_crtc;
extern void intel_irq_init(struct drm_i915_private *dev_priv);
extern void intel_irq_fini(struct drm_i915_private *dev_priv);
int intel_irq_install(struct drm_i915_private *dev_priv);
void intel_irq_uninstall(struct drm_i915_private *dev_priv);
u32 i915_pipestat_enable_mask(struct drm_i915_private *dev_priv,
enum pipe pipe);
void
i915_enable_pipestat(struct drm_i915_private *dev_priv, enum pipe pipe,
u32 status_mask);
void
i915_disable_pipestat(struct drm_i915_private *dev_priv, enum pipe pipe,
u32 status_mask);
void valleyview_enable_display_irqs(struct drm_i915_private *dev_priv);
void valleyview_disable_display_irqs(struct drm_i915_private *dev_priv);
void i915_hotplug_interrupt_update(struct drm_i915_private *dev_priv,
u32 mask,
u32 bits);
void ilk_update_display_irq(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void
ilk_enable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
ilk_update_display_irq(dev_priv, bits, bits);
}
static inline void
ilk_disable_display_irq(struct drm_i915_private *dev_priv, u32 bits)
{
ilk_update_display_irq(dev_priv, bits, 0);
}
void bdw_update_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void bdw_enable_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(dev_priv, pipe, bits, bits);
}
static inline void bdw_disable_pipe_irq(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 bits)
{
bdw_update_pipe_irq(dev_priv, pipe, bits, 0);
}
void ibx_display_interrupt_update(struct drm_i915_private *dev_priv,
u32 interrupt_mask,
u32 enabled_irq_mask);
static inline void
ibx_enable_display_interrupt(struct drm_i915_private *dev_priv, u32 bits)
{
ibx_display_interrupt_update(dev_priv, bits, bits);
}
static inline void
ibx_disable_display_interrupt(struct drm_i915_private *dev_priv, u32 bits)
{
ibx_display_interrupt_update(dev_priv, bits, 0);
}
void gen5_enable_gt_irq(struct drm_i915_private *dev_priv, u32 mask);
void gen5_disable_gt_irq(struct drm_i915_private *dev_priv, u32 mask);
void gen6_mask_pm_irq(struct drm_i915_private *dev_priv, u32 mask);
void gen6_unmask_pm_irq(struct drm_i915_private *dev_priv, u32 mask);
void gen11_reset_rps_interrupts(struct drm_i915_private *dev_priv);
void gen6_reset_rps_interrupts(struct drm_i915_private *dev_priv);
void gen6_enable_rps_interrupts(struct drm_i915_private *dev_priv);
void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv);
void gen6_rps_reset_ei(struct drm_i915_private *dev_priv);
static inline u32 gen6_sanitize_rps_pm_mask(const struct drm_i915_private *i915,
u32 mask)
{
return mask & ~i915->gt_pm.rps.pm_intrmsk_mbz;
}
void intel_runtime_pm_disable_interrupts(struct drm_i915_private *dev_priv);
void intel_runtime_pm_enable_interrupts(struct drm_i915_private *dev_priv);
static inline bool intel_irqs_enabled(struct drm_i915_private *dev_priv)
{
/*
* We only use drm_irq_uninstall() at unload and VT switch, so
* this is the only thing we need to check.
*/
return dev_priv->runtime_pm.irqs_enabled;
}
int intel_get_crtc_scanline(struct intel_crtc *crtc);
void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
u8 pipe_mask);
void gen8_irq_power_well_pre_disable(struct drm_i915_private *dev_priv,
u8 pipe_mask);
void gen9_reset_guc_interrupts(struct drm_i915_private *dev_priv);
void gen9_enable_guc_interrupts(struct drm_i915_private *dev_priv);
void gen9_disable_guc_interrupts(struct drm_i915_private *dev_priv);
#endif /* __I915_IRQ_H__ */

View File

@ -370,6 +370,7 @@ static const struct intel_device_info intel_ironlake_m_info = {
.has_llc = 1, \
.has_rc6 = 1, \
.has_rc6p = 1, \
.has_rps = true, \
.ppgtt_type = INTEL_PPGTT_ALIASING, \
.ppgtt_size = 31, \
I9XX_PIPE_OFFSETS, \
@ -417,6 +418,7 @@ static const struct intel_device_info intel_sandybridge_m_gt2_info = {
.has_llc = 1, \
.has_rc6 = 1, \
.has_rc6p = 1, \
.has_rps = true, \
.ppgtt_type = INTEL_PPGTT_FULL, \
.ppgtt_size = 31, \
IVB_PIPE_OFFSETS, \
@ -470,6 +472,7 @@ static const struct intel_device_info intel_valleyview_info = {
.num_pipes = 2,
.has_runtime_pm = 1,
.has_rc6 = 1,
.has_rps = true,
.display.has_gmch = 1,
.display.has_hotplug = 1,
.ppgtt_type = INTEL_PPGTT_FULL,
@ -565,6 +568,7 @@ static const struct intel_device_info intel_cherryview_info = {
.has_64bit_reloc = 1,
.has_runtime_pm = 1,
.has_rc6 = 1,
.has_rps = true,
.has_logical_ring_contexts = 1,
.display.has_gmch = 1,
.ppgtt_type = INTEL_PPGTT_FULL,
@ -596,8 +600,6 @@ static const struct intel_device_info intel_cherryview_info = {
#define SKL_PLATFORM \
GEN9_FEATURES, \
/* Display WA #0477 WaDisableIPC: skl */ \
.display.has_ipc = 0, \
PLATFORM(INTEL_SKYLAKE)
static const struct intel_device_info intel_skylake_gt1_info = {
@ -640,6 +642,7 @@ static const struct intel_device_info intel_skylake_gt4_info = {
.has_runtime_pm = 1, \
.display.has_csr = 1, \
.has_rc6 = 1, \
.has_rps = true, \
.display.has_dp_mst = 1, \
.has_logical_ring_contexts = 1, \
.has_logical_ring_preemption = 1, \

View File

@ -195,6 +195,8 @@
#include <linux/sizes.h>
#include <linux/uuid.h>
#include "gt/intel_lrc_reg.h"
#include "i915_drv.h"
#include "i915_oa_hsw.h"
#include "i915_oa_bdw.h"
@ -210,7 +212,6 @@
#include "i915_oa_cflgt3.h"
#include "i915_oa_cnl.h"
#include "i915_oa_icl.h"
#include "intel_lrc_reg.h"
/* HW requires this to be a power of two, between 128k and 16M, though driver
* is currently generally designed assuming the largest 16M size is used such
@ -1202,28 +1203,35 @@ static int i915_oa_read(struct i915_perf_stream *stream,
static struct intel_context *oa_pin_context(struct drm_i915_private *i915,
struct i915_gem_context *ctx)
{
struct intel_engine_cs *engine = i915->engine[RCS0];
struct i915_gem_engines_iter it;
struct intel_context *ce;
int ret;
int err;
ret = i915_mutex_lock_interruptible(&i915->drm);
if (ret)
return ERR_PTR(ret);
err = i915_mutex_lock_interruptible(&i915->drm);
if (err)
return ERR_PTR(err);
for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
if (ce->engine->class != RENDER_CLASS)
continue;
/*
* As the ID is the gtt offset of the context's vma we
* pin the vma to ensure the ID remains fixed.
*/
err = intel_context_pin(ce);
if (err == 0) {
i915->perf.oa.pinned_ctx = ce;
break;
}
}
i915_gem_context_unlock_engines(ctx);
/*
* As the ID is the gtt offset of the context's vma we
* pin the vma to ensure the ID remains fixed.
*
* NB: implied RCS engine...
*/
ce = intel_context_pin(ctx, engine);
mutex_unlock(&i915->drm.struct_mutex);
if (IS_ERR(ce))
return ce;
if (err)
return ERR_PTR(err);
i915->perf.oa.pinned_ctx = ce;
return ce;
return i915->perf.oa.pinned_ctx;
}
/**
@ -1679,7 +1687,7 @@ gen8_update_reg_state_unlocked(struct intel_context *ce,
CTX_REG(reg_state,
CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
gen8_make_rpcs(i915, &ce->sseu));
intel_sseu_make_rpcs(i915, &ce->sseu));
}
/*
@ -1709,7 +1717,6 @@ gen8_update_reg_state_unlocked(struct intel_context *ce,
static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
const struct i915_oa_config *oa_config)
{
struct intel_engine_cs *engine = dev_priv->engine[RCS0];
unsigned int map_type = i915_coherent_map_type(dev_priv);
struct i915_gem_context *ctx;
struct i915_request *rq;
@ -1738,30 +1745,43 @@ static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
/* Update all contexts now that we've stalled the submission. */
list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
struct intel_context *ce = intel_context_lookup(ctx, engine);
u32 *regs;
struct i915_gem_engines_iter it;
struct intel_context *ce;
/* OA settings will be set upon first use */
if (!ce || !ce->state)
continue;
for_each_gem_engine(ce,
i915_gem_context_lock_engines(ctx),
it) {
u32 *regs;
regs = i915_gem_object_pin_map(ce->state->obj, map_type);
if (IS_ERR(regs))
return PTR_ERR(regs);
if (ce->engine->class != RENDER_CLASS)
continue;
ce->state->obj->mm.dirty = true;
regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs);
/* OA settings will be set upon first use */
if (!ce->state)
continue;
gen8_update_reg_state_unlocked(ce, regs, oa_config);
regs = i915_gem_object_pin_map(ce->state->obj,
map_type);
if (IS_ERR(regs)) {
i915_gem_context_unlock_engines(ctx);
return PTR_ERR(regs);
}
i915_gem_object_unpin_map(ce->state->obj);
ce->state->obj->mm.dirty = true;
regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs);
gen8_update_reg_state_unlocked(ce, regs, oa_config);
i915_gem_object_unpin_map(ce->state->obj);
}
i915_gem_context_unlock_engines(ctx);
}
/*
* Apply the configuration by doing one context restore of the edited
* context image.
*/
rq = i915_request_alloc(engine, dev_priv->kernel_context);
rq = i915_request_create(dev_priv->engine[RCS0]->kernel_context);
if (IS_ERR(rq))
return PTR_ERR(rq);

View File

@ -6,9 +6,12 @@
#include <linux/irq.h>
#include <linux/pm_runtime.h>
#include "i915_pmu.h"
#include "intel_ringbuffer.h"
#include "gt/intel_engine.h"
#include "i915_drv.h"
#include "i915_pmu.h"
#include "intel_pm.h"
/* Frequency for the sampling timer for events which need it. */
#define FREQUENCY 200

View File

@ -96,9 +96,58 @@ static int query_topology_info(struct drm_i915_private *dev_priv,
return total_length;
}
static int
query_engine_info(struct drm_i915_private *i915,
struct drm_i915_query_item *query_item)
{
struct drm_i915_query_engine_info __user *query_ptr =
u64_to_user_ptr(query_item->data_ptr);
struct drm_i915_engine_info __user *info_ptr;
struct drm_i915_query_engine_info query;
struct drm_i915_engine_info info = { };
struct intel_engine_cs *engine;
enum intel_engine_id id;
int len, ret;
if (query_item->flags)
return -EINVAL;
len = sizeof(struct drm_i915_query_engine_info) +
RUNTIME_INFO(i915)->num_engines *
sizeof(struct drm_i915_engine_info);
ret = copy_query_item(&query, sizeof(query), len, query_item);
if (ret != 0)
return ret;
if (query.num_engines || query.rsvd[0] || query.rsvd[1] ||
query.rsvd[2])
return -EINVAL;
info_ptr = &query_ptr->engines[0];
for_each_engine(engine, i915, id) {
info.engine.engine_class = engine->uabi_class;
info.engine.engine_instance = engine->instance;
info.capabilities = engine->uabi_capabilities;
if (__copy_to_user(info_ptr, &info, sizeof(info)))
return -EFAULT;
query.num_engines++;
info_ptr++;
}
if (__copy_to_user(query_ptr, &query, sizeof(query)))
return -EFAULT;
return len;
}
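
query_engine_info() above implements the kernel side of the engine discovery query. A hedged userspace sketch using the usual two-pass i915 query pattern; DRM_I915_QUERY_ENGINE_INFO is the query id added alongside this code and is assumed here rather than shown in the hunk:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Returns a malloc'ed engine list (caller frees), or NULL on failure. */
static struct drm_i915_query_engine_info *discover_engines(int fd)
{
	struct drm_i915_query_item item = {
		.query_id = DRM_I915_QUERY_ENGINE_INFO,
	};
	struct drm_i915_query query = {
		.num_items = 1,
		.items_ptr = (uintptr_t)&item,
	};
	struct drm_i915_query_engine_info *info;

	/* Pass 1: with item.length == 0 the kernel reports the size needed. */
	if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0)
		return NULL;

	/* The kernel rejects non-zero num_engines/rsvd[] on input; calloc
	 * keeps the header zeroed, as required by the checks above.
	 */
	info = calloc(1, item.length);
	if (!info)
		return NULL;

	/* Pass 2: the kernel fills num_engines and the engines[] entries. */
	item.data_ptr = (uintptr_t)info;
	if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0) {
		free(info);
		return NULL;
	}

	return info;
}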
static int (* const i915_query_funcs[])(struct drm_i915_private *dev_priv,
struct drm_i915_query_item *query_item) = {
query_topology_info,
query_engine_info,
};
int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)

View File

@ -1813,7 +1813,6 @@ enum i915_power_well_id {
#define PWR_DOWN_LN_3 (0x8 << 4)
#define PWR_DOWN_LN_2_1_0 (0x7 << 4)
#define PWR_DOWN_LN_1_0 (0x3 << 4)
#define PWR_DOWN_LN_1 (0x2 << 4)
#define PWR_DOWN_LN_3_1 (0xa << 4)
#define PWR_DOWN_LN_3_1_0 (0xb << 4)
#define PWR_DOWN_LN_MASK (0xf << 4)
@ -2870,6 +2869,7 @@ enum i915_power_well_id {
#define GFX_FLSH_CNTL_GEN6 _MMIO(0x101008)
#define GFX_FLSH_CNTL_EN (1 << 0)
#define ECOSKPD _MMIO(0x21d0)
#define ECO_CONSTANT_BUFFER_SR_DISABLE REG_BIT(4)
#define ECO_GATING_CX_ONLY (1 << 3)
#define ECO_FLIP_DONE (1 << 0)
@ -5769,6 +5769,7 @@ enum {
#define _PIPE_MISC_B 0x71030
#define PIPEMISC_YUV420_ENABLE (1 << 27)
#define PIPEMISC_YUV420_MODE_FULL_BLEND (1 << 26)
#define PIPEMISC_HDR_MODE_PRECISION (1 << 23) /* icl+ */
#define PIPEMISC_OUTPUT_COLORSPACE_YUV (1 << 11)
#define PIPEMISC_DITHER_BPC_MASK (7 << 5)
#define PIPEMISC_DITHER_8_BPC (0 << 5)
@ -7620,6 +7621,9 @@ enum {
#define GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION (1 << 8)
#define GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE (1 << 0)
#define GEN8_L3CNTLREG _MMIO(0x7034)
#define GEN8_ERRDETBCTRL (1 << 9)
#define GEN11_COMMON_SLICE_CHICKEN3 _MMIO(0x7304)
#define GEN11_BLEND_EMB_FIX_DISABLE_IN_RCC (1 << 11)
@ -8862,6 +8866,7 @@ enum {
#define GEN11_LSN_UNSLCVC_GAFS_HALF_SF_MAXALLOC (1 << 7)
#define GEN10_SAMPLER_MODE _MMIO(0xE18C)
#define GEN11_SAMPLER_ENABLE_HEADLESS_MSG REG_BIT(5)
/* IVYBRIDGE DPF */
#define GEN7_L3CDERRST1(slice) _MMIO(0xB008 + (slice) * 0x200) /* L3CD Error Status 1 */
@ -9009,32 +9014,32 @@ enum {
/* HSW Audio */
#define _HSW_AUD_CONFIG_A 0x65000
#define _HSW_AUD_CONFIG_B 0x65100
#define HSW_AUD_CFG(pipe) _MMIO_PIPE(pipe, _HSW_AUD_CONFIG_A, _HSW_AUD_CONFIG_B)
#define HSW_AUD_CFG(trans) _MMIO_TRANS(trans, _HSW_AUD_CONFIG_A, _HSW_AUD_CONFIG_B)
#define _HSW_AUD_MISC_CTRL_A 0x65010
#define _HSW_AUD_MISC_CTRL_B 0x65110
#define HSW_AUD_MISC_CTRL(pipe) _MMIO_PIPE(pipe, _HSW_AUD_MISC_CTRL_A, _HSW_AUD_MISC_CTRL_B)
#define HSW_AUD_MISC_CTRL(trans) _MMIO_TRANS(trans, _HSW_AUD_MISC_CTRL_A, _HSW_AUD_MISC_CTRL_B)
#define _HSW_AUD_M_CTS_ENABLE_A 0x65028
#define _HSW_AUD_M_CTS_ENABLE_B 0x65128
#define HSW_AUD_M_CTS_ENABLE(pipe) _MMIO_PIPE(pipe, _HSW_AUD_M_CTS_ENABLE_A, _HSW_AUD_M_CTS_ENABLE_B)
#define HSW_AUD_M_CTS_ENABLE(trans) _MMIO_TRANS(trans, _HSW_AUD_M_CTS_ENABLE_A, _HSW_AUD_M_CTS_ENABLE_B)
#define AUD_M_CTS_M_VALUE_INDEX (1 << 21)
#define AUD_M_CTS_M_PROG_ENABLE (1 << 20)
#define AUD_CONFIG_M_MASK 0xfffff
#define _HSW_AUD_DIP_ELD_CTRL_ST_A 0x650b4
#define _HSW_AUD_DIP_ELD_CTRL_ST_B 0x651b4
#define HSW_AUD_DIP_ELD_CTRL(pipe) _MMIO_PIPE(pipe, _HSW_AUD_DIP_ELD_CTRL_ST_A, _HSW_AUD_DIP_ELD_CTRL_ST_B)
#define HSW_AUD_DIP_ELD_CTRL(trans) _MMIO_TRANS(trans, _HSW_AUD_DIP_ELD_CTRL_ST_A, _HSW_AUD_DIP_ELD_CTRL_ST_B)
/* Audio Digital Converter */
#define _HSW_AUD_DIG_CNVT_1 0x65080
#define _HSW_AUD_DIG_CNVT_2 0x65180
#define AUD_DIG_CNVT(pipe) _MMIO_PIPE(pipe, _HSW_AUD_DIG_CNVT_1, _HSW_AUD_DIG_CNVT_2)
#define AUD_DIG_CNVT(trans) _MMIO_TRANS(trans, _HSW_AUD_DIG_CNVT_1, _HSW_AUD_DIG_CNVT_2)
#define DIP_PORT_SEL_MASK 0x3
#define _HSW_AUD_EDID_DATA_A 0x65050
#define _HSW_AUD_EDID_DATA_B 0x65150
#define HSW_AUD_EDID_DATA(pipe) _MMIO_PIPE(pipe, _HSW_AUD_EDID_DATA_A, _HSW_AUD_EDID_DATA_B)
#define HSW_AUD_EDID_DATA(trans) _MMIO_TRANS(trans, _HSW_AUD_EDID_DATA_A, _HSW_AUD_EDID_DATA_B)
#define HSW_AUD_PIPE_CONV_CFG _MMIO(0x6507c)
#define HSW_AUD_PIN_ELD_CP_VLD _MMIO(0x650c0)
@ -9523,6 +9528,7 @@ enum skl_power_gate {
#define TRANS_MSA_12_BPC (3 << 5)
#define TRANS_MSA_16_BPC (4 << 5)
#define TRANS_MSA_CEA_RANGE (1 << 3)
#define TRANS_MSA_USE_VSC_SDP (1 << 14)
/* LCPLL Control */
#define LCPLL_CTL _MMIO(0x130040)

View File

@ -32,13 +32,14 @@
#include "i915_active.h"
#include "i915_drv.h"
#include "i915_globals.h"
#include "i915_reset.h"
#include "intel_pm.h"
struct execute_cb {
struct list_head link;
struct irq_work work;
struct i915_sw_fence *fence;
void (*hook)(struct i915_request *rq, struct dma_fence *signal);
struct i915_request *signal;
};
static struct i915_global_request {
@ -132,19 +133,6 @@ i915_request_remove_from_client(struct i915_request *request)
spin_unlock(&file_priv->mm.lock);
}
static void reserve_gt(struct drm_i915_private *i915)
{
if (!i915->gt.active_requests++)
i915_gem_unpark(i915);
}
static void unreserve_gt(struct drm_i915_private *i915)
{
GEM_BUG_ON(!i915->gt.active_requests);
if (!--i915->gt.active_requests)
i915_gem_park(i915);
}
static void advance_ring(struct i915_request *request)
{
struct intel_ring *ring = request->ring;
@ -302,11 +290,10 @@ static void i915_request_retire(struct i915_request *request)
i915_request_remove_from_client(request);
intel_context_unpin(request->hw_context);
__retire_engine_upto(request->engine, request);
unreserve_gt(request->i915);
intel_context_exit(request->hw_context);
intel_context_unpin(request->hw_context);
i915_sched_node_fini(&request->sched);
i915_request_put(request);
@ -344,6 +331,17 @@ static void irq_execute_cb(struct irq_work *wrk)
kmem_cache_free(global.slab_execute_cbs, cb);
}
static void irq_execute_cb_hook(struct irq_work *wrk)
{
struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
cb->hook(container_of(cb->fence, struct i915_request, submit),
&cb->signal->fence);
i915_request_put(cb->signal);
irq_execute_cb(wrk);
}
static void __notify_execute_cb(struct i915_request *rq)
{
struct execute_cb *cb;
@ -370,14 +368,19 @@ static void __notify_execute_cb(struct i915_request *rq)
}
static int
i915_request_await_execution(struct i915_request *rq,
struct i915_request *signal,
gfp_t gfp)
__i915_request_await_execution(struct i915_request *rq,
struct i915_request *signal,
void (*hook)(struct i915_request *rq,
struct dma_fence *signal),
gfp_t gfp)
{
struct execute_cb *cb;
if (i915_request_is_active(signal))
if (i915_request_is_active(signal)) {
if (hook)
hook(rq, &signal->fence);
return 0;
}
cb = kmem_cache_alloc(global.slab_execute_cbs, gfp);
if (!cb)
@ -387,8 +390,18 @@ i915_request_await_execution(struct i915_request *rq,
i915_sw_fence_await(cb->fence);
init_irq_work(&cb->work, irq_execute_cb);
if (hook) {
cb->hook = hook;
cb->signal = i915_request_get(signal);
cb->work.func = irq_execute_cb_hook;
}
spin_lock_irq(&signal->lock);
if (i915_request_is_active(signal)) {
if (hook) {
hook(rq, &signal->fence);
i915_request_put(signal);
}
i915_sw_fence_complete(cb->fence);
kmem_cache_free(global.slab_execute_cbs, cb);
} else {
@ -466,6 +479,8 @@ void __i915_request_submit(struct i915_request *request)
/* Transfer from per-context onto the global per-engine timeline */
move_to_timeline(request, &engine->timeline);
engine->serial++;
trace_i915_request_execute(request);
}
@ -513,6 +528,12 @@ void __i915_request_unsubmit(struct i915_request *request)
/* Transfer back from the global per-engine timeline to per-context */
move_to_timeline(request, request->timeline);
/* We've already spun, don't charge on resubmitting. */
if (request->sched.semaphores && i915_request_started(request)) {
request->sched.attr.priority |= I915_PRIORITY_NOSEMAPHORE;
request->sched.semaphores = 0;
}
/*
* We don't need to wake_up any waiters on request->execute, they
* will get woken by any other event or us re-adding this request
@ -597,7 +618,7 @@ static void ring_retire_requests(struct intel_ring *ring)
}
static noinline struct i915_request *
i915_request_alloc_slow(struct intel_context *ce)
request_alloc_slow(struct intel_context *ce, gfp_t gfp)
{
struct intel_ring *ring = ce->ring;
struct i915_request *rq;
@ -605,6 +626,9 @@ i915_request_alloc_slow(struct intel_context *ce)
if (list_empty(&ring->request_list))
goto out;
if (!gfpflags_allow_blocking(gfp))
goto out;
/* Ratelimit ourselves to prevent oom from malicious clients */
rq = list_last_entry(&ring->request_list, typeof(*rq), ring_link);
cond_synchronize_rcu(rq->rcustate);
@ -613,62 +637,21 @@ i915_request_alloc_slow(struct intel_context *ce)
ring_retire_requests(ring);
out:
return kmem_cache_alloc(global.slab_requests, GFP_KERNEL);
return kmem_cache_alloc(global.slab_requests, gfp);
}
/**
* i915_request_alloc - allocate a request structure
*
* @engine: engine that we wish to issue the request on.
* @ctx: context that the request will be associated with.
*
* Returns a pointer to the allocated request if successful,
* or an error code if not.
*/
struct i915_request *
i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
__i915_request_create(struct intel_context *ce, gfp_t gfp)
{
struct drm_i915_private *i915 = engine->i915;
struct intel_context *ce;
struct i915_timeline *tl;
struct i915_timeline *tl = ce->ring->timeline;
struct i915_request *rq;
u32 seqno;
int ret;
lockdep_assert_held(&i915->drm.struct_mutex);
might_sleep_if(gfpflags_allow_blocking(gfp));
/*
* Preempt contexts are reserved for exclusive use to inject a
* preemption context switch. They are never to be used for any trivial
* request!
*/
GEM_BUG_ON(ctx == i915->preempt_context);
/*
* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
* EIO if the GPU is already wedged.
*/
ret = i915_terminally_wedged(i915);
if (ret)
return ERR_PTR(ret);
/*
* Pinning the contexts may generate requests in order to acquire
* GGTT space, so do this first before we reserve a seqno for
* ourselves.
*/
ce = intel_context_pin(ctx, engine);
if (IS_ERR(ce))
return ERR_CAST(ce);
reserve_gt(i915);
mutex_lock(&ce->ring->timeline->mutex);
/* Move our oldest request to the slab-cache (if not in use!) */
rq = list_first_entry(&ce->ring->request_list, typeof(*rq), ring_link);
if (!list_is_last(&rq->ring_link, &ce->ring->request_list) &&
i915_request_completed(rq))
i915_request_retire(rq);
/* Check that the caller provided an already pinned context */
__intel_context_pin(ce);
/*
* Beware: Dragons be flying overhead.
@ -700,30 +683,26 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
* Do not use kmem_cache_zalloc() here!
*/
rq = kmem_cache_alloc(global.slab_requests,
GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
gfp | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
if (unlikely(!rq)) {
rq = i915_request_alloc_slow(ce);
rq = request_alloc_slow(ce, gfp);
if (!rq) {
ret = -ENOMEM;
goto err_unreserve;
}
}
INIT_LIST_HEAD(&rq->active_list);
INIT_LIST_HEAD(&rq->execute_cb);
tl = ce->ring->timeline;
ret = i915_timeline_get_seqno(tl, rq, &seqno);
if (ret)
goto err_free;
rq->i915 = i915;
rq->engine = engine;
rq->gem_context = ctx;
rq->i915 = ce->engine->i915;
rq->hw_context = ce;
rq->gem_context = ce->gem_context;
rq->engine = ce->engine;
rq->ring = ce->ring;
rq->timeline = tl;
GEM_BUG_ON(rq->timeline == &engine->timeline);
GEM_BUG_ON(rq->timeline == &ce->engine->timeline);
rq->hwsp_seqno = tl->hwsp_seqno;
rq->hwsp_cacheline = tl->hwsp_cacheline;
rq->rcustate = get_state_synchronize_rcu(); /* acts as smp_mb() */
@ -743,6 +722,10 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
rq->batch = NULL;
rq->capture_list = NULL;
rq->waitboost = false;
rq->execution_mask = ALL_ENGINES;
INIT_LIST_HEAD(&rq->active_list);
INIT_LIST_HEAD(&rq->execute_cb);
/*
* Reserve space in the ring buffer for all the commands required to
@ -756,7 +739,8 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
* around inside i915_request_add() there is sufficient space at
* the beginning of the ring as well.
*/
rq->reserved_space = 2 * engine->emit_fini_breadcrumb_dw * sizeof(u32);
rq->reserved_space =
2 * rq->engine->emit_fini_breadcrumb_dw * sizeof(u32);
/*
* Record the position of the start of the request so that
@ -766,20 +750,16 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
*/
rq->head = rq->ring->emit;
ret = engine->request_alloc(rq);
ret = rq->engine->request_alloc(rq);
if (ret)
goto err_unwind;
rq->infix = rq->ring->emit; /* end of header; start of user payload */
/* Keep a second pin for the dual retirement along engine and ring */
__intel_context_pin(ce);
rq->infix = rq->ring->emit; /* end of header; start of user payload */
/* Check that we didn't interrupt ourselves with a new request */
lockdep_assert_held(&rq->timeline->mutex);
GEM_BUG_ON(rq->timeline->seqno != rq->fence.seqno);
rq->cookie = lockdep_pin_lock(&rq->timeline->mutex);
intel_context_mark_active(ce);
return rq;
err_unwind:
@ -793,12 +773,39 @@ err_unwind:
err_free:
kmem_cache_free(global.slab_requests, rq);
err_unreserve:
mutex_unlock(&ce->ring->timeline->mutex);
unreserve_gt(i915);
intel_context_unpin(ce);
return ERR_PTR(ret);
}
struct i915_request *
i915_request_create(struct intel_context *ce)
{
struct i915_request *rq;
intel_context_timeline_lock(ce);
/* Move our oldest request to the slab-cache (if not in use!) */
rq = list_first_entry(&ce->ring->request_list, typeof(*rq), ring_link);
if (!list_is_last(&rq->ring_link, &ce->ring->request_list) &&
i915_request_completed(rq))
i915_request_retire(rq);
intel_context_enter(ce);
rq = __i915_request_create(ce, GFP_KERNEL);
intel_context_exit(ce); /* active reference transferred to request */
if (IS_ERR(rq))
goto err_unlock;
/* Check that we do not interrupt ourselves with a new request */
rq->cookie = lockdep_pin_lock(&ce->ring->timeline->mutex);
return rq;
err_unlock:
intel_context_timeline_unlock(ce);
return rq;
}
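
For illustration, the new pinned-context request API introduced here can be exercised roughly as follows. This is a minimal sketch, not code from the series: the submit_nop_request() helper is hypothetical, the include paths are only indicative of the post-gt/ layout, and "ce" is assumed to already be pinned by the caller, as __i915_request_create() now requires.

#include <linux/err.h>
#include "gt/intel_context.h"
#include "i915_request.h"

/*
 * Minimal sketch: create and submit an (empty) request on an already
 * pinned intel_context. i915_request_create() takes the context's
 * timeline lock and i915_request_add() commits the request and drops
 * that lock again, so the caller holds no locks of its own here.
 */
static int submit_nop_request(struct intel_context *ce)
{
	struct i915_request *rq;

	rq = i915_request_create(ce);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	/* ... emit commands into rq->ring between create and add ... */

	i915_request_add(rq);
	return 0;
}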
static int
i915_request_await_start(struct i915_request *rq, struct i915_request *signal)
{
@ -854,13 +861,13 @@ emit_semaphore_wait(struct i915_request *to,
if (err < 0)
return err;
/* We need to pin the signaler's HWSP until we are finished reading. */
err = i915_timeline_read_hwsp(from, to, &hwsp_offset);
/* Only submit our spinner after the signaler is running! */
err = __i915_request_await_execution(to, from, NULL, gfp);
if (err)
return err;
/* Only submit our spinner after the signaler is running! */
err = i915_request_await_execution(to, from, gfp);
/* We need to pin the signaler's HWSP until we are finished reading. */
err = i915_timeline_read_hwsp(from, to, &hwsp_offset);
if (err)
return err;
@ -991,6 +998,52 @@ i915_request_await_dma_fence(struct i915_request *rq, struct dma_fence *fence)
return 0;
}
int
i915_request_await_execution(struct i915_request *rq,
struct dma_fence *fence,
void (*hook)(struct i915_request *rq,
struct dma_fence *signal))
{
struct dma_fence **child = &fence;
unsigned int nchild = 1;
int ret;
if (dma_fence_is_array(fence)) {
struct dma_fence_array *array = to_dma_fence_array(fence);
/* XXX Error for signal-on-any fence arrays */
child = array->fences;
nchild = array->num_fences;
GEM_BUG_ON(!nchild);
}
do {
fence = *child++;
if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
continue;
/*
* We don't squash repeated fence dependencies here as we
* want to run our callback in all cases.
*/
if (dma_fence_is_i915(fence))
ret = __i915_request_await_execution(rq,
to_request(fence),
hook,
I915_FENCE_GFP);
else
ret = i915_sw_fence_await_dma_fence(&rq->submit, fence,
I915_FENCE_TIMEOUT,
GFP_KERNEL);
if (ret < 0)
return ret;
} while (--nchild);
return 0;
}
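
As a hedged illustration of the exported i915_request_await_execution() above: the optional hook fires once the signaling fence has actually been submitted to hardware, before the waiting request is released for submission. The hook and wrapper below are hypothetical stand-ins, not the hook this series installs.

#include <linux/dma-fence.h>
#include <linux/printk.h>
#include "i915_request.h"

/* Hypothetical hook: invoked when "signal" begins execution, before
 * "rq" is released for submission. */
static void note_coexecution(struct i915_request *rq,
			     struct dma_fence *signal)
{
	pr_debug("request %llx:%lld released by fence %llx:%lld\n",
		 rq->fence.context, rq->fence.seqno,
		 signal->context, signal->seqno);
}

/* Gate submission of rq on the start of execution of fence. */
static int example_await_execution(struct i915_request *rq,
				   struct dma_fence *fence)
{
	return i915_request_await_execution(rq, fence, note_coexecution);
}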
/**
* i915_request_await_object - set this request to (async) wait upon a bo
* @to: request we are wishing to use
@ -1100,8 +1153,7 @@ __i915_request_add_to_timeline(struct i915_request *rq)
* precludes optimising to use semaphores serialisation of a single
* timeline across engines.
*/
prev = i915_active_request_raw(&timeline->last_request,
&rq->i915->drm.struct_mutex);
prev = rcu_dereference_protected(timeline->last_request.request, 1);
if (prev && !i915_request_completed(prev)) {
if (is_power_of_2(prev->engine->mask | rq->engine->mask))
i915_sw_fence_await_sw_fence(&rq->submit,
@ -1122,6 +1174,11 @@ __i915_request_add_to_timeline(struct i915_request *rq)
list_add_tail(&rq->link, &timeline->requests);
spin_unlock_irq(&timeline->lock);
/*
* Make sure that no request gazumped us - if it was allocated after
* our i915_request_alloc() and called __i915_request_add() before
* us, the timeline will hold its seqno which is later than ours.
*/
GEM_BUG_ON(timeline->seqno != rq->fence.seqno);
__i915_active_request_set(&timeline->last_request, rq);
@ -1133,36 +1190,23 @@ __i915_request_add_to_timeline(struct i915_request *rq)
* request is not being tracked for completion but the work itself is
* going to happen on the hardware. This would be a Bad Thing(tm).
*/
void i915_request_add(struct i915_request *request)
struct i915_request *__i915_request_commit(struct i915_request *rq)
{
struct intel_engine_cs *engine = request->engine;
struct i915_timeline *timeline = request->timeline;
struct intel_ring *ring = request->ring;
struct intel_engine_cs *engine = rq->engine;
struct intel_ring *ring = rq->ring;
struct i915_request *prev;
u32 *cs;
GEM_TRACE("%s fence %llx:%lld\n",
engine->name, request->fence.context, request->fence.seqno);
lockdep_assert_held(&request->timeline->mutex);
lockdep_unpin_lock(&request->timeline->mutex, request->cookie);
trace_i915_request_add(request);
/*
* Make sure that no request gazumped us - if it was allocated after
* our i915_request_alloc() and called __i915_request_add() before
* us, the timeline will hold its seqno which is later than ours.
*/
GEM_BUG_ON(timeline->seqno != request->fence.seqno);
engine->name, rq->fence.context, rq->fence.seqno);
/*
* To ensure that this call will not fail, space for its emissions
* should already have been reserved in the ring buffer. Let the ring
* know that it is time to use that space up.
*/
GEM_BUG_ON(request->reserved_space > request->ring->space);
request->reserved_space = 0;
GEM_BUG_ON(rq->reserved_space > ring->space);
rq->reserved_space = 0;
/*
* Record the position of the start of the breadcrumb so that
@ -1170,17 +1214,16 @@ void i915_request_add(struct i915_request *request)
* GPU processing the request, we never over-estimate the
* position of the ring's HEAD.
*/
cs = intel_ring_begin(request, engine->emit_fini_breadcrumb_dw);
cs = intel_ring_begin(rq, engine->emit_fini_breadcrumb_dw);
GEM_BUG_ON(IS_ERR(cs));
request->postfix = intel_ring_offset(request, cs);
rq->postfix = intel_ring_offset(rq, cs);
prev = __i915_request_add_to_timeline(request);
prev = __i915_request_add_to_timeline(rq);
list_add_tail(&request->ring_link, &ring->request_list);
if (list_is_first(&request->ring_link, &ring->request_list))
list_add(&ring->active_link, &request->i915->gt.active_rings);
request->i915->gt.active_engines |= request->engine->mask;
request->emitted_jiffies = jiffies;
list_add_tail(&rq->ring_link, &ring->request_list);
if (list_is_first(&rq->ring_link, &ring->request_list))
list_add(&ring->active_link, &rq->i915->gt.active_rings);
rq->emitted_jiffies = jiffies;
/*
* Let the backend know a new request has arrived that may need
@ -1194,10 +1237,10 @@ void i915_request_add(struct i915_request *request)
* run at the earliest possible convenience.
*/
local_bh_disable();
i915_sw_fence_commit(&request->semaphore);
i915_sw_fence_commit(&rq->semaphore);
rcu_read_lock(); /* RCU serialisation for set-wedged protection */
if (engine->schedule) {
struct i915_sched_attr attr = request->gem_context->sched;
struct i915_sched_attr attr = rq->gem_context->sched;
/*
* Boost actual workloads past semaphores!
@ -1211,7 +1254,7 @@ void i915_request_add(struct i915_request *request)
* far in the distance past over useful work, we keep a history
* of any semaphore use along our dependency chain.
*/
if (!(request->sched.flags & I915_SCHED_HAS_SEMAPHORE_CHAIN))
if (!(rq->sched.flags & I915_SCHED_HAS_SEMAPHORE_CHAIN))
attr.priority |= I915_PRIORITY_NOSEMAPHORE;
/*
@ -1220,15 +1263,29 @@ void i915_request_add(struct i915_request *request)
* Allow interactive/synchronous clients to jump ahead of
* the bulk clients. (FQ_CODEL)
*/
if (list_empty(&request->sched.signalers_list))
if (list_empty(&rq->sched.signalers_list))
attr.priority |= I915_PRIORITY_WAIT;
engine->schedule(request, &attr);
engine->schedule(rq, &attr);
}
rcu_read_unlock();
i915_sw_fence_commit(&request->submit);
i915_sw_fence_commit(&rq->submit);
local_bh_enable(); /* Kick the execlists tasklet if just scheduled */
return prev;
}
void i915_request_add(struct i915_request *rq)
{
struct i915_request *prev;
lockdep_assert_held(&rq->timeline->mutex);
lockdep_unpin_lock(&rq->timeline->mutex, rq->cookie);
trace_i915_request_add(rq);
prev = __i915_request_commit(rq);
/*
* In typical scenarios, we do not expect the previous request on
* the timeline to be still tracked by timeline->last_request if it
@ -1249,7 +1306,7 @@ void i915_request_add(struct i915_request *request)
if (prev && i915_request_completed(prev))
i915_request_retire_upto(prev);
mutex_unlock(&request->timeline->mutex);
mutex_unlock(&rq->timeline->mutex);
}
static unsigned long local_clock_us(unsigned int *cpu)
@ -1382,8 +1439,31 @@ long i915_request_wait(struct i915_request *rq,
trace_i915_request_wait_begin(rq, flags);
/* Optimistic short spin before touching IRQs */
if (__i915_spin_request(rq, state, 5))
/*
* Optimistic spin before touching IRQs.
*
* We may use a rather large value here to offset the penalty of
* switching away from the active task. Frequently, the client will
* wait upon an old swapbuffer to throttle itself to remain within a
* frame of the gpu. If the client is running in lockstep with the gpu,
* then it should not be waiting long at all, and a sleep now will incur
* extra scheduler latency in producing the next frame. To try to
* avoid adding the cost of enabling/disabling the interrupt to the
* short wait, we first spin to see if the request would have completed
* in the time taken to setup the interrupt.
*
* We need up to 5us to enable the irq, and up to 20us to hide the
* scheduler latency of a context switch, ignoring the secondary
* impacts from a context switch such as cache eviction.
*
* The scheme used for low-latency IO is called "hybrid interrupt
* polling". The suggestion there is to sleep until just before you
* expect to be woken by the device interrupt and then poll for its
* completion. That requires having a good predictor for the request
* duration, which we currently lack.
*/
if (CONFIG_DRM_I915_SPIN_REQUEST &&
__i915_spin_request(rq, state, CONFIG_DRM_I915_SPIN_REQUEST))
goto out;
/*
@ -1401,9 +1481,7 @@ long i915_request_wait(struct i915_request *rq,
if (flags & I915_WAIT_PRIORITY) {
if (!i915_request_started(rq) && INTEL_GEN(rq->i915) >= 6)
gen6_rps_boost(rq);
local_bh_disable(); /* suspend tasklets for reprioritisation */
i915_schedule_bump_priority(rq, I915_PRIORITY_WAIT);
local_bh_enable(); /* kick tasklets en masse */
}
wait.tsk = current;
@ -1437,21 +1515,20 @@ out:
return timeout;
}
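
The enlarged comment above describes the spin-before-IRQ strategy now tunable through CONFIG_DRM_I915_SPIN_REQUEST. A userspace-flavoured sketch of the same spin-then-block idea, with every name and the time budget assumed purely for illustration:

#include <stdbool.h>
#include <time.h>

/* Sketch: busy-poll "done" for up to spin_us microseconds before paying
 * for the blocking path. block_wait() stands in for enabling the
 * interrupt and sleeping, as i915_request_wait() does after its spin. */
static int wait_hybrid(volatile bool *done, long spin_us,
		       int (*block_wait)(void))
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		if (*done)
			return 0;	/* completed within the spin budget */
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000000L +
		 (now.tv_nsec - start.tv_nsec) / 1000L < spin_us);

	return block_wait();		/* fall back to the interrupt-driven wait */
}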
void i915_retire_requests(struct drm_i915_private *i915)
bool i915_retire_requests(struct drm_i915_private *i915)
{
struct intel_ring *ring, *tmp;
lockdep_assert_held(&i915->drm.struct_mutex);
if (!i915->gt.active_requests)
return;
list_for_each_entry_safe(ring, tmp,
&i915->gt.active_rings, active_link) {
intel_ring_get(ring); /* last rq holds reference! */
ring_retire_requests(ring);
intel_ring_put(ring);
}
return !list_empty(&i915->gt.active_rings);
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)


@ -28,6 +28,8 @@
#include <linux/dma-fence.h>
#include <linux/lockdep.h>
#include "gt/intel_engine_types.h"
#include "i915_gem.h"
#include "i915_scheduler.h"
#include "i915_selftest.h"
@ -156,6 +158,7 @@ struct i915_request {
*/
struct i915_sched_node sched;
struct i915_dependency dep;
intel_engine_mask_t execution_mask;
/*
* A convenience pointer to the current breadcrumb value stored in
@ -240,8 +243,12 @@ static inline bool dma_fence_is_i915(const struct dma_fence *fence)
}
struct i915_request * __must_check
i915_request_alloc(struct intel_engine_cs *engine,
struct i915_gem_context *ctx);
__i915_request_create(struct intel_context *ce, gfp_t gfp);
struct i915_request * __must_check
i915_request_create(struct intel_context *ce);
struct i915_request *__i915_request_commit(struct i915_request *request);
void i915_request_retire_upto(struct i915_request *rq);
static inline struct i915_request *
@ -276,6 +283,10 @@ int i915_request_await_object(struct i915_request *to,
bool write);
int i915_request_await_dma_fence(struct i915_request *rq,
struct dma_fence *fence);
int i915_request_await_execution(struct i915_request *rq,
struct dma_fence *fence,
void (*hook)(struct i915_request *rq,
struct dma_fence *signal));
void i915_request_add(struct i915_request *rq);
@ -418,6 +429,6 @@ static inline void i915_request_mark_complete(struct i915_request *rq)
rq->hwsp_seqno = (u32 *)&rq->fence.seqno; /* decouple from HWSP */
}
void i915_retire_requests(struct drm_i915_private *i915);
bool i915_retire_requests(struct drm_i915_private *i915);
#endif /* I915_REQUEST_H */


@ -150,29 +150,49 @@ sched_lock_engine(const struct i915_sched_node *node,
struct intel_engine_cs *locked,
struct sched_cache *cache)
{
struct intel_engine_cs *engine = node_to_request(node)->engine;
const struct i915_request *rq = node_to_request(node);
struct intel_engine_cs *engine;
GEM_BUG_ON(!locked);
if (engine != locked) {
/*
* Virtual engines complicate acquiring the engine timeline lock,
* as their rq->engine pointer is not stable until under that
* engine lock. The simple ploy we use is to take the lock then
* check that the rq still belongs to the newly locked engine.
*/
while (locked != (engine = READ_ONCE(rq->engine))) {
spin_unlock(&locked->timeline.lock);
memset(cache, 0, sizeof(*cache));
spin_lock(&engine->timeline.lock);
locked = engine;
}
return engine;
GEM_BUG_ON(locked != engine);
return locked;
}
static bool inflight(const struct i915_request *rq,
const struct intel_engine_cs *engine)
static inline int rq_prio(const struct i915_request *rq)
{
const struct i915_request *active;
return rq->sched.attr.priority | __NO_PREEMPTION;
}
if (!i915_request_is_active(rq))
return false;
static void kick_submission(struct intel_engine_cs *engine, int prio)
{
const struct i915_request *inflight =
port_request(engine->execlists.port);
active = port_request(engine->execlists.port);
return active->hw_context == rq->hw_context;
/*
* If we are already the currently executing context, don't
* bother evaluating if we should preempt ourselves, or if
* we expect nothing to change as a result of running the
* tasklet, i.e. we have not changed the priority queue
* sufficiently to oust the running context.
*/
if (inflight && !i915_scheduler_need_preempt(prio, rq_prio(inflight)))
return;
tasklet_hi_schedule(&engine->execlists.tasklet);
}
static void __i915_schedule(struct i915_sched_node *node,
@ -189,10 +209,10 @@ static void __i915_schedule(struct i915_sched_node *node,
lockdep_assert_held(&schedule_lock);
GEM_BUG_ON(prio == I915_PRIORITY_INVALID);
if (node_signaled(node))
if (prio <= READ_ONCE(node->attr.priority))
return;
if (prio <= READ_ONCE(node->attr.priority))
if (node_signaled(node))
return;
stack.signaler = node;
@ -261,6 +281,7 @@ static void __i915_schedule(struct i915_sched_node *node,
spin_lock(&engine->timeline.lock);
/* Fifo and depth-first replacement ensure our deps execute before us */
engine = sched_lock_engine(node, engine, &cache);
list_for_each_entry_safe_reverse(dep, p, &dfs, dfs_link) {
INIT_LIST_HEAD(&dep->dfs_link);
@ -272,8 +293,11 @@ static void __i915_schedule(struct i915_sched_node *node,
if (prio <= node->attr.priority || node_signaled(node))
continue;
GEM_BUG_ON(node_to_request(node)->engine != engine);
node->attr.priority = prio;
if (!list_empty(&node->link)) {
GEM_BUG_ON(intel_engine_is_virtual(engine));
if (!cache.priolist)
cache.priolist =
i915_sched_lookup_priolist(engine,
@ -297,15 +321,8 @@ static void __i915_schedule(struct i915_sched_node *node,
engine->execlists.queue_priority_hint = prio;
/*
* If we are already the currently executing context, don't
* bother evaluating if we should preempt ourselves.
*/
if (inflight(node_to_request(node), engine))
continue;
/* Defer (tasklet) submission until after all of our updates. */
tasklet_hi_schedule(&engine->execlists.tasklet);
kick_submission(engine, prio);
}
spin_unlock(&engine->timeline.lock);


@ -52,4 +52,22 @@ static inline void i915_priolist_free(struct i915_priolist *p)
__i915_priolist_free(p);
}
static inline bool i915_scheduler_need_preempt(int prio, int active)
{
/*
* Allow preemption of low -> normal -> high, but we do
* not allow low priority tasks to preempt other low priority
* tasks under the impression that latency for low priority
* tasks does not matter (as much as background throughput),
* so kiss.
*
* More naturally we would write
* prio >= max(0, last);
* except that we wish to prevent triggering preemption at the same
* priority level: the task that is running should remain running
* to preserve FIFO ordering of dependencies.
*/
return prio > max(I915_PRIORITY_NORMAL - 1, active);
}
#endif /* _I915_SCHEDULER_H_ */
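
A worked example of the i915_scheduler_need_preempt() helper above, as a standalone sketch. The numeric priority values are placeholders (the real ones come from i915_priolist_types.h and carry the __NO_PREEMPTION bit); only their ordering matters for the comparisons.

#include <assert.h>

#define PRIO_LOW	(-10)	/* placeholder values, ordering only */
#define PRIO_NORMAL	0
#define PRIO_HIGH	10

/* Same comparison as i915_scheduler_need_preempt(): preempt only when
 * strictly above both the running priority and the "normal - 1" floor. */
static int need_preempt(int prio, int active)
{
	int floor = PRIO_NORMAL - 1;

	return prio > (active > floor ? active : floor);
}

int main(void)
{
	assert(!need_preempt(PRIO_LOW, PRIO_LOW));     /* low never preempts low */
	assert(need_preempt(PRIO_NORMAL, PRIO_LOW));   /* normal ousts low */
	assert(!need_preempt(PRIO_HIGH, PRIO_HIGH));   /* equal: preserve FIFO */
	assert(need_preempt(PRIO_HIGH, PRIO_NORMAL));  /* strictly higher wins */
	return 0;
}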


@ -9,8 +9,8 @@
#include <linux/list.h>
#include "gt/intel_engine_types.h"
#include "i915_priolist_types.h"
#include "intel_engine_types.h"
struct drm_i915_private;
struct i915_request;


@ -29,6 +29,7 @@
#include "i915_reg.h"
#include "intel_drv.h"
#include "intel_fbc.h"
#include "intel_gmbus.h"
static void i915_save_display(struct drm_i915_private *dev_priv)
{
@ -144,7 +145,7 @@ int i915_restore_state(struct drm_i915_private *dev_priv)
mutex_unlock(&dev_priv->drm.struct_mutex);
intel_i2c_reset(dev_priv);
intel_gmbus_reset(dev_priv);
return 0;
}


@ -29,8 +29,11 @@
#include <linux/module.h>
#include <linux/stat.h>
#include <linux/sysfs.h>
#include "intel_drv.h"
#include "i915_drv.h"
#include "intel_drv.h"
#include "intel_pm.h"
#include "intel_sideband.h"
static inline struct drm_i915_private *kdev_minor_to_i915(struct device *kdev)
{
@ -259,25 +262,23 @@ static ssize_t gt_act_freq_mhz_show(struct device *kdev,
{
struct drm_i915_private *dev_priv = kdev_minor_to_i915(kdev);
intel_wakeref_t wakeref;
int ret;
u32 freq;
wakeref = intel_runtime_pm_get(dev_priv);
mutex_lock(&dev_priv->pcu_lock);
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
u32 freq;
vlv_punit_get(dev_priv);
freq = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
ret = intel_gpu_freq(dev_priv, (freq >> 8) & 0xff);
vlv_punit_put(dev_priv);
freq = (freq >> 8) & 0xff;
} else {
ret = intel_gpu_freq(dev_priv,
intel_get_cagf(dev_priv,
I915_READ(GEN6_RPSTAT1)));
freq = intel_get_cagf(dev_priv, I915_READ(GEN6_RPSTAT1));
}
mutex_unlock(&dev_priv->pcu_lock);
intel_runtime_pm_put(dev_priv, wakeref);
return snprintf(buf, PAGE_SIZE, "%d\n", ret);
return snprintf(buf, PAGE_SIZE, "%d\n", intel_gpu_freq(dev_priv, freq));
}
static ssize_t gt_cur_freq_mhz_show(struct device *kdev,
@ -318,12 +319,12 @@ static ssize_t gt_boost_freq_mhz_store(struct device *kdev,
if (val < rps->min_freq || val > rps->max_freq)
return -EINVAL;
mutex_lock(&dev_priv->pcu_lock);
mutex_lock(&rps->lock);
if (val != rps->boost_freq) {
rps->boost_freq = val;
boost = atomic_read(&rps->num_waiters);
}
mutex_unlock(&dev_priv->pcu_lock);
mutex_unlock(&rps->lock);
if (boost)
schedule_work(&rps->work);
@ -364,17 +365,14 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
return ret;
wakeref = intel_runtime_pm_get(dev_priv);
mutex_lock(&dev_priv->pcu_lock);
mutex_lock(&rps->lock);
val = intel_freq_opcode(dev_priv, val);
if (val < rps->min_freq ||
val > rps->max_freq ||
val < rps->min_freq_softlimit) {
mutex_unlock(&dev_priv->pcu_lock);
intel_runtime_pm_put(dev_priv, wakeref);
return -EINVAL;
ret = -EINVAL;
goto unlock;
}
if (val > rps->rp0_freq)
@ -392,8 +390,8 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
* frequency request may be unchanged. */
ret = intel_set_rps(dev_priv, val);
mutex_unlock(&dev_priv->pcu_lock);
unlock:
mutex_unlock(&rps->lock);
intel_runtime_pm_put(dev_priv, wakeref);
return ret ?: count;
@ -423,17 +421,14 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
return ret;
wakeref = intel_runtime_pm_get(dev_priv);
mutex_lock(&dev_priv->pcu_lock);
mutex_lock(&rps->lock);
val = intel_freq_opcode(dev_priv, val);
if (val < rps->min_freq ||
val > rps->max_freq ||
val > rps->max_freq_softlimit) {
mutex_unlock(&dev_priv->pcu_lock);
intel_runtime_pm_put(dev_priv, wakeref);
return -EINVAL;
ret = -EINVAL;
goto unlock;
}
rps->min_freq_softlimit = val;
@ -447,8 +442,8 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
* frequency request may be unchanged. */
ret = intel_set_rps(dev_priv, val);
mutex_unlock(&dev_priv->pcu_lock);
unlock:
mutex_unlock(&rps->lock);
intel_runtime_pm_put(dev_priv, wakeref);
return ret ?: count;


@ -26,6 +26,7 @@ struct i915_timeline {
spinlock_t lock;
#define TIMELINE_CLIENT 0 /* default subclass */
#define TIMELINE_ENGINE 1
#define TIMELINE_VIRTUAL 2
struct mutex mutex; /* protects the flow of requests */
unsigned int pin_count;


@ -8,9 +8,11 @@
#include <drm/drm_drv.h>
#include "gt/intel_engine.h"
#include "i915_drv.h"
#include "i915_irq.h"
#include "intel_drv.h"
#include "intel_ringbuffer.h"
#undef TRACE_SYSTEM
#define TRACE_SYSTEM i915


@ -25,6 +25,12 @@
#ifndef __I915_UTILS_H
#define __I915_UTILS_H
#include <linux/list.h>
#include <linux/overflow.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/workqueue.h>
#undef WARN_ON
/* Many gcc seem to not see through this and fall over :( */
#if 0
@ -73,6 +79,39 @@
#define overflows_type(x, T) \
(sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T))
static inline bool
__check_struct_size(size_t base, size_t arr, size_t count, size_t *size)
{
size_t sz;
if (check_mul_overflow(count, arr, &sz))
return false;
if (check_add_overflow(sz, base, &sz))
return false;
*size = sz;
return true;
}
/**
* check_struct_size() - Calculate size of structure with trailing array.
* @p: Pointer to the structure.
* @member: Name of the array member.
* @n: Number of elements in the array.
* @sz: Total size of structure and array
*
* Calculates size of memory needed for structure @p followed by an
* array of @n @member elements, like struct_size() but reports
* whether it overflowed, and the resultant size in @sz
*
* Return: false if the calculation overflowed.
*/
#define check_struct_size(p, member, n, sz) \
likely(__check_struct_size(sizeof(*(p)), \
sizeof(*(p)->member) + __must_be_array((p)->member), \
n, sz))
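
A hedged usage sketch for check_struct_size(): the trailing-array structure and allocator below are hypothetical, but the overflow-checked sizing is what the macro is for.

#include <linux/slab.h>
#include "i915_utils.h"

/* Hypothetical structure with a trailing array, sized safely even when
 * "count" comes from an untrusted source. */
struct fence_batch {
	unsigned int count;
	struct dma_fence *fences[];
};

static struct fence_batch *fence_batch_alloc(unsigned int count)
{
	struct fence_batch *b;
	size_t size;

	if (!check_struct_size(b, fences, count, &size))
		return NULL;	/* sizeof(*b) + count * sizeof(*b->fences) overflowed */

	b = kmalloc(size, GFP_KERNEL);
	if (b)
		b->count = count;
	return b;
}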
#define ptr_mask_bits(ptr, n) ({ \
unsigned long __v = (unsigned long)(ptr); \
(typeof(ptr))(__v & -BIT(n)); \
@ -97,6 +136,8 @@
#define page_pack_bits(ptr, bits) ptr_pack_bits(ptr, bits, PAGE_SHIFT)
#define page_unpack_bits(ptr, bits) ptr_unpack_bits(ptr, bits, PAGE_SHIFT)
#define struct_member(T, member) (((T *)0)->member)
#define ptr_offset(ptr, member) offsetof(typeof(*(ptr)), member)
#define fetch_and_zero(ptr) ({ \
@ -113,7 +154,7 @@
*/
#define container_of_user(ptr, type, member) ({ \
void __user *__mptr = (void __user *)(ptr); \
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
BUILD_BUG_ON_MSG(!__same_type(*(ptr), struct_member(type, member)) && \
!__same_type(*(ptr), void), \
"pointer type mismatch in container_of()"); \
((type __user *)(__mptr - offsetof(type, member))); })
@ -152,8 +193,6 @@ static inline u64 ptr_to_u64(const void *ptr)
__idx; \
})
#include <linux/list.h>
static inline void __list_del_many(struct list_head *head,
struct list_head *first)
{
@ -174,6 +213,158 @@ static inline void drain_delayed_work(struct delayed_work *dw)
} while (delayed_work_pending(dw));
}
static inline unsigned long msecs_to_jiffies_timeout(const unsigned int m)
{
unsigned long j = msecs_to_jiffies(m);
return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
}
static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
{
/* nsecs_to_jiffies64() does not guard against overflow */
if (NSEC_PER_SEC % HZ &&
div_u64(n, NSEC_PER_SEC) >= MAX_JIFFY_OFFSET / HZ)
return MAX_JIFFY_OFFSET;
return min_t(u64, MAX_JIFFY_OFFSET, nsecs_to_jiffies64(n) + 1);
}
/*
* If you need to wait X milliseconds between events A and B, but event B
* doesn't happen exactly after event A, you record the timestamp (jiffies) of
* when event A happened, then just before event B you call this function and
* pass the timestamp as the first argument, and X as the second argument.
*/
static inline void
wait_remaining_ms_from_jiffies(unsigned long timestamp_jiffies, int to_wait_ms)
{
unsigned long target_jiffies, tmp_jiffies, remaining_jiffies;
/*
* Don't re-read the value of "jiffies" every time since it may change
* behind our back and break the math.
*/
tmp_jiffies = jiffies;
target_jiffies = timestamp_jiffies +
msecs_to_jiffies_timeout(to_wait_ms);
if (time_after(target_jiffies, tmp_jiffies)) {
remaining_jiffies = target_jiffies - tmp_jiffies;
while (remaining_jiffies)
remaining_jiffies =
schedule_timeout_uninterruptible(remaining_jiffies);
}
}
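
A short, hedged example of the protocol the comment above describes; every identifier below is invented for illustration.

/* Guarantee at least 100 ms between panel power-off (event A) and the
 * next power-on (event B), regardless of how long the work in between
 * takes. All of the helpers and fields here are hypothetical. */
static void example_panel_cycle(struct example_panel *panel)
{
	panel->power_off_jiffies = jiffies;	/* timestamp event A */

	do_other_work(panel);

	wait_remaining_ms_from_jiffies(panel->power_off_jiffies, 100);
	example_panel_power_on(panel);		/* event B */
}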
/**
* __wait_for - magic wait macro
*
* Macro to help avoid open coding check/wait/timeout patterns. Note that it's
* important that we check the condition again after having timed out, since the
* timeout could be due to preemption or similar and we've never had a chance to
* check the condition before the timeout.
*/
#define __wait_for(OP, COND, US, Wmin, Wmax) ({ \
const ktime_t end__ = ktime_add_ns(ktime_get_raw(), 1000ll * (US)); \
long wait__ = (Wmin); /* recommended min for usleep is 10 us */ \
int ret__; \
might_sleep(); \
for (;;) { \
const bool expired__ = ktime_after(ktime_get_raw(), end__); \
OP; \
/* Guarantee COND check prior to timeout */ \
barrier(); \
if (COND) { \
ret__ = 0; \
break; \
} \
if (expired__) { \
ret__ = -ETIMEDOUT; \
break; \
} \
usleep_range(wait__, wait__ * 2); \
if (wait__ < (Wmax)) \
wait__ <<= 1; \
} \
ret__; \
})
#define _wait_for(COND, US, Wmin, Wmax) __wait_for(, (COND), (US), (Wmin), \
(Wmax))
#define wait_for(COND, MS) _wait_for((COND), (MS) * 1000, 10, 1000)
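
A usage sketch for the sleeping wait_for() variant documented above. The register and bit are invented for the example (hence the FAKE_ prefix); only wait_for(), I915_READ() and the 10 ms timeout semantics are real.

/* Sketch: poll a (hypothetical) status register until a (hypothetical)
 * ready bit is set, giving up after 10 ms. Note that wait_for()
 * re-checks the condition once more after the deadline, so a completion
 * that races with the timeout is still reported as success. */
static int wait_for_fake_phy_ready(struct drm_i915_private *dev_priv)
{
	int err;

	err = wait_for(I915_READ(FAKE_PHY_STATUS) & FAKE_PHY_READY, 10);
	if (err)
		DRM_ERROR("PHY did not become ready\n");

	return err;
}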
/* If CONFIG_PREEMPT_COUNT is disabled, in_atomic() always reports false. */
#if defined(CONFIG_DRM_I915_DEBUG) && defined(CONFIG_PREEMPT_COUNT)
# define _WAIT_FOR_ATOMIC_CHECK(ATOMIC) WARN_ON_ONCE((ATOMIC) && !in_atomic())
#else
# define _WAIT_FOR_ATOMIC_CHECK(ATOMIC) do { } while (0)
#endif
#define _wait_for_atomic(COND, US, ATOMIC) \
({ \
int cpu, ret, timeout = (US) * 1000; \
u64 base; \
_WAIT_FOR_ATOMIC_CHECK(ATOMIC); \
if (!(ATOMIC)) { \
preempt_disable(); \
cpu = smp_processor_id(); \
} \
base = local_clock(); \
for (;;) { \
u64 now = local_clock(); \
if (!(ATOMIC)) \
preempt_enable(); \
/* Guarantee COND check prior to timeout */ \
barrier(); \
if (COND) { \
ret = 0; \
break; \
} \
if (now - base >= timeout) { \
ret = -ETIMEDOUT; \
break; \
} \
cpu_relax(); \
if (!(ATOMIC)) { \
preempt_disable(); \
if (unlikely(cpu != smp_processor_id())) { \
timeout -= now - base; \
cpu = smp_processor_id(); \
base = local_clock(); \
} \
} \
} \
ret; \
})
#define wait_for_us(COND, US) \
({ \
int ret__; \
BUILD_BUG_ON(!__builtin_constant_p(US)); \
if ((US) > 10) \
ret__ = _wait_for((COND), (US), 10, 10); \
else \
ret__ = _wait_for_atomic((COND), (US), 0); \
ret__; \
})
#define wait_for_atomic_us(COND, US) \
({ \
BUILD_BUG_ON(!__builtin_constant_p(US)); \
BUILD_BUG_ON((US) > 50000); \
_wait_for_atomic((COND), (US), 1); \
})
#define wait_for_atomic(COND, MS) wait_for_atomic_us((COND), (MS) * 1000)
#define KHz(x) (1000 * (x))
#define MHz(x) KHz(1000 * (x))
#define KBps(x) (1000 * (x))
#define MBps(x) KBps(1000 * (x))
#define GBps(x) ((u64)1000 * MBps((x)))
static inline const char *yesno(bool v)
{
return v ? "yes" : "no";


@ -22,11 +22,12 @@
*
*/
#include "gt/intel_engine.h"
#include "i915_vma.h"
#include "i915_drv.h"
#include "i915_globals.h"
#include "intel_ringbuffer.h"
#include "intel_frontbuffer.h"
#include <drm/drm_gem.h>
@ -155,6 +156,9 @@ vma_create(struct drm_i915_gem_object *obj,
} else if (view->type == I915_GGTT_VIEW_ROTATED) {
vma->size = intel_rotation_info_size(&view->rotated);
vma->size <<= PAGE_SHIFT;
} else if (view->type == I915_GGTT_VIEW_REMAPPED) {
vma->size = intel_remapped_info_size(&view->remapped);
vma->size <<= PAGE_SHIFT;
}
}
@ -476,13 +480,6 @@ void __i915_vma_set_map_and_fenceable(struct i915_vma *vma)
GEM_BUG_ON(!i915_vma_is_ggtt(vma));
GEM_BUG_ON(!vma->fence_size);
/*
* Explicitly disable for rotated VMA since the display does not
* need the fence and the VMA is not accessible to other users.
*/
if (vma->ggtt_view.type == I915_GGTT_VIEW_ROTATED)
return;
fenceable = (vma->node.size >= vma->fence_size &&
IS_ALIGNED(vma->node.start, vma->fence_alignment));


@ -277,8 +277,11 @@ i915_vma_compare(struct i915_vma *vma,
*/
BUILD_BUG_ON(I915_GGTT_VIEW_NORMAL >= I915_GGTT_VIEW_PARTIAL);
BUILD_BUG_ON(I915_GGTT_VIEW_PARTIAL >= I915_GGTT_VIEW_ROTATED);
BUILD_BUG_ON(I915_GGTT_VIEW_ROTATED >= I915_GGTT_VIEW_REMAPPED);
BUILD_BUG_ON(offsetof(typeof(*view), rotated) !=
offsetof(typeof(*view), partial));
BUILD_BUG_ON(offsetof(typeof(*view), rotated) !=
offsetof(typeof(*view), remapped));
return memcmp(&vma->ggtt_view.partial, &view->partial, view->type);
}


@ -28,6 +28,8 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_mipi_dsi.h>
#include "intel_atomic.h"
#include "intel_combo_phy.h"
#include "intel_connector.h"
#include "intel_ddi.h"
#include "intel_dsi.h"
@ -363,30 +365,10 @@ static void gen11_dsi_power_up_lanes(struct intel_encoder *encoder)
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
enum port port;
u32 tmp;
u32 lane_mask;
switch (intel_dsi->lane_count) {
case 1:
lane_mask = PWR_DOWN_LN_3_1_0;
break;
case 2:
lane_mask = PWR_DOWN_LN_3_1;
break;
case 3:
lane_mask = PWR_DOWN_LN_3;
break;
case 4:
default:
lane_mask = PWR_UP_ALL_LANES;
break;
}
for_each_dsi_port(port, intel_dsi->ports) {
tmp = I915_READ(ICL_PORT_CL_DW10(port));
tmp &= ~PWR_DOWN_LN_MASK;
I915_WRITE(ICL_PORT_CL_DW10(port), tmp | lane_mask);
}
for_each_dsi_port(port, intel_dsi->ports)
intel_combo_phy_power_up_lanes(dev_priv, port, true,
intel_dsi->lane_count, false);
}
static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)
@ -1193,17 +1175,51 @@ static void gen11_dsi_disable(struct intel_encoder *encoder,
gen11_dsi_disable_io_power(encoder);
}
static void gen11_dsi_get_timings(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
if (intel_dsi->dual_link) {
adjusted_mode->crtc_hdisplay *= 2;
if (intel_dsi->dual_link == DSI_DUAL_LINK_FRONT_BACK)
adjusted_mode->crtc_hdisplay -=
intel_dsi->pixel_overlap;
adjusted_mode->crtc_htotal *= 2;
}
adjusted_mode->crtc_hblank_start = adjusted_mode->crtc_hdisplay;
adjusted_mode->crtc_hblank_end = adjusted_mode->crtc_htotal;
if (intel_dsi->operation_mode == INTEL_DSI_VIDEO_MODE) {
if (intel_dsi->dual_link) {
adjusted_mode->crtc_hsync_start *= 2;
adjusted_mode->crtc_hsync_end *= 2;
}
}
adjusted_mode->crtc_vblank_start = adjusted_mode->crtc_vdisplay;
adjusted_mode->crtc_vblank_end = adjusted_mode->crtc_vtotal;
}
static void gen11_dsi_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
/* FIXME: adapt icl_ddi_clock_get() for DSI and use that? */
pipe_config->port_clock =
cnl_calc_wrpll_link(dev_priv, &pipe_config->dpll_hw_state);
pipe_config->base.adjusted_mode.crtc_clock = intel_dsi->pclk;
if (intel_dsi->dual_link)
pipe_config->base.adjusted_mode.crtc_clock *= 2;
gen11_dsi_get_timings(encoder, pipe_config);
pipe_config->output_types |= BIT(INTEL_OUTPUT_DSI);
pipe_config->pipe_bpp = bdw_get_pipemisc_bpp(crtc);
}
static int gen11_dsi_compute_config(struct intel_encoder *encoder,
@ -1219,6 +1235,7 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;
intel_fixed_panel_mode(fixed_mode, adjusted_mode);
intel_pch_panel_fitting(crtc, pipe_config, conn_state->scaling_mode);
