Merge tag 'drm-intel-next-2020-02-25' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Here goes drm-intel-next-2020-02-25:
- A backmerge of drm-next solving conflicts on i915/gt/intel_lrc.c
- Clean up shadow batch after I915_EXEC_SECURE
- Drop assertion that active->fence is unchanged
drm-intel-next-2020-02-24-1:
- RC6 fixes - Chris
- Add extra slice common debug register - Lionel
- Align virtual engines uabi_class/instance with i915_drm.h - Tvrtko
- Avoid potential division by zero in computing CS timestamp - Chris
- Avoid using various globals - Michal Winiarski, Matt Auld
- Break up long lists of GEM object reclaim - Chris
- Check that the vma hasn't been closed before we insert it - Chris
- Consolidate SDVO HDMI force_dvi handling - Ville
- Conversion to new logging and warn macros and functions - Pankaj, Wambui, Chris
- DC3CO fixes - Jose
- Disable use of hwsp_cacheline for kernel_context - Chris
- Display IRQ pre/post uninstall refactor - Jani
- Display port sync refactor for robustness and fixes - Ville, Manasi
- Do not attempt to reprogram IA/ring frequencies for dgfx - Chris
- Drop alpha_support for good in favor of force_probe - Jani
- DSI ACPI related fixes and refactors - Vivek, Jani, Rajat
- Encoder refactor for flexibility to add more information, especially DSI related - Jani, Vandita
- Engine workarounds refactor for robustness around reuse - Daniele
- FBC simplification and tracepoints
- Various fixes for build - Jani, Kees Cook, Chris, Zhang Xiaoxu
- Fix cmdparser - Chris
- Fix DRM_I915_GEM_MMAP_OFFSET - Chris
- Fix i915_request flags - Chris
- Fix inconsistency between pfit enable and scaler freeing - Stanislav
- Fix inverted warn_on on display code - Chris
- Fix modeset locks in sanitize_watermarks - Ville
- Fix OA context id overlap with idle context id - Umesh
- Fix pipe and vblank enable for MST - Jani
- Fix VBT handling for timing parameters - Vandita
- Fixes to kernel doc - Chris, Ville
- Force full modeset whenever DSC is enabled at probe - Jani
- Various GEM locking simplification and fixes - Jani, Chris, Jose
  - Including some changes in preparation for making GEM execbuf parallel - Chris
- Gen11 pcode error codes - Matt Roper
- Gen8+ interrupt handler refactor - Chris
- Many fixes and improvements around GuC code - Daniele, Michal Wajdeczko
- i915 parameters improvements for flexible input and better debuggability - Chris, Jani
- Ice Lake and Elkhart Lake Fixes and workarounds - Matt Roper, Jose, Vivek, Matt Atwood
- Improvements on execlists, requests and other areas, fixing hangs and also
  improving hang detection, recovery and debuggability - Chris
  - Also introducing offline GT error capture - Chris
- Introduce encoder->compute_config_late() to help MST - Ville
- Make dbuf configuration const - Jani
- Few misc clean ups - Ville, Chris
- Never allow userptr into the new mapping types - Janusz
- Poison rings after use and GTT scratch pages - Chris
- Protect signaler walk with RCU - Chris
- PSR fixes - Jose
- Pull sseu context updates under gt - Chris
- Read rawclk_freq earlier - Chris
- Refactor around VBT handling to allow getting information through the encoder - Jani
- Refactor l3cc/mocs availability - Chris
- Refactor to use intel_connector over drm_connector - Ville
- Remove i915_energy_uJ from debugfs - Tvrtko
- Remove lite restore defines - Mika Kuoppala
- Remove prefault_disable modparam - Chris
- Many selftests fixes and improvements - Chris
- Set intel_dp_set_m_n() for MST slaves - Jose
- Simplify hot plug pin handling and other fixes around pin and polled modes - Ville
- Skip CPU synchronization on dma-buf attachments - Chris
- Skip global serialization of clear_range for bxt vtd - Chris
- Skip rmw for marked register - Chris
- Some other GEM Fixes - Chris
- Some small changes for satisfying static code analysis - Colin, Chris
- Suppress warnings for unused debugging locals
- Tiger Lake enabling, including re-enable of RPS, workarounds and other display fixes and changes - Chris, Matt Roper, Mika Kuoppala, Anshuman, Jose, Radhakrishna, Rafael
- Track hw reported context runtime - Tvrtko
- Update bug filling URL - Jani
- Use async bind for PIN_USER into bsw/bxt ggtt - Chris
- Use the kernel_context to measure the breadcrumb size - Chris
- Userptr fixes and robustness for big pages - Matt Auld
- Various display refactors and clean-ups, especially around logs and use of drm_i915_private - Jani, Ville
- Various display refactors and fixes, especially around cdclk, modeset, and encoder - Chris, Jani
- Various eDP/DP fixes around DPCD - Lyude
- Various fixes and refactors for better Display watermark handling - Ville, Stanislav
- Various other display refactors - Ville
- Various refactor for better handling of display plane states - Ville
- Wean off drm_pci_alloc/drm_pci_free - Chris
- Correctly terminate connector iteration - Ville
- Downgrade gen7 (ivb, byt, hsw) back to aliasing-ppgtt - Chris

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200225185853.GA3282832@intel.com
Commit 4825b61a3d by Dave Airlie, 2020-02-27 08:59:19 +10:00
196 changed files with 16172 additions and 11328 deletions

diff --git a/MAINTAINERS b/MAINTAINERS

@@ -8416,7 +8416,7 @@ M: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
 M: Rodrigo Vivi <rodrigo.vivi@intel.com>
 L: intel-gfx@lists.freedesktop.org
 W: https://01.org/linuxgraphics/
-B: https://01.org/linuxgraphics/documentation/how-report-bugs
+B: https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs
 C: irc://chat.freenode.net/intel-gfx
 Q: http://patchwork.freedesktop.org/project/intel-gfx/
 T: git git://anongit.freedesktop.org/drm-intel

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig

@@ -42,16 +42,9 @@ config DRM_I915
	  If "M" is selected, the module will be called i915.

-config DRM_I915_ALPHA_SUPPORT
-	bool "Enable alpha quality support for new Intel hardware by default"
-	depends on DRM_I915
-	help
-	  This option is deprecated. Use DRM_I915_FORCE_PROBE option instead.
-
 config DRM_I915_FORCE_PROBE
 	string "Force probe driver for selected new Intel hardware"
 	depends on DRM_I915
-	default "*" if DRM_I915_ALPHA_SUPPORT
 	help
 	  This is the default value for the i915.force_probe module
 	  parameter. Using the module parameter overrides this option.
@@ -75,9 +68,8 @@ config DRM_I915_CAPTURE_ERROR
	help
	  This option enables capturing the GPU state when a hang is detected.
	  This information is vital for triaging hangs and assists in debugging.
-	  Please report any hang to
-	  https://bugs.freedesktop.org/enter_bug.cgi?product=DRI
-	  for triaging.
+	  Please report any hang for triaging according to:
+	  https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs

	  If in doubt, say "Y".

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile

@@ -46,7 +46,6 @@ i915-y += i915_drv.o \
	i915_switcheroo.o \
	i915_sysfs.o \
	i915_utils.o \
-	intel_csr.o \
	intel_device_info.o \
	intel_memory_region.o \
	intel_pch.o \
@@ -54,7 +53,8 @@ i915-y += i915_drv.o \
	intel_runtime_pm.o \
	intel_sideband.o \
	intel_uncore.o \
-	intel_wakeref.o
+	intel_wakeref.o \
+	vlv_suspend.o

 # core library code
 i915-y += \
@@ -66,7 +66,11 @@ i915-y += \
	i915_user_extensions.o

 i915-$(CONFIG_COMPAT) += i915_ioc32.o
-i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o display/intel_pipe_crc.o
+i915-$(CONFIG_DEBUG_FS) += \
+	i915_debugfs.o \
+	i915_debugfs_params.o \
+	display/intel_display_debugfs.o \
+	display/intel_pipe_crc.o
 i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o

 # "Graphics Technology" (aka we talk to the gpu)
@@ -78,6 +82,7 @@ gt-y += \
	gt/gen8_ppgtt.o \
	gt/intel_breadcrumbs.o \
	gt/intel_context.o \
+	gt/intel_context_sseu.o \
	gt/intel_engine_cs.o \
	gt/intel_engine_heartbeat.o \
	gt/intel_engine_pm.o \
@@ -179,6 +184,7 @@ i915-y += \
	display/intel_color.o \
	display/intel_combo_phy.o \
	display/intel_connector.o \
+	display/intel_csr.o \
	display/intel_display.o \
	display/intel_display_power.o \
	display/intel_dpio_phy.o \
@@ -187,6 +193,7 @@ i915-y += \
	display/intel_fbc.o \
	display/intel_fifo_underrun.o \
	display/intel_frontbuffer.o \
+	display/intel_global_state.o \
	display/intel_hdcp.o \
	display/intel_hotplug.o \
	display/intel_lpe_audio.o \
@@ -294,7 +301,7 @@ extra-$(CONFIG_DRM_I915_WERROR) += \
	$(shell cd $(srctree)/$(src) && find * -name '*.h')))

 quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@)
-      cmd_hdrtest = $(CC) $(c_flags) -S -o /dev/null -x c /dev/null -include $<; touch $@
+      cmd_hdrtest = $(CC) $(filter-out $(CFLAGS_GCOV), $(c_flags)) -S -o /dev/null -x c /dev/null -include $<; touch $@

 $(obj)/%.hdrtest: $(src)/%.h FORCE
	$(call if_changed_dep,hdrtest)

diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c

@@ -39,14 +39,14 @@
 static inline int header_credits_available(struct drm_i915_private *dev_priv,
					    enum transcoder dsi_trans)
 {
-	return (I915_READ(DSI_CMD_TXCTL(dsi_trans)) & FREE_HEADER_CREDIT_MASK)
+	return (intel_de_read(dev_priv, DSI_CMD_TXCTL(dsi_trans)) & FREE_HEADER_CREDIT_MASK)
		>> FREE_HEADER_CREDIT_SHIFT;
 }

 static inline int payload_credits_available(struct drm_i915_private *dev_priv,
					     enum transcoder dsi_trans)
 {
-	return (I915_READ(DSI_CMD_TXCTL(dsi_trans)) & FREE_PLOAD_CREDIT_MASK)
+	return (intel_de_read(dev_priv, DSI_CMD_TXCTL(dsi_trans)) & FREE_PLOAD_CREDIT_MASK)
		>> FREE_PLOAD_CREDIT_SHIFT;
 }
@@ -55,7 +55,7 @@ static void wait_for_header_credits(struct drm_i915_private *dev_priv,
 {
	if (wait_for_us(header_credits_available(dev_priv, dsi_trans) >=
			MAX_HEADER_CREDIT, 100))
-		DRM_ERROR("DSI header credits not released\n");
+		drm_err(&dev_priv->drm, "DSI header credits not released\n");
 }

 static void wait_for_payload_credits(struct drm_i915_private *dev_priv,
@@ -63,7 +63,7 @@ static void wait_for_payload_credits(struct drm_i915_private *dev_priv,
 {
	if (wait_for_us(payload_credits_available(dev_priv, dsi_trans) >=
			MAX_PLOAD_CREDIT, 100))
-		DRM_ERROR("DSI payload credits not released\n");
+		drm_err(&dev_priv->drm, "DSI payload credits not released\n");
 }

 static enum transcoder dsi_port_to_transcoder(enum port port)
@@ -97,7 +97,8 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
		dsi->channel = 0;
		ret = mipi_dsi_dcs_nop(dsi);
		if (ret < 0)
-			DRM_ERROR("error sending DCS NOP command\n");
+			drm_err(&dev_priv->drm,
+				"error sending DCS NOP command\n");
	}

	/* wait for header credits to be released */
@@ -109,9 +110,9 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
	/* wait for LP TX in progress bit to be cleared */
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
-		if (wait_for_us(!(I915_READ(DSI_LP_MSG(dsi_trans)) &
+		if (wait_for_us(!(intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans)) &
				  LPTX_IN_PROGRESS), 20))
-			DRM_ERROR("LPTX bit not cleared\n");
+			drm_err(&dev_priv->drm, "LPTX bit not cleared\n");
	}
 }
@@ -129,14 +130,15 @@ static bool add_payld_to_queue(struct intel_dsi_host *host, const u8 *data,

		free_credits = payload_credits_available(dev_priv, dsi_trans);
		if (free_credits < 1) {
-			DRM_ERROR("Payload credit not available\n");
+			drm_err(&dev_priv->drm,
+				"Payload credit not available\n");
			return false;
		}

		for (j = 0; j < min_t(u32, len - i, 4); j++)
			tmp |= *data++ << 8 * j;

-		I915_WRITE(DSI_CMD_TXPYLD(dsi_trans), tmp);
+		intel_de_write(dev_priv, DSI_CMD_TXPYLD(dsi_trans), tmp);
	}

	return true;
@@ -154,11 +156,12 @@ static int dsi_send_pkt_hdr(struct intel_dsi_host *host,
	/* check if header credit available */
	free_credits = header_credits_available(dev_priv, dsi_trans);
	if (free_credits < 1) {
-		DRM_ERROR("send pkt header failed, not enough hdr credits\n");
+		drm_err(&dev_priv->drm,
+			"send pkt header failed, not enough hdr credits\n");
		return -1;
	}

-	tmp = I915_READ(DSI_CMD_TXHDR(dsi_trans));
+	tmp = intel_de_read(dev_priv, DSI_CMD_TXHDR(dsi_trans));

	if (pkt.payload)
		tmp |= PAYLOAD_PRESENT;
@@ -175,7 +178,7 @@ static int dsi_send_pkt_hdr(struct intel_dsi_host *host,
	tmp |= ((pkt.header[0] & DT_MASK) << DT_SHIFT);
	tmp |= (pkt.header[1] << PARAM_WC_LOWER_SHIFT);
	tmp |= (pkt.header[2] << PARAM_WC_UPPER_SHIFT);
-	I915_WRITE(DSI_CMD_TXHDR(dsi_trans), tmp);
+	intel_de_write(dev_priv, DSI_CMD_TXHDR(dsi_trans), tmp);

	return 0;
 }
@@ -212,53 +215,55 @@ static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder)
		 * Program voltage swing and pre-emphasis level values as per
		 * table in BSPEC under DDI buffer programing
		 */
-		tmp = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN0(phy));
		tmp &= ~(SCALING_MODE_SEL_MASK | RTERM_SELECT_MASK);
		tmp |= SCALING_MODE_SEL(0x2);
		tmp |= TAP2_DISABLE | TAP3_DISABLE;
		tmp |= RTERM_SELECT(0x6);
-		I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp);

-		tmp = I915_READ(ICL_PORT_TX_DW5_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_AUX(phy));
		tmp &= ~(SCALING_MODE_SEL_MASK | RTERM_SELECT_MASK);
		tmp |= SCALING_MODE_SEL(0x2);
		tmp |= TAP2_DISABLE | TAP3_DISABLE;
		tmp |= RTERM_SELECT(0x6);
-		I915_WRITE(ICL_PORT_TX_DW5_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW5_AUX(phy), tmp);

-		tmp = I915_READ(ICL_PORT_TX_DW2_LN0(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN0(phy));
		tmp &= ~(SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
			 RCOMP_SCALAR_MASK);
		tmp |= SWING_SEL_UPPER(0x2);
		tmp |= SWING_SEL_LOWER(0x2);
		tmp |= RCOMP_SCALAR(0x98);
-		I915_WRITE(ICL_PORT_TX_DW2_GRP(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW2_GRP(phy), tmp);

-		tmp = I915_READ(ICL_PORT_TX_DW2_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_AUX(phy));
		tmp &= ~(SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
			 RCOMP_SCALAR_MASK);
		tmp |= SWING_SEL_UPPER(0x2);
		tmp |= SWING_SEL_LOWER(0x2);
		tmp |= RCOMP_SCALAR(0x98);
-		I915_WRITE(ICL_PORT_TX_DW2_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW2_AUX(phy), tmp);

-		tmp = I915_READ(ICL_PORT_TX_DW4_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW4_AUX(phy));
		tmp &= ~(POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
			 CURSOR_COEFF_MASK);
		tmp |= POST_CURSOR_1(0x0);
		tmp |= POST_CURSOR_2(0x0);
		tmp |= CURSOR_COEFF(0x3f);
-		I915_WRITE(ICL_PORT_TX_DW4_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW4_AUX(phy), tmp);

		for (lane = 0; lane <= 3; lane++) {
			/* Bspec: must not use GRP register for write */
-			tmp = I915_READ(ICL_PORT_TX_DW4_LN(lane, phy));
+			tmp = intel_de_read(dev_priv,
+					    ICL_PORT_TX_DW4_LN(lane, phy));
			tmp &= ~(POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
				 CURSOR_COEFF_MASK);
			tmp |= POST_CURSOR_1(0x0);
			tmp |= POST_CURSOR_2(0x0);
			tmp |= CURSOR_COEFF(0x3f);
-			I915_WRITE(ICL_PORT_TX_DW4_LN(lane, phy), tmp);
+			intel_de_write(dev_priv,
+				       ICL_PORT_TX_DW4_LN(lane, phy), tmp);
		}
	}
 }
@@ -270,7 +275,7 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
	u32 dss_ctl1;

-	dss_ctl1 = I915_READ(DSS_CTL1);
+	dss_ctl1 = intel_de_read(dev_priv, DSS_CTL1);
	dss_ctl1 |= SPLITTER_ENABLE;
	dss_ctl1 &= ~OVERLAP_PIXELS_MASK;
	dss_ctl1 |= OVERLAP_PIXELS(intel_dsi->pixel_overlap);
@@ -286,20 +291,21 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
		dl_buffer_depth = hactive / 2 + intel_dsi->pixel_overlap;

		if (dl_buffer_depth > MAX_DL_BUFFER_TARGET_DEPTH)
-			DRM_ERROR("DL buffer depth exceed max value\n");
+			drm_err(&dev_priv->drm,
+				"DL buffer depth exceed max value\n");

		dss_ctl1 &= ~LEFT_DL_BUF_TARGET_DEPTH_MASK;
		dss_ctl1 |= LEFT_DL_BUF_TARGET_DEPTH(dl_buffer_depth);
-		dss_ctl2 = I915_READ(DSS_CTL2);
+		dss_ctl2 = intel_de_read(dev_priv, DSS_CTL2);
		dss_ctl2 &= ~RIGHT_DL_BUF_TARGET_DEPTH_MASK;
		dss_ctl2 |= RIGHT_DL_BUF_TARGET_DEPTH(dl_buffer_depth);
-		I915_WRITE(DSS_CTL2, dss_ctl2);
+		intel_de_write(dev_priv, DSS_CTL2, dss_ctl2);
	} else {
		/* Interleave */
		dss_ctl1 |= DUAL_LINK_MODE_INTERLEAVE;
	}

-	I915_WRITE(DSS_CTL1, dss_ctl1);
+	intel_de_write(dev_priv, DSS_CTL1, dss_ctl1);
 }

 /* aka DSI 8X clock */
@@ -330,15 +336,15 @@ static void gen11_dsi_program_esc_clk_div(struct intel_encoder *encoder,
	esc_clk_div_m = DIV_ROUND_UP(afe_clk_khz, DSI_MAX_ESC_CLK);

	for_each_dsi_port(port, intel_dsi->ports) {
-		I915_WRITE(ICL_DSI_ESC_CLK_DIV(port),
-			   esc_clk_div_m & ICL_ESC_CLK_DIV_MASK);
-		POSTING_READ(ICL_DSI_ESC_CLK_DIV(port));
+		intel_de_write(dev_priv, ICL_DSI_ESC_CLK_DIV(port),
+			       esc_clk_div_m & ICL_ESC_CLK_DIV_MASK);
+		intel_de_posting_read(dev_priv, ICL_DSI_ESC_CLK_DIV(port));
	}

	for_each_dsi_port(port, intel_dsi->ports) {
-		I915_WRITE(ICL_DPHY_ESC_CLK_DIV(port),
-			   esc_clk_div_m & ICL_ESC_CLK_DIV_MASK);
-		POSTING_READ(ICL_DPHY_ESC_CLK_DIV(port));
+		intel_de_write(dev_priv, ICL_DPHY_ESC_CLK_DIV(port),
+			       esc_clk_div_m & ICL_ESC_CLK_DIV_MASK);
+		intel_de_posting_read(dev_priv, ICL_DPHY_ESC_CLK_DIV(port));
	}
 }
@@ -348,7 +354,7 @@ static void get_dsi_io_power_domains(struct drm_i915_private *dev_priv,
	enum port port;

	for_each_dsi_port(port, intel_dsi->ports) {
-		WARN_ON(intel_dsi->io_wakeref[port]);
+		drm_WARN_ON(&dev_priv->drm, intel_dsi->io_wakeref[port]);
		intel_dsi->io_wakeref[port] =
			intel_display_power_get(dev_priv,
						port == PORT_A ?
@@ -365,9 +371,9 @@ static void gen11_dsi_enable_io_power(struct intel_encoder *encoder)
	u32 tmp;

	for_each_dsi_port(port, intel_dsi->ports) {
-		tmp = I915_READ(ICL_DSI_IO_MODECTL(port));
+		tmp = intel_de_read(dev_priv, ICL_DSI_IO_MODECTL(port));
		tmp |= COMBO_PHY_MODE_DSI;
-		I915_WRITE(ICL_DSI_IO_MODECTL(port), tmp);
+		intel_de_write(dev_priv, ICL_DSI_IO_MODECTL(port), tmp);
	}

	get_dsi_io_power_domains(dev_priv, intel_dsi);
@@ -394,40 +400,46 @@ static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)

	/* Step 4b(i) set loadgen select for transmit and aux lanes */
	for_each_dsi_phy(phy, intel_dsi->phys) {
-		tmp = I915_READ(ICL_PORT_TX_DW4_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW4_AUX(phy));
		tmp &= ~LOADGEN_SELECT;
-		I915_WRITE(ICL_PORT_TX_DW4_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW4_AUX(phy), tmp);
		for (lane = 0; lane <= 3; lane++) {
-			tmp = I915_READ(ICL_PORT_TX_DW4_LN(lane, phy));
+			tmp = intel_de_read(dev_priv,
+					    ICL_PORT_TX_DW4_LN(lane, phy));
			tmp &= ~LOADGEN_SELECT;
			if (lane != 2)
				tmp |= LOADGEN_SELECT;
-			I915_WRITE(ICL_PORT_TX_DW4_LN(lane, phy), tmp);
+			intel_de_write(dev_priv,
+				       ICL_PORT_TX_DW4_LN(lane, phy), tmp);
		}
	}

	/* Step 4b(ii) set latency optimization for transmit and aux lanes */
	for_each_dsi_phy(phy, intel_dsi->phys) {
-		tmp = I915_READ(ICL_PORT_TX_DW2_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_AUX(phy));
		tmp &= ~FRC_LATENCY_OPTIM_MASK;
		tmp |= FRC_LATENCY_OPTIM_VAL(0x5);
-		I915_WRITE(ICL_PORT_TX_DW2_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW2_AUX(phy), tmp);
-		tmp = I915_READ(ICL_PORT_TX_DW2_LN0(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN0(phy));
		tmp &= ~FRC_LATENCY_OPTIM_MASK;
		tmp |= FRC_LATENCY_OPTIM_VAL(0x5);
-		I915_WRITE(ICL_PORT_TX_DW2_GRP(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW2_GRP(phy), tmp);

		/* For EHL, TGL, set latency optimization for PCS_DW1 lanes */
		if (IS_ELKHARTLAKE(dev_priv) || (INTEL_GEN(dev_priv) >= 12)) {
-			tmp = I915_READ(ICL_PORT_PCS_DW1_AUX(phy));
+			tmp = intel_de_read(dev_priv,
+					    ICL_PORT_PCS_DW1_AUX(phy));
			tmp &= ~LATENCY_OPTIM_MASK;
			tmp |= LATENCY_OPTIM_VAL(0);
-			I915_WRITE(ICL_PORT_PCS_DW1_AUX(phy), tmp);
+			intel_de_write(dev_priv, ICL_PORT_PCS_DW1_AUX(phy),
+				       tmp);

-			tmp = I915_READ(ICL_PORT_PCS_DW1_LN0(phy));
+			tmp = intel_de_read(dev_priv,
+					    ICL_PORT_PCS_DW1_LN0(phy));
			tmp &= ~LATENCY_OPTIM_MASK;
			tmp |= LATENCY_OPTIM_VAL(0x1);
-			I915_WRITE(ICL_PORT_PCS_DW1_GRP(phy), tmp);
+			intel_de_write(dev_priv, ICL_PORT_PCS_DW1_GRP(phy),
+				       tmp);
		}
	}
@@ -442,12 +454,12 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)

	/* clear common keeper enable bit */
	for_each_dsi_phy(phy, intel_dsi->phys) {
-		tmp = I915_READ(ICL_PORT_PCS_DW1_LN0(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_PCS_DW1_LN0(phy));
		tmp &= ~COMMON_KEEPER_EN;
-		I915_WRITE(ICL_PORT_PCS_DW1_GRP(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_PCS_DW1_GRP(phy), tmp);
-		tmp = I915_READ(ICL_PORT_PCS_DW1_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_PCS_DW1_AUX(phy));
		tmp &= ~COMMON_KEEPER_EN;
-		I915_WRITE(ICL_PORT_PCS_DW1_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_PCS_DW1_AUX(phy), tmp);
	}

	/*
@@ -456,19 +468,19 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)
	 * as part of lane phy sequence configuration
	 */
	for_each_dsi_phy(phy, intel_dsi->phys) {
-		tmp = I915_READ(ICL_PORT_CL_DW5(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_CL_DW5(phy));
		tmp |= SUS_CLOCK_CONFIG;
-		I915_WRITE(ICL_PORT_CL_DW5(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_CL_DW5(phy), tmp);
	}

	/* Clear training enable to change swing values */
	for_each_dsi_phy(phy, intel_dsi->phys) {
-		tmp = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN0(phy));
		tmp &= ~TX_TRAINING_EN;
-		I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp);
-		tmp = I915_READ(ICL_PORT_TX_DW5_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_AUX(phy));
		tmp &= ~TX_TRAINING_EN;
-		I915_WRITE(ICL_PORT_TX_DW5_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW5_AUX(phy), tmp);
	}

	/* Program swing and de-emphasis */
@@ -476,12 +488,12 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)

	/* Set training enable to trigger update */
	for_each_dsi_phy(phy, intel_dsi->phys) {
-		tmp = I915_READ(ICL_PORT_TX_DW5_LN0(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN0(phy));
		tmp |= TX_TRAINING_EN;
-		I915_WRITE(ICL_PORT_TX_DW5_GRP(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp);
-		tmp = I915_READ(ICL_PORT_TX_DW5_AUX(phy));
+		tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_AUX(phy));
		tmp |= TX_TRAINING_EN;
-		I915_WRITE(ICL_PORT_TX_DW5_AUX(phy), tmp);
+		intel_de_write(dev_priv, ICL_PORT_TX_DW5_AUX(phy), tmp);
	}
 }
@@ -493,14 +505,15 @@ static void gen11_dsi_enable_ddi_buffer(struct intel_encoder *encoder)
	enum port port;

	for_each_dsi_port(port, intel_dsi->ports) {
-		tmp = I915_READ(DDI_BUF_CTL(port));
+		tmp = intel_de_read(dev_priv, DDI_BUF_CTL(port));
		tmp |= DDI_BUF_CTL_ENABLE;
-		I915_WRITE(DDI_BUF_CTL(port), tmp);
+		intel_de_write(dev_priv, DDI_BUF_CTL(port), tmp);

-		if (wait_for_us(!(I915_READ(DDI_BUF_CTL(port)) &
+		if (wait_for_us(!(intel_de_read(dev_priv, DDI_BUF_CTL(port)) &
				  DDI_BUF_IS_IDLE),
				  500))
-			DRM_ERROR("DDI port:%c buffer idle\n", port_name(port));
+			drm_err(&dev_priv->drm, "DDI port:%c buffer idle\n",
+				port_name(port));
	}
 }
@@ -516,28 +529,30 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,

	/* Program T-INIT master registers */
	for_each_dsi_port(port, intel_dsi->ports) {
-		tmp = I915_READ(ICL_DSI_T_INIT_MASTER(port));
+		tmp = intel_de_read(dev_priv, ICL_DSI_T_INIT_MASTER(port));
		tmp &= ~MASTER_INIT_TIMER_MASK;
		tmp |= intel_dsi->init_count;
-		I915_WRITE(ICL_DSI_T_INIT_MASTER(port), tmp);
+		intel_de_write(dev_priv, ICL_DSI_T_INIT_MASTER(port), tmp);
	}

	/* Program DPHY clock lanes timings */
	for_each_dsi_port(port, intel_dsi->ports) {
-		I915_WRITE(DPHY_CLK_TIMING_PARAM(port), intel_dsi->dphy_reg);
+		intel_de_write(dev_priv, DPHY_CLK_TIMING_PARAM(port),
+			       intel_dsi->dphy_reg);

		/* shadow register inside display core */
-		I915_WRITE(DSI_CLK_TIMING_PARAM(port), intel_dsi->dphy_reg);
+		intel_de_write(dev_priv, DSI_CLK_TIMING_PARAM(port),
+			       intel_dsi->dphy_reg);
	}

	/* Program DPHY data lanes timings */
	for_each_dsi_port(port, intel_dsi->ports) {
-		I915_WRITE(DPHY_DATA_TIMING_PARAM(port),
-			   intel_dsi->dphy_data_lane_reg);
+		intel_de_write(dev_priv, DPHY_DATA_TIMING_PARAM(port),
+			       intel_dsi->dphy_data_lane_reg);

		/* shadow register inside display core */
-		I915_WRITE(DSI_DATA_TIMING_PARAM(port),
-			   intel_dsi->dphy_data_lane_reg);
+		intel_de_write(dev_priv, DSI_DATA_TIMING_PARAM(port),
+			       intel_dsi->dphy_data_lane_reg);
	}

	/*
@@ -549,25 +564,30 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
	if (IS_GEN(dev_priv, 11)) {
		if (afe_clk(encoder, crtc_state) <= 800000) {
			for_each_dsi_port(port, intel_dsi->ports) {
-				tmp = I915_READ(DPHY_TA_TIMING_PARAM(port));
+				tmp = intel_de_read(dev_priv,
+						    DPHY_TA_TIMING_PARAM(port));
				tmp &= ~TA_SURE_MASK;
				tmp |= TA_SURE_OVERRIDE | TA_SURE(0);
-				I915_WRITE(DPHY_TA_TIMING_PARAM(port), tmp);
+				intel_de_write(dev_priv,
+					       DPHY_TA_TIMING_PARAM(port),
+					       tmp);

				/* shadow register inside display core */
-				tmp = I915_READ(DSI_TA_TIMING_PARAM(port));
+				tmp = intel_de_read(dev_priv,
+						    DSI_TA_TIMING_PARAM(port));
				tmp &= ~TA_SURE_MASK;
				tmp |= TA_SURE_OVERRIDE | TA_SURE(0);
-				I915_WRITE(DSI_TA_TIMING_PARAM(port), tmp);
+				intel_de_write(dev_priv,
+					       DSI_TA_TIMING_PARAM(port), tmp);
			}
		}
	}

	if (IS_ELKHARTLAKE(dev_priv)) {
		for_each_dsi_phy(phy, intel_dsi->phys) {
-			tmp = I915_READ(ICL_DPHY_CHKN(phy));
+			tmp = intel_de_read(dev_priv, ICL_DPHY_CHKN(phy));
			tmp |= ICL_DPHY_CHKN_AFE_OVER_PPI_STRAP;
-			I915_WRITE(ICL_DPHY_CHKN(phy), tmp);
+			intel_de_write(dev_priv, ICL_DPHY_CHKN(phy), tmp);
		}
	}
 }
@@ -580,11 +600,11 @@ static void gen11_dsi_gate_clocks(struct intel_encoder *encoder)
	enum phy phy;

	mutex_lock(&dev_priv->dpll_lock);
	tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0);
	for_each_dsi_phy(phy, intel_dsi->phys)
		tmp |= ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);

	intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, tmp);
	mutex_unlock(&dev_priv->dpll_lock);
}

@@ -596,11 +616,11 @@ static void gen11_dsi_ungate_clocks(struct intel_encoder *encoder)
	enum phy phy;

	mutex_lock(&dev_priv->dpll_lock);
	tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0);
	for_each_dsi_phy(phy, intel_dsi->phys)
		tmp &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);

	intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, tmp);
	mutex_unlock(&dev_priv->dpll_lock);
}
@@ -615,12 +635,12 @@ static void gen11_dsi_map_pll(struct intel_encoder *encoder,
	mutex_lock(&dev_priv->dpll_lock);

	val = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0);
	for_each_dsi_phy(phy, intel_dsi->phys) {
		val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy);
		val |= ICL_DPCLKA_CFGCR0_DDI_CLK_SEL(pll->info->id, phy);
	}
	intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, val);

	for_each_dsi_phy(phy, intel_dsi->phys) {
		if (INTEL_GEN(dev_priv) >= 12)
@@ -628,9 +648,9 @@ static void gen11_dsi_map_pll(struct intel_encoder *encoder,
		else
			val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
	}
	intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, val);

	intel_de_posting_read(dev_priv, ICL_DPCLKA_CFGCR0);

	mutex_unlock(&dev_priv->dpll_lock);
}
@@ -649,7 +669,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		tmp = intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans));

		if (intel_dsi->eotp_pkt)
			tmp &= ~EOTP_DISABLED;
@@ -726,16 +746,18 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
			}
		}

		intel_de_write(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans), tmp);
	}

	/* enable port sync mode if dual link */
	if (intel_dsi->dual_link) {
		for_each_dsi_port(port, intel_dsi->ports) {
			dsi_trans = dsi_port_to_transcoder(port);
			tmp = intel_de_read(dev_priv,
					    TRANS_DDI_FUNC_CTL2(dsi_trans));
			tmp |= PORT_SYNC_MODE_ENABLE;
			intel_de_write(dev_priv,
				       TRANS_DDI_FUNC_CTL2(dsi_trans), tmp);
		}

		/* configure stream splitting */
@@ -746,7 +768,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
		dsi_trans = dsi_port_to_transcoder(port);

		/* select data lane width */
		tmp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans));
		tmp &= ~DDI_PORT_WIDTH_MASK;
		tmp |= DDI_PORT_WIDTH(intel_dsi->lane_count);
@@ -772,15 +794,15 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,

		/* enable DDI buffer */
		tmp |= TRANS_DDI_FUNC_ENABLE;
		intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans), tmp);
	}

	/* wait for link ready */
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		if (wait_for_us((intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans)) &
				 LINK_READY), 2500))
			drm_err(&dev_priv->drm, "DSI link not ready\n");
	}
}
@@ -836,17 +858,18 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,

	/* minimum hactive as per bspec: 256 pixels */
	if (adjusted_mode->crtc_hdisplay < 256)
		drm_err(&dev_priv->drm, "hactive is less then 256 pixels\n");

	/* if RGB666 format, then hactive must be multiple of 4 pixels */
	if (intel_dsi->pixel_format == MIPI_DSI_FMT_RGB666 && hactive % 4 != 0)
		drm_err(&dev_priv->drm,
			"hactive pixels are not multiple of 4\n");

	/* program TRANS_HTOTAL register */
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		intel_de_write(dev_priv, HTOTAL(dsi_trans),
			       (hactive - 1) | ((htotal - 1) << 16));
	}

	/* TRANS_HSYNC register to be programmed only for video mode */
@@ -855,11 +878,12 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
		    VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE) {
			/* BSPEC: hsync size should be atleast 16 pixels */
			if (hsync_size < 16)
				drm_err(&dev_priv->drm,
					"hsync size < 16 pixels\n");
		}

		if (hback_porch < 16)
			drm_err(&dev_priv->drm, "hback porch < 16 pixels\n");

		if (intel_dsi->dual_link) {
			hsync_start /= 2;
@@ -868,8 +892,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,

		for_each_dsi_port(port, intel_dsi->ports) {
			dsi_trans = dsi_port_to_transcoder(port);
			intel_de_write(dev_priv, HSYNC(dsi_trans),
				       (hsync_start - 1) | ((hsync_end - 1) << 16));
		}
	}
@@ -882,21 +906,21 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
		 * struct drm_display_mode.
		 * For interlace mode: program required pixel minus 2
		 */
		intel_de_write(dev_priv, VTOTAL(dsi_trans),
			       (vactive - 1) | ((vtotal - 1) << 16));
	}

	if (vsync_end < vsync_start || vsync_end > vtotal)
		drm_err(&dev_priv->drm, "Invalid vsync_end value\n");

	if (vsync_start < vactive)
		drm_err(&dev_priv->drm, "vsync_start less than vactive\n");

	/* program TRANS_VSYNC register */
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		intel_de_write(dev_priv, VSYNC(dsi_trans),
			       (vsync_start - 1) | ((vsync_end - 1) << 16));
	}

	/*
@@ -907,15 +931,15 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
	 */
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		intel_de_write(dev_priv, VSYNCSHIFT(dsi_trans), vsync_shift);
	}

	/* program TRANS_VBLANK register, should be same as vtotal programmed */
	if (INTEL_GEN(dev_priv) >= 12) {
		for_each_dsi_port(port, intel_dsi->ports) {
			dsi_trans = dsi_port_to_transcoder(port);
			intel_de_write(dev_priv, VBLANK(dsi_trans),
				       (vactive - 1) | ((vtotal - 1) << 16));
		}
	}
}
@@ -930,14 +954,15 @@ static void gen11_dsi_enable_transcoder(struct intel_encoder *encoder)
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		tmp = intel_de_read(dev_priv, PIPECONF(dsi_trans));
		tmp |= PIPECONF_ENABLE;
		intel_de_write(dev_priv, PIPECONF(dsi_trans), tmp);

		/* wait for transcoder to be enabled */
		if (intel_de_wait_for_set(dev_priv, PIPECONF(dsi_trans),
					  I965_PIPECONF_ACTIVE, 10))
			drm_err(&dev_priv->drm,
				"DSI transcoder not enabled\n");
	}
}
@@ -968,26 +993,26 @@ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder,
		dsi_trans = dsi_port_to_transcoder(port);

		/* program hst_tx_timeout */
		tmp = intel_de_read(dev_priv, DSI_HSTX_TO(dsi_trans));
		tmp &= ~HSTX_TIMEOUT_VALUE_MASK;
		tmp |= HSTX_TIMEOUT_VALUE(hs_tx_timeout);
		intel_de_write(dev_priv, DSI_HSTX_TO(dsi_trans), tmp);

		/* FIXME: DSI_CALIB_TO */

		/* program lp_rx_host timeout */
		tmp = intel_de_read(dev_priv, DSI_LPRX_HOST_TO(dsi_trans));
		tmp &= ~LPRX_TIMEOUT_VALUE_MASK;
		tmp |= LPRX_TIMEOUT_VALUE(lp_rx_timeout);
		intel_de_write(dev_priv, DSI_LPRX_HOST_TO(dsi_trans), tmp);

		/* FIXME: DSI_PWAIT_TO */

		/* program turn around timeout */
		tmp = intel_de_read(dev_priv, DSI_TA_TO(dsi_trans));
		tmp &= ~TA_TIMEOUT_VALUE_MASK;
		tmp |= TA_TIMEOUT_VALUE(ta_timeout);
		intel_de_write(dev_priv, DSI_TA_TO(dsi_trans), tmp);
	}
}
@@ -1041,14 +1066,15 @@ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder)
		 * FIXME: This uses the number of DW's currently in the payload
		 * receive queue. This is probably not what we want here.
		 */
		tmp = intel_de_read(dev_priv, DSI_CMD_RXCTL(dsi_trans));
		tmp &= NUMBER_RX_PLOAD_DW_MASK;
		/* multiply "Number Rx Payload DW" by 4 to get max value */
		tmp = tmp * 4;
		dsi = intel_dsi->dsi_hosts[port]->device;
		ret = mipi_dsi_set_maximum_return_packet_size(dsi, tmp);
		if (ret < 0)
			drm_err(&dev_priv->drm,
				"error setting max return pkt size%d\n", tmp);
	}

	/* panel power on related mipi dsi vbt sequences */
@@ -1077,8 +1103,6 @@ static void gen11_dsi_pre_enable(struct intel_encoder *encoder,
				 const struct intel_crtc_state *pipe_config,
				 const struct drm_connector_state *conn_state)
{
	/* step3b */
	gen11_dsi_map_pll(encoder, pipe_config);

@@ -1092,13 +1116,24 @@ static void gen11_dsi_pre_enable(struct intel_encoder *encoder,

	/* step6c: configure transcoder timings */
	gen11_dsi_set_transcoder_timings(encoder, pipe_config);
}

static void gen11_dsi_enable(struct intel_encoder *encoder,
			     const struct intel_crtc_state *crtc_state,
			     const struct drm_connector_state *conn_state)
{
	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);

	WARN_ON(crtc_state->has_pch_encoder);

	/* step6d: enable dsi transcoder */
	gen11_dsi_enable_transcoder(encoder);

	/* step7: enable backlight */
	intel_panel_enable_backlight(crtc_state, conn_state);
	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_ON);

	intel_crtc_vblank_on(crtc_state);
}
static void gen11_dsi_disable_transcoder(struct intel_encoder *encoder)
@@ -1113,14 +1148,15 @@ static void gen11_dsi_disable_transcoder(struct intel_encoder *encoder)
		dsi_trans = dsi_port_to_transcoder(port);

		/* disable transcoder */
		tmp = intel_de_read(dev_priv, PIPECONF(dsi_trans));
		tmp &= ~PIPECONF_ENABLE;
		intel_de_write(dev_priv, PIPECONF(dsi_trans), tmp);

		/* wait for transcoder to be disabled */
		if (intel_de_wait_for_clear(dev_priv, PIPECONF(dsi_trans),
					    I965_PIPECONF_ACTIVE, 50))
			drm_err(&dev_priv->drm,
				"DSI trancoder not disabled\n");
	}
}
@@ -1147,32 +1183,34 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
	/* put dsi link in ULPS */
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		tmp = intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans));
		tmp |= LINK_ENTER_ULPS;
		tmp &= ~LINK_ULPS_TYPE_LP11;
		intel_de_write(dev_priv, DSI_LP_MSG(dsi_trans), tmp);

		if (wait_for_us((intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans)) &
				 LINK_IN_ULPS),
				10))
			drm_err(&dev_priv->drm, "DSI link not in ULPS\n");
	}

	/* disable ddi function */
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		tmp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans));
		tmp &= ~TRANS_DDI_FUNC_ENABLE;
		intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans), tmp);
	}

	/* disable port sync mode if dual link */
	if (intel_dsi->dual_link) {
		for_each_dsi_port(port, intel_dsi->ports) {
			dsi_trans = dsi_port_to_transcoder(port);
			tmp = intel_de_read(dev_priv,
					    TRANS_DDI_FUNC_CTL2(dsi_trans));
			tmp &= ~PORT_SYNC_MODE_ENABLE;
			intel_de_write(dev_priv,
				       TRANS_DDI_FUNC_CTL2(dsi_trans), tmp);
		}
	}
}
@@ -1186,15 +1224,16 @@ static void gen11_dsi_disable_port(struct intel_encoder *encoder)
	gen11_dsi_ungate_clocks(encoder);
	for_each_dsi_port(port, intel_dsi->ports) {
		tmp = intel_de_read(dev_priv, DDI_BUF_CTL(port));
		tmp &= ~DDI_BUF_CTL_ENABLE;
		intel_de_write(dev_priv, DDI_BUF_CTL(port), tmp);

		if (wait_for_us((intel_de_read(dev_priv, DDI_BUF_CTL(port)) &
				 DDI_BUF_IS_IDLE),
				8))
			drm_err(&dev_priv->drm,
				"DDI port:%c buffer not idle\n",
				port_name(port));
	}
	gen11_dsi_gate_clocks(encoder);
}
@@ -1219,9 +1258,9 @@ static void gen11_dsi_disable_io_power(struct intel_encoder *encoder)

	/* set mode to DDI */
	for_each_dsi_port(port, intel_dsi->ports) {
		tmp = intel_de_read(dev_priv, ICL_DSI_IO_MODECTL(port));
		tmp &= ~COMBO_PHY_MODE_DSI;
		intel_de_write(dev_priv, ICL_DSI_IO_MODECTL(port), tmp);
	}
}
@@ -1357,11 +1396,13 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
		return ret;

	/* DSI specific sanity checks on the common code */
	drm_WARN_ON(&dev_priv->drm, vdsc_cfg->vbr_enable);
	drm_WARN_ON(&dev_priv->drm, vdsc_cfg->simple_422);
	drm_WARN_ON(&dev_priv->drm,
		    vdsc_cfg->pic_width % vdsc_cfg->slice_width);
	drm_WARN_ON(&dev_priv->drm, vdsc_cfg->slice_height < 8);
	drm_WARN_ON(&dev_priv->drm,
		    vdsc_cfg->pic_height % vdsc_cfg->slice_height);

	ret = drm_dsc_compute_rc_parameters(vdsc_cfg);
	if (ret)
@@ -1443,7 +1484,7 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
	for_each_dsi_port(port, intel_dsi->ports) {
		dsi_trans = dsi_port_to_transcoder(port);
		tmp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans));
		switch (tmp & TRANS_DDI_EDP_INPUT_MASK) {
		case TRANS_DDI_EDP_INPUT_A_ON:
			*pipe = PIPE_A;
@@ -1458,11 +1499,11 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
			*pipe = PIPE_D;
			break;
		default:
			drm_err(&dev_priv->drm, "Invalid PIPE input\n");
			goto out;
		}

		tmp = intel_de_read(dev_priv, PIPECONF(dsi_trans));
		ret = tmp & PIPECONF_ENABLE;
	}
out:
@@ -1582,7 +1623,8 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
	 */
	prepare_cnt = DIV_ROUND_UP(ths_prepare_ns * 4, tlpx_ns);
	if (prepare_cnt > ICL_PREPARE_CNT_MAX) {
		drm_dbg_kms(&dev_priv->drm, "prepare_cnt out of range (%d)\n",
			    prepare_cnt);
		prepare_cnt = ICL_PREPARE_CNT_MAX;
	}

@@ -1590,28 +1632,33 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
	clk_zero_cnt = DIV_ROUND_UP(mipi_config->tclk_prepare_clkzero -
				    ths_prepare_ns, tlpx_ns);
	if (clk_zero_cnt > ICL_CLK_ZERO_CNT_MAX) {
		drm_dbg_kms(&dev_priv->drm,
			    "clk_zero_cnt out of range (%d)\n", clk_zero_cnt);
		clk_zero_cnt = ICL_CLK_ZERO_CNT_MAX;
	}

	/* trail cnt in escape clocks*/
	trail_cnt = DIV_ROUND_UP(tclk_trail_ns, tlpx_ns);
	if (trail_cnt > ICL_TRAIL_CNT_MAX) {
		drm_dbg_kms(&dev_priv->drm, "trail_cnt out of range (%d)\n",
			    trail_cnt);
		trail_cnt = ICL_TRAIL_CNT_MAX;
	}

	/* tclk pre count in escape clocks */
	tclk_pre_cnt = DIV_ROUND_UP(mipi_config->tclk_pre, tlpx_ns);
	if (tclk_pre_cnt > ICL_TCLK_PRE_CNT_MAX) {
		drm_dbg_kms(&dev_priv->drm,
			    "tclk_pre_cnt out of range (%d)\n", tclk_pre_cnt);
		tclk_pre_cnt = ICL_TCLK_PRE_CNT_MAX;
	}

	/* tclk post count in escape clocks */
	tclk_post_cnt = DIV_ROUND_UP(mipi_config->tclk_post, tlpx_ns);
	if (tclk_post_cnt > ICL_TCLK_POST_CNT_MAX) {
		drm_dbg_kms(&dev_priv->drm,
			    "tclk_post_cnt out of range (%d)\n",
			    tclk_post_cnt);
		tclk_post_cnt = ICL_TCLK_POST_CNT_MAX;
	}

@@ -1619,14 +1666,17 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
	hs_zero_cnt = DIV_ROUND_UP(mipi_config->ths_prepare_hszero -
				   ths_prepare_ns, tlpx_ns);
	if (hs_zero_cnt > ICL_HS_ZERO_CNT_MAX) {
		drm_dbg_kms(&dev_priv->drm, "hs_zero_cnt out of range (%d)\n",
			    hs_zero_cnt);
		hs_zero_cnt = ICL_HS_ZERO_CNT_MAX;
	}

	/* hs exit zero cnt in escape clocks */
	exit_zero_cnt = DIV_ROUND_UP(mipi_config->ths_exit, tlpx_ns);
	if (exit_zero_cnt > ICL_EXIT_ZERO_CNT_MAX) {
		drm_dbg_kms(&dev_priv->drm,
			    "exit_zero_cnt out of range (%d)\n",
			    exit_zero_cnt);
		exit_zero_cnt = ICL_EXIT_ZERO_CNT_MAX;
	}
@@ -1707,6 +1757,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
	encoder->pre_pll_enable = gen11_dsi_pre_pll_enable;
	encoder->pre_enable = gen11_dsi_pre_enable;
	encoder->enable = gen11_dsi_enable;
	encoder->disable = gen11_dsi_disable;
	encoder->post_disable = gen11_dsi_post_disable;
	encoder->port = port;
@@ -1737,7 +1788,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
	mutex_unlock(&dev->mode_config.mutex);

	if (!fixed_mode) {
		drm_err(&dev_priv->drm, "DSI fixed mode info missing\n");
		goto err;
	}

@@ -1763,7 +1814,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
	}

	if (!intel_dsi_vbt_init(intel_dsi, MIPI_DSI_GENERIC_PANEL_ID)) {
		drm_dbg_kms(&dev_priv->drm, "no device found\n");
		goto err;
	}


@@ -10,6 +10,7 @@
#include "i915_drv.h"
#include "intel_acpi.h"
#include "intel_display_types.h"

#define INTEL_DSM_REVISION_ID 1 /* For Calpella anyway... */
#define INTEL_DSM_FN_PLATFORM_MUX_INFO 1 /* No args */
@@ -156,3 +157,91 @@ void intel_register_dsm_handler(void)
void intel_unregister_dsm_handler(void)
{
}
/*
 * ACPI Specification, Revision 5.0, Appendix B.3.2 _DOD (Enumerate All Devices
 * Attached to the Display Adapter).
 */
#define ACPI_DISPLAY_INDEX_SHIFT		0
#define ACPI_DISPLAY_INDEX_MASK			(0xf << 0)
#define ACPI_DISPLAY_PORT_ATTACHMENT_SHIFT	4
#define ACPI_DISPLAY_PORT_ATTACHMENT_MASK	(0xf << 4)
#define ACPI_DISPLAY_TYPE_SHIFT			8
#define ACPI_DISPLAY_TYPE_MASK			(0xf << 8)
#define ACPI_DISPLAY_TYPE_OTHER			(0 << 8)
#define ACPI_DISPLAY_TYPE_VGA			(1 << 8)
#define ACPI_DISPLAY_TYPE_TV			(2 << 8)
#define ACPI_DISPLAY_TYPE_EXTERNAL_DIGITAL	(3 << 8)
#define ACPI_DISPLAY_TYPE_INTERNAL_DIGITAL	(4 << 8)
#define ACPI_VENDOR_SPECIFIC_SHIFT		12
#define ACPI_VENDOR_SPECIFIC_MASK		(0xf << 12)
#define ACPI_BIOS_CAN_DETECT			(1 << 16)
#define ACPI_DEPENDS_ON_VGA			(1 << 17)
#define ACPI_PIPE_ID_SHIFT			18
#define ACPI_PIPE_ID_MASK			(7 << 18)
#define ACPI_DEVICE_ID_SCHEME			(1ULL << 31)

static u32 acpi_display_type(struct intel_connector *connector)
{
	u32 display_type;

	switch (connector->base.connector_type) {
	case DRM_MODE_CONNECTOR_VGA:
	case DRM_MODE_CONNECTOR_DVIA:
		display_type = ACPI_DISPLAY_TYPE_VGA;
		break;
	case DRM_MODE_CONNECTOR_Composite:
	case DRM_MODE_CONNECTOR_SVIDEO:
	case DRM_MODE_CONNECTOR_Component:
	case DRM_MODE_CONNECTOR_9PinDIN:
	case DRM_MODE_CONNECTOR_TV:
		display_type = ACPI_DISPLAY_TYPE_TV;
		break;
	case DRM_MODE_CONNECTOR_DVII:
	case DRM_MODE_CONNECTOR_DVID:
	case DRM_MODE_CONNECTOR_DisplayPort:
	case DRM_MODE_CONNECTOR_HDMIA:
	case DRM_MODE_CONNECTOR_HDMIB:
		display_type = ACPI_DISPLAY_TYPE_EXTERNAL_DIGITAL;
		break;
	case DRM_MODE_CONNECTOR_LVDS:
	case DRM_MODE_CONNECTOR_eDP:
	case DRM_MODE_CONNECTOR_DSI:
		display_type = ACPI_DISPLAY_TYPE_INTERNAL_DIGITAL;
		break;
	case DRM_MODE_CONNECTOR_Unknown:
	case DRM_MODE_CONNECTOR_VIRTUAL:
		display_type = ACPI_DISPLAY_TYPE_OTHER;
		break;
	default:
		MISSING_CASE(connector->base.connector_type);
		display_type = ACPI_DISPLAY_TYPE_OTHER;
		break;
	}

	return display_type;
}

void intel_acpi_device_id_update(struct drm_i915_private *dev_priv)
{
	struct drm_device *drm_dev = &dev_priv->drm;
	struct intel_connector *connector;
	struct drm_connector_list_iter conn_iter;
	u8 display_index[16] = {};

	/* Populate the ACPI IDs for all connectors for a given drm_device */
	drm_connector_list_iter_begin(drm_dev, &conn_iter);
	for_each_intel_connector_iter(connector, &conn_iter) {
		u32 device_id, type;

		device_id = acpi_display_type(connector);

		/* Use display type specific display index. */
		type = (device_id & ACPI_DISPLAY_TYPE_MASK)
			>> ACPI_DISPLAY_TYPE_SHIFT;
		device_id |= display_index[type]++ << ACPI_DISPLAY_INDEX_SHIFT;

		connector->acpi_device_id = device_id;
	}
	drm_connector_list_iter_end(&conn_iter);
}


@@ -6,12 +6,17 @@
#ifndef __INTEL_ACPI_H__
#define __INTEL_ACPI_H__

struct drm_i915_private;

#ifdef CONFIG_ACPI
void intel_register_dsm_handler(void);
void intel_unregister_dsm_handler(void);
void intel_acpi_device_id_update(struct drm_i915_private *i915);
#else
static inline void intel_register_dsm_handler(void) { return; }
static inline void intel_unregister_dsm_handler(void) { return; }
static inline
void intel_acpi_device_id_update(struct drm_i915_private *i915) { return; }
#endif /* CONFIG_ACPI */

#endif /* __INTEL_ACPI_H__ */


@@ -35,7 +35,9 @@
#include <drm/drm_plane_helper.h>

#include "intel_atomic.h"
#include "intel_cdclk.h"
#include "intel_display_types.h"
#include "intel_global_state.h"
#include "intel_hdcp.h"
#include "intel_psr.h"
#include "intel_sprite.h"
@@ -64,8 +66,9 @@ int intel_digital_connector_atomic_get_property(struct drm_connector *connector,
	else if (property == dev_priv->broadcast_rgb_property)
		*val = intel_conn_state->broadcast_rgb;
	else {
		drm_dbg_atomic(&dev_priv->drm,
			       "Unknown property [PROP:%d:%s]\n",
			       property->base.id, property->name);
		return -EINVAL;
	}

@@ -101,8 +104,8 @@ int intel_digital_connector_atomic_set_property(struct drm_connector *connector,
		return 0;
	}

	drm_dbg_atomic(&dev_priv->drm, "Unknown property [PROP:%d:%s]\n",
		       property->base.id, property->name);

	return -EINVAL;
}

@@ -178,6 +181,8 @@ intel_digital_connector_duplicate_state(struct drm_connector *connector)

/**
 * intel_connector_needs_modeset - check if connector needs a modeset
 * @state: the atomic state corresponding to this modeset
 * @connector: the connector
 */
bool
intel_connector_needs_modeset(struct intel_atomic_state *state,
@@ -314,7 +319,8 @@ static void intel_atomic_setup_scaler(struct intel_crtc_scaler_state *scaler_sta
		}
	}

	if (drm_WARN(&dev_priv->drm, *scaler_id < 0,
		     "Cannot find scaler for %s:%d\n", name, idx))
		return;

	/* set scaler mode */
@@ -357,8 +363,8 @@ static void intel_atomic_setup_scaler(struct intel_crtc_scaler_state *scaler_sta
			mode = SKL_PS_SCALER_MODE_DYN;
	}

	drm_dbg_kms(&dev_priv->drm, "Attached scaler id %u.%u to %s:%d\n",
		    intel_crtc->pipe, *scaler_id, name, idx);
	scaler_state->scalers[*scaler_id].mode = mode;
}

@@ -409,8 +415,9 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,

	/* fail if required scalers > available scalers */
	if (num_scalers_need > intel_crtc->num_scalers){
		drm_dbg_kms(&dev_priv->drm,
			    "Too many scaling requests %d > %d\n",
			    num_scalers_need, intel_crtc->num_scalers);
		return -EINVAL;
	}

@@ -455,8 +462,9 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,
			plane = drm_plane_from_index(&dev_priv->drm, i);
			state = drm_atomic_get_plane_state(drm_state, plane);
			if (IS_ERR(state)) {
				drm_dbg_kms(&dev_priv->drm,
					    "Failed to add [PLANE:%d] to drm_state\n",
					    plane->base.id);
				return PTR_ERR(state);
			}
		}

@@ -465,7 +473,8 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,
		idx = plane->base.id;

		/* plane on different crtc cannot be a scaler user of this crtc */
		if (drm_WARN_ON(&dev_priv->drm,
				intel_plane->pipe != intel_crtc->pipe))
			continue;

		plane_state = intel_atomic_get_new_plane_state(intel_state,
@@ -494,18 +503,28 @@ intel_atomic_state_alloc(struct drm_device *dev)
return &state->base; return &state->base;
} }
void intel_atomic_state_free(struct drm_atomic_state *_state)
{
struct intel_atomic_state *state = to_intel_atomic_state(_state);
drm_atomic_state_default_release(&state->base);
kfree(state->global_objs);
i915_sw_fence_fini(&state->commit_ready);
kfree(state);
}
void intel_atomic_state_clear(struct drm_atomic_state *s) void intel_atomic_state_clear(struct drm_atomic_state *s)
{ {
struct intel_atomic_state *state = to_intel_atomic_state(s); struct intel_atomic_state *state = to_intel_atomic_state(s);
drm_atomic_state_default_clear(&state->base); drm_atomic_state_default_clear(&state->base);
intel_atomic_clear_global_state(state);
state->dpll_set = state->modeset = false; state->dpll_set = state->modeset = false;
state->global_state_changed = false; state->global_state_changed = false;
state->active_pipes = 0; state->active_pipes = 0;
memset(&state->min_cdclk, 0, sizeof(state->min_cdclk));
memset(&state->min_voltage_level, 0, sizeof(state->min_voltage_level));
memset(&state->cdclk.logical, 0, sizeof(state->cdclk.logical));
memset(&state->cdclk.actual, 0, sizeof(state->cdclk.actual));
state->cdclk.pipe = INVALID_PIPE;
} }
struct intel_crtc_state * struct intel_crtc_state *
@ -520,7 +539,7 @@ intel_atomic_get_crtc_state(struct drm_atomic_state *state,
return to_intel_crtc_state(crtc_state); return to_intel_crtc_state(crtc_state);
} }
int intel_atomic_lock_global_state(struct intel_atomic_state *state) int _intel_atomic_lock_global_state(struct intel_atomic_state *state)
{ {
struct drm_i915_private *dev_priv = to_i915(state->base.dev); struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc *crtc; struct intel_crtc *crtc;
@ -539,7 +558,7 @@ int intel_atomic_lock_global_state(struct intel_atomic_state *state)
return 0; return 0;
} }
int intel_atomic_serialize_global_state(struct intel_atomic_state *state) int _intel_atomic_serialize_global_state(struct intel_atomic_state *state)
{ {
struct drm_i915_private *dev_priv = to_i915(state->base.dev); struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc *crtc; struct intel_crtc *crtc;
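A recurring pattern in the hunks above is the switch from global logging macros (DRM_DEBUG_ATOMIC, DRM_DEBUG_KMS, WARN) to device-qualified ones (drm_dbg_atomic, drm_dbg_kms, drm_WARN) that take a `struct drm_device *` first, so every message is attributable to a specific GPU on multi-device systems. A toy model of that idea, using invented names (`struct toy_drm_device`, `toy_drm_dbg`), not the real drm API:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for struct drm_device: carries a name and, for the sake of
 * the example, captures the last formatted message instead of printing. */
struct toy_drm_device {
	const char *name;
	char last_msg[128];
};

/* Device-qualified debug log: every message is prefixed with the device
 * name, mirroring how drm_dbg_kms(&i915->drm, ...) tags its output. */
static void toy_drm_dbg(struct toy_drm_device *drm, const char *fmt, ...)
{
	va_list ap;
	int n;

	n = snprintf(drm->last_msg, sizeof(drm->last_msg), "[%s] ", drm->name);
	va_start(ap, fmt);
	vsnprintf(drm->last_msg + n, sizeof(drm->last_msg) - n, fmt, ap);
	va_end(ap);
}
```

The design point is simply that the device pointer is threaded through every call site, which is why each converted hunk grows an extra `&dev_priv->drm` argument.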

--- a/drivers/gpu/drm/i915/display/intel_atomic.h
+++ b/drivers/gpu/drm/i915/display/intel_atomic.h
@@ -45,6 +45,7 @@ void intel_crtc_destroy_state(struct drm_crtc *crtc,
 void intel_crtc_free_hw_state(struct intel_crtc_state *crtc_state);
 void intel_crtc_copy_color_blobs(struct intel_crtc_state *crtc_state);
 struct drm_atomic_state *intel_atomic_state_alloc(struct drm_device *dev);
+void intel_atomic_state_free(struct drm_atomic_state *state);
 void intel_atomic_state_clear(struct drm_atomic_state *state);

 struct intel_crtc_state *
@@ -55,8 +56,8 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,
 			       struct intel_crtc *intel_crtc,
 			       struct intel_crtc_state *crtc_state);
-int intel_atomic_lock_global_state(struct intel_atomic_state *state);
-int intel_atomic_serialize_global_state(struct intel_atomic_state *state);
+int _intel_atomic_lock_global_state(struct intel_atomic_state *state);
+int _intel_atomic_serialize_global_state(struct intel_atomic_state *state);

 #endif /* __INTEL_ATOMIC_H__ */

--- a/drivers/gpu/drm/i915/display/intel_atomic_plane.c
+++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.c
@@ -37,6 +37,7 @@
 #include "i915_trace.h"
 #include "intel_atomic_plane.h"
+#include "intel_cdclk.h"
 #include "intel_display_types.h"
 #include "intel_pm.h"
 #include "intel_sprite.h"
@@ -155,42 +156,64 @@ unsigned int intel_plane_data_rate(const struct intel_crtc_state *crtc_state,
 	return cpp * crtc_state->pixel_rate;
 }

-bool intel_plane_calc_min_cdclk(struct intel_atomic_state *state,
-				struct intel_plane *plane)
+int intel_plane_calc_min_cdclk(struct intel_atomic_state *state,
+			       struct intel_plane *plane,
+			       bool *need_cdclk_calc)
 {
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	const struct intel_plane_state *plane_state =
 		intel_atomic_get_new_plane_state(state, plane);
 	struct intel_crtc *crtc = to_intel_crtc(plane_state->hw.crtc);
-	struct intel_crtc_state *crtc_state;
+	const struct intel_cdclk_state *cdclk_state;
+	const struct intel_crtc_state *old_crtc_state;
+	struct intel_crtc_state *new_crtc_state;

 	if (!plane_state->uapi.visible || !plane->min_cdclk)
-		return false;
+		return 0;

-	crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
+	old_crtc_state = intel_atomic_get_old_crtc_state(state, crtc);
+	new_crtc_state = intel_atomic_get_new_crtc_state(state, crtc);

-	crtc_state->min_cdclk[plane->id] =
-		plane->min_cdclk(crtc_state, plane_state);
+	new_crtc_state->min_cdclk[plane->id] =
+		plane->min_cdclk(new_crtc_state, plane_state);

 	/*
-	 * Does the cdclk need to be bumbed up?
+	 * No need to check against the cdclk state if
+	 * the min cdclk for the plane doesn't increase.
 	 *
-	 * Note: we obviously need to be called before the new
-	 * cdclk frequency is calculated so state->cdclk.logical
-	 * hasn't been populated yet. Hence we look at the old
-	 * cdclk state under dev_priv->cdclk.logical. This is
-	 * safe as long we hold at least one crtc mutex (which
-	 * must be true since we have crtc_state).
+	 * Ie. we only ever increase the cdclk due to plane
+	 * requirements. This can reduce back and forth
+	 * display blinking due to constant cdclk changes.
 	 */
-	if (crtc_state->min_cdclk[plane->id] > dev_priv->cdclk.logical.cdclk) {
-		DRM_DEBUG_KMS("[PLANE:%d:%s] min_cdclk (%d kHz) > logical cdclk (%d kHz)\n",
-			      plane->base.base.id, plane->base.name,
-			      crtc_state->min_cdclk[plane->id],
-			      dev_priv->cdclk.logical.cdclk);
-		return true;
-	}
+	if (new_crtc_state->min_cdclk[plane->id] <=
+	    old_crtc_state->min_cdclk[plane->id])
+		return 0;

-	return false;
+	cdclk_state = intel_atomic_get_cdclk_state(state);
+	if (IS_ERR(cdclk_state))
+		return PTR_ERR(cdclk_state);
+
+	/*
+	 * No need to recalculate the cdclk state if
+	 * the min cdclk for the pipe doesn't increase.
+	 *
+	 * Ie. we only ever increase the cdclk due to plane
+	 * requirements. This can reduce back and forth
+	 * display blinking due to constant cdclk changes.
+	 */
+	if (new_crtc_state->min_cdclk[plane->id] <=
+	    cdclk_state->min_cdclk[crtc->pipe])
+		return 0;
+
+	drm_dbg_kms(&dev_priv->drm,
+		    "[PLANE:%d:%s] min cdclk (%d kHz) > [CRTC:%d:%s] min cdclk (%d kHz)\n",
+		    plane->base.base.id, plane->base.name,
+		    new_crtc_state->min_cdclk[plane->id],
+		    crtc->base.base.id, crtc->base.name,
+		    cdclk_state->min_cdclk[crtc->pipe]);
+	*need_cdclk_calc = true;
+
+	return 0;
 }
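The rewritten intel_plane_calc_min_cdclk() only ever asks for a cdclk recalculation when a plane's requirement actually grows beyond both its previous value and what the pipe already guarantees, which avoids back-and-forth cdclk changes (and the display blinking they cause). The decision itself can be sketched as a small predicate; the function name and plain-int parameters below are illustrative, not the driver's API:

```c
#include <stdbool.h>

/* Bump-only check: recalculate cdclk only when the plane's new minimum
 * exceeds both its old minimum and the pipe's current minimum. */
static bool need_cdclk_recalc(int new_plane_min, int old_plane_min,
			      int pipe_min)
{
	/* the plane's requirement did not grow: nothing to do */
	if (new_plane_min <= old_plane_min)
		return false;

	/* the pipe already runs at least this fast: nothing to do */
	if (new_plane_min <= pipe_min)
		return false;

	return true;
}
```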
 static void intel_plane_clear_hw_state(struct intel_plane_state *plane_state)
@@ -225,12 +248,9 @@ int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state,
 					struct intel_plane_state *new_plane_state)
 {
 	struct intel_plane *plane = to_intel_plane(new_plane_state->uapi.plane);
-	const struct drm_framebuffer *fb;
+	const struct drm_framebuffer *fb = new_plane_state->hw.fb;
 	int ret;

-	intel_plane_copy_uapi_to_hw_state(new_plane_state, new_plane_state);
-	fb = new_plane_state->hw.fb;
-
 	new_crtc_state->active_planes &= ~BIT(plane->id);
 	new_crtc_state->nv12_planes &= ~BIT(plane->id);
 	new_crtc_state->c8_planes &= ~BIT(plane->id);
@@ -292,6 +312,7 @@ int intel_plane_atomic_check(struct intel_atomic_state *state,
 	const struct intel_crtc_state *old_crtc_state;
 	struct intel_crtc_state *new_crtc_state;

+	intel_plane_copy_uapi_to_hw_state(new_plane_state, new_plane_state);
 	new_plane_state->uapi.visible = false;
 	if (!crtc)
 		return 0;

--- a/drivers/gpu/drm/i915/display/intel_atomic_plane.h
+++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.h
@@ -46,7 +46,8 @@ int intel_plane_atomic_calc_changes(const struct intel_crtc_state *old_crtc_state,
 				    struct intel_crtc_state *crtc_state,
 				    const struct intel_plane_state *old_plane_state,
 				    struct intel_plane_state *plane_state);
-bool intel_plane_calc_min_cdclk(struct intel_atomic_state *state,
-				struct intel_plane *plane);
+int intel_plane_calc_min_cdclk(struct intel_atomic_state *state,
+			       struct intel_plane *plane,
+			       bool *need_cdclk_calc);

 #endif /* __INTEL_ATOMIC_PLANE_H__ */

--- a/drivers/gpu/drm/i915/display/intel_audio.c
+++ b/drivers/gpu/drm/i915/display/intel_audio.c
@@ -30,6 +30,7 @@
 #include "i915_drv.h"
 #include "intel_atomic.h"
 #include "intel_audio.h"
+#include "intel_cdclk.h"
 #include "intel_display_types.h"
 #include "intel_lpe_audio.h"
@@ -291,18 +292,18 @@ static bool intel_eld_uptodate(struct drm_connector *connector,
 	u32 tmp;
 	int i;

-	tmp = I915_READ(reg_eldv);
+	tmp = intel_de_read(dev_priv, reg_eldv);
 	tmp &= bits_eldv;
 	if (!tmp)
 		return false;

-	tmp = I915_READ(reg_elda);
+	tmp = intel_de_read(dev_priv, reg_elda);
 	tmp &= ~bits_elda;
-	I915_WRITE(reg_elda, tmp);
+	intel_de_write(dev_priv, reg_elda, tmp);

 	for (i = 0; i < drm_eld_size(eld) / 4; i++)
-		if (I915_READ(reg_edid) != *((const u32 *)eld + i))
+		if (intel_de_read(dev_priv, reg_edid) != *((const u32 *)eld + i))
 			return false;

 	return true;
@@ -315,18 +316,18 @@ static void g4x_audio_codec_disable(struct intel_encoder *encoder,
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	u32 eldv, tmp;

-	DRM_DEBUG_KMS("Disable audio codec\n");
+	drm_dbg_kms(&dev_priv->drm, "Disable audio codec\n");

-	tmp = I915_READ(G4X_AUD_VID_DID);
+	tmp = intel_de_read(dev_priv, G4X_AUD_VID_DID);
 	if (tmp == INTEL_AUDIO_DEVBLC || tmp == INTEL_AUDIO_DEVCL)
 		eldv = G4X_ELDV_DEVCL_DEVBLC;
 	else
 		eldv = G4X_ELDV_DEVCTG;

 	/* Invalidate ELD */
-	tmp = I915_READ(G4X_AUD_CNTL_ST);
+	tmp = intel_de_read(dev_priv, G4X_AUD_CNTL_ST);
 	tmp &= ~eldv;
-	I915_WRITE(G4X_AUD_CNTL_ST, tmp);
+	intel_de_write(dev_priv, G4X_AUD_CNTL_ST, tmp);
 }

 static void g4x_audio_codec_enable(struct intel_encoder *encoder,
@@ -340,9 +341,10 @@ static void g4x_audio_codec_enable(struct intel_encoder *encoder,
 	u32 tmp;
 	int len, i;

-	DRM_DEBUG_KMS("Enable audio codec, %u bytes ELD\n", drm_eld_size(eld));
+	drm_dbg_kms(&dev_priv->drm, "Enable audio codec, %u bytes ELD\n",
+		    drm_eld_size(eld));

-	tmp = I915_READ(G4X_AUD_VID_DID);
+	tmp = intel_de_read(dev_priv, G4X_AUD_VID_DID);
 	if (tmp == INTEL_AUDIO_DEVBLC || tmp == INTEL_AUDIO_DEVCL)
 		eldv = G4X_ELDV_DEVCL_DEVBLC;
 	else
@@ -354,19 +356,20 @@ static void g4x_audio_codec_enable(struct intel_encoder *encoder,
 			       G4X_HDMIW_HDMIEDID))
 		return;

-	tmp = I915_READ(G4X_AUD_CNTL_ST);
+	tmp = intel_de_read(dev_priv, G4X_AUD_CNTL_ST);
 	tmp &= ~(eldv | G4X_ELD_ADDR_MASK);
 	len = (tmp >> 9) & 0x1f;		/* ELD buffer size */
-	I915_WRITE(G4X_AUD_CNTL_ST, tmp);
+	intel_de_write(dev_priv, G4X_AUD_CNTL_ST, tmp);

 	len = min(drm_eld_size(eld) / 4, len);
-	DRM_DEBUG_DRIVER("ELD size %d\n", len);
+	drm_dbg(&dev_priv->drm, "ELD size %d\n", len);
 	for (i = 0; i < len; i++)
-		I915_WRITE(G4X_HDMIW_HDMIEDID, *((const u32 *)eld + i));
+		intel_de_write(dev_priv, G4X_HDMIW_HDMIEDID,
+			       *((const u32 *)eld + i));

-	tmp = I915_READ(G4X_AUD_CNTL_ST);
+	tmp = intel_de_read(dev_priv, G4X_AUD_CNTL_ST);
 	tmp |= eldv;
-	I915_WRITE(G4X_AUD_CNTL_ST, tmp);
+	intel_de_write(dev_priv, G4X_AUD_CNTL_ST, tmp);
 }
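The I915_READ()/I915_WRITE() to intel_de_read()/intel_de_write() conversion throughout this file keeps the same read-modify-write idiom while making the device explicit. A minimal self-contained model of that idiom, with invented names (`fake_de_*`, `FAKE_ELD_VALID`) standing in for the real accessors and register bits:

```c
#include <stdint.h>

/* Toy register file standing in for the display engine's MMIO space. */
struct fake_display {
	uint32_t regs[8];
};

/* Device-explicit accessors, modeled on intel_de_read()/intel_de_write(). */
static uint32_t fake_de_read(struct fake_display *de, unsigned int reg)
{
	return de->regs[reg];
}

static void fake_de_write(struct fake_display *de, unsigned int reg,
			  uint32_t val)
{
	de->regs[reg] = val;
}

#define FAKE_ELD_VALID	(1u << 14)	/* invented bit position */

/* Mirrors the "Invalidate ELD" step: read, clear one bit, write back,
 * leaving every other bit in the register untouched. */
static void fake_invalidate_eld(struct fake_display *de, unsigned int reg)
{
	uint32_t tmp = fake_de_read(de, reg);

	tmp &= ~FAKE_ELD_VALID;
	fake_de_write(de, reg, tmp);
}
```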
 static void
@@ -384,11 +387,12 @@ hsw_dp_audio_config_update(struct intel_encoder *encoder,
 	rate = acomp ? acomp->aud_sample_rate[port] : 0;
 	nm = audio_config_dp_get_n_m(crtc_state, rate);
 	if (nm)
-		DRM_DEBUG_KMS("using Maud %u, Naud %u\n", nm->m, nm->n);
+		drm_dbg_kms(&dev_priv->drm, "using Maud %u, Naud %u\n", nm->m,
+			    nm->n);
 	else
-		DRM_DEBUG_KMS("using automatic Maud, Naud\n");
+		drm_dbg_kms(&dev_priv->drm, "using automatic Maud, Naud\n");

-	tmp = I915_READ(HSW_AUD_CFG(cpu_transcoder));
+	tmp = intel_de_read(dev_priv, HSW_AUD_CFG(cpu_transcoder));
 	tmp &= ~AUD_CONFIG_N_VALUE_INDEX;
 	tmp &= ~AUD_CONFIG_PIXEL_CLOCK_HDMI_MASK;
 	tmp &= ~AUD_CONFIG_N_PROG_ENABLE;
@@ -400,9 +404,9 @@ hsw_dp_audio_config_update(struct intel_encoder *encoder,
 		tmp |= AUD_CONFIG_N_PROG_ENABLE;
 	}

-	I915_WRITE(HSW_AUD_CFG(cpu_transcoder), tmp);
+	intel_de_write(dev_priv, HSW_AUD_CFG(cpu_transcoder), tmp);

-	tmp = I915_READ(HSW_AUD_M_CTS_ENABLE(cpu_transcoder));
+	tmp = intel_de_read(dev_priv, HSW_AUD_M_CTS_ENABLE(cpu_transcoder));
 	tmp &= ~AUD_CONFIG_M_MASK;
 	tmp &= ~AUD_M_CTS_M_VALUE_INDEX;
 	tmp &= ~AUD_M_CTS_M_PROG_ENABLE;
@@ -413,7 +417,7 @@ hsw_dp_audio_config_update(struct intel_encoder *encoder,
 		tmp |= AUD_M_CTS_M_PROG_ENABLE;
 	}

-	I915_WRITE(HSW_AUD_M_CTS_ENABLE(cpu_transcoder), tmp);
+	intel_de_write(dev_priv, HSW_AUD_M_CTS_ENABLE(cpu_transcoder), tmp);
 }

 static void
@@ -429,7 +433,7 @@ hsw_hdmi_audio_config_update(struct intel_encoder *encoder,

 	rate = acomp ? acomp->aud_sample_rate[port] : 0;

-	tmp = I915_READ(HSW_AUD_CFG(cpu_transcoder));
+	tmp = intel_de_read(dev_priv, HSW_AUD_CFG(cpu_transcoder));
 	tmp &= ~AUD_CONFIG_N_VALUE_INDEX;
 	tmp &= ~AUD_CONFIG_PIXEL_CLOCK_HDMI_MASK;
 	tmp &= ~AUD_CONFIG_N_PROG_ENABLE;
@@ -437,25 +441,25 @@ hsw_hdmi_audio_config_update(struct intel_encoder *encoder,

 	n = audio_config_hdmi_get_n(crtc_state, rate);
 	if (n != 0) {
-		DRM_DEBUG_KMS("using N %d\n", n);
+		drm_dbg_kms(&dev_priv->drm, "using N %d\n", n);

 		tmp &= ~AUD_CONFIG_N_MASK;
 		tmp |= AUD_CONFIG_N(n);
 		tmp |= AUD_CONFIG_N_PROG_ENABLE;
 	} else {
-		DRM_DEBUG_KMS("using automatic N\n");
+		drm_dbg_kms(&dev_priv->drm, "using automatic N\n");
 	}

-	I915_WRITE(HSW_AUD_CFG(cpu_transcoder), tmp);
+	intel_de_write(dev_priv, HSW_AUD_CFG(cpu_transcoder), tmp);

 	/*
 	 * Let's disable "Enable CTS or M Prog bit"
 	 * and let HW calculate the value
 	 */
-	tmp = I915_READ(HSW_AUD_M_CTS_ENABLE(cpu_transcoder));
+	tmp = intel_de_read(dev_priv, HSW_AUD_M_CTS_ENABLE(cpu_transcoder));
 	tmp &= ~AUD_M_CTS_M_PROG_ENABLE;
 	tmp &= ~AUD_M_CTS_M_VALUE_INDEX;
-	I915_WRITE(HSW_AUD_M_CTS_ENABLE(cpu_transcoder), tmp);
+	intel_de_write(dev_priv, HSW_AUD_M_CTS_ENABLE(cpu_transcoder), tmp);
 }

 static void
@@ -476,26 +480,26 @@ static void hsw_audio_codec_disable(struct intel_encoder *encoder,
 	enum transcoder cpu_transcoder = old_crtc_state->cpu_transcoder;
 	u32 tmp;

-	DRM_DEBUG_KMS("Disable audio codec on transcoder %s\n",
-		      transcoder_name(cpu_transcoder));
+	drm_dbg_kms(&dev_priv->drm, "Disable audio codec on transcoder %s\n",
+		    transcoder_name(cpu_transcoder));

 	mutex_lock(&dev_priv->av_mutex);

 	/* Disable timestamps */
-	tmp = I915_READ(HSW_AUD_CFG(cpu_transcoder));
+	tmp = intel_de_read(dev_priv, HSW_AUD_CFG(cpu_transcoder));
 	tmp &= ~AUD_CONFIG_N_VALUE_INDEX;
 	tmp |= AUD_CONFIG_N_PROG_ENABLE;
 	tmp &= ~AUD_CONFIG_UPPER_N_MASK;
 	tmp &= ~AUD_CONFIG_LOWER_N_MASK;
 	if (intel_crtc_has_dp_encoder(old_crtc_state))
 		tmp |= AUD_CONFIG_N_VALUE_INDEX;
-	I915_WRITE(HSW_AUD_CFG(cpu_transcoder), tmp);
+	intel_de_write(dev_priv, HSW_AUD_CFG(cpu_transcoder), tmp);

 	/* Invalidate ELD */
-	tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
+	tmp = intel_de_read(dev_priv, HSW_AUD_PIN_ELD_CP_VLD);
 	tmp &= ~AUDIO_ELD_VALID(cpu_transcoder);
 	tmp &= ~AUDIO_OUTPUT_ENABLE(cpu_transcoder);
-	I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
+	intel_de_write(dev_priv, HSW_AUD_PIN_ELD_CP_VLD, tmp);

 	mutex_unlock(&dev_priv->av_mutex);
 }
@@ -511,16 +515,17 @@ static void hsw_audio_codec_enable(struct intel_encoder *encoder,
 	u32 tmp;
 	int len, i;

-	DRM_DEBUG_KMS("Enable audio codec on transcoder %s, %u bytes ELD\n",
-		      transcoder_name(cpu_transcoder), drm_eld_size(eld));
+	drm_dbg_kms(&dev_priv->drm,
+		    "Enable audio codec on transcoder %s, %u bytes ELD\n",
+		    transcoder_name(cpu_transcoder), drm_eld_size(eld));

 	mutex_lock(&dev_priv->av_mutex);

 	/* Enable audio presence detect, invalidate ELD */
-	tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
+	tmp = intel_de_read(dev_priv, HSW_AUD_PIN_ELD_CP_VLD);
 	tmp |= AUDIO_OUTPUT_ENABLE(cpu_transcoder);
 	tmp &= ~AUDIO_ELD_VALID(cpu_transcoder);
-	I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
+	intel_de_write(dev_priv, HSW_AUD_PIN_ELD_CP_VLD, tmp);

 	/*
 	 * FIXME: We're supposed to wait for vblank here, but we have vblanks
@@ -530,19 +535,20 @@ static void hsw_audio_codec_enable(struct intel_encoder *encoder,
 	 */

 	/* Reset ELD write address */
-	tmp = I915_READ(HSW_AUD_DIP_ELD_CTRL(cpu_transcoder));
+	tmp = intel_de_read(dev_priv, HSW_AUD_DIP_ELD_CTRL(cpu_transcoder));
 	tmp &= ~IBX_ELD_ADDRESS_MASK;
-	I915_WRITE(HSW_AUD_DIP_ELD_CTRL(cpu_transcoder), tmp);
+	intel_de_write(dev_priv, HSW_AUD_DIP_ELD_CTRL(cpu_transcoder), tmp);

 	/* Up to 84 bytes of hw ELD buffer */
 	len = min(drm_eld_size(eld), 84);
 	for (i = 0; i < len / 4; i++)
-		I915_WRITE(HSW_AUD_EDID_DATA(cpu_transcoder), *((const u32 *)eld + i));
+		intel_de_write(dev_priv, HSW_AUD_EDID_DATA(cpu_transcoder),
+			       *((const u32 *)eld + i));

 	/* ELD valid */
-	tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
+	tmp = intel_de_read(dev_priv, HSW_AUD_PIN_ELD_CP_VLD);
 	tmp |= AUDIO_ELD_VALID(cpu_transcoder);
-	I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
+	intel_de_write(dev_priv, HSW_AUD_PIN_ELD_CP_VLD, tmp);

 	/* Enable timestamps */
 	hsw_audio_config_update(encoder, crtc_state);
@@ -561,11 +567,12 @@ static void ilk_audio_codec_disable(struct intel_encoder *encoder,
 	u32 tmp, eldv;
 	i915_reg_t aud_config, aud_cntrl_st2;

-	DRM_DEBUG_KMS("Disable audio codec on [ENCODER:%d:%s], pipe %c\n",
-		      encoder->base.base.id, encoder->base.name,
-		      pipe_name(pipe));
+	drm_dbg_kms(&dev_priv->drm,
+		    "Disable audio codec on [ENCODER:%d:%s], pipe %c\n",
+		    encoder->base.base.id, encoder->base.name,
+		    pipe_name(pipe));

-	if (WARN_ON(port == PORT_A))
+	if (drm_WARN_ON(&dev_priv->drm, port == PORT_A))
 		return;

 	if (HAS_PCH_IBX(dev_priv)) {
@@ -580,21 +587,21 @@ static void ilk_audio_codec_disable(struct intel_encoder *encoder,
 	}

 	/* Disable timestamps */
-	tmp = I915_READ(aud_config);
+	tmp = intel_de_read(dev_priv, aud_config);
 	tmp &= ~AUD_CONFIG_N_VALUE_INDEX;
 	tmp |= AUD_CONFIG_N_PROG_ENABLE;
 	tmp &= ~AUD_CONFIG_UPPER_N_MASK;
 	tmp &= ~AUD_CONFIG_LOWER_N_MASK;
 	if (intel_crtc_has_dp_encoder(old_crtc_state))
 		tmp |= AUD_CONFIG_N_VALUE_INDEX;
-	I915_WRITE(aud_config, tmp);
+	intel_de_write(dev_priv, aud_config, tmp);

 	eldv = IBX_ELD_VALID(port);

 	/* Invalidate ELD */
-	tmp = I915_READ(aud_cntrl_st2);
+	tmp = intel_de_read(dev_priv, aud_cntrl_st2);
 	tmp &= ~eldv;
-	I915_WRITE(aud_cntrl_st2, tmp);
+	intel_de_write(dev_priv, aud_cntrl_st2, tmp);
 }

 static void ilk_audio_codec_enable(struct intel_encoder *encoder,
@@ -611,11 +618,12 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
 	int len, i;
 	i915_reg_t hdmiw_hdmiedid, aud_config, aud_cntl_st, aud_cntrl_st2;

-	DRM_DEBUG_KMS("Enable audio codec on [ENCODER:%d:%s], pipe %c, %u bytes ELD\n",
-		      encoder->base.base.id, encoder->base.name,
-		      pipe_name(pipe), drm_eld_size(eld));
+	drm_dbg_kms(&dev_priv->drm,
+		    "Enable audio codec on [ENCODER:%d:%s], pipe %c, %u bytes ELD\n",
+		    encoder->base.base.id, encoder->base.name,
+		    pipe_name(pipe), drm_eld_size(eld));

-	if (WARN_ON(port == PORT_A))
+	if (drm_WARN_ON(&dev_priv->drm, port == PORT_A))
 		return;

 	/*
@@ -646,27 +654,28 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
 	eldv = IBX_ELD_VALID(port);

 	/* Invalidate ELD */
-	tmp = I915_READ(aud_cntrl_st2);
+	tmp = intel_de_read(dev_priv, aud_cntrl_st2);
 	tmp &= ~eldv;
-	I915_WRITE(aud_cntrl_st2, tmp);
+	intel_de_write(dev_priv, aud_cntrl_st2, tmp);

 	/* Reset ELD write address */
-	tmp = I915_READ(aud_cntl_st);
+	tmp = intel_de_read(dev_priv, aud_cntl_st);
 	tmp &= ~IBX_ELD_ADDRESS_MASK;
-	I915_WRITE(aud_cntl_st, tmp);
+	intel_de_write(dev_priv, aud_cntl_st, tmp);

 	/* Up to 84 bytes of hw ELD buffer */
 	len = min(drm_eld_size(eld), 84);
 	for (i = 0; i < len / 4; i++)
-		I915_WRITE(hdmiw_hdmiedid, *((const u32 *)eld + i));
+		intel_de_write(dev_priv, hdmiw_hdmiedid,
+			       *((const u32 *)eld + i));

 	/* ELD valid */
-	tmp = I915_READ(aud_cntrl_st2);
+	tmp = intel_de_read(dev_priv, aud_cntrl_st2);
 	tmp |= eldv;
-	I915_WRITE(aud_cntrl_st2, tmp);
+	intel_de_write(dev_priv, aud_cntrl_st2, tmp);

 	/* Enable timestamps */
-	tmp = I915_READ(aud_config);
+	tmp = intel_de_read(dev_priv, aud_config);
 	tmp &= ~AUD_CONFIG_N_VALUE_INDEX;
 	tmp &= ~AUD_CONFIG_N_PROG_ENABLE;
 	tmp &= ~AUD_CONFIG_PIXEL_CLOCK_HDMI_MASK;
@@ -674,7 +683,7 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
 		tmp |= AUD_CONFIG_N_VALUE_INDEX;
 	else
 		tmp |= audio_config_hdmi_pixel_clock(crtc_state);
-	I915_WRITE(aud_config, tmp);
+	intel_de_write(dev_priv, aud_config, tmp);
 }

 /**
@@ -701,14 +710,15 @@ void intel_audio_codec_enable(struct intel_encoder *encoder,

 	/* FIXME precompute the ELD in .compute_config() */
 	if (!connector->eld[0])
-		DRM_DEBUG_KMS("Bogus ELD on [CONNECTOR:%d:%s]\n",
-			      connector->base.id, connector->name);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Bogus ELD on [CONNECTOR:%d:%s]\n",
+			    connector->base.id, connector->name);

-	DRM_DEBUG_DRIVER("ELD on [CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
-			 connector->base.id,
-			 connector->name,
-			 encoder->base.base.id,
-			 encoder->base.name);
+	drm_dbg(&dev_priv->drm, "ELD on [CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
+		connector->base.id,
+		connector->name,
+		encoder->base.base.id,
+		encoder->base.name);

 	connector->eld[6] = drm_av_sync_delay(connector, adjusted_mode) / 2;
@@ -800,37 +810,61 @@ void intel_init_audio_hooks(struct drm_i915_private *dev_priv)
 	}
 }

+static int glk_force_audio_cdclk_commit(struct intel_atomic_state *state,
+					struct intel_crtc *crtc,
+					bool enable)
+{
+	struct intel_cdclk_state *cdclk_state;
+	int ret;
+
+	/* need to hold at least one crtc lock for the global state */
+	ret = drm_modeset_lock(&crtc->base.mutex, state->base.acquire_ctx);
+	if (ret)
+		return ret;
+
+	cdclk_state = intel_atomic_get_cdclk_state(state);
+	if (IS_ERR(cdclk_state))
+		return PTR_ERR(cdclk_state);
+
+	cdclk_state->force_min_cdclk_changed = true;
+	cdclk_state->force_min_cdclk = enable ? 2 * 96000 : 0;
+
+	ret = intel_atomic_lock_global_state(&cdclk_state->base);
+	if (ret)
+		return ret;
+
+	return drm_atomic_commit(&state->base);
+}
+
 static void glk_force_audio_cdclk(struct drm_i915_private *dev_priv,
 				  bool enable)
 {
 	struct drm_modeset_acquire_ctx ctx;
 	struct drm_atomic_state *state;
+	struct intel_crtc *crtc;
 	int ret;

+	crtc = intel_get_crtc_for_pipe(dev_priv, PIPE_A);
+	if (!crtc)
+		return;
+
 	drm_modeset_acquire_init(&ctx, 0);
 	state = drm_atomic_state_alloc(&dev_priv->drm);
-	if (WARN_ON(!state))
+	if (drm_WARN_ON(&dev_priv->drm, !state))
 		return;

 	state->acquire_ctx = &ctx;

retry:
-	to_intel_atomic_state(state)->cdclk.force_min_cdclk_changed = true;
-	to_intel_atomic_state(state)->cdclk.force_min_cdclk =
-		enable ? 2 * 96000 : 0;
-
-	/* Protects dev_priv->cdclk.force_min_cdclk */
-	ret = intel_atomic_lock_global_state(to_intel_atomic_state(state));
-	if (!ret)
-		ret = drm_atomic_commit(state);
+	ret = glk_force_audio_cdclk_commit(to_intel_atomic_state(state), crtc,
+					   enable);
 	if (ret == -EDEADLK) {
 		drm_atomic_state_clear(state);
 		drm_modeset_backoff(&ctx);
 		goto retry;
 	}

-	WARN_ON(ret);
+	drm_WARN_ON(&dev_priv->drm, ret);

 	drm_atomic_state_put(state);
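The refactored glk_force_audio_cdclk() keeps the standard atomic-commit deadlock dance: if the commit returns -EDEADLK, the state is cleared, the acquired locks are backed off, and the whole attempt is retried from scratch. A self-contained sketch of that control flow, where `try_commit()` and the struct are invented stand-ins that "deadlock" a fixed number of times before succeeding:

```c
#define TOY_EDEADLK	(-35)	/* stand-in errno value for the example */

struct toy_commit {
	int deadlocks_left;	/* how many attempts will still fail */
	int attempts;		/* total attempts made, for inspection */
};

/* Stand-in for glk_force_audio_cdclk_commit(): fails with a deadlock a
 * configurable number of times, then succeeds. */
static int try_commit(struct toy_commit *c)
{
	c->attempts++;
	if (c->deadlocks_left > 0) {
		c->deadlocks_left--;
		return TOY_EDEADLK;
	}
	return 0;
}

/* Mirrors the retry loop: on deadlock, drop state and locks, try again.
 * In the real code the two dropped steps are drm_atomic_state_clear()
 * and drm_modeset_backoff(). */
static int commit_with_backoff(struct toy_commit *c)
{
	int ret;

retry:
	ret = try_commit(c);
	if (ret == TOY_EDEADLK)
		goto retry;
	return ret;
}
```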
@@ -850,9 +884,11 @@ static unsigned long i915_audio_component_get_power(struct device *kdev)

 	if (dev_priv->audio_power_refcount++ == 0) {
 		if (IS_TIGERLAKE(dev_priv) || IS_ICELAKE(dev_priv)) {
-			I915_WRITE(AUD_FREQ_CNTRL, dev_priv->audio_freq_cntrl);
-			DRM_DEBUG_KMS("restored AUD_FREQ_CNTRL to 0x%x\n",
-				      dev_priv->audio_freq_cntrl);
+			intel_de_write(dev_priv, AUD_FREQ_CNTRL,
+				       dev_priv->audio_freq_cntrl);
+			drm_dbg_kms(&dev_priv->drm,
+				    "restored AUD_FREQ_CNTRL to 0x%x\n",
+				    dev_priv->audio_freq_cntrl);
 		}

 		/* Force CDCLK to 2*BCLK as long as we need audio powered. */
@@ -860,9 +896,8 @@ static unsigned long i915_audio_component_get_power(struct device *kdev)
 			glk_force_audio_cdclk(dev_priv, true);

 		if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
-			I915_WRITE(AUD_PIN_BUF_CTL,
-				   (I915_READ(AUD_PIN_BUF_CTL) |
-				    AUD_PIN_BUF_ENABLE));
+			intel_de_write(dev_priv, AUD_PIN_BUF_CTL,
+				       (intel_de_read(dev_priv, AUD_PIN_BUF_CTL) | AUD_PIN_BUF_ENABLE));
 	}

 	return ret;
@@ -897,15 +932,15 @@ static void i915_audio_component_codec_wake_override(struct device *kdev,
 	 * Enable/disable generating the codec wake signal, overriding the
 	 * internal logic to generate the codec wake to controller.
 	 */
-	tmp = I915_READ(HSW_AUD_CHICKENBIT);
+	tmp = intel_de_read(dev_priv, HSW_AUD_CHICKENBIT);
 	tmp &= ~SKL_AUD_CODEC_WAKE_SIGNAL;
-	I915_WRITE(HSW_AUD_CHICKENBIT, tmp);
+	intel_de_write(dev_priv, HSW_AUD_CHICKENBIT, tmp);
 	usleep_range(1000, 1500);

 	if (enable) {
-		tmp = I915_READ(HSW_AUD_CHICKENBIT);
+		tmp = intel_de_read(dev_priv, HSW_AUD_CHICKENBIT);
 		tmp |= SKL_AUD_CODEC_WAKE_SIGNAL;
-		I915_WRITE(HSW_AUD_CHICKENBIT, tmp);
+		intel_de_write(dev_priv, HSW_AUD_CHICKENBIT, tmp);
 		usleep_range(1000, 1500);
 	}
@@ -917,7 +952,7 @@ static int i915_audio_component_get_cdclk_freq(struct device *kdev)
 {
 	struct drm_i915_private *dev_priv = kdev_to_i915(kdev);

-	if (WARN_ON_ONCE(!HAS_DDI(dev_priv)))
+	if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_DDI(dev_priv)))
 		return -ENODEV;

 	return dev_priv->cdclk.hw.cdclk;
@@ -940,7 +975,8 @@ static struct intel_encoder *get_saved_enc(struct drm_i915_private *dev_priv,

 	/* MST */
 	if (pipe >= 0) {
-		if (WARN_ON(pipe >= ARRAY_SIZE(dev_priv->av_enc_map)))
+		if (drm_WARN_ON(&dev_priv->drm,
+				pipe >= ARRAY_SIZE(dev_priv->av_enc_map)))
 			return NULL;

 		encoder = dev_priv->av_enc_map[pipe];
@@ -992,7 +1028,8 @@ static int i915_audio_component_sync_audio_rate(struct device *kdev, int port,
 	/* 1. get the pipe */
 	encoder = get_saved_enc(dev_priv, port, pipe);
 	if (!encoder || !encoder->base.crtc) {
-		DRM_DEBUG_KMS("Not valid for port %c\n", port_name(port));
+		drm_dbg_kms(&dev_priv->drm, "Not valid for port %c\n",
+			    port_name(port));
 		err = -ENODEV;
 		goto unlock;
 	}
@@ -1023,7 +1060,8 @@ static int i915_audio_component_get_eld(struct device *kdev, int port,

 	intel_encoder = get_saved_enc(dev_priv, port, pipe);
 	if (!intel_encoder) {
-		DRM_DEBUG_KMS("Not valid for port %c\n", port_name(port));
+		drm_dbg_kms(&dev_priv->drm, "Not valid for port %c\n",
+			    port_name(port));
 		mutex_unlock(&dev_priv->av_mutex);
 		return ret;
 	}
@@ -1057,10 +1095,12 @@ static int i915_audio_component_bind(struct device *i915_kdev,
 	struct drm_i915_private *dev_priv = kdev_to_i915(i915_kdev);
 	int i;

-	if (WARN_ON(acomp->base.ops || acomp->base.dev))
+	if (drm_WARN_ON(&dev_priv->drm, acomp->base.ops || acomp->base.dev))
 		return -EEXIST;

-	if (WARN_ON(!device_link_add(hda_kdev, i915_kdev, DL_FLAG_STATELESS)))
+	if (drm_WARN_ON(&dev_priv->drm,
+			!device_link_add(hda_kdev, i915_kdev,
+					 DL_FLAG_STATELESS)))
 		return -ENOMEM;
drm_modeset_lock_all(&dev_priv->drm); drm_modeset_lock_all(&dev_priv->drm);
@ -1119,15 +1159,18 @@ static void i915_audio_component_init(struct drm_i915_private *dev_priv)
&i915_audio_component_bind_ops, &i915_audio_component_bind_ops,
I915_COMPONENT_AUDIO); I915_COMPONENT_AUDIO);
if (ret < 0) { if (ret < 0) {
DRM_ERROR("failed to add audio component (%d)\n", ret); drm_err(&dev_priv->drm,
"failed to add audio component (%d)\n", ret);
/* continue with reduced functionality */ /* continue with reduced functionality */
return; return;
} }
if (IS_TIGERLAKE(dev_priv) || IS_ICELAKE(dev_priv)) { if (IS_TIGERLAKE(dev_priv) || IS_ICELAKE(dev_priv)) {
dev_priv->audio_freq_cntrl = I915_READ(AUD_FREQ_CNTRL); dev_priv->audio_freq_cntrl = intel_de_read(dev_priv,
DRM_DEBUG_KMS("init value of AUD_FREQ_CNTRL of 0x%x\n", AUD_FREQ_CNTRL);
dev_priv->audio_freq_cntrl); drm_dbg_kms(&dev_priv->drm,
"init value of AUD_FREQ_CNTRL of 0x%x\n",
dev_priv->audio_freq_cntrl);
} }
dev_priv->audio_component_registered = true; dev_priv->audio_component_registered = true;

--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c

@@ -228,17 +228,20 @@ parse_panel_options(struct drm_i915_private *dev_priv,
 	ret = intel_opregion_get_panel_type(dev_priv);
 	if (ret >= 0) {
-		WARN_ON(ret > 0xf);
+		drm_WARN_ON(&dev_priv->drm, ret > 0xf);
 		panel_type = ret;
-		DRM_DEBUG_KMS("Panel type: %d (OpRegion)\n", panel_type);
+		drm_dbg_kms(&dev_priv->drm, "Panel type: %d (OpRegion)\n",
+			    panel_type);
 	} else {
 		if (lvds_options->panel_type > 0xf) {
-			DRM_DEBUG_KMS("Invalid VBT panel type 0x%x\n",
-				      lvds_options->panel_type);
+			drm_dbg_kms(&dev_priv->drm,
+				    "Invalid VBT panel type 0x%x\n",
+				    lvds_options->panel_type);
 			return;
 		}
 		panel_type = lvds_options->panel_type;
-		DRM_DEBUG_KMS("Panel type: %d (VBT)\n", panel_type);
+		drm_dbg_kms(&dev_priv->drm, "Panel type: %d (VBT)\n",
+			    panel_type);
 	}

 	dev_priv->vbt.panel_type = panel_type;
@@ -253,15 +256,17 @@ parse_panel_options(struct drm_i915_private *dev_priv,
 	switch (drrs_mode) {
 	case 0:
 		dev_priv->vbt.drrs_type = STATIC_DRRS_SUPPORT;
-		DRM_DEBUG_KMS("DRRS supported mode is static\n");
+		drm_dbg_kms(&dev_priv->drm, "DRRS supported mode is static\n");
 		break;
 	case 2:
 		dev_priv->vbt.drrs_type = SEAMLESS_DRRS_SUPPORT;
-		DRM_DEBUG_KMS("DRRS supported mode is seamless\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "DRRS supported mode is seamless\n");
 		break;
 	default:
 		dev_priv->vbt.drrs_type = DRRS_NOT_SUPPORTED;
-		DRM_DEBUG_KMS("DRRS not supported (VBT input)\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "DRRS not supported (VBT input)\n");
 		break;
 	}
 }
@@ -298,7 +303,8 @@ parse_lfp_panel_dtd(struct drm_i915_private *dev_priv,
 	dev_priv->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;

-	DRM_DEBUG_KMS("Found panel mode in BIOS VBT legacy lfp table:\n");
+	drm_dbg_kms(&dev_priv->drm,
+		    "Found panel mode in BIOS VBT legacy lfp table:\n");
 	drm_mode_debug_printmodeline(panel_fixed_mode);

 	fp_timing = get_lvds_fp_timing(bdb, lvds_lfp_data,
@@ -309,8 +315,9 @@ parse_lfp_panel_dtd(struct drm_i915_private *dev_priv,
 		if (fp_timing->x_res == panel_fixed_mode->hdisplay &&
 		    fp_timing->y_res == panel_fixed_mode->vdisplay) {
 			dev_priv->vbt.bios_lvds_val = fp_timing->lvds_reg_val;
-			DRM_DEBUG_KMS("VBT initial LVDS value %x\n",
-				      dev_priv->vbt.bios_lvds_val);
+			drm_dbg_kms(&dev_priv->drm,
+				    "VBT initial LVDS value %x\n",
+				    dev_priv->vbt.bios_lvds_val);
 		}
 	}
 }
@@ -329,20 +336,22 @@ parse_generic_dtd(struct drm_i915_private *dev_priv,
 		return;

 	if (generic_dtd->gdtd_size < sizeof(struct generic_dtd_entry)) {
-		DRM_ERROR("GDTD size %u is too small.\n",
-			  generic_dtd->gdtd_size);
+		drm_err(&dev_priv->drm, "GDTD size %u is too small.\n",
+			generic_dtd->gdtd_size);
 		return;
 	} else if (generic_dtd->gdtd_size !=
 		   sizeof(struct generic_dtd_entry)) {
-		DRM_ERROR("Unexpected GDTD size %u\n", generic_dtd->gdtd_size);
+		drm_err(&dev_priv->drm, "Unexpected GDTD size %u\n",
+			generic_dtd->gdtd_size);
 		/* DTD has unknown fields, but keep going */
 	}

 	num_dtd = (get_blocksize(generic_dtd) -
 		   sizeof(struct bdb_generic_dtd)) / generic_dtd->gdtd_size;
 	if (dev_priv->vbt.panel_type >= num_dtd) {
-		DRM_ERROR("Panel type %d not found in table of %d DTD's\n",
-			  dev_priv->vbt.panel_type, num_dtd);
+		drm_err(&dev_priv->drm,
+			"Panel type %d not found in table of %d DTD's\n",
+			dev_priv->vbt.panel_type, num_dtd);
 		return;
 	}
@@ -385,7 +394,8 @@ parse_generic_dtd(struct drm_i915_private *dev_priv,
 	else
 		panel_fixed_mode->flags |= DRM_MODE_FLAG_NVSYNC;

-	DRM_DEBUG_KMS("Found panel mode in BIOS VBT generic dtd table:\n");
+	drm_dbg_kms(&dev_priv->drm,
+		    "Found panel mode in BIOS VBT generic dtd table:\n");
 	drm_mode_debug_printmodeline(panel_fixed_mode);

 	dev_priv->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
@@ -422,8 +432,9 @@ parse_lfp_backlight(struct drm_i915_private *dev_priv,
 		return;

 	if (backlight_data->entry_size != sizeof(backlight_data->data[0])) {
-		DRM_DEBUG_KMS("Unsupported backlight data entry size %u\n",
-			      backlight_data->entry_size);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Unsupported backlight data entry size %u\n",
+			    backlight_data->entry_size);
 		return;
 	}
@@ -431,8 +442,9 @@ parse_lfp_backlight(struct drm_i915_private *dev_priv,
 	dev_priv->vbt.backlight.present = entry->type == BDB_BACKLIGHT_TYPE_PWM;
 	if (!dev_priv->vbt.backlight.present) {
-		DRM_DEBUG_KMS("PWM backlight not present in VBT (type %u)\n",
-			      entry->type);
+		drm_dbg_kms(&dev_priv->drm,
+			    "PWM backlight not present in VBT (type %u)\n",
+			    entry->type);
 		return;
 	}
@@ -449,13 +461,14 @@ parse_lfp_backlight(struct drm_i915_private *dev_priv,
 	dev_priv->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
 	dev_priv->vbt.backlight.active_low_pwm = entry->active_low_pwm;
 	dev_priv->vbt.backlight.min_brightness = entry->min_brightness;
-	DRM_DEBUG_KMS("VBT backlight PWM modulation frequency %u Hz, "
-		      "active %s, min brightness %u, level %u, controller %u\n",
-		      dev_priv->vbt.backlight.pwm_freq_hz,
-		      dev_priv->vbt.backlight.active_low_pwm ? "low" : "high",
-		      dev_priv->vbt.backlight.min_brightness,
-		      backlight_data->level[panel_type],
-		      dev_priv->vbt.backlight.controller);
+	drm_dbg_kms(&dev_priv->drm,
+		    "VBT backlight PWM modulation frequency %u Hz, "
+		    "active %s, min brightness %u, level %u, controller %u\n",
+		    dev_priv->vbt.backlight.pwm_freq_hz,
+		    dev_priv->vbt.backlight.active_low_pwm ? "low" : "high",
+		    dev_priv->vbt.backlight.min_brightness,
+		    backlight_data->level[panel_type],
+		    dev_priv->vbt.backlight.controller);
 }

 /* Try to find sdvo panel data */
@@ -469,7 +482,8 @@ parse_sdvo_panel_data(struct drm_i915_private *dev_priv,
 	index = i915_modparams.vbt_sdvo_panel_type;
 	if (index == -2) {
-		DRM_DEBUG_KMS("Ignore SDVO panel mode from BIOS VBT tables.\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Ignore SDVO panel mode from BIOS VBT tables.\n");
 		return;
 	}
@@ -495,7 +509,8 @@ parse_sdvo_panel_data(struct drm_i915_private *dev_priv,
 	dev_priv->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;

-	DRM_DEBUG_KMS("Found SDVO panel mode in BIOS VBT tables:\n");
+	drm_dbg_kms(&dev_priv->drm,
+		    "Found SDVO panel mode in BIOS VBT tables:\n");
 	drm_mode_debug_printmodeline(panel_fixed_mode);
 }
@@ -540,13 +555,14 @@ parse_general_features(struct drm_i915_private *dev_priv,
 	} else {
 		dev_priv->vbt.orientation = DRM_MODE_PANEL_ORIENTATION_UNKNOWN;
 	}

-	DRM_DEBUG_KMS("BDB_GENERAL_FEATURES int_tv_support %d int_crt_support %d lvds_use_ssc %d lvds_ssc_freq %d display_clock_mode %d fdi_rx_polarity_inverted %d\n",
-		      dev_priv->vbt.int_tv_support,
-		      dev_priv->vbt.int_crt_support,
-		      dev_priv->vbt.lvds_use_ssc,
-		      dev_priv->vbt.lvds_ssc_freq,
-		      dev_priv->vbt.display_clock_mode,
-		      dev_priv->vbt.fdi_rx_polarity_inverted);
+	drm_dbg_kms(&dev_priv->drm,
+		    "BDB_GENERAL_FEATURES int_tv_support %d int_crt_support %d lvds_use_ssc %d lvds_ssc_freq %d display_clock_mode %d fdi_rx_polarity_inverted %d\n",
+		    dev_priv->vbt.int_tv_support,
+		    dev_priv->vbt.int_crt_support,
+		    dev_priv->vbt.lvds_use_ssc,
+		    dev_priv->vbt.lvds_ssc_freq,
+		    dev_priv->vbt.display_clock_mode,
+		    dev_priv->vbt.fdi_rx_polarity_inverted);
 }

 static const struct child_device_config *
@@ -568,7 +584,7 @@ parse_sdvo_device_mapping(struct drm_i915_private *dev_priv, u8 bdb_version)
 	 * accurate and doesn't have to be, as long as it's not too strict.
 	 */
 	if (!IS_GEN_RANGE(dev_priv, 3, 7)) {
-		DRM_DEBUG_KMS("Skipping SDVO device mapping\n");
+		drm_dbg_kms(&dev_priv->drm, "Skipping SDVO device mapping\n");
 		return;
 	}
@@ -586,14 +602,16 @@ parse_sdvo_device_mapping(struct drm_i915_private *dev_priv, u8 bdb_version)
 		if (child->dvo_port != DEVICE_PORT_DVOB &&
 		    child->dvo_port != DEVICE_PORT_DVOC) {
 			/* skip the incorrect SDVO port */
-			DRM_DEBUG_KMS("Incorrect SDVO port. Skip it\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "Incorrect SDVO port. Skip it\n");
 			continue;
 		}
-		DRM_DEBUG_KMS("the SDVO device with slave addr %2x is found on"
-			      " %s port\n",
-			      child->slave_addr,
-			      (child->dvo_port == DEVICE_PORT_DVOB) ?
-			      "SDVOB" : "SDVOC");
+		drm_dbg_kms(&dev_priv->drm,
+			    "the SDVO device with slave addr %2x is found on"
+			    " %s port\n",
+			    child->slave_addr,
+			    (child->dvo_port == DEVICE_PORT_DVOB) ?
+			    "SDVOB" : "SDVOC");
 		mapping = &dev_priv->vbt.sdvo_mappings[child->dvo_port - 1];
 		if (!mapping->initialized) {
 			mapping->dvo_port = child->dvo_port;
@@ -602,28 +620,30 @@ parse_sdvo_device_mapping(struct drm_i915_private *dev_priv, u8 bdb_version)
 			mapping->ddc_pin = child->ddc_pin;
 			mapping->i2c_pin = child->i2c_pin;
 			mapping->initialized = 1;
-			DRM_DEBUG_KMS("SDVO device: dvo=%x, addr=%x, wiring=%d, ddc_pin=%d, i2c_pin=%d\n",
-				      mapping->dvo_port,
-				      mapping->slave_addr,
-				      mapping->dvo_wiring,
-				      mapping->ddc_pin,
-				      mapping->i2c_pin);
+			drm_dbg_kms(&dev_priv->drm,
+				    "SDVO device: dvo=%x, addr=%x, wiring=%d, ddc_pin=%d, i2c_pin=%d\n",
+				    mapping->dvo_port, mapping->slave_addr,
+				    mapping->dvo_wiring, mapping->ddc_pin,
+				    mapping->i2c_pin);
 		} else {
-			DRM_DEBUG_KMS("Maybe one SDVO port is shared by "
-				      "two SDVO device.\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "Maybe one SDVO port is shared by "
+				    "two SDVO device.\n");
 		}
 		if (child->slave2_addr) {
 			/* Maybe this is a SDVO device with multiple inputs */
 			/* And the mapping info is not added */
-			DRM_DEBUG_KMS("there exists the slave2_addr. Maybe this"
-				      " is a SDVO device with multiple inputs.\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "there exists the slave2_addr. Maybe this"
+				    " is a SDVO device with multiple inputs.\n");
 		}
 		count++;
 	}

 	if (!count) {
 		/* No SDVO device info is found */
-		DRM_DEBUG_KMS("No SDVO device info is found in VBT\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "No SDVO device info is found in VBT\n");
 	}
 }
@@ -664,7 +684,8 @@ parse_driver_features(struct drm_i915_private *dev_priv,
 	}

 	if (bdb->version < 228) {
-		DRM_DEBUG_KMS("DRRS State Enabled:%d\n", driver->drrs_enabled);
+		drm_dbg_kms(&dev_priv->drm, "DRRS State Enabled:%d\n",
+			    driver->drrs_enabled);
 		/*
 		 * If DRRS is not supported, drrs_type has to be set to 0.
 		 * This is because, VBT is configured in such a way that
@@ -688,7 +709,7 @@ parse_power_conservation_features(struct drm_i915_private *dev_priv,
 	if (bdb->version < 228)
 		return;

-	power = find_section(bdb, BDB_LVDS_POWER);
+	power = find_section(bdb, BDB_LFP_POWER);
 	if (!power)
 		return;
@@ -742,8 +763,9 @@ parse_edp(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 		dev_priv->vbt.edp.rate = DP_LINK_BW_2_7;
 		break;
 	default:
-		DRM_DEBUG_KMS("VBT has unknown eDP link rate value %u\n",
-			      edp_link_params->rate);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT has unknown eDP link rate value %u\n",
+			    edp_link_params->rate);
 		break;
 	}
@@ -758,8 +780,9 @@ parse_edp(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 		dev_priv->vbt.edp.lanes = 4;
 		break;
 	default:
-		DRM_DEBUG_KMS("VBT has unknown eDP lane count value %u\n",
-			      edp_link_params->lanes);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT has unknown eDP lane count value %u\n",
+			    edp_link_params->lanes);
 		break;
 	}
@@ -777,8 +800,9 @@ parse_edp(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 		dev_priv->vbt.edp.preemphasis = DP_TRAIN_PRE_EMPH_LEVEL_3;
 		break;
 	default:
-		DRM_DEBUG_KMS("VBT has unknown eDP pre-emphasis value %u\n",
-			      edp_link_params->preemphasis);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT has unknown eDP pre-emphasis value %u\n",
+			    edp_link_params->preemphasis);
 		break;
 	}
@@ -796,8 +820,9 @@ parse_edp(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 		dev_priv->vbt.edp.vswing = DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
 		break;
 	default:
-		DRM_DEBUG_KMS("VBT has unknown eDP voltage swing value %u\n",
-			      edp_link_params->vswing);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT has unknown eDP voltage swing value %u\n",
+			    edp_link_params->vswing);
 		break;
 	}
@@ -824,7 +849,7 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 	psr = find_section(bdb, BDB_PSR);
 	if (!psr) {
-		DRM_DEBUG_KMS("No PSR BDB found.\n");
+		drm_dbg_kms(&dev_priv->drm, "No PSR BDB found.\n");
 		return;
 	}
@@ -851,8 +876,9 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 		dev_priv->vbt.psr.lines_to_wait = PSR_8_LINES_TO_WAIT;
 		break;
 	default:
-		DRM_DEBUG_KMS("VBT has unknown PSR lines to wait %u\n",
-			      psr_table->lines_to_wait);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT has unknown PSR lines to wait %u\n",
+			    psr_table->lines_to_wait);
 		break;
 	}
@@ -874,8 +900,9 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 			dev_priv->vbt.psr.tp1_wakeup_time_us = 0;
 			break;
 		default:
-			DRM_DEBUG_KMS("VBT tp1 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
-				      psr_table->tp1_wakeup_time);
+			drm_dbg_kms(&dev_priv->drm,
+				    "VBT tp1 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
+				    psr_table->tp1_wakeup_time);
 			/* fallthrough */
 		case 2:
 			dev_priv->vbt.psr.tp1_wakeup_time_us = 2500;
@@ -893,8 +920,9 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 			dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 0;
 			break;
 		default:
-			DRM_DEBUG_KMS("VBT tp2_tp3 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
-				      psr_table->tp2_tp3_wakeup_time);
+			drm_dbg_kms(&dev_priv->drm,
+				    "VBT tp2_tp3 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
+				    psr_table->tp2_tp3_wakeup_time);
 			/* fallthrough */
 		case 2:
 			dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 2500;
@@ -1000,12 +1028,12 @@ parse_mipi_config(struct drm_i915_private *dev_priv,
 	 */
 	start = find_section(bdb, BDB_MIPI_CONFIG);
 	if (!start) {
-		DRM_DEBUG_KMS("No MIPI config BDB found");
+		drm_dbg_kms(&dev_priv->drm, "No MIPI config BDB found");
 		return;
 	}

-	DRM_DEBUG_DRIVER("Found MIPI Config block, panel index = %d\n",
-			 panel_type);
+	drm_dbg(&dev_priv->drm, "Found MIPI Config block, panel index = %d\n",
+		panel_type);

 	/*
 	 * get hold of the correct configuration block and pps data as per
@@ -1220,7 +1248,8 @@ static int get_init_otp_deassert_fragment_len(struct drm_i915_private *dev_priv)
 	const u8 *data = dev_priv->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
 	int index, len;

-	if (WARN_ON(!data || dev_priv->vbt.dsi.seq_version != 1))
+	if (drm_WARN_ON(&dev_priv->drm,
+			!data || dev_priv->vbt.dsi.seq_version != 1))
 		return 0;

 	/* index = 1 to skip sequence byte */
@@ -1273,7 +1302,8 @@ static void fixup_mipi_sequences(struct drm_i915_private *dev_priv)
 	if (!len)
 		return;

-	DRM_DEBUG_KMS("Using init OTP fragment to deassert reset\n");
+	drm_dbg_kms(&dev_priv->drm,
+		    "Using init OTP fragment to deassert reset\n");

 	/* Copy the fragment, update seq byte and terminate it */
 	init_otp = (u8 *)dev_priv->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP];
@@ -1308,18 +1338,21 @@ parse_mipi_sequence(struct drm_i915_private *dev_priv,
 	sequence = find_section(bdb, BDB_MIPI_SEQUENCE);
 	if (!sequence) {
-		DRM_DEBUG_KMS("No MIPI Sequence found, parsing complete\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "No MIPI Sequence found, parsing complete\n");
 		return;
 	}

 	/* Fail gracefully for forward incompatible sequence block. */
 	if (sequence->version >= 4) {
-		DRM_ERROR("Unable to parse MIPI Sequence Block v%u\n",
-			  sequence->version);
+		drm_err(&dev_priv->drm,
+			"Unable to parse MIPI Sequence Block v%u\n",
+			sequence->version);
 		return;
 	}

-	DRM_DEBUG_DRIVER("Found MIPI sequence block v%u\n", sequence->version);
+	drm_dbg(&dev_priv->drm, "Found MIPI sequence block v%u\n",
+		sequence->version);

 	seq_data = find_panel_sequence_block(sequence, panel_type, &seq_size);
 	if (!seq_data)
@@ -1336,13 +1369,15 @@ parse_mipi_sequence(struct drm_i915_private *dev_priv,
 			break;

 		if (seq_id >= MIPI_SEQ_MAX) {
-			DRM_ERROR("Unknown sequence %u\n", seq_id);
+			drm_err(&dev_priv->drm, "Unknown sequence %u\n",
+				seq_id);
 			goto err;
 		}

 		/* Log about presence of sequences we won't run. */
 		if (seq_id == MIPI_SEQ_TEAR_ON || seq_id == MIPI_SEQ_TEAR_OFF)
-			DRM_DEBUG_KMS("Unsupported sequence %u\n", seq_id);
+			drm_dbg_kms(&dev_priv->drm,
+				    "Unsupported sequence %u\n", seq_id);

 		dev_priv->vbt.dsi.sequence[seq_id] = data + index;
@@ -1351,7 +1386,8 @@ parse_mipi_sequence(struct drm_i915_private *dev_priv,
 		else
 			index = goto_next_sequence(data, index, seq_size);
 		if (!index) {
-			DRM_ERROR("Invalid sequence %u\n", seq_id);
+			drm_err(&dev_priv->drm, "Invalid sequence %u\n",
+				seq_id);
 			goto err;
 		}
 	}
@@ -1362,7 +1398,7 @@ parse_mipi_sequence(struct drm_i915_private *dev_priv,
 	fixup_mipi_sequences(dev_priv);

-	DRM_DEBUG_DRIVER("MIPI related VBT parsing complete\n");
+	drm_dbg(&dev_priv->drm, "MIPI related VBT parsing complete\n");
 	return;

 err:
@@ -1387,13 +1423,15 @@ parse_compression_parameters(struct drm_i915_private *i915,
 	if (params) {
 		/* Sanity checks */
 		if (params->entry_size != sizeof(params->data[0])) {
-			DRM_DEBUG_KMS("VBT: unsupported compression param entry size\n");
+			drm_dbg_kms(&i915->drm,
+				    "VBT: unsupported compression param entry size\n");
 			return;
 		}

 		block_size = get_blocksize(params);
 		if (block_size < sizeof(*params)) {
-			DRM_DEBUG_KMS("VBT: expected 16 compression param entries\n");
+			drm_dbg_kms(&i915->drm,
+				    "VBT: expected 16 compression param entries\n");
 			return;
 		}
 	}
@@ -1405,12 +1443,14 @@ parse_compression_parameters(struct drm_i915_private *i915,
 			continue;

 		if (!params) {
-			DRM_DEBUG_KMS("VBT: compression params not available\n");
+			drm_dbg_kms(&i915->drm,
+				    "VBT: compression params not available\n");
 			continue;
 		}

 		if (child->compression_method_cps) {
-			DRM_DEBUG_KMS("VBT: CPS compression not supported\n");
+			drm_dbg_kms(&i915->drm,
+				    "VBT: CPS compression not supported\n");
 			continue;
 		}
@@ -1458,10 +1498,11 @@ static void sanitize_ddc_pin(struct drm_i915_private *dev_priv,
 	p = get_port_by_ddc_pin(dev_priv, info->alternate_ddc_pin);
 	if (p != PORT_NONE) {
-		DRM_DEBUG_KMS("port %c trying to use the same DDC pin (0x%x) as port %c, "
-			      "disabling port %c DVI/HDMI support\n",
-			      port_name(port), info->alternate_ddc_pin,
-			      port_name(p), port_name(p));
+		drm_dbg_kms(&dev_priv->drm,
+			    "port %c trying to use the same DDC pin (0x%x) as port %c, "
+			    "disabling port %c DVI/HDMI support\n",
+			    port_name(port), info->alternate_ddc_pin,
+			    port_name(p), port_name(p));

 		/*
 		 * If we have multiple ports supposedly sharing the
@@ -1509,10 +1550,11 @@ static void sanitize_aux_ch(struct drm_i915_private *dev_priv,
 	p = get_port_by_aux_ch(dev_priv, info->alternate_aux_channel);
 	if (p != PORT_NONE) {
-		DRM_DEBUG_KMS("port %c trying to use the same AUX CH (0x%x) as port %c, "
-			      "disabling port %c DP support\n",
-			      port_name(port), info->alternate_aux_channel,
-			      port_name(p), port_name(p));
+		drm_dbg_kms(&dev_priv->drm,
+			    "port %c trying to use the same AUX CH (0x%x) as port %c, "
+			    "disabling port %c DP support\n",
+			    port_name(port), info->alternate_aux_channel,
+			    port_name(p), port_name(p));

 		/*
 		 * If we have multiple ports supposedlt sharing the
@@ -1572,8 +1614,9 @@ static u8 map_ddc_pin(struct drm_i915_private *dev_priv, u8 vbt_pin)
 	if (vbt_pin < n_entries && ddc_pin_map[vbt_pin] != 0)
 		return ddc_pin_map[vbt_pin];

-	DRM_DEBUG_KMS("Ignoring alternate pin: VBT claims DDC pin %d, which is not valid for this platform\n",
-		      vbt_pin);
+	drm_dbg_kms(&dev_priv->drm,
+		    "Ignoring alternate pin: VBT claims DDC pin %d, which is not valid for this platform\n",
+		    vbt_pin);
 	return 0;
 }
@@ -1624,8 +1667,9 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
 	info = &dev_priv->vbt.ddi_port_info[port];

 	if (info->child) {
-		DRM_DEBUG_KMS("More than one child device for port %c in VBT, using the first.\n",
-			      port_name(port));
+		drm_dbg_kms(&dev_priv->drm,
+			    "More than one child device for port %c in VBT, using the first.\n",
+			    port_name(port));
 		return;
 	}
@@ -1636,8 +1680,9 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
 	is_edp = is_dp && (child->device_type & DEVICE_TYPE_INTERNAL_CONNECTOR);

 	if (port == PORT_A && is_dvi && INTEL_GEN(dev_priv) < 12) {
-		DRM_DEBUG_KMS("VBT claims port A supports DVI%s, ignoring\n",
-			      is_hdmi ? "/HDMI" : "");
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT claims port A supports DVI%s, ignoring\n",
+			    is_hdmi ? "/HDMI" : "");
 		is_dvi = false;
 		is_hdmi = false;
 	}
@@ -1653,11 +1698,12 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
 	if (bdb_version >= 209)
 		info->supports_tbt = child->tbt;

-	DRM_DEBUG_KMS("Port %c VBT info: CRT:%d DVI:%d HDMI:%d DP:%d eDP:%d LSPCON:%d USB-Type-C:%d TBT:%d DSC:%d\n",
-		      port_name(port), is_crt, is_dvi, is_hdmi, is_dp, is_edp,
-		      HAS_LSPCON(dev_priv) && child->lspcon,
-		      info->supports_typec_usb, info->supports_tbt,
-		      devdata->dsc != NULL);
+	drm_dbg_kms(&dev_priv->drm,
+		    "Port %c VBT info: CRT:%d DVI:%d HDMI:%d DP:%d eDP:%d LSPCON:%d USB-Type-C:%d TBT:%d DSC:%d\n",
+		    port_name(port), is_crt, is_dvi, is_hdmi, is_dp, is_edp,
+		    HAS_LSPCON(dev_priv) && child->lspcon,
+		    info->supports_typec_usb, info->supports_tbt,
+		    devdata->dsc != NULL);

 	if (is_dvi) {
 		u8 ddc_pin;
@@ -1667,9 +1713,10 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
 			info->alternate_ddc_pin = ddc_pin;
 			sanitize_ddc_pin(dev_priv, port);
 		} else {
-			DRM_DEBUG_KMS("Port %c has invalid DDC pin %d, "
-				      "sticking to defaults\n",
-				      port_name(port), ddc_pin);
+			drm_dbg_kms(&dev_priv->drm,
+				    "Port %c has invalid DDC pin %d, "
+				    "sticking to defaults\n",
+				    port_name(port), ddc_pin);
 		}
 	}
@@ -1682,9 +1729,10 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
 	if (bdb_version >= 158) {
 		/* The VBT HDMI level shift values match the table we have. */
 		u8 hdmi_level_shift = child->hdmi_level_shifter_value;
-		DRM_DEBUG_KMS("VBT HDMI level shift for port %c: %d\n",
-			      port_name(port),
-			      hdmi_level_shift);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT HDMI level shift for port %c: %d\n",
+			    port_name(port),
+			    hdmi_level_shift);
 		info->hdmi_level_shift = hdmi_level_shift;
 		info->hdmi_level_shift_set = true;
 	}
@@ -1708,19 +1756,22 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
 		}

 		if (max_tmds_clock)
-			DRM_DEBUG_KMS("VBT HDMI max TMDS clock for port %c: %d kHz\n",
-				      port_name(port), max_tmds_clock);
+			drm_dbg_kms(&dev_priv->drm,
+				    "VBT HDMI max TMDS clock for port %c: %d kHz\n",
+				    port_name(port), max_tmds_clock);
 		info->max_tmds_clock = max_tmds_clock;
 	}

 	/* Parse the I_boost config for SKL and above */
 	if (bdb_version >= 196 && child->iboost) {
 		info->dp_boost_level = translate_iboost(child->dp_iboost_level);
-		DRM_DEBUG_KMS("VBT (e)DP boost level for port %c: %d\n",
-			      port_name(port), info->dp_boost_level);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT (e)DP boost level for port %c: %d\n",
+			    port_name(port), info->dp_boost_level);
 		info->hdmi_boost_level = translate_iboost(child->hdmi_iboost_level);
-		DRM_DEBUG_KMS("VBT HDMI boost level for port %c: %d\n",
-			      port_name(port), info->hdmi_boost_level);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT HDMI boost level for port %c: %d\n",
+			    port_name(port), info->hdmi_boost_level);
 	}
@@ -1740,8 +1791,9 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
 			info->dp_max_link_rate = 162000;
 			break;
 		}
-		DRM_DEBUG_KMS("VBT DP max link rate for port %c: %d\n",
-			      port_name(port), info->dp_max_link_rate);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT DP max link rate for port %c: %d\n",
+			    port_name(port), info->dp_max_link_rate);
 	}

 	info->child = child;
@@ -1775,19 +1827,21 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
 	defs = find_section(bdb, BDB_GENERAL_DEFINITIONS);
 	if (!defs) {
-		DRM_DEBUG_KMS("No general definition block is found, no devices defined.\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "No general definition block is found, no devices defined.\n");
 		return;
 	}

 	block_size = get_blocksize(defs);
 	if (block_size < sizeof(*defs)) {
-		DRM_DEBUG_KMS("General definitions block too small (%u)\n",
-			      block_size);
+		drm_dbg_kms(&dev_priv->drm,
+			    "General definitions block too small (%u)\n",
+			    block_size);
 		return;
 	}

 	bus_pin = defs->crt_ddc_gmbus_pin;
-	DRM_DEBUG_KMS("crt_ddc_bus_pin: %d\n", bus_pin);
+	drm_dbg_kms(&dev_priv->drm, "crt_ddc_bus_pin: %d\n", bus_pin);
 	if (intel_gmbus_is_valid_pin(dev_priv, bus_pin))
 		dev_priv->vbt.crt_ddc_pin = bus_pin;
@@ -1806,19 +1860,22 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
 	} else {
 		expected_size = sizeof(*child);
 		BUILD_BUG_ON(sizeof(*child) < 39);
-		DRM_DEBUG_DRIVER("Expected child device config size for VBT version %u not known; assuming %u\n",
-				 bdb->version, expected_size);
+		drm_dbg(&dev_priv->drm,
+			"Expected child device config size for VBT version %u not known; assuming %u\n",
+			bdb->version, expected_size);
 	}

 	/* Flag an error for unexpected size, but continue anyway. */
 	if (defs->child_dev_size != expected_size)
-		DRM_ERROR("Unexpected child device config size %u (expected %u for VBT version %u)\n",
-			  defs->child_dev_size, expected_size, bdb->version);
+		drm_err(&dev_priv->drm,
+			"Unexpected child device config size %u (expected %u for VBT version %u)\n",
+			defs->child_dev_size, expected_size, bdb->version);

 	/* The legacy sized child device config is the minimum we need. */
 	if (defs->child_dev_size < LEGACY_CHILD_DEVICE_CONFIG_SIZE) {
-		DRM_DEBUG_KMS("Child device config size %u is too small.\n",
-			      defs->child_dev_size);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Child device config size %u is too small.\n",
+			    defs->child_dev_size);
 		return;
 	}
@@ -1830,8 +1887,9 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
 		if (!child->device_type)
 			continue;

-		DRM_DEBUG_KMS("Found VBT child device with type 0x%x\n",
-			      child->device_type);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Found VBT child device with type 0x%x\n",
+			    child->device_type);

 		devdata = kzalloc(sizeof(*devdata), GFP_KERNEL);
 		if (!devdata)
@@ -1849,7 +1907,8 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
 	}

 	if (list_empty(&dev_priv->vbt.display_devices))
-		DRM_DEBUG_KMS("no child dev is parsed from VBT\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "no child dev is parsed from VBT\n");
 }
 /* Common defaults which may be overridden by VBT. */
@@ -1882,7 +1941,8 @@ init_vbt_defaults(struct drm_i915_private *dev_priv)
 	 */
 	dev_priv->vbt.lvds_ssc_freq = intel_bios_ssc_frequency(dev_priv,
			!HAS_PCH_SPLIT(dev_priv));
-	DRM_DEBUG_KMS("Set default to SSC at %d kHz\n", dev_priv->vbt.lvds_ssc_freq);
+	drm_dbg_kms(&dev_priv->drm, "Set default to SSC at %d kHz\n",
+		    dev_priv->vbt.lvds_ssc_freq);
 }
 /* Defaults to initialize only if there is no VBT. */
@@ -1992,13 +2052,14 @@ static struct vbt_header *oprom_get_vbt(struct drm_i915_private *dev_priv)
 		goto err_unmap_oprom;

 	if (sizeof(struct vbt_header) > size) {
-		DRM_DEBUG_DRIVER("VBT header incomplete\n");
+		drm_dbg(&dev_priv->drm, "VBT header incomplete\n");
 		goto err_unmap_oprom;
 	}

 	vbt_size = ioread16(p + offsetof(struct vbt_header, vbt_size));
 	if (vbt_size > size) {
-		DRM_DEBUG_DRIVER("VBT incomplete (vbt_size overflows)\n");
+		drm_dbg(&dev_priv->drm,
+			"VBT incomplete (vbt_size overflows)\n");
 		goto err_unmap_oprom;
 	}
@@ -2041,7 +2102,8 @@ void intel_bios_init(struct drm_i915_private *dev_priv)
 	INIT_LIST_HEAD(&dev_priv->vbt.display_devices);

 	if (!HAS_DISPLAY(dev_priv) || !INTEL_DISPLAY_ENABLED(dev_priv)) {
-		DRM_DEBUG_KMS("Skipping VBT init due to disabled display.\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Skipping VBT init due to disabled display.\n");
 		return;
 	}

@@ -2055,13 +2117,14 @@ void intel_bios_init(struct drm_i915_private *dev_priv)
 		vbt = oprom_vbt;

-		DRM_DEBUG_KMS("Found valid VBT in PCI ROM\n");
+		drm_dbg_kms(&dev_priv->drm, "Found valid VBT in PCI ROM\n");
 	}

 	bdb = get_bdb_header(vbt);

-	DRM_DEBUG_KMS("VBT signature \"%.*s\", BDB version %d\n",
-		      (int)sizeof(vbt->signature), vbt->signature, bdb->version);
+	drm_dbg_kms(&dev_priv->drm,
+		    "VBT signature \"%.*s\", BDB version %d\n",
+		    (int)sizeof(vbt->signature), vbt->signature, bdb->version);

 	/* Grab useful general definitions */
 	parse_general_features(dev_priv, bdb);

@@ -2086,7 +2149,8 @@ void intel_bios_init(struct drm_i915_private *dev_priv)
 out:
 	if (!vbt) {
-		DRM_INFO("Failed to find VBIOS tables (VBT)\n");
+		drm_info(&dev_priv->drm,
+			 "Failed to find VBIOS tables (VBT)\n");
 		init_vbt_missing_defaults(dev_priv);
 	}
@@ -2238,13 +2302,12 @@ bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port por
 		const struct ddi_vbt_port_info *port_info =
 			&dev_priv->vbt.ddi_port_info[port];

-		return port_info->supports_dp ||
-			port_info->supports_dvi ||
-			port_info->supports_hdmi;
+		return port_info->child;
 	}

 	/* FIXME maybe deal with port A as well? */
-	if (WARN_ON(port == PORT_A) || port >= ARRAY_SIZE(port_mapping))
+	if (drm_WARN_ON(&dev_priv->drm,
+			port == PORT_A) || port >= ARRAY_SIZE(port_mapping))
 		return false;

 	list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
@@ -2373,8 +2436,9 @@ bool intel_bios_is_dsi_present(struct drm_i915_private *dev_priv,
 		} else if (dvo_port == DVO_PORT_MIPIB ||
 			   dvo_port == DVO_PORT_MIPIC ||
 			   dvo_port == DVO_PORT_MIPID) {
-			DRM_DEBUG_KMS("VBT has unsupported DSI port %c\n",
-				      port_name(dvo_port - DVO_PORT_MIPIA));
+			drm_dbg_kms(&dev_priv->drm,
+				    "VBT has unsupported DSI port %c\n",
+				    port_name(dvo_port - DVO_PORT_MIPIA));
 		}
 	}
@@ -2493,7 +2557,7 @@ intel_bios_is_port_hpd_inverted(const struct drm_i915_private *i915,
 	const struct child_device_config *child =
 		i915->vbt.ddi_port_info[port].child;

-	if (WARN_ON_ONCE(!IS_GEN9_LP(i915)))
+	if (drm_WARN_ON_ONCE(&i915->drm, !IS_GEN9_LP(i915)))
 		return false;

 	return child && child->hpd_invert;
@@ -2526,8 +2590,9 @@ enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *dev_priv,
 	if (!info->alternate_aux_channel) {
 		aux_ch = (enum aux_ch)port;

-		DRM_DEBUG_KMS("using AUX %c for port %c (platform default)\n",
-			      aux_ch_name(aux_ch), port_name(port));
+		drm_dbg_kms(&dev_priv->drm,
+			    "using AUX %c for port %c (platform default)\n",
+			    aux_ch_name(aux_ch), port_name(port));
 		return aux_ch;
 	}

@@ -2559,8 +2624,78 @@ enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *dev_priv,
 		break;
 	}

-	DRM_DEBUG_KMS("using AUX %c for port %c (VBT)\n",
-		      aux_ch_name(aux_ch), port_name(port));
+	drm_dbg_kms(&dev_priv->drm, "using AUX %c for port %c (VBT)\n",
+		    aux_ch_name(aux_ch), port_name(port));

 	return aux_ch;
 }
+
+int intel_bios_max_tmds_clock(struct intel_encoder *encoder)
+{
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+
+	return i915->vbt.ddi_port_info[encoder->port].max_tmds_clock;
+}
+
+int intel_bios_hdmi_level_shift(struct intel_encoder *encoder)
+{
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+	const struct ddi_vbt_port_info *info =
+		&i915->vbt.ddi_port_info[encoder->port];
+
+	return info->hdmi_level_shift_set ? info->hdmi_level_shift : -1;
+}
+
+int intel_bios_dp_boost_level(struct intel_encoder *encoder)
+{
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+
+	return i915->vbt.ddi_port_info[encoder->port].dp_boost_level;
+}
+
+int intel_bios_hdmi_boost_level(struct intel_encoder *encoder)
+{
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+
+	return i915->vbt.ddi_port_info[encoder->port].hdmi_boost_level;
+}
+
+int intel_bios_dp_max_link_rate(struct intel_encoder *encoder)
+{
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+
+	return i915->vbt.ddi_port_info[encoder->port].dp_max_link_rate;
+}
+
+int intel_bios_alternate_ddc_pin(struct intel_encoder *encoder)
+{
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+
+	return i915->vbt.ddi_port_info[encoder->port].alternate_ddc_pin;
+}
+
+bool intel_bios_port_supports_dvi(struct drm_i915_private *i915, enum port port)
+{
+	return i915->vbt.ddi_port_info[port].supports_dvi;
+}
+
+bool intel_bios_port_supports_hdmi(struct drm_i915_private *i915, enum port port)
+{
+	return i915->vbt.ddi_port_info[port].supports_hdmi;
+}
+
+bool intel_bios_port_supports_dp(struct drm_i915_private *i915, enum port port)
+{
+	return i915->vbt.ddi_port_info[port].supports_dp;
+}
+
+bool intel_bios_port_supports_typec_usb(struct drm_i915_private *i915,
+					enum port port)
+{
+	return i915->vbt.ddi_port_info[port].supports_typec_usb;
+}
+
+bool intel_bios_port_supports_tbt(struct drm_i915_private *i915, enum port port)
+{
+	return i915->vbt.ddi_port_info[port].supports_tbt;
+}


@@ -247,5 +247,16 @@ enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *dev_priv, enum port
 bool intel_bios_get_dsc_params(struct intel_encoder *encoder,
			       struct intel_crtc_state *crtc_state,
			       int dsc_max_bpc);
+int intel_bios_max_tmds_clock(struct intel_encoder *encoder);
+int intel_bios_hdmi_level_shift(struct intel_encoder *encoder);
+int intel_bios_dp_boost_level(struct intel_encoder *encoder);
+int intel_bios_hdmi_boost_level(struct intel_encoder *encoder);
+int intel_bios_dp_max_link_rate(struct intel_encoder *encoder);
+int intel_bios_alternate_ddc_pin(struct intel_encoder *encoder);
+bool intel_bios_port_supports_dvi(struct drm_i915_private *i915, enum port port);
+bool intel_bios_port_supports_hdmi(struct drm_i915_private *i915, enum port port);
+bool intel_bios_port_supports_dp(struct drm_i915_private *i915, enum port port);
+bool intel_bios_port_supports_typec_usb(struct drm_i915_private *i915, enum port port);
+bool intel_bios_port_supports_tbt(struct drm_i915_private *i915, enum port port);

 #endif /* _INTEL_BIOS_H_ */


@@ -122,7 +122,8 @@ static int icl_get_qgv_points(struct drm_i915_private *dev_priv,
 	if (ret)
 		return ret;

-	if (WARN_ON(qi->num_points > ARRAY_SIZE(qi->points)))
+	if (drm_WARN_ON(&dev_priv->drm,
+			qi->num_points > ARRAY_SIZE(qi->points)))
 		qi->num_points = ARRAY_SIZE(qi->points);

 	for (i = 0; i < qi->num_points; i++) {
@@ -132,9 +133,10 @@ static int icl_get_qgv_points(struct drm_i915_private *dev_priv,
 		if (ret)
 			return ret;

-		DRM_DEBUG_KMS("QGV %d: DCLK=%d tRP=%d tRDPRE=%d tRAS=%d tRCD=%d tRC=%d\n",
-			      i, sp->dclk, sp->t_rp, sp->t_rdpre, sp->t_ras,
-			      sp->t_rcd, sp->t_rc);
+		drm_dbg_kms(&dev_priv->drm,
+			    "QGV %d: DCLK=%d tRP=%d tRDPRE=%d tRAS=%d tRCD=%d tRC=%d\n",
+			    i, sp->dclk, sp->t_rp, sp->t_rdpre, sp->t_ras,
+			    sp->t_rcd, sp->t_rc);
 	}

 	return 0;
@@ -187,7 +189,8 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
 	ret = icl_get_qgv_points(dev_priv, &qi);
 	if (ret) {
-		DRM_DEBUG_KMS("Failed to get memory subsystem information, ignoring bandwidth limits");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Failed to get memory subsystem information, ignoring bandwidth limits");
 		return ret;
 	}

 	num_channels = qi.num_channels;
@@ -228,8 +231,9 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
 			bi->deratedbw[j] = min(maxdebw,
					       bw * 9 / 10); /* 90% */

-			DRM_DEBUG_KMS("BW%d / QGV %d: num_planes=%d deratedbw=%u\n",
-				      i, j, bi->num_planes, bi->deratedbw[j]);
+			drm_dbg_kms(&dev_priv->drm,
+				    "BW%d / QGV %d: num_planes=%d deratedbw=%u\n",
+				    i, j, bi->num_planes, bi->deratedbw[j]);
 		}

 		if (bi->num_planes == 1)
@@ -374,10 +378,9 @@ static struct intel_bw_state *
 intel_atomic_get_bw_state(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	struct drm_private_state *bw_state;
+	struct intel_global_state *bw_state;

-	bw_state = drm_atomic_get_private_obj_state(&state->base,
-						    &dev_priv->bw_obj);
+	bw_state = intel_atomic_get_global_obj_state(state, &dev_priv->bw_obj);
 	if (IS_ERR(bw_state))
 		return ERR_CAST(bw_state);

@@ -392,7 +395,7 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
 	unsigned int data_rate, max_data_rate;
 	unsigned int num_active_planes;
 	struct intel_crtc *crtc;
-	int i;
+	int i, ret;

 	/* FIXME earlier gens need some checks too */
 	if (INTEL_GEN(dev_priv) < 11)
@@ -424,15 +427,20 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
 		bw_state->data_rate[crtc->pipe] = new_data_rate;
 		bw_state->num_active_planes[crtc->pipe] = new_active_planes;

-		DRM_DEBUG_KMS("pipe %c data rate %u num active planes %u\n",
-			      pipe_name(crtc->pipe),
-			      bw_state->data_rate[crtc->pipe],
-			      bw_state->num_active_planes[crtc->pipe]);
+		drm_dbg_kms(&dev_priv->drm,
+			    "pipe %c data rate %u num active planes %u\n",
+			    pipe_name(crtc->pipe),
+			    bw_state->data_rate[crtc->pipe],
+			    bw_state->num_active_planes[crtc->pipe]);
 	}

 	if (!bw_state)
 		return 0;

+	ret = intel_atomic_lock_global_state(&bw_state->base);
+	if (ret)
+		return ret;
+
 	data_rate = intel_bw_data_rate(dev_priv, bw_state);
 	num_active_planes = intel_bw_num_active_planes(dev_priv, bw_state);

@@ -441,15 +449,17 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
 	data_rate = DIV_ROUND_UP(data_rate, 1000);

 	if (data_rate > max_data_rate) {
-		DRM_DEBUG_KMS("Bandwidth %u MB/s exceeds max available %d MB/s (%d active planes)\n",
-			      data_rate, max_data_rate, num_active_planes);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Bandwidth %u MB/s exceeds max available %d MB/s (%d active planes)\n",
+			    data_rate, max_data_rate, num_active_planes);
 		return -EINVAL;
 	}

 	return 0;
 }
-static struct drm_private_state *intel_bw_duplicate_state(struct drm_private_obj *obj)
+static struct intel_global_state *
+intel_bw_duplicate_state(struct intel_global_obj *obj)
 {
 	struct intel_bw_state *state;

@@ -457,18 +467,16 @@ static struct drm_private_state *intel_bw_duplicate_state(struct drm_private_obj
 	if (!state)
 		return NULL;

-	__drm_atomic_helper_private_obj_duplicate_state(obj, &state->base);
-
 	return &state->base;
 }

-static void intel_bw_destroy_state(struct drm_private_obj *obj,
-				   struct drm_private_state *state)
+static void intel_bw_destroy_state(struct intel_global_obj *obj,
+				   struct intel_global_state *state)
 {
 	kfree(state);
 }

-static const struct drm_private_state_funcs intel_bw_funcs = {
+static const struct intel_global_state_funcs intel_bw_funcs = {
 	.atomic_duplicate_state = intel_bw_duplicate_state,
 	.atomic_destroy_state = intel_bw_destroy_state,
 };

@@ -481,13 +489,8 @@ int intel_bw_init(struct drm_i915_private *dev_priv)
 	if (!state)
 		return -ENOMEM;

-	drm_atomic_private_obj_init(&dev_priv->drm, &dev_priv->bw_obj,
-				    &state->base, &intel_bw_funcs);
+	intel_atomic_global_obj_init(dev_priv, &dev_priv->bw_obj,
+				     &state->base, &intel_bw_funcs);

 	return 0;
 }
-
-void intel_bw_cleanup(struct drm_i915_private *dev_priv)
-{
-	drm_atomic_private_obj_fini(&dev_priv->bw_obj);
-}
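The bandwidth state is being moved from DRM's generic private-object machinery to the driver's own global-state scheme, but both rely on the same duplicate/destroy vtable pattern: state is never mutated in place, a copy is modified and then swapped in. A standalone sketch of that pattern, with illustrative names (`global_state`, `commit_bump`) that are not the kernel's:

```c
#include <stdlib.h>
#include <string.h>

/* Toy global state object: a single generation counter. */
struct global_state {
	int generation;
};

/* Duplicate/destroy ops table, mirroring intel_global_state_funcs. */
struct global_state_funcs {
	struct global_state *(*duplicate)(const struct global_state *old);
	void (*destroy)(struct global_state *state);
};

static struct global_state *dup_state(const struct global_state *old)
{
	struct global_state *s = malloc(sizeof(*s));

	if (s)
		memcpy(s, old, sizeof(*s));	/* shallow copy of the old state */
	return s;
}

static void destroy_state(struct global_state *s)
{
	free(s);
}

static const struct global_state_funcs funcs = {
	.duplicate = dup_state,
	.destroy = destroy_state,
};

/* Copy-modify-swap commit: never mutate the current state in place. */
static int commit_bump(struct global_state **cur)
{
	struct global_state *next = funcs.duplicate(*cur);

	if (!next)
		return -1;
	next->generation++;
	funcs.destroy(*cur);
	*cur = next;
	return 0;
}
```

In the real driver the swap is staged across an atomic commit with locking (`intel_atomic_lock_global_state()` in the hunk above); this sketch keeps only the ownership flow.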


@@ -9,13 +9,14 @@
 #include <drm/drm_atomic.h>

 #include "intel_display.h"
+#include "intel_global_state.h"

 struct drm_i915_private;
 struct intel_atomic_state;
 struct intel_crtc_state;

 struct intel_bw_state {
-	struct drm_private_state base;
+	struct intel_global_state base;

 	unsigned int data_rate[I915_MAX_PIPES];
 	u8 num_active_planes[I915_MAX_PIPES];
@@ -25,7 +26,6 @@ struct intel_bw_state {
 void intel_bw_init_hw(struct drm_i915_private *dev_priv);
 int intel_bw_init(struct drm_i915_private *dev_priv);
-void intel_bw_cleanup(struct drm_i915_private *dev_priv);
 int intel_bw_atomic_check(struct intel_atomic_state *state);
 void intel_bw_crtc_update(struct intel_bw_state *bw_state,
			  const struct intel_crtc_state *crtc_state);

[File diff suppressed because it is too large]


@@ -8,11 +8,12 @@
 #include <linux/types.h>

-#include "i915_drv.h"
 #include "intel_display.h"
+#include "intel_global_state.h"

 struct drm_i915_private;
 struct intel_atomic_state;
-struct intel_cdclk_state;
 struct intel_crtc_state;

 struct intel_cdclk_vals {
@@ -22,28 +23,62 @@ struct intel_cdclk_vals {
 	u8 ratio;
 };

+struct intel_cdclk_state {
+	struct intel_global_state base;
+
+	/*
+	 * Logical configuration of cdclk (used for all scaling,
+	 * watermark, etc. calculations and checks). This is
+	 * computed as if all enabled crtcs were active.
+	 */
+	struct intel_cdclk_config logical;
+
+	/*
+	 * Actual configuration of cdclk, can be different from the
+	 * logical configuration only when all crtc's are DPMS off.
+	 */
+	struct intel_cdclk_config actual;
+
+	/* minimum acceptable cdclk for each pipe */
+	int min_cdclk[I915_MAX_PIPES];
+	/* minimum acceptable voltage level for each pipe */
+	u8 min_voltage_level[I915_MAX_PIPES];
+
+	/* pipe to which cd2x update is synchronized */
+	enum pipe pipe;
+
+	/* forced minimum cdclk for glk+ audio w/a */
+	int force_min_cdclk;
+	bool force_min_cdclk_changed;
+
+	/* bitmask of active pipes */
+	u8 active_pipes;
+};
+
 int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state);
-void intel_cdclk_init(struct drm_i915_private *i915);
-void intel_cdclk_uninit(struct drm_i915_private *i915);
+void intel_cdclk_init_hw(struct drm_i915_private *i915);
+void intel_cdclk_uninit_hw(struct drm_i915_private *i915);
 void intel_init_cdclk_hooks(struct drm_i915_private *dev_priv);
 void intel_update_max_cdclk(struct drm_i915_private *dev_priv);
 void intel_update_cdclk(struct drm_i915_private *dev_priv);
-void intel_update_rawclk(struct drm_i915_private *dev_priv);
-bool intel_cdclk_needs_modeset(const struct intel_cdclk_state *a,
-			       const struct intel_cdclk_state *b);
-void intel_cdclk_swap_state(struct intel_atomic_state *state);
-void
-intel_set_cdclk_pre_plane_update(struct drm_i915_private *dev_priv,
-				 const struct intel_cdclk_state *old_state,
-				 const struct intel_cdclk_state *new_state,
-				 enum pipe pipe);
-void
-intel_set_cdclk_post_plane_update(struct drm_i915_private *dev_priv,
-				  const struct intel_cdclk_state *old_state,
-				  const struct intel_cdclk_state *new_state,
-				  enum pipe pipe);
-void intel_dump_cdclk_state(const struct intel_cdclk_state *cdclk_state,
-			    const char *context);
+u32 intel_read_rawclk(struct drm_i915_private *dev_priv);
+bool intel_cdclk_needs_modeset(const struct intel_cdclk_config *a,
+			       const struct intel_cdclk_config *b);
+void intel_set_cdclk_pre_plane_update(struct intel_atomic_state *state);
+void intel_set_cdclk_post_plane_update(struct intel_atomic_state *state);
+void intel_dump_cdclk_config(const struct intel_cdclk_config *cdclk_config,
+			     const char *context);
 int intel_modeset_calc_cdclk(struct intel_atomic_state *state);

+struct intel_cdclk_state *
+intel_atomic_get_cdclk_state(struct intel_atomic_state *state);
+
+#define to_intel_cdclk_state(x) container_of((x), struct intel_cdclk_state, base)
+#define intel_atomic_get_old_cdclk_state(state) \
+	to_intel_cdclk_state(intel_atomic_get_old_global_obj_state(state, &to_i915(state->base.dev)->cdclk.obj))
+#define intel_atomic_get_new_cdclk_state(state) \
+	to_intel_cdclk_state(intel_atomic_get_new_global_obj_state(state, &to_i915(state->base.dev)->cdclk.obj))
+
+int intel_cdclk_init(struct drm_i915_private *dev_priv);
+
 #endif /* __INTEL_CDCLK_H__ */
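The new `to_intel_cdclk_state()` macro above is the usual `container_of()` trick: given a pointer to the embedded `struct intel_global_state base` member, it recovers the enclosing `intel_cdclk_state`. A self-contained sketch of how that works, using plain ISO C and illustrative struct names rather than the kernel's:

```c
#include <stddef.h>

/* Portable container_of(): step back from the member pointer by the
 * member's offset within the enclosing type. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-in for intel_global_state. */
struct base_state {
	int serial;
};

/* Stand-in for intel_cdclk_state: embeds base_state as its first member,
 * just as intel_cdclk_state embeds intel_global_state. */
struct cdclk_like_state {
	struct base_state base;
	int cdclk_khz;
};
```

Given only a `struct base_state *` handed back by generic state-tracking code, `container_of(ptr, struct cdclk_like_state, base)` yields the full object, which is exactly what the `intel_atomic_get_old/new_cdclk_state()` macros do for the cdclk global object.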


@@ -157,23 +157,29 @@ static void ilk_update_pipe_csc(struct intel_crtc *crtc,
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	enum pipe pipe = crtc->pipe;

-	I915_WRITE(PIPE_CSC_PREOFF_HI(pipe), preoff[0]);
-	I915_WRITE(PIPE_CSC_PREOFF_ME(pipe), preoff[1]);
-	I915_WRITE(PIPE_CSC_PREOFF_LO(pipe), preoff[2]);
+	intel_de_write(dev_priv, PIPE_CSC_PREOFF_HI(pipe), preoff[0]);
+	intel_de_write(dev_priv, PIPE_CSC_PREOFF_ME(pipe), preoff[1]);
+	intel_de_write(dev_priv, PIPE_CSC_PREOFF_LO(pipe), preoff[2]);

-	I915_WRITE(PIPE_CSC_COEFF_RY_GY(pipe), coeff[0] << 16 | coeff[1]);
-	I915_WRITE(PIPE_CSC_COEFF_BY(pipe), coeff[2] << 16);
+	intel_de_write(dev_priv, PIPE_CSC_COEFF_RY_GY(pipe),
+		       coeff[0] << 16 | coeff[1]);
+	intel_de_write(dev_priv, PIPE_CSC_COEFF_BY(pipe), coeff[2] << 16);

-	I915_WRITE(PIPE_CSC_COEFF_RU_GU(pipe), coeff[3] << 16 | coeff[4]);
-	I915_WRITE(PIPE_CSC_COEFF_BU(pipe), coeff[5] << 16);
+	intel_de_write(dev_priv, PIPE_CSC_COEFF_RU_GU(pipe),
+		       coeff[3] << 16 | coeff[4]);
+	intel_de_write(dev_priv, PIPE_CSC_COEFF_BU(pipe), coeff[5] << 16);

-	I915_WRITE(PIPE_CSC_COEFF_RV_GV(pipe), coeff[6] << 16 | coeff[7]);
-	I915_WRITE(PIPE_CSC_COEFF_BV(pipe), coeff[8] << 16);
+	intel_de_write(dev_priv, PIPE_CSC_COEFF_RV_GV(pipe),
+		       coeff[6] << 16 | coeff[7]);
+	intel_de_write(dev_priv, PIPE_CSC_COEFF_BV(pipe), coeff[8] << 16);

 	if (INTEL_GEN(dev_priv) >= 7) {
-		I915_WRITE(PIPE_CSC_POSTOFF_HI(pipe), postoff[0]);
-		I915_WRITE(PIPE_CSC_POSTOFF_ME(pipe), postoff[1]);
-		I915_WRITE(PIPE_CSC_POSTOFF_LO(pipe), postoff[2]);
+		intel_de_write(dev_priv, PIPE_CSC_POSTOFF_HI(pipe),
+			       postoff[0]);
+		intel_de_write(dev_priv, PIPE_CSC_POSTOFF_ME(pipe),
+			       postoff[1]);
+		intel_de_write(dev_priv, PIPE_CSC_POSTOFF_LO(pipe),
+			       postoff[2]);
 	}
 }
@@ -185,22 +191,28 @@ static void icl_update_output_csc(struct intel_crtc *crtc,
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	enum pipe pipe = crtc->pipe;

-	I915_WRITE(PIPE_CSC_OUTPUT_PREOFF_HI(pipe), preoff[0]);
-	I915_WRITE(PIPE_CSC_OUTPUT_PREOFF_ME(pipe), preoff[1]);
-	I915_WRITE(PIPE_CSC_OUTPUT_PREOFF_LO(pipe), preoff[2]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_PREOFF_HI(pipe), preoff[0]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_PREOFF_ME(pipe), preoff[1]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_PREOFF_LO(pipe), preoff[2]);

-	I915_WRITE(PIPE_CSC_OUTPUT_COEFF_RY_GY(pipe), coeff[0] << 16 | coeff[1]);
-	I915_WRITE(PIPE_CSC_OUTPUT_COEFF_BY(pipe), coeff[2] << 16);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_RY_GY(pipe),
+		       coeff[0] << 16 | coeff[1]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_BY(pipe),
+		       coeff[2] << 16);

-	I915_WRITE(PIPE_CSC_OUTPUT_COEFF_RU_GU(pipe), coeff[3] << 16 | coeff[4]);
-	I915_WRITE(PIPE_CSC_OUTPUT_COEFF_BU(pipe), coeff[5] << 16);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_RU_GU(pipe),
+		       coeff[3] << 16 | coeff[4]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_BU(pipe),
+		       coeff[5] << 16);

-	I915_WRITE(PIPE_CSC_OUTPUT_COEFF_RV_GV(pipe), coeff[6] << 16 | coeff[7]);
-	I915_WRITE(PIPE_CSC_OUTPUT_COEFF_BV(pipe), coeff[8] << 16);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_RV_GV(pipe),
+		       coeff[6] << 16 | coeff[7]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_BV(pipe),
+		       coeff[8] << 16);

-	I915_WRITE(PIPE_CSC_OUTPUT_POSTOFF_HI(pipe), postoff[0]);
-	I915_WRITE(PIPE_CSC_OUTPUT_POSTOFF_ME(pipe), postoff[1]);
-	I915_WRITE(PIPE_CSC_OUTPUT_POSTOFF_LO(pipe), postoff[2]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_HI(pipe), postoff[0]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_ME(pipe), postoff[1]);
+	intel_de_write(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_LO(pipe), postoff[2]);
 }

 static bool ilk_csc_limited_range(const struct intel_crtc_state *crtc_state)
@@ -297,14 +309,16 @@ static void ilk_load_csc_matrix(const struct intel_crtc_state *crtc_state)
		 * LUT is needed but CSC is not we need to load an
		 * identity matrix.
		 */
-		WARN_ON(!IS_CANNONLAKE(dev_priv) && !IS_GEMINILAKE(dev_priv));
+		drm_WARN_ON(&dev_priv->drm, !IS_CANNONLAKE(dev_priv) &&
+			    !IS_GEMINILAKE(dev_priv));

 		ilk_update_pipe_csc(crtc, ilk_csc_off_zero,
				    ilk_csc_coeff_identity,
				    ilk_csc_off_zero);
 	}

-	I915_WRITE(PIPE_CSC_MODE(crtc->pipe), crtc_state->csc_mode);
+	intel_de_write(dev_priv, PIPE_CSC_MODE(crtc->pipe),
+		       crtc_state->csc_mode);
 }

 static void icl_load_csc_matrix(const struct intel_crtc_state *crtc_state)
@@ -330,7 +344,8 @@ static void icl_load_csc_matrix(const struct intel_crtc_state *crtc_state)
					    ilk_csc_postoff_limited_range);
 	}

-	I915_WRITE(PIPE_CSC_MODE(crtc->pipe), crtc_state->csc_mode);
+	intel_de_write(dev_priv, PIPE_CSC_MODE(crtc->pipe),
+		       crtc_state->csc_mode);
 }
 /*
@@ -363,18 +378,25 @@ static void cherryview_load_csc_matrix(const struct intel_crtc_state *crtc_state
			coeffs[i] |= (abs_coeff >> 20) & 0xfff;
		}

-		I915_WRITE(CGM_PIPE_CSC_COEFF01(pipe),
-			   coeffs[1] << 16 | coeffs[0]);
-		I915_WRITE(CGM_PIPE_CSC_COEFF23(pipe),
-			   coeffs[3] << 16 | coeffs[2]);
-		I915_WRITE(CGM_PIPE_CSC_COEFF45(pipe),
-			   coeffs[5] << 16 | coeffs[4]);
-		I915_WRITE(CGM_PIPE_CSC_COEFF67(pipe),
-			   coeffs[7] << 16 | coeffs[6]);
-		I915_WRITE(CGM_PIPE_CSC_COEFF8(pipe), coeffs[8]);
+		intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF01(pipe),
+			       coeffs[1] << 16 | coeffs[0]);
+		intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF23(pipe),
+			       coeffs[3] << 16 | coeffs[2]);
+		intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF45(pipe),
+			       coeffs[5] << 16 | coeffs[4]);
+		intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF67(pipe),
+			       coeffs[7] << 16 | coeffs[6]);
+		intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF8(pipe), coeffs[8]);
 	}

-	I915_WRITE(CGM_PIPE_MODE(pipe), crtc_state->cgm_mode);
+	intel_de_write(dev_priv, CGM_PIPE_MODE(pipe), crtc_state->cgm_mode);
+}
+
+static u32 i9xx_lut_8(const struct drm_color_lut *color)
+{
+	return drm_color_lut_extract(color->red, 8) << 16 |
+		drm_color_lut_extract(color->green, 8) << 8 |
+		drm_color_lut_extract(color->blue, 8);
 }
 /* i965+ "10.6" bit interpolated format "even DW" (low 8 bits) */
@@ -420,15 +442,14 @@ static void i9xx_load_luts_internal(const struct intel_crtc_state *crtc_state,
		const struct drm_color_lut *lut = blob->data;

		for (i = 0; i < 256; i++) {
-			u32 word =
-				(drm_color_lut_extract(lut[i].red, 8) << 16) |
-				(drm_color_lut_extract(lut[i].green, 8) << 8) |
-				drm_color_lut_extract(lut[i].blue, 8);
+			u32 word = i9xx_lut_8(&lut[i]);

			if (HAS_GMCH(dev_priv))
-				I915_WRITE(PALETTE(pipe, i), word);
+				intel_de_write(dev_priv, PALETTE(pipe, i),
+					       word);
			else
-				I915_WRITE(LGC_PALETTE(pipe, i), word);
+				intel_de_write(dev_priv, LGC_PALETTE(pipe, i),
+					       word);
		}
 	}
 }
@@ -445,10 +466,10 @@ static void i9xx_color_commit(const struct intel_crtc_state *crtc_state)
 	enum pipe pipe = crtc->pipe;
 	u32 val;

-	val = I915_READ(PIPECONF(pipe));
+	val = intel_de_read(dev_priv, PIPECONF(pipe));
 	val &= ~PIPECONF_GAMMA_MODE_MASK_I9XX;
 	val |= PIPECONF_GAMMA_MODE(crtc_state->gamma_mode);
-	I915_WRITE(PIPECONF(pipe), val);
+	intel_de_write(dev_priv, PIPECONF(pipe), val);
 }

 static void ilk_color_commit(const struct intel_crtc_state *crtc_state)
@@ -458,10 +479,10 @@ static void ilk_color_commit(const struct intel_crtc_state *crtc_state)
 	enum pipe pipe = crtc->pipe;
 	u32 val;

-	val = I915_READ(PIPECONF(pipe));
+	val = intel_de_read(dev_priv, PIPECONF(pipe));
 	val &= ~PIPECONF_GAMMA_MODE_MASK_ILK;
 	val |= PIPECONF_GAMMA_MODE(crtc_state->gamma_mode);
-	I915_WRITE(PIPECONF(pipe), val);
+	intel_de_write(dev_priv, PIPECONF(pipe), val);

 	ilk_load_csc_matrix(crtc_state);
 }

@@ -471,7 +492,8 @@ static void hsw_color_commit(const struct intel_crtc_state *crtc_state)
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);

-	I915_WRITE(GAMMA_MODE(crtc->pipe), crtc_state->gamma_mode);
+	intel_de_write(dev_priv, GAMMA_MODE(crtc->pipe),
+		       crtc_state->gamma_mode);

 	ilk_load_csc_matrix(crtc_state);
 }

@@ -492,9 +514,10 @@ static void skl_color_commit(const struct intel_crtc_state *crtc_state)
		val |= SKL_BOTTOM_COLOR_GAMMA_ENABLE;
	if (crtc_state->csc_enable)
		val |= SKL_BOTTOM_COLOR_CSC_ENABLE;
-	I915_WRITE(SKL_BOTTOM_COLOR(pipe), val);
+	intel_de_write(dev_priv, SKL_BOTTOM_COLOR(pipe), val);

-	I915_WRITE(GAMMA_MODE(crtc->pipe), crtc_state->gamma_mode);
+	intel_de_write(dev_priv, GAMMA_MODE(crtc->pipe),
+		       crtc_state->gamma_mode);

 	if (INTEL_GEN(dev_priv) >= 11)
		icl_load_csc_matrix(crtc_state);
@@ -511,15 +534,15 @@ static void i965_load_lut_10p6(struct intel_crtc *crtc,
 	enum pipe pipe = crtc->pipe;

 	for (i = 0; i < lut_size - 1; i++) {
-		I915_WRITE(PALETTE(pipe, 2 * i + 0),
-			   i965_lut_10p6_ldw(&lut[i]));
-		I915_WRITE(PALETTE(pipe, 2 * i + 1),
-			   i965_lut_10p6_udw(&lut[i]));
+		intel_de_write(dev_priv, PALETTE(pipe, 2 * i + 0),
+			       i965_lut_10p6_ldw(&lut[i]));
+		intel_de_write(dev_priv, PALETTE(pipe, 2 * i + 1),
+			       i965_lut_10p6_udw(&lut[i]));
 	}

-	I915_WRITE(PIPEGCMAX(pipe, 0), lut[i].red);
-	I915_WRITE(PIPEGCMAX(pipe, 1), lut[i].green);
-	I915_WRITE(PIPEGCMAX(pipe, 2), lut[i].blue);
+	intel_de_write(dev_priv, PIPEGCMAX(pipe, 0), lut[i].red);
+	intel_de_write(dev_priv, PIPEGCMAX(pipe, 1), lut[i].green);
+	intel_de_write(dev_priv, PIPEGCMAX(pipe, 2), lut[i].blue);
 }

 static void i965_load_luts(const struct intel_crtc_state *crtc_state)
@@ -542,7 +565,8 @@ static void ilk_load_lut_10(struct intel_crtc *crtc,
 	enum pipe pipe = crtc->pipe;

 	for (i = 0; i < lut_size; i++)
-		I915_WRITE(PREC_PALETTE(pipe, i), ilk_lut_10(&lut[i]));
+		intel_de_write(dev_priv, PREC_PALETTE(pipe, i),
+			       ilk_lut_10(&lut[i]));
 }

 static void ilk_load_luts(const struct intel_crtc_state *crtc_state)
@@ -584,15 +608,16 @@ static void ivb_load_lut_10(struct intel_crtc *crtc,
		const struct drm_color_lut *entry =
			&lut[i * (lut_size - 1) / (hw_lut_size - 1)];

-		I915_WRITE(PREC_PAL_INDEX(pipe), prec_index++);
-		I915_WRITE(PREC_PAL_DATA(pipe), ilk_lut_10(entry));
+		intel_de_write(dev_priv, PREC_PAL_INDEX(pipe), prec_index++);
+		intel_de_write(dev_priv, PREC_PAL_DATA(pipe),
+			       ilk_lut_10(entry));
 	}

	/*
	 * Reset the index, otherwise it prevents the legacy palette to be
	 * written properly.
	 */
-	I915_WRITE(PREC_PAL_INDEX(pipe), 0);
+	intel_de_write(dev_priv, PREC_PAL_INDEX(pipe), 0);
 }

 /* On BDW+ the index auto increment mode actually works */
@ -606,22 +631,23 @@ static void bdw_load_lut_10(struct intel_crtc *crtc,
int i, lut_size = drm_color_lut_size(blob); int i, lut_size = drm_color_lut_size(blob);
enum pipe pipe = crtc->pipe; enum pipe pipe = crtc->pipe;
I915_WRITE(PREC_PAL_INDEX(pipe), prec_index | intel_de_write(dev_priv, PREC_PAL_INDEX(pipe),
PAL_PREC_AUTO_INCREMENT); prec_index | PAL_PREC_AUTO_INCREMENT);
for (i = 0; i < hw_lut_size; i++) { for (i = 0; i < hw_lut_size; i++) {
/* We discard half the user entries in split gamma mode */ /* We discard half the user entries in split gamma mode */
const struct drm_color_lut *entry = const struct drm_color_lut *entry =
&lut[i * (lut_size - 1) / (hw_lut_size - 1)]; &lut[i * (lut_size - 1) / (hw_lut_size - 1)];
I915_WRITE(PREC_PAL_DATA(pipe), ilk_lut_10(entry)); intel_de_write(dev_priv, PREC_PAL_DATA(pipe),
ilk_lut_10(entry));
} }
/* /*
* Reset the index, otherwise it prevents the legacy palette to be * Reset the index, otherwise it prevents the legacy palette to be
* written properly. * written properly.
*/ */
I915_WRITE(PREC_PAL_INDEX(pipe), 0); intel_de_write(dev_priv, PREC_PAL_INDEX(pipe), 0);
} }
static void ivb_load_lut_ext_max(struct intel_crtc *crtc) static void ivb_load_lut_ext_max(struct intel_crtc *crtc)
@ -712,8 +738,9 @@ static void glk_load_degamma_lut(const struct intel_crtc_state *crtc_state)
* ignore the index bits, so we need to reset it to index 0 * ignore the index bits, so we need to reset it to index 0
* separately. * separately.
*/ */
I915_WRITE(PRE_CSC_GAMC_INDEX(pipe), 0); intel_de_write(dev_priv, PRE_CSC_GAMC_INDEX(pipe), 0);
I915_WRITE(PRE_CSC_GAMC_INDEX(pipe), PRE_CSC_GAMC_AUTO_INCREMENT); intel_de_write(dev_priv, PRE_CSC_GAMC_INDEX(pipe),
PRE_CSC_GAMC_AUTO_INCREMENT);
for (i = 0; i < lut_size; i++) { for (i = 0; i < lut_size; i++) {
/* /*
@ -729,12 +756,13 @@ static void glk_load_degamma_lut(const struct intel_crtc_state *crtc_state)
* ToDo: Extend to max 7.0. Enable 32 bit input value * ToDo: Extend to max 7.0. Enable 32 bit input value
* as compared to just 16 to achieve this. * as compared to just 16 to achieve this.
*/ */
I915_WRITE(PRE_CSC_GAMC_DATA(pipe), lut[i].green); intel_de_write(dev_priv, PRE_CSC_GAMC_DATA(pipe),
lut[i].green);
} }
/* Clamp values > 1.0. */ /* Clamp values > 1.0. */
while (i++ < 35) while (i++ < 35)
I915_WRITE(PRE_CSC_GAMC_DATA(pipe), 1 << 16); intel_de_write(dev_priv, PRE_CSC_GAMC_DATA(pipe), 1 << 16);
} }
static void glk_load_degamma_lut_linear(const struct intel_crtc_state *crtc_state) static void glk_load_degamma_lut_linear(const struct intel_crtc_state *crtc_state)
@ -750,18 +778,19 @@ static void glk_load_degamma_lut_linear(const struct intel_crtc_state *crtc_stat
* ignore the index bits, so we need to reset it to index 0 * ignore the index bits, so we need to reset it to index 0
* separately. * separately.
*/ */
I915_WRITE(PRE_CSC_GAMC_INDEX(pipe), 0); intel_de_write(dev_priv, PRE_CSC_GAMC_INDEX(pipe), 0);
I915_WRITE(PRE_CSC_GAMC_INDEX(pipe), PRE_CSC_GAMC_AUTO_INCREMENT); intel_de_write(dev_priv, PRE_CSC_GAMC_INDEX(pipe),
PRE_CSC_GAMC_AUTO_INCREMENT);
for (i = 0; i < lut_size; i++) { for (i = 0; i < lut_size; i++) {
u32 v = (i << 16) / (lut_size - 1); u32 v = (i << 16) / (lut_size - 1);
I915_WRITE(PRE_CSC_GAMC_DATA(pipe), v); intel_de_write(dev_priv, PRE_CSC_GAMC_DATA(pipe), v);
} }
/* Clamp values > 1.0. */ /* Clamp values > 1.0. */
while (i++ < 35) while (i++ < 35)
I915_WRITE(PRE_CSC_GAMC_DATA(pipe), 1 << 16); intel_de_write(dev_priv, PRE_CSC_GAMC_DATA(pipe), 1 << 16);
} }
static void glk_load_luts(const struct intel_crtc_state *crtc_state) static void glk_load_luts(const struct intel_crtc_state *crtc_state)
@ -954,10 +983,10 @@ static void chv_load_cgm_degamma(struct intel_crtc *crtc,
enum pipe pipe = crtc->pipe; enum pipe pipe = crtc->pipe;
for (i = 0; i < lut_size; i++) { for (i = 0; i < lut_size; i++) {
I915_WRITE(CGM_PIPE_DEGAMMA(pipe, i, 0), intel_de_write(dev_priv, CGM_PIPE_DEGAMMA(pipe, i, 0),
chv_cgm_degamma_ldw(&lut[i])); chv_cgm_degamma_ldw(&lut[i]));
I915_WRITE(CGM_PIPE_DEGAMMA(pipe, i, 1), intel_de_write(dev_priv, CGM_PIPE_DEGAMMA(pipe, i, 1),
chv_cgm_degamma_udw(&lut[i])); chv_cgm_degamma_udw(&lut[i]));
} }
} }
@ -981,10 +1010,10 @@ static void chv_load_cgm_gamma(struct intel_crtc *crtc,
enum pipe pipe = crtc->pipe; enum pipe pipe = crtc->pipe;
for (i = 0; i < lut_size; i++) { for (i = 0; i < lut_size; i++) {
I915_WRITE(CGM_PIPE_GAMMA(pipe, i, 0), intel_de_write(dev_priv, CGM_PIPE_GAMMA(pipe, i, 0),
chv_cgm_gamma_ldw(&lut[i])); chv_cgm_gamma_ldw(&lut[i]));
I915_WRITE(CGM_PIPE_GAMMA(pipe, i, 1), intel_de_write(dev_priv, CGM_PIPE_GAMMA(pipe, i, 1),
chv_cgm_gamma_udw(&lut[i])); chv_cgm_gamma_udw(&lut[i]));
} }
} }
@ -1167,7 +1196,8 @@ static int check_luts(const struct intel_crtc_state *crtc_state)
/* C8 relies on its palette being stored in the legacy LUT */ /* C8 relies on its palette being stored in the legacy LUT */
if (crtc_state->c8_planes) { if (crtc_state->c8_planes) {
DRM_DEBUG_KMS("C8 pixelformat requires the legacy LUT\n"); drm_dbg_kms(&dev_priv->drm,
"C8 pixelformat requires the legacy LUT\n");
return -EINVAL; return -EINVAL;
} }
@ -1663,9 +1693,9 @@ i9xx_read_lut_8(const struct intel_crtc_state *crtc_state)
for (i = 0; i < LEGACY_LUT_LENGTH; i++) { for (i = 0; i < LEGACY_LUT_LENGTH; i++) {
if (HAS_GMCH(dev_priv)) if (HAS_GMCH(dev_priv))
val = I915_READ(PALETTE(pipe, i)); val = intel_de_read(dev_priv, PALETTE(pipe, i));
else else
val = I915_READ(LGC_PALETTE(pipe, i)); val = intel_de_read(dev_priv, LGC_PALETTE(pipe, i));
blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET( blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET(
LGC_PALETTE_RED_MASK, val), 8); LGC_PALETTE_RED_MASK, val), 8);
@ -1706,8 +1736,8 @@ i965_read_lut_10p6(const struct intel_crtc_state *crtc_state)
blob_data = blob->data; blob_data = blob->data;
for (i = 0; i < lut_size - 1; i++) { for (i = 0; i < lut_size - 1; i++) {
val1 = I915_READ(PALETTE(pipe, 2 * i + 0)); val1 = intel_de_read(dev_priv, PALETTE(pipe, 2 * i + 0));
val2 = I915_READ(PALETTE(pipe, 2 * i + 1)); val2 = intel_de_read(dev_priv, PALETTE(pipe, 2 * i + 1));
blob_data[i].red = REG_FIELD_GET(PALETTE_RED_MASK, val2) << 8 | blob_data[i].red = REG_FIELD_GET(PALETTE_RED_MASK, val2) << 8 |
REG_FIELD_GET(PALETTE_RED_MASK, val1); REG_FIELD_GET(PALETTE_RED_MASK, val1);
@ -1718,11 +1748,11 @@ i965_read_lut_10p6(const struct intel_crtc_state *crtc_state)
} }
blob_data[i].red = REG_FIELD_GET(PIPEGCMAX_RGB_MASK, blob_data[i].red = REG_FIELD_GET(PIPEGCMAX_RGB_MASK,
I915_READ(PIPEGCMAX(pipe, 0))); intel_de_read(dev_priv, PIPEGCMAX(pipe, 0)));
blob_data[i].green = REG_FIELD_GET(PIPEGCMAX_RGB_MASK, blob_data[i].green = REG_FIELD_GET(PIPEGCMAX_RGB_MASK,
I915_READ(PIPEGCMAX(pipe, 1))); intel_de_read(dev_priv, PIPEGCMAX(pipe, 1)));
blob_data[i].blue = REG_FIELD_GET(PIPEGCMAX_RGB_MASK, blob_data[i].blue = REG_FIELD_GET(PIPEGCMAX_RGB_MASK,
I915_READ(PIPEGCMAX(pipe, 2))); intel_de_read(dev_priv, PIPEGCMAX(pipe, 2)));
return blob; return blob;
} }
@ -1758,13 +1788,13 @@ chv_read_cgm_lut(const struct intel_crtc_state *crtc_state)
blob_data = blob->data; blob_data = blob->data;
for (i = 0; i < lut_size; i++) { for (i = 0; i < lut_size; i++) {
val = I915_READ(CGM_PIPE_GAMMA(pipe, i, 0)); val = intel_de_read(dev_priv, CGM_PIPE_GAMMA(pipe, i, 0));
blob_data[i].green = intel_color_lut_pack(REG_FIELD_GET( blob_data[i].green = intel_color_lut_pack(REG_FIELD_GET(
CGM_PIPE_GAMMA_GREEN_MASK, val), 10); CGM_PIPE_GAMMA_GREEN_MASK, val), 10);
blob_data[i].blue = intel_color_lut_pack(REG_FIELD_GET( blob_data[i].blue = intel_color_lut_pack(REG_FIELD_GET(
CGM_PIPE_GAMMA_BLUE_MASK, val), 10); CGM_PIPE_GAMMA_BLUE_MASK, val), 10);
val = I915_READ(CGM_PIPE_GAMMA(pipe, i, 1)); val = intel_de_read(dev_priv, CGM_PIPE_GAMMA(pipe, i, 1));
blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET( blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET(
CGM_PIPE_GAMMA_RED_MASK, val), 10); CGM_PIPE_GAMMA_RED_MASK, val), 10);
} }
@ -1800,7 +1830,7 @@ ilk_read_lut_10(const struct intel_crtc_state *crtc_state)
blob_data = blob->data; blob_data = blob->data;
for (i = 0; i < lut_size; i++) { for (i = 0; i < lut_size; i++) {
val = I915_READ(PREC_PALETTE(pipe, i)); val = intel_de_read(dev_priv, PREC_PALETTE(pipe, i));
blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET( blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET(
PREC_PALETTE_RED_MASK, val), 10); PREC_PALETTE_RED_MASK, val), 10);
@ -1846,11 +1876,11 @@ glk_read_lut_10(const struct intel_crtc_state *crtc_state, u32 prec_index)
blob_data = blob->data; blob_data = blob->data;
I915_WRITE(PREC_PAL_INDEX(pipe), prec_index | intel_de_write(dev_priv, PREC_PAL_INDEX(pipe),
PAL_PREC_AUTO_INCREMENT); prec_index | PAL_PREC_AUTO_INCREMENT);
for (i = 0; i < hw_lut_size; i++) { for (i = 0; i < hw_lut_size; i++) {
val = I915_READ(PREC_PAL_DATA(pipe)); val = intel_de_read(dev_priv, PREC_PAL_DATA(pipe));
blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET( blob_data[i].red = intel_color_lut_pack(REG_FIELD_GET(
PREC_PAL_DATA_RED_MASK, val), 10); PREC_PAL_DATA_RED_MASK, val), 10);
@ -1860,7 +1890,7 @@ glk_read_lut_10(const struct intel_crtc_state *crtc_state, u32 prec_index)
PREC_PAL_DATA_BLUE_MASK, val), 10); PREC_PAL_DATA_BLUE_MASK, val), 10);
} }
I915_WRITE(PREC_PAL_INDEX(pipe), 0); intel_de_write(dev_priv, PREC_PAL_INDEX(pipe), 0);
return blob; return blob;
} }


@@ -48,7 +48,7 @@ cnl_get_procmon_ref_values(struct drm_i915_private *dev_priv, enum phy phy)
 	const struct cnl_procmon *procmon;
 	u32 val;
 
-	val = I915_READ(ICL_PORT_COMP_DW3(phy));
+	val = intel_de_read(dev_priv, ICL_PORT_COMP_DW3(phy));
 	switch (val & (PROCESS_INFO_MASK | VOLTAGE_INFO_MASK)) {
 	default:
 		MISSING_CASE(val);
@@ -81,26 +81,27 @@ static void cnl_set_procmon_ref_values(struct drm_i915_private *dev_priv,
 	procmon = cnl_get_procmon_ref_values(dev_priv, phy);
 
-	val = I915_READ(ICL_PORT_COMP_DW1(phy));
+	val = intel_de_read(dev_priv, ICL_PORT_COMP_DW1(phy));
 	val &= ~((0xff << 16) | 0xff);
 	val |= procmon->dw1;
-	I915_WRITE(ICL_PORT_COMP_DW1(phy), val);
+	intel_de_write(dev_priv, ICL_PORT_COMP_DW1(phy), val);
 
-	I915_WRITE(ICL_PORT_COMP_DW9(phy), procmon->dw9);
-	I915_WRITE(ICL_PORT_COMP_DW10(phy), procmon->dw10);
+	intel_de_write(dev_priv, ICL_PORT_COMP_DW9(phy), procmon->dw9);
+	intel_de_write(dev_priv, ICL_PORT_COMP_DW10(phy), procmon->dw10);
 }
 
 static bool check_phy_reg(struct drm_i915_private *dev_priv,
 			  enum phy phy, i915_reg_t reg, u32 mask,
 			  u32 expected_val)
 {
-	u32 val = I915_READ(reg);
+	u32 val = intel_de_read(dev_priv, reg);
 
 	if ((val & mask) != expected_val) {
-		DRM_DEBUG_DRIVER("Combo PHY %c reg %08x state mismatch: "
-				 "current %08x mask %08x expected %08x\n",
-				 phy_name(phy),
-				 reg.reg, val, mask, expected_val);
+		drm_dbg(&dev_priv->drm,
+			"Combo PHY %c reg %08x state mismatch: "
+			"current %08x mask %08x expected %08x\n",
+			phy_name(phy),
+			reg.reg, val, mask, expected_val);
 		return false;
 	}
@@ -127,8 +128,8 @@ static bool cnl_verify_procmon_ref_values(struct drm_i915_private *dev_priv,
 
 static bool cnl_combo_phy_enabled(struct drm_i915_private *dev_priv)
 {
-	return !(I915_READ(CHICKEN_MISC_2) & CNL_COMP_PWR_DOWN) &&
-		(I915_READ(CNL_PORT_COMP_DW0) & COMP_INIT);
+	return !(intel_de_read(dev_priv, CHICKEN_MISC_2) & CNL_COMP_PWR_DOWN) &&
+		(intel_de_read(dev_priv, CNL_PORT_COMP_DW0) & COMP_INIT);
 }
 
 static bool cnl_combo_phy_verify_state(struct drm_i915_private *dev_priv)
@@ -151,20 +152,20 @@ static void cnl_combo_phys_init(struct drm_i915_private *dev_priv)
 {
 	u32 val;
 
-	val = I915_READ(CHICKEN_MISC_2);
+	val = intel_de_read(dev_priv, CHICKEN_MISC_2);
 	val &= ~CNL_COMP_PWR_DOWN;
-	I915_WRITE(CHICKEN_MISC_2, val);
+	intel_de_write(dev_priv, CHICKEN_MISC_2, val);
 
 	/* Dummy PORT_A to get the correct CNL register from the ICL macro */
 	cnl_set_procmon_ref_values(dev_priv, PHY_A);
 
-	val = I915_READ(CNL_PORT_COMP_DW0);
+	val = intel_de_read(dev_priv, CNL_PORT_COMP_DW0);
 	val |= COMP_INIT;
-	I915_WRITE(CNL_PORT_COMP_DW0, val);
+	intel_de_write(dev_priv, CNL_PORT_COMP_DW0, val);
 
-	val = I915_READ(CNL_PORT_CL1CM_DW5);
+	val = intel_de_read(dev_priv, CNL_PORT_CL1CM_DW5);
 	val |= CL_POWER_DOWN_ENABLE;
-	I915_WRITE(CNL_PORT_CL1CM_DW5, val);
+	intel_de_write(dev_priv, CNL_PORT_CL1CM_DW5, val);
 }
 
 static void cnl_combo_phys_uninit(struct drm_i915_private *dev_priv)
@@ -172,11 +173,12 @@ static void cnl_combo_phys_uninit(struct drm_i915_private *dev_priv)
 	u32 val;
 
 	if (!cnl_combo_phy_verify_state(dev_priv))
-		DRM_WARN("Combo PHY HW state changed unexpectedly.\n");
+		drm_warn(&dev_priv->drm,
+			 "Combo PHY HW state changed unexpectedly.\n");
 
-	val = I915_READ(CHICKEN_MISC_2);
+	val = intel_de_read(dev_priv, CHICKEN_MISC_2);
 	val |= CNL_COMP_PWR_DOWN;
-	I915_WRITE(CHICKEN_MISC_2, val);
+	intel_de_write(dev_priv, CHICKEN_MISC_2, val);
 }
 
 static bool icl_combo_phy_enabled(struct drm_i915_private *dev_priv,
@@ -184,27 +186,65 @@ static bool icl_combo_phy_enabled(struct drm_i915_private *dev_priv,
 {
 	/* The PHY C added by EHL has no PHY_MISC register */
 	if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_C)
-		return I915_READ(ICL_PORT_COMP_DW0(phy)) & COMP_INIT;
+		return intel_de_read(dev_priv, ICL_PORT_COMP_DW0(phy)) & COMP_INIT;
 	else
-		return !(I915_READ(ICL_PHY_MISC(phy)) &
+		return !(intel_de_read(dev_priv, ICL_PHY_MISC(phy)) &
 			 ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN) &&
-			(I915_READ(ICL_PORT_COMP_DW0(phy)) & COMP_INIT);
+			(intel_de_read(dev_priv, ICL_PORT_COMP_DW0(phy)) & COMP_INIT);
+}
+
+static bool ehl_vbt_ddi_d_present(struct drm_i915_private *i915)
+{
+	bool ddi_a_present = intel_bios_is_port_present(i915, PORT_A);
+	bool ddi_d_present = intel_bios_is_port_present(i915, PORT_D);
+	bool dsi_present = intel_bios_is_dsi_present(i915, NULL);
+
+	/*
+	 * VBT's 'dvo port' field for child devices references the DDI, not
+	 * the PHY. So if combo PHY A is wired up to drive an external
+	 * display, we should see a child device present on PORT_D and
+	 * nothing on PORT_A and no DSI.
+	 */
+	if (ddi_d_present && !ddi_a_present && !dsi_present)
+		return true;
+
+	/*
+	 * If we encounter a VBT that claims to have an external display on
+	 * DDI-D _and_ an internal display on DDI-A/DSI leave an error message
+	 * in the log and let the internal display win.
+	 */
+	if (ddi_d_present)
+		drm_err(&i915->drm,
+			"VBT claims to have both internal and external displays on PHY A. Configuring for internal.\n");
+
+	return false;
 }
 
 static bool icl_combo_phy_verify_state(struct drm_i915_private *dev_priv,
				       enum phy phy)
 {
 	bool ret;
+	u32 expected_val = 0;
 
 	if (!icl_combo_phy_enabled(dev_priv, phy))
 		return false;
 
 	ret = cnl_verify_procmon_ref_values(dev_priv, phy);
 
-	if (phy == PHY_A)
+	if (phy == PHY_A) {
 		ret &= check_phy_reg(dev_priv, phy, ICL_PORT_COMP_DW8(phy),
 				     IREFGEN, IREFGEN);
 
+		if (IS_ELKHARTLAKE(dev_priv)) {
+			if (ehl_vbt_ddi_d_present(dev_priv))
+				expected_val = ICL_PHY_MISC_MUX_DDID;
+
+			ret &= check_phy_reg(dev_priv, phy, ICL_PHY_MISC(phy),
+					     ICL_PHY_MISC_MUX_DDID,
+					     expected_val);
+		}
+	}
+
 	ret &= check_phy_reg(dev_priv, phy, ICL_PORT_CL_DW5(phy),
 			     CL_POWER_DOWN_ENABLE, CL_POWER_DOWN_ENABLE);
@@ -219,7 +259,7 @@ void intel_combo_phy_power_up_lanes(struct drm_i915_private *dev_priv,
 	u32 val;
 
 	if (is_dsi) {
-		WARN_ON(lane_reversal);
+		drm_WARN_ON(&dev_priv->drm, lane_reversal);
 
 		switch (lane_count) {
 		case 1:
@@ -257,36 +297,10 @@ void intel_combo_phy_power_up_lanes(struct drm_i915_private *dev_priv,
 		}
 	}
 
-	val = I915_READ(ICL_PORT_CL_DW10(phy));
+	val = intel_de_read(dev_priv, ICL_PORT_CL_DW10(phy));
 	val &= ~PWR_DOWN_LN_MASK;
 	val |= lane_mask << PWR_DOWN_LN_SHIFT;
-	I915_WRITE(ICL_PORT_CL_DW10(phy), val);
-}
-
-static u32 ehl_combo_phy_a_mux(struct drm_i915_private *i915, u32 val)
-{
-	bool ddi_a_present = i915->vbt.ddi_port_info[PORT_A].child != NULL;
-	bool ddi_d_present = i915->vbt.ddi_port_info[PORT_D].child != NULL;
-	bool dsi_present = intel_bios_is_dsi_present(i915, NULL);
-
-	/*
-	 * VBT's 'dvo port' field for child devices references the DDI, not
-	 * the PHY. So if combo PHY A is wired up to drive an external
-	 * display, we should see a child device present on PORT_D and
-	 * nothing on PORT_A and no DSI.
-	 */
-	if (ddi_d_present && !ddi_a_present && !dsi_present)
-		return val | ICL_PHY_MISC_MUX_DDID;
-
-	/*
-	 * If we encounter a VBT that claims to have an external display on
-	 * DDI-D _and_ an internal display on DDI-A/DSI leave an error message
-	 * in the log and let the internal display win.
-	 */
-	if (ddi_d_present)
-		DRM_ERROR("VBT claims to have both internal and external displays on PHY A. Configuring for internal.\n");
-
-	return val & ~ICL_PHY_MISC_MUX_DDID;
+	intel_de_write(dev_priv, ICL_PORT_CL_DW10(phy), val);
 }
 
 static void icl_combo_phys_init(struct drm_i915_private *dev_priv)
@@ -297,8 +311,9 @@ static void icl_combo_phys_init(struct drm_i915_private *dev_priv)
 		u32 val;
 
 		if (icl_combo_phy_verify_state(dev_priv, phy)) {
-			DRM_DEBUG_DRIVER("Combo PHY %c already enabled, won't reprogram it.\n",
-					 phy_name(phy));
+			drm_dbg(&dev_priv->drm,
+				"Combo PHY %c already enabled, won't reprogram it.\n",
+				phy_name(phy));
 			continue;
 		}
@@ -318,28 +333,33 @@ static void icl_combo_phys_init(struct drm_i915_private *dev_priv)
 		 * based on whether our VBT indicates the presence of any
 		 * "internal" child devices.
 		 */
-		val = I915_READ(ICL_PHY_MISC(phy));
-		if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_A)
-			val = ehl_combo_phy_a_mux(dev_priv, val);
+		val = intel_de_read(dev_priv, ICL_PHY_MISC(phy));
+		if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_A) {
+			val &= ~ICL_PHY_MISC_MUX_DDID;
+
+			if (ehl_vbt_ddi_d_present(dev_priv))
+				val |= ICL_PHY_MISC_MUX_DDID;
+		}
+
 		val &= ~ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN;
-		I915_WRITE(ICL_PHY_MISC(phy), val);
+		intel_de_write(dev_priv, ICL_PHY_MISC(phy), val);
 
 skip_phy_misc:
 		cnl_set_procmon_ref_values(dev_priv, phy);
 
 		if (phy == PHY_A) {
-			val = I915_READ(ICL_PORT_COMP_DW8(phy));
+			val = intel_de_read(dev_priv, ICL_PORT_COMP_DW8(phy));
 			val |= IREFGEN;
-			I915_WRITE(ICL_PORT_COMP_DW8(phy), val);
+			intel_de_write(dev_priv, ICL_PORT_COMP_DW8(phy), val);
 		}
 
-		val = I915_READ(ICL_PORT_COMP_DW0(phy));
+		val = intel_de_read(dev_priv, ICL_PORT_COMP_DW0(phy));
 		val |= COMP_INIT;
-		I915_WRITE(ICL_PORT_COMP_DW0(phy), val);
+		intel_de_write(dev_priv, ICL_PORT_COMP_DW0(phy), val);
 
-		val = I915_READ(ICL_PORT_CL_DW5(phy));
+		val = intel_de_read(dev_priv, ICL_PORT_CL_DW5(phy));
 		val |= CL_POWER_DOWN_ENABLE;
-		I915_WRITE(ICL_PORT_CL_DW5(phy), val);
+		intel_de_write(dev_priv, ICL_PORT_CL_DW5(phy), val);
 	}
 }
@@ -352,7 +372,8 @@ static void icl_combo_phys_uninit(struct drm_i915_private *dev_priv)
 		if (phy == PHY_A &&
 		    !icl_combo_phy_verify_state(dev_priv, phy))
-			DRM_WARN("Combo PHY %c HW state changed unexpectedly\n",
-				 phy_name(phy));
+			drm_warn(&dev_priv->drm,
+				 "Combo PHY %c HW state changed unexpectedly\n",
+				 phy_name(phy));
 
 		/*
@@ -363,14 +384,14 @@ static void icl_combo_phys_uninit(struct drm_i915_private *dev_priv)
 		if (IS_ELKHARTLAKE(dev_priv) && phy == PHY_C)
 			goto skip_phy_misc;
 
-		val = I915_READ(ICL_PHY_MISC(phy));
+		val = intel_de_read(dev_priv, ICL_PHY_MISC(phy));
 		val |= ICL_PHY_MISC_DE_IO_COMP_PWR_DOWN;
-		I915_WRITE(ICL_PHY_MISC(phy), val);
+		intel_de_write(dev_priv, ICL_PHY_MISC(phy), val);
 
 skip_phy_misc:
-		val = I915_READ(ICL_PORT_COMP_DW0(phy));
+		val = intel_de_read(dev_priv, ICL_PORT_COMP_DW0(phy));
 		val &= ~COMP_INIT;
-		I915_WRITE(ICL_PORT_COMP_DW0(phy), val);
+		intel_de_write(dev_priv, ICL_PORT_COMP_DW0(phy), val);
 	}
 }


@@ -153,7 +153,7 @@ void intel_connector_attach_encoder(struct intel_connector *connector,
 bool intel_connector_get_hw_state(struct intel_connector *connector)
 {
 	enum pipe pipe = 0;
-	struct intel_encoder *encoder = connector->encoder;
+	struct intel_encoder *encoder = intel_attached_encoder(connector);
 
 	return encoder->get_hw_state(encoder, &pipe);
 }
@@ -162,7 +162,8 @@ enum pipe intel_connector_get_pipe(struct intel_connector *connector)
 {
 	struct drm_device *dev = connector->base.dev;
 
-	WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
+	drm_WARN_ON(dev,
+		    !drm_modeset_is_locked(&dev->mode_config.connection_mutex));
 
 	if (!connector->base.state->crtc)
 		return INVALID_PIPE;


@@ -75,7 +75,7 @@ bool intel_crt_port_enabled(struct drm_i915_private *dev_priv,
 {
 	u32 val;
 
-	val = I915_READ(adpa_reg);
+	val = intel_de_read(dev_priv, adpa_reg);
 
 	/* asserts want to know the pipe even if the port is disabled */
 	if (HAS_PCH_CPT(dev_priv))
@@ -112,7 +112,7 @@ static unsigned int intel_crt_get_flags(struct intel_encoder *encoder)
 	struct intel_crt *crt = intel_encoder_to_crt(encoder);
 	u32 tmp, flags = 0;
 
-	tmp = I915_READ(crt->adpa_reg);
+	tmp = intel_de_read(dev_priv, crt->adpa_reg);
 
 	if (tmp & ADPA_HSYNC_ACTIVE_HIGH)
 		flags |= DRM_MODE_FLAG_PHSYNC;
@@ -184,7 +184,7 @@ static void intel_crt_set_dpms(struct intel_encoder *encoder,
 		adpa |= ADPA_PIPE_SEL(crtc->pipe);
 
 	if (!HAS_PCH_SPLIT(dev_priv))
-		I915_WRITE(BCLRPAT(crtc->pipe), 0);
+		intel_de_write(dev_priv, BCLRPAT(crtc->pipe), 0);
 
 	switch (mode) {
 	case DRM_MODE_DPMS_ON:
@@ -201,7 +201,7 @@ static void intel_crt_set_dpms(struct intel_encoder *encoder,
 		break;
 	}
 
-	I915_WRITE(crt->adpa_reg, adpa);
+	intel_de_write(dev_priv, crt->adpa_reg, adpa);
 }
 
 static void intel_disable_crt(struct intel_encoder *encoder,
@@ -230,7 +230,7 @@ static void hsw_disable_crt(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 
-	WARN_ON(!old_crtc_state->has_pch_encoder);
+	drm_WARN_ON(&dev_priv->drm, !old_crtc_state->has_pch_encoder);
 
 	intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false);
 }
@@ -258,7 +258,7 @@ static void hsw_post_disable_crt(struct intel_encoder *encoder,
 	intel_ddi_fdi_post_disable(encoder, old_crtc_state, old_conn_state);
 
-	WARN_ON(!old_crtc_state->has_pch_encoder);
+	drm_WARN_ON(&dev_priv->drm, !old_crtc_state->has_pch_encoder);
 
 	intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, true);
 }
@@ -269,7 +269,7 @@ static void hsw_pre_pll_enable_crt(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 
-	WARN_ON(!crtc_state->has_pch_encoder);
+	drm_WARN_ON(&dev_priv->drm, !crtc_state->has_pch_encoder);
 
 	intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, false);
 }
@@ -282,7 +282,7 @@ static void hsw_pre_enable_crt(struct intel_encoder *encoder,
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	enum pipe pipe = crtc->pipe;
 
-	WARN_ON(!crtc_state->has_pch_encoder);
+	drm_WARN_ON(&dev_priv->drm, !crtc_state->has_pch_encoder);
 
 	intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
@@ -299,7 +299,13 @@ static void hsw_enable_crt(struct intel_encoder *encoder,
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	enum pipe pipe = crtc->pipe;
 
-	WARN_ON(!crtc_state->has_pch_encoder);
+	drm_WARN_ON(&dev_priv->drm, !crtc_state->has_pch_encoder);
+
+	intel_enable_pipe(crtc_state);
+
+	lpt_pch_enable(crtc_state);
+
+	intel_crtc_vblank_on(crtc_state);
 
 	intel_crt_set_dpms(encoder, crtc_state, DRM_MODE_DPMS_ON);
@@ -414,7 +420,8 @@ static int hsw_crt_compute_config(struct intel_encoder *encoder,
 	/* LPT FDI RX only supports 8bpc. */
 	if (HAS_PCH_LPT(dev_priv)) {
 		if (pipe_config->bw_constrained && pipe_config->pipe_bpp < 24) {
-			DRM_DEBUG_KMS("LPT only supports 24bpp\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "LPT only supports 24bpp\n");
 			return -EINVAL;
 		}
@@ -442,34 +449,37 @@ static bool ilk_crt_detect_hotplug(struct drm_connector *connector)
 		crt->force_hotplug_required = false;
 
-		save_adpa = adpa = I915_READ(crt->adpa_reg);
-		DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa);
+		save_adpa = adpa = intel_de_read(dev_priv, crt->adpa_reg);
+		drm_dbg_kms(&dev_priv->drm,
+			    "trigger hotplug detect cycle: adpa=0x%x\n", adpa);
 
 		adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER;
 		if (turn_off_dac)
 			adpa &= ~ADPA_DAC_ENABLE;
 
-		I915_WRITE(crt->adpa_reg, adpa);
+		intel_de_write(dev_priv, crt->adpa_reg, adpa);
 
 		if (intel_de_wait_for_clear(dev_priv,
 					    crt->adpa_reg,
 					    ADPA_CRT_HOTPLUG_FORCE_TRIGGER,
 					    1000))
-			DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
+			drm_dbg_kms(&dev_priv->drm,
+				    "timed out waiting for FORCE_TRIGGER");
 
 		if (turn_off_dac) {
-			I915_WRITE(crt->adpa_reg, save_adpa);
-			POSTING_READ(crt->adpa_reg);
+			intel_de_write(dev_priv, crt->adpa_reg, save_adpa);
+			intel_de_posting_read(dev_priv, crt->adpa_reg);
 		}
 	}
 
 	/* Check the status to see if both blue and green are on now */
-	adpa = I915_READ(crt->adpa_reg);
+	adpa = intel_de_read(dev_priv, crt->adpa_reg);
 	if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0)
 		ret = true;
 	else
 		ret = false;
-	DRM_DEBUG_KMS("ironlake hotplug adpa=0x%x, result %d\n", adpa, ret);
+	drm_dbg_kms(&dev_priv->drm, "ironlake hotplug adpa=0x%x, result %d\n",
+		    adpa, ret);
 
 	return ret;
 }
@@ -498,27 +508,30 @@ static bool valleyview_crt_detect_hotplug(struct drm_connector *connector)
 	 */
 	reenable_hpd = intel_hpd_disable(dev_priv, crt->base.hpd_pin);
 
-	save_adpa = adpa = I915_READ(crt->adpa_reg);
-	DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa);
+	save_adpa = adpa = intel_de_read(dev_priv, crt->adpa_reg);
+	drm_dbg_kms(&dev_priv->drm,
+		    "trigger hotplug detect cycle: adpa=0x%x\n", adpa);
 
 	adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER;
 
-	I915_WRITE(crt->adpa_reg, adpa);
+	intel_de_write(dev_priv, crt->adpa_reg, adpa);
 
 	if (intel_de_wait_for_clear(dev_priv, crt->adpa_reg,
 				    ADPA_CRT_HOTPLUG_FORCE_TRIGGER, 1000)) {
-		DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
-		I915_WRITE(crt->adpa_reg, save_adpa);
+		drm_dbg_kms(&dev_priv->drm,
+			    "timed out waiting for FORCE_TRIGGER");
+		intel_de_write(dev_priv, crt->adpa_reg, save_adpa);
 	}
 
 	/* Check the status to see if both blue and green are on now */
-	adpa = I915_READ(crt->adpa_reg);
+	adpa = intel_de_read(dev_priv, crt->adpa_reg);
 	if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0)
 		ret = true;
 	else
 		ret = false;
 
-	DRM_DEBUG_KMS("valleyview hotplug adpa=0x%x, result %d\n", adpa, ret);
+	drm_dbg_kms(&dev_priv->drm,
+		    "valleyview hotplug adpa=0x%x, result %d\n", adpa, ret);
 
 	if (reenable_hpd)
 		intel_hpd_enable(dev_priv, crt->base.hpd_pin);
@@ -558,15 +571,16 @@ static bool intel_crt_detect_hotplug(struct drm_connector *connector)
 		/* wait for FORCE_DETECT to go off */
 		if (intel_de_wait_for_clear(dev_priv, PORT_HOTPLUG_EN,
 					    CRT_HOTPLUG_FORCE_DETECT, 1000))
-			DRM_DEBUG_KMS("timed out waiting for FORCE_DETECT to go off");
+			drm_dbg_kms(&dev_priv->drm,
+				    "timed out waiting for FORCE_DETECT to go off");
 	}
 
-	stat = I915_READ(PORT_HOTPLUG_STAT);
+	stat = intel_de_read(dev_priv, PORT_HOTPLUG_STAT);
 	if ((stat & CRT_HOTPLUG_MONITOR_MASK) != CRT_HOTPLUG_MONITOR_NONE)
 		ret = true;
 
 	/* clear the interrupt we just generated, if any */
-	I915_WRITE(PORT_HOTPLUG_STAT, CRT_HOTPLUG_INT_STATUS);
+	intel_de_write(dev_priv, PORT_HOTPLUG_STAT, CRT_HOTPLUG_INT_STATUS);
 
 	i915_hotplug_interrupt_update(dev_priv, CRT_HOTPLUG_FORCE_DETECT, 0);
@@ -629,13 +643,16 @@ static bool intel_crt_detect_ddc(struct drm_connector *connector)
 		 * have to check the EDID input spec of the attached device.
		 */
 		if (!is_digital) {
-			DRM_DEBUG_KMS("CRT detected via DDC:0x50 [EDID]\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "CRT detected via DDC:0x50 [EDID]\n");
 			ret = true;
 		} else {
-			DRM_DEBUG_KMS("CRT not detected via DDC:0x50 [EDID reports a digital panel]\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "CRT not detected via DDC:0x50 [EDID reports a digital panel]\n");
 		}
 	} else {
-		DRM_DEBUG_KMS("CRT not detected via DDC:0x50 [no valid EDID found]\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "CRT not detected via DDC:0x50 [no valid EDID found]\n");
 	}
 
 	kfree(edid);
@@ -660,7 +677,7 @@ intel_crt_load_detect(struct intel_crt *crt, u32 pipe)
u8 st00; u8 st00;
enum drm_connector_status status; enum drm_connector_status status;
DRM_DEBUG_KMS("starting load-detect on CRT\n"); drm_dbg_kms(&dev_priv->drm, "starting load-detect on CRT\n");
bclrpat_reg = BCLRPAT(pipe); bclrpat_reg = BCLRPAT(pipe);
vtotal_reg = VTOTAL(pipe); vtotal_reg = VTOTAL(pipe);
@ -706,7 +723,7 @@ intel_crt_load_detect(struct intel_crt *crt, u32 pipe)
* Yes, this will flicker * Yes, this will flicker
*/ */
if (vblank_start <= vactive && vblank_end >= vtotal) { if (vblank_start <= vactive && vblank_end >= vtotal) {
u32 vsync = I915_READ(vsync_reg); u32 vsync = intel_de_read(dev_priv, vsync_reg);
u32 vsync_start = (vsync & 0xffff) + 1; u32 vsync_start = (vsync & 0xffff) + 1;
vblank_start = vsync_start; vblank_start = vsync_start;
@ -801,9 +818,9 @@ intel_crt_detect(struct drm_connector *connector,
int status, ret; int status, ret;
struct intel_load_detect_pipe tmp; struct intel_load_detect_pipe tmp;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] force=%d\n", drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s] force=%d\n",
connector->base.id, connector->name, connector->base.id, connector->name,
force); force);
if (i915_modparams.load_detect_test) { if (i915_modparams.load_detect_test) {
wakeref = intel_display_power_get(dev_priv, wakeref = intel_display_power_get(dev_priv,
@ -824,11 +841,13 @@ intel_crt_detect(struct drm_connector *connector,
* only trust an assertion that the monitor is connected. * only trust an assertion that the monitor is connected.
*/ */
if (intel_crt_detect_hotplug(connector)) { if (intel_crt_detect_hotplug(connector)) {
DRM_DEBUG_KMS("CRT detected via hotplug\n"); drm_dbg_kms(&dev_priv->drm,
"CRT detected via hotplug\n");
status = connector_status_connected; status = connector_status_connected;
goto out; goto out;
} else } else
DRM_DEBUG_KMS("CRT not detected via hotplug\n"); drm_dbg_kms(&dev_priv->drm,
"CRT not detected via hotplug\n");
} }
if (intel_crt_detect_ddc(connector)) { if (intel_crt_detect_ddc(connector)) {
@ -918,13 +937,13 @@ void intel_crt_reset(struct drm_encoder *encoder)
if (INTEL_GEN(dev_priv) >= 5) { if (INTEL_GEN(dev_priv) >= 5) {
u32 adpa; u32 adpa;
adpa = I915_READ(crt->adpa_reg); adpa = intel_de_read(dev_priv, crt->adpa_reg);
adpa &= ~ADPA_CRT_HOTPLUG_MASK; adpa &= ~ADPA_CRT_HOTPLUG_MASK;
adpa |= ADPA_HOTPLUG_BITS; adpa |= ADPA_HOTPLUG_BITS;
I915_WRITE(crt->adpa_reg, adpa); intel_de_write(dev_priv, crt->adpa_reg, adpa);
POSTING_READ(crt->adpa_reg); intel_de_posting_read(dev_priv, crt->adpa_reg);
DRM_DEBUG_KMS("crt adpa set to 0x%x\n", adpa); drm_dbg_kms(&dev_priv->drm, "crt adpa set to 0x%x\n", adpa);
crt->force_hotplug_required = true; crt->force_hotplug_required = true;
} }
@ -969,7 +988,7 @@ void intel_crt_init(struct drm_i915_private *dev_priv)
else else
adpa_reg = ADPA; adpa_reg = ADPA;
adpa = I915_READ(adpa_reg); adpa = intel_de_read(dev_priv, adpa_reg);
if ((adpa & ADPA_DAC_ENABLE) == 0) { if ((adpa & ADPA_DAC_ENABLE) == 0) {
/* /*
* On some machines (some IVB at least) CRT can be * On some machines (some IVB at least) CRT can be
@ -979,11 +998,11 @@ void intel_crt_init(struct drm_i915_private *dev_priv)
* take. So the only way to tell is attempt to enable * take. So the only way to tell is attempt to enable
* it and see what happens. * it and see what happens.
*/ */
I915_WRITE(adpa_reg, adpa | ADPA_DAC_ENABLE | intel_de_write(dev_priv, adpa_reg,
ADPA_HSYNC_CNTL_DISABLE | ADPA_VSYNC_CNTL_DISABLE); adpa | ADPA_DAC_ENABLE | ADPA_HSYNC_CNTL_DISABLE | ADPA_VSYNC_CNTL_DISABLE);
if ((I915_READ(adpa_reg) & ADPA_DAC_ENABLE) == 0) if ((intel_de_read(dev_priv, adpa_reg) & ADPA_DAC_ENABLE) == 0)
return; return;
I915_WRITE(adpa_reg, adpa); intel_de_write(dev_priv, adpa_reg, adpa);
} }
crt = kzalloc(sizeof(struct intel_crt), GFP_KERNEL); crt = kzalloc(sizeof(struct intel_crt), GFP_KERNEL);
@ -1027,6 +1046,9 @@ void intel_crt_init(struct drm_i915_private *dev_priv)
!dmi_check_system(intel_spurious_crt_detect)) { !dmi_check_system(intel_spurious_crt_detect)) {
crt->base.hpd_pin = HPD_CRT; crt->base.hpd_pin = HPD_CRT;
crt->base.hotplug = intel_encoder_hotplug; crt->base.hotplug = intel_encoder_hotplug;
intel_connector->polled = DRM_CONNECTOR_POLL_HPD;
} else {
intel_connector->polled = DRM_CONNECTOR_POLL_CONNECT;
} }
if (HAS_DDI(dev_priv)) { if (HAS_DDI(dev_priv)) {
@ -1057,14 +1079,6 @@ void intel_crt_init(struct drm_i915_private *dev_priv)
drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs); drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);
if (!I915_HAS_HOTPLUG(dev_priv))
intel_connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/*
* Configure the automatic hotplug detection stuff
*/
crt->force_hotplug_required = false;
/* /*
* TODO: find a proper way to discover whether we need to set the the * TODO: find a proper way to discover whether we need to set the the
* polarity and link reversal bits or not, instead of relying on the * polarity and link reversal bits or not, instead of relying on the
@ -1074,7 +1088,8 @@ void intel_crt_init(struct drm_i915_private *dev_priv)
u32 fdi_config = FDI_RX_POLARITY_REVERSED_LPT | u32 fdi_config = FDI_RX_POLARITY_REVERSED_LPT |
FDI_RX_LINK_REVERSAL_OVERRIDE; FDI_RX_LINK_REVERSAL_OVERRIDE;
dev_priv->fdi_rx_config = I915_READ(FDI_RX_CTL(PIPE_A)) & fdi_config; dev_priv->fdi_rx_config = intel_de_read(dev_priv,
FDI_RX_CTL(PIPE_A)) & fdi_config;
} }
intel_crt_reset(&crt->base.base); intel_crt_reset(&crt->base.base);

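The hotplug paths above all share one shape: set a force-trigger bit, poll until the hardware clears it, then read the monitor-sense bits. A minimal simulation of that trigger-and-poll pattern, using toy stand-ins (the struct, function names, and bit values below are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-ins for the register bits used by the driver. */
#define FAKE_FORCE_TRIGGER (1u << 16)
#define FAKE_MONITOR_MASK  (3u << 24)

/* A fake register cell: the "hardware" clears the trigger bit and
 * latches the monitor status after a few polls. */
struct fake_adpa {
	uint32_t reg;
	int polls_until_done;
	uint32_t monitor_bits;
};

static uint32_t fake_read(struct fake_adpa *hw)
{
	if (hw->reg & FAKE_FORCE_TRIGGER) {
		if (hw->polls_until_done-- <= 0) {
			hw->reg &= ~FAKE_FORCE_TRIGGER;
			hw->reg |= hw->monitor_bits;
		}
	}
	return hw->reg;
}

/* Trigger a detect cycle and poll until the trigger self-clears,
 * mirroring the write / wait-for-clear / read-status flow above. */
static bool fake_crt_detect(struct fake_adpa *hw, int timeout_polls)
{
	hw->reg |= FAKE_FORCE_TRIGGER;
	while (timeout_polls-- > 0) {
		if (!(fake_read(hw) & FAKE_FORCE_TRIGGER))
			break;
	}
	return (fake_read(hw) & FAKE_MONITOR_MASK) != 0;
}
```

The real code bounds the wait with `intel_de_wait_for_clear(..., 1000)` and restores the saved register value on timeout; the sketch only models the happy path.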

@@ -27,6 +27,7 @@
 #include "i915_drv.h"
 #include "i915_reg.h"
 #include "intel_csr.h"
+#include "intel_de.h"
 /**
  * DOC: csr support for dmc
@@ -276,11 +277,11 @@ static void gen9_set_dc_state_debugmask(struct drm_i915_private *dev_priv)
 		mask |= DC_STATE_DEBUG_MASK_CORES;
 	/* The below bit doesn't need to be cleared ever afterwards */
-	val = I915_READ(DC_STATE_DEBUG);
+	val = intel_de_read(dev_priv, DC_STATE_DEBUG);
 	if ((val & mask) != mask) {
 		val |= mask;
-		I915_WRITE(DC_STATE_DEBUG, val);
-		POSTING_READ(DC_STATE_DEBUG);
+		intel_de_write(dev_priv, DC_STATE_DEBUG, val);
+		intel_de_posting_read(dev_priv, DC_STATE_DEBUG);
 	}
 }
@@ -298,12 +299,14 @@ void intel_csr_load_program(struct drm_i915_private *dev_priv)
 	u32 i, fw_size;
 	if (!HAS_CSR(dev_priv)) {
-		DRM_ERROR("No CSR support available for this platform\n");
+		drm_err(&dev_priv->drm,
+			"No CSR support available for this platform\n");
 		return;
 	}
 	if (!dev_priv->csr.dmc_payload) {
-		DRM_ERROR("Tried to program CSR with empty payload\n");
+		drm_err(&dev_priv->drm,
+			"Tried to program CSR with empty payload\n");
 		return;
 	}
@@ -313,13 +316,14 @@ void intel_csr_load_program(struct drm_i915_private *dev_priv)
 	preempt_disable();
 	for (i = 0; i < fw_size; i++)
-		I915_WRITE_FW(CSR_PROGRAM(i), payload[i]);
+		intel_uncore_write_fw(&dev_priv->uncore, CSR_PROGRAM(i),
+				      payload[i]);
 	preempt_enable();
 	for (i = 0; i < dev_priv->csr.mmio_count; i++) {
-		I915_WRITE(dev_priv->csr.mmioaddr[i],
-			   dev_priv->csr.mmiodata[i]);
+		intel_de_write(dev_priv, dev_priv->csr.mmioaddr[i],
+			       dev_priv->csr.mmiodata[i]);
 	}
 	dev_priv->csr.dc_state = 0;
@@ -607,7 +611,7 @@ static void parse_csr_fw(struct drm_i915_private *dev_priv,
 static void intel_csr_runtime_pm_get(struct drm_i915_private *dev_priv)
 {
-	WARN_ON(dev_priv->csr.wakeref);
+	drm_WARN_ON(&dev_priv->drm, dev_priv->csr.wakeref);
 	dev_priv->csr.wakeref =
 		intel_display_power_get(dev_priv, POWER_DOMAIN_INIT);
 }
@@ -636,16 +640,16 @@ static void csr_load_work_fn(struct work_struct *work)
 		intel_csr_load_program(dev_priv);
 		intel_csr_runtime_pm_put(dev_priv);
-		DRM_INFO("Finished loading DMC firmware %s (v%u.%u)\n",
-			 dev_priv->csr.fw_path,
-			 CSR_VERSION_MAJOR(csr->version),
-			 CSR_VERSION_MINOR(csr->version));
+		drm_info(&dev_priv->drm,
+			 "Finished loading DMC firmware %s (v%u.%u)\n",
+			 dev_priv->csr.fw_path, CSR_VERSION_MAJOR(csr->version),
+			 CSR_VERSION_MINOR(csr->version));
 	} else {
-		dev_notice(dev_priv->drm.dev,
-			   "Failed to load DMC firmware %s."
-			   " Disabling runtime power management.\n",
-			   csr->fw_path);
-		dev_notice(dev_priv->drm.dev, "DMC firmware homepage: %s",
-			   INTEL_UC_FIRMWARE_URL);
+		drm_notice(&dev_priv->drm,
+			   "Failed to load DMC firmware %s."
+			   " Disabling runtime power management.\n",
+			   csr->fw_path);
+		drm_notice(&dev_priv->drm, "DMC firmware homepage: %s",
+			   INTEL_UC_FIRMWARE_URL);
 	}
@@ -712,7 +716,8 @@ void intel_csr_ucode_init(struct drm_i915_private *dev_priv)
 	if (i915_modparams.dmc_firmware_path) {
 		if (strlen(i915_modparams.dmc_firmware_path) == 0) {
 			csr->fw_path = NULL;
-			DRM_INFO("Disabling CSR firmware and runtime PM\n");
+			drm_info(&dev_priv->drm,
+				 "Disabling CSR firmware and runtime PM\n");
 			return;
 		}
@@ -722,11 +727,12 @@ void intel_csr_ucode_init(struct drm_i915_private *dev_priv)
 	}
 	if (csr->fw_path == NULL) {
-		DRM_DEBUG_KMS("No known CSR firmware for platform, disabling runtime PM\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "No known CSR firmware for platform, disabling runtime PM\n");
 		return;
 	}
-	DRM_DEBUG_KMS("Loading %s\n", csr->fw_path);
+	drm_dbg_kms(&dev_priv->drm, "Loading %s\n", csr->fw_path);
 	schedule_work(&dev_priv->csr.work);
 }
@@ -783,7 +789,7 @@ void intel_csr_ucode_fini(struct drm_i915_private *dev_priv)
 		return;
 	intel_csr_ucode_suspend(dev_priv);
-	WARN_ON(dev_priv->csr.wakeref);
+	drm_WARN_ON(&dev_priv->drm, dev_priv->csr.wakeref);
 	kfree(dev_priv->csr.dmc_payload);
 }

File diff suppressed because it is too large

@@ -0,0 +1,72 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2019 Intel Corporation
 */

#ifndef __INTEL_DE_H__
#define __INTEL_DE_H__

#include "i915_drv.h"
#include "i915_reg.h"
#include "intel_uncore.h"

static inline u32
intel_de_read(struct drm_i915_private *i915, i915_reg_t reg)
{
	return intel_uncore_read(&i915->uncore, reg);
}

static inline void
intel_de_posting_read(struct drm_i915_private *i915, i915_reg_t reg)
{
	intel_uncore_posting_read(&i915->uncore, reg);
}

/* Note: read the warnings for intel_uncore_*_fw() functions! */
static inline u32
intel_de_read_fw(struct drm_i915_private *i915, i915_reg_t reg)
{
	return intel_uncore_read_fw(&i915->uncore, reg);
}

static inline void
intel_de_write(struct drm_i915_private *i915, i915_reg_t reg, u32 val)
{
	intel_uncore_write(&i915->uncore, reg, val);
}

/* Note: read the warnings for intel_uncore_*_fw() functions! */
static inline void
intel_de_write_fw(struct drm_i915_private *i915, i915_reg_t reg, u32 val)
{
	intel_uncore_write_fw(&i915->uncore, reg, val);
}

static inline void
intel_de_rmw(struct drm_i915_private *i915, i915_reg_t reg, u32 clear, u32 set)
{
	intel_uncore_rmw(&i915->uncore, reg, clear, set);
}

static inline int
intel_de_wait_for_register(struct drm_i915_private *i915, i915_reg_t reg,
			   u32 mask, u32 value, unsigned int timeout)
{
	return intel_wait_for_register(&i915->uncore, reg, mask, value, timeout);
}

static inline int
intel_de_wait_for_set(struct drm_i915_private *i915, i915_reg_t reg,
		      u32 mask, unsigned int timeout)
{
	return intel_de_wait_for_register(i915, reg, mask, mask, timeout);
}

static inline int
intel_de_wait_for_clear(struct drm_i915_private *i915, i915_reg_t reg,
			u32 mask, unsigned int timeout)
{
	return intel_de_wait_for_register(i915, reg, mask, 0, timeout);
}

#endif /* __INTEL_DE_H__ */
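The `intel_de_*()` helpers above add no behaviour of their own: each one only re-routes a device-pointer call to the embedded uncore, so display code can pass `dev_priv` instead of `&dev_priv->uncore`. The forwarding-wrapper pattern can be sketched with toy types (all names below are illustrative stand-ins, not the driver's structures):

```c
#include <assert.h>
#include <stdint.h>

/* A toy "uncore" with a tiny register file; the real intel_uncore
 * programs MMIO. */
struct toy_uncore {
	uint32_t regs[16];
};

struct toy_i915 {
	struct toy_uncore uncore;
};

static inline uint32_t toy_uncore_read(struct toy_uncore *uc, unsigned int reg)
{
	return uc->regs[reg];
}

static inline void toy_uncore_write(struct toy_uncore *uc, unsigned int reg,
				    uint32_t val)
{
	uc->regs[reg] = val;
}

/* Thin wrappers: pure forwarding from the device to its uncore,
 * matching the shape of intel_de_read()/intel_de_write(). */
static inline uint32_t toy_de_read(struct toy_i915 *i915, unsigned int reg)
{
	return toy_uncore_read(&i915->uncore, reg);
}

static inline void toy_de_write(struct toy_i915 *i915, unsigned int reg,
				uint32_t val)
{
	toy_uncore_write(&i915->uncore, reg, val);
}

/* Read-modify-write composed from the two primitives, like intel_de_rmw(). */
static inline void toy_de_rmw(struct toy_i915 *i915, unsigned int reg,
			      uint32_t clear, uint32_t set)
{
	toy_de_write(i915, reg, (toy_de_read(i915, reg) & ~clear) | set);
}
```

Because the wrappers are `static inline` and argument-for-argument forwards, a compiler collapses them to the underlying calls; the only cost is the narrower, display-specific surface.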


@@ -44,6 +44,7 @@ struct drm_modeset_acquire_ctx;
 struct drm_plane;
 struct drm_plane_state;
 struct i915_ggtt_view;
+struct intel_atomic_state;
 struct intel_crtc;
 struct intel_crtc_state;
 struct intel_digital_port;
@@ -469,6 +470,8 @@ enum phy_fia {
 	     ((connector) = to_intel_connector((__state)->base.connectors[__i].ptr), \
 	      (new_connector_state) = to_intel_digital_connector_state((__state)->base.connectors[__i].new_state), 1))
+u8 intel_calc_active_pipes(struct intel_atomic_state *state,
+			   u8 active_pipes);
 void intel_link_compute_m_n(u16 bpp, int nlanes,
 			    int pixel_clock, int link_clock,
 			    struct intel_link_m_n *m_n,
@@ -486,6 +489,7 @@ enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port);
 bool is_trans_port_sync_mode(const struct intel_crtc_state *state);
 void intel_plane_destroy(struct drm_plane *plane);
+void intel_enable_pipe(const struct intel_crtc_state *new_crtc_state);
 void intel_disable_pipe(const struct intel_crtc_state *old_crtc_state);
 void i830_enable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe);
 void i830_disable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe);
@@ -495,6 +499,7 @@ int vlv_get_cck_clock(struct drm_i915_private *dev_priv,
 		      const char *name, u32 reg, int ref_freq);
 int vlv_get_cck_clock_hpll(struct drm_i915_private *dev_priv,
 			   const char *name, u32 reg);
+void lpt_pch_enable(const struct intel_crtc_state *crtc_state);
 void lpt_disable_pch_transcoder(struct drm_i915_private *dev_priv);
 void lpt_disable_iclkip(struct drm_i915_private *dev_priv);
 void intel_init_display_hooks(struct drm_i915_private *dev_priv);
@@ -520,6 +525,7 @@ enum tc_port intel_port_to_tc(struct drm_i915_private *dev_priv,
 int intel_get_pipe_from_crtc_id_ioctl(struct drm_device *dev, void *data,
				      struct drm_file *file_priv);
 u32 intel_crtc_get_vblank_counter(struct intel_crtc *crtc);
+void intel_crtc_vblank_on(const struct intel_crtc_state *crtc_state);
 void intel_crtc_vblank_off(const struct intel_crtc_state *crtc_state);
 int ilk_get_lanes_required(int target_clock, int link_bw, int bpp);
@@ -610,6 +616,7 @@ intel_format_info_is_yuv_semiplanar(const struct drm_format_info *info,
 void intel_modeset_init_hw(struct drm_i915_private *i915);
 int intel_modeset_init(struct drm_i915_private *i915);
 void intel_modeset_driver_remove(struct drm_i915_private *i915);
+void intel_modeset_driver_remove_noirq(struct drm_i915_private *i915);
 void intel_display_resume(struct drm_device *dev);
 void intel_init_pch_refclk(struct drm_i915_private *dev_priv);


@@ -0,0 +1,20 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2020 Intel Corporation
 */

#ifndef __INTEL_DISPLAY_DEBUGFS_H__
#define __INTEL_DISPLAY_DEBUGFS_H__

struct drm_connector;
struct drm_i915_private;

#ifdef CONFIG_DEBUG_FS
int intel_display_debugfs_register(struct drm_i915_private *i915);
int intel_connector_debugfs_add(struct drm_connector *connector);
#else
static inline int intel_display_debugfs_register(struct drm_i915_private *i915) { return 0; }
static inline int intel_connector_debugfs_add(struct drm_connector *connector) { return 0; }
#endif

#endif /* __INTEL_DISPLAY_DEBUGFS_H__ */


@@ -307,6 +307,11 @@ intel_display_power_put_async(struct drm_i915_private *i915,
 }
 #endif
+enum dbuf_slice {
+	DBUF_S1,
+	DBUF_S2,
+};
 #define with_intel_display_power(i915, domain, wf) \
 	for ((wf) = intel_display_power_get((i915), (domain)); (wf); \
 	     intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0)
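`with_intel_display_power()` uses the `for`-loop-as-scope idiom: the init clause acquires and yields a non-zero wakeref cookie, the body runs exactly once, and the increment clause releases the reference and zeroes the cookie so the loop terminates. A standalone sketch of the idiom (all names below are toy stand-ins):

```c
#include <assert.h>

static int acquired; /* models the outstanding-reference count */

static int toy_power_get(void)
{
	acquired++;
	return 42; /* non-zero "wakeref" cookie */
}

static void toy_power_put(int wf)
{
	(void)wf;
	acquired--;
}

/* Acquire before the body, release after it, exactly once: the loop
 * condition (wf) is true on entry and false after the increment
 * clause zeroes it. */
#define with_toy_power(wf) \
	for ((wf) = toy_power_get(); (wf); toy_power_put(wf), (wf) = 0)
```

One caveat of the idiom (true of the kernel macro as well): a `break` inside the body skips the release clause, so early exits need care.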


@@ -44,8 +44,10 @@
 #include <media/cec-notifier.h>
 #include "i915_drv.h"
+#include "intel_de.h"
 struct drm_printer;
+struct __intel_global_objs_state;
 /*
  * Display related stuff
@@ -139,6 +141,9 @@ struct intel_encoder {
 	int (*compute_config)(struct intel_encoder *,
 			      struct intel_crtc_state *,
 			      struct drm_connector_state *);
+	int (*compute_config_late)(struct intel_encoder *,
+				   struct intel_crtc_state *,
+				   struct drm_connector_state *);
 	void (*update_prepare)(struct intel_atomic_state *,
 			       struct intel_encoder *,
 			       struct intel_crtc *);
@@ -214,6 +219,9 @@ struct intel_panel {
 		u8 controller;		/* bxt+ only */
 		struct pwm_device *pwm;
+		/* DPCD backlight */
+		u8 pwmgen_bit_count;
 		struct backlight_device *device;
 		/* Connector and platform specific backlight functions */
@@ -458,25 +466,8 @@ struct intel_atomic_state {
 	intel_wakeref_t wakeref;
-	struct {
-		/*
-		 * Logical state of cdclk (used for all scaling, watermark,
-		 * etc. calculations and checks). This is computed as if all
-		 * enabled crtcs were active.
-		 */
-		struct intel_cdclk_state logical;
-		/*
-		 * Actual state of cdclk, can be different from the logical
-		 * state only when all crtc's are DPMS off.
-		 */
-		struct intel_cdclk_state actual;
-		int force_min_cdclk;
-		bool force_min_cdclk_changed;
-		/* pipe to which cd2x update is synchronized */
-		enum pipe pipe;
-	} cdclk;
+	struct __intel_global_objs_state *global_objs;
+	int num_global_objs;
 	bool dpll_set, modeset;
@@ -491,10 +482,6 @@ struct intel_atomic_state {
 	u8 active_pipe_changes;
 	u8 active_pipes;
-	/* minimum acceptable cdclk for each pipe */
-	int min_cdclk[I915_MAX_PIPES];
-	/* minimum acceptable voltage level for each pipe */
-	u8 min_voltage_level[I915_MAX_PIPES];
 	struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS];
@@ -508,14 +495,11 @@ struct intel_atomic_state {
 	/*
 	 * active_pipes
-	 * min_cdclk[]
-	 * min_voltage_level[]
-	 * cdclk.*
 	 */
 	bool global_state_changed;
-	/* Gen9+ only */
-	struct skl_ddb_values wm_results;
+	/* Number of enabled DBuf slices */
+	u8 enabled_dbuf_slices_mask;
 	struct i915_sw_fence commit_ready;
@@ -611,6 +595,7 @@ struct intel_plane_state {
 struct intel_initial_plane_config {
 	struct intel_framebuffer *fb;
+	struct i915_vma *vma;
 	unsigned int tiling;
 	int size;
 	u32 base;
@@ -659,7 +644,6 @@ struct intel_crtc_scaler_state {
 struct intel_pipe_wm {
 	struct intel_wm_level wm[5];
-	u32 linetime;
 	bool fbc_wm_enabled;
 	bool pipe_enabled;
 	bool sprites_enabled;
@@ -675,7 +659,6 @@ struct skl_plane_wm {
 struct skl_pipe_wm {
 	struct skl_plane_wm planes[I915_MAX_PLANES];
-	u32 linetime;
 };
 enum vlv_wm_level {
@@ -1046,6 +1029,10 @@ struct intel_crtc_state {
 		struct drm_dsc_config config;
 	} dsc;
+	/* HSW+ linetime watermarks */
+	u16 linetime;
+	u16 ips_linetime;
 	/* Forward Error correction State */
 	bool fec_enable;
@@ -1467,7 +1454,7 @@ enc_to_dig_port(struct intel_encoder *encoder)
 }
 static inline struct intel_digital_port *
-conn_to_dig_port(struct intel_connector *connector)
+intel_attached_dig_port(struct intel_connector *connector)
 {
 	return enc_to_dig_port(intel_attached_encoder(connector));
 }
@@ -1484,6 +1471,11 @@ static inline struct intel_dp *enc_to_intel_dp(struct intel_encoder *encoder)
 	return &enc_to_dig_port(encoder)->dp;
 }
+static inline struct intel_dp *intel_attached_dp(struct intel_connector *connector)
+{
+	return enc_to_intel_dp(intel_attached_encoder(connector));
+}
 static inline bool intel_encoder_is_dp(struct intel_encoder *encoder)
 {
 	switch (encoder->type) {
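The `enc_to_dig_port()` / `intel_attached_dp()` helpers added above are containment casts: the encoder is embedded in the digital port, so recovering the port from the encoder is `container_of()`-style pointer arithmetic with no lookup. A toy sketch of that chain (the types and names below are illustrative stand-ins, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Same construction as the kernel's container_of(): subtract the
 * member's offset from the member's address to get the container. */
#define toy_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct toy_encoder { int id; };
struct toy_dp { int link_rate; };

/* The encoder is embedded in the digital port, as in intel_digital_port. */
struct toy_dig_port {
	struct toy_encoder base;
	struct toy_dp dp;
};

struct toy_connector {
	struct toy_encoder *encoder;
};

static struct toy_dig_port *toy_enc_to_dig_port(struct toy_encoder *enc)
{
	return toy_container_of(enc, struct toy_dig_port, base);
}

/* connector -> attached encoder -> containing port -> its dp member */
static struct toy_dp *toy_attached_dp(struct toy_connector *conn)
{
	return &toy_enc_to_dig_port(conn->encoder)->dp;
}
```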


@@ -57,10 +57,27 @@ static void set_aux_backlight_enable(struct intel_dp *intel_dp, bool enable)
  */
 static u32 intel_dp_aux_get_backlight(struct intel_connector *connector)
 {
-	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
 	u8 read_val[2] = { 0x0 };
+	u8 mode_reg;
 	u16 level = 0;
+	if (drm_dp_dpcd_readb(&intel_dp->aux,
+			      DP_EDP_BACKLIGHT_MODE_SET_REGISTER,
+			      &mode_reg) != 1) {
+		DRM_DEBUG_KMS("Failed to read the DPCD register 0x%x\n",
+			      DP_EDP_BACKLIGHT_MODE_SET_REGISTER);
+		return 0;
+	}
+	/*
+	 * If we're not in DPCD control mode yet, the programmed brightness
+	 * value is meaningless and we should assume max brightness
+	 */
+	if ((mode_reg & DP_EDP_BACKLIGHT_CONTROL_MODE_MASK) !=
+	    DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD)
+		return connector->panel.backlight.max;
 	if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB,
 			     &read_val, sizeof(read_val)) < 0) {
 		DRM_DEBUG_KMS("Failed to read DPCD register 0x%x\n",
@@ -82,7 +99,7 @@ static void
 intel_dp_aux_set_backlight(const struct drm_connector_state *conn_state, u32 level)
 {
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
-	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
 	u8 vals[2] = { 0x0 };
 	vals[0] = level;
@@ -110,62 +127,29 @@ intel_dp_aux_set_backlight(const struct drm_connector_state *conn_state, u32 level)
 static bool intel_dp_aux_set_pwm_freq(struct intel_connector *connector)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
-	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
-	int freq, fxp, fxp_min, fxp_max, fxp_actual, f = 1;
-	u8 pn, pn_min, pn_max;
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	const u8 pn = connector->panel.backlight.pwmgen_bit_count;
+	int freq, fxp, f, fxp_actual, fxp_min, fxp_max;
-	/* Find desired value of (F x P)
-	 * Note that, if F x P is out of supported range, the maximum value or
-	 * minimum value will applied automatically. So no need to check that.
-	 */
 	freq = dev_priv->vbt.backlight.pwm_freq_hz;
-	DRM_DEBUG_KMS("VBT defined backlight frequency %u Hz\n", freq);
 	if (!freq) {
 		DRM_DEBUG_KMS("Use panel default backlight frequency\n");
 		return false;
 	}
 	fxp = DIV_ROUND_CLOSEST(KHz(DP_EDP_BACKLIGHT_FREQ_BASE_KHZ), freq);
+	f = clamp(DIV_ROUND_CLOSEST(fxp, 1 << pn), 1, 255);
+	fxp_actual = f << pn;
-	/* Use highest possible value of Pn for more granularity of brightness
-	 * adjustment while satifying the conditions below.
-	 * - Pn is in the range of Pn_min and Pn_max
-	 * - F is in the range of 1 and 255
-	 * - FxP is within 25% of desired value.
-	 * Note: 25% is arbitrary value and may need some tweak.
-	 */
-	if (drm_dp_dpcd_readb(&intel_dp->aux,
-			      DP_EDP_PWMGEN_BIT_COUNT_CAP_MIN, &pn_min) != 1) {
-		DRM_DEBUG_KMS("Failed to read pwmgen bit count cap min\n");
-		return false;
-	}
-	if (drm_dp_dpcd_readb(&intel_dp->aux,
-			      DP_EDP_PWMGEN_BIT_COUNT_CAP_MAX, &pn_max) != 1) {
-		DRM_DEBUG_KMS("Failed to read pwmgen bit count cap max\n");
-		return false;
-	}
-	pn_min &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
-	pn_max &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
+	/* Ensure frequency is within 25% of desired value */
 	fxp_min = DIV_ROUND_CLOSEST(fxp * 3, 4);
 	fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);
-	if (fxp_min < (1 << pn_min) || (255 << pn_max) < fxp_max) {
-		DRM_DEBUG_KMS("VBT defined backlight frequency out of range\n");
+	if (fxp_min > fxp_actual || fxp_actual > fxp_max) {
+		DRM_DEBUG_KMS("Actual frequency out of range\n");
 		return false;
 	}
-	for (pn = pn_max; pn >= pn_min; pn--) {
-		f = clamp(DIV_ROUND_CLOSEST(fxp, 1 << pn), 1, 255);
-		fxp_actual = f << pn;
-		if (fxp_min <= fxp_actual && fxp_actual <= fxp_max)
-			break;
-	}
-	if (drm_dp_dpcd_writeb(&intel_dp->aux,
-			       DP_EDP_PWMGEN_BIT_COUNT, pn) < 0) {
-		DRM_DEBUG_KMS("Failed to write aux pwmgen bit count\n");
-		return false;
-	}
 	if (drm_dp_dpcd_writeb(&intel_dp->aux,
 			       DP_EDP_BACKLIGHT_FREQ_SET, (u8) f) < 0) {
 		DRM_DEBUG_KMS("Failed to write aux backlight freq\n");
@@ -178,7 +162,8 @@ static void intel_dp_aux_enable_backlight(const struct intel_crtc_state *crtc_state,
 					  const struct drm_connector_state *conn_state)
 {
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
-	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	struct intel_panel *panel = &connector->panel;
 	u8 dpcd_buf, new_dpcd_buf, edp_backlight_mode;
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
@@ -197,6 +182,12 @@ static void intel_dp_aux_enable_backlight(const struct intel_crtc_state *crtc_state,
 	case DP_EDP_BACKLIGHT_CONTROL_MODE_PRODUCT:
 		new_dpcd_buf &= ~DP_EDP_BACKLIGHT_CONTROL_MODE_MASK;
 		new_dpcd_buf |= DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD;
+		if (drm_dp_dpcd_writeb(&intel_dp->aux,
+				       DP_EDP_PWMGEN_BIT_COUNT,
+				       panel->backlight.pwmgen_bit_count) < 0)
+			DRM_DEBUG_KMS("Failed to write aux pwmgen bit count\n");
 		break;
 	/* Do nothing when it is already DPCD mode */
@@ -216,8 +207,9 @@ static void intel_dp_aux_enable_backlight(const struct intel_crtc_state *crtc_state,
 		}
 	}
-	intel_dp_aux_set_backlight(conn_state,
-				   connector->panel.backlight.level);
 	set_aux_backlight_enable(intel_dp, true);
+	intel_dp_aux_set_backlight(conn_state, connector->panel.backlight.level);
 }
 static void intel_dp_aux_disable_backlight(const struct drm_connector_state *old_conn_state)
@@ -226,20 +218,91 @@ static void intel_dp_aux_disable_backlight(const struct drm_connector_state *old_conn_state)
 			  false);
 }
+static u32 intel_dp_aux_calc_max_backlight(struct intel_connector *connector)
+{
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	struct intel_panel *panel = &connector->panel;
+	u32 max_backlight = 0;
+	int freq, fxp, fxp_min, fxp_max, fxp_actual, f = 1;
+	u8 pn, pn_min, pn_max;
+	if (drm_dp_dpcd_readb(&intel_dp->aux, DP_EDP_PWMGEN_BIT_COUNT, &pn) == 1) {
+		pn &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
+		max_backlight = (1 << pn) - 1;
+	}
+	/* Find desired value of (F x P)
+	 * Note that, if F x P is out of supported range, the maximum value or
+	 * minimum value will applied automatically. So no need to check that.
+	 */
+	freq = i915->vbt.backlight.pwm_freq_hz;
+	DRM_DEBUG_KMS("VBT defined backlight frequency %u Hz\n", freq);
+	if (!freq) {
+		DRM_DEBUG_KMS("Use panel default backlight frequency\n");
+		return max_backlight;
+	}
+	fxp = DIV_ROUND_CLOSEST(KHz(DP_EDP_BACKLIGHT_FREQ_BASE_KHZ), freq);
+	/* Use highest possible value of Pn for more granularity of brightness
+	 * adjustment while satifying the conditions below.
+	 * - Pn is in the range of Pn_min and Pn_max
+	 * - F is in the range of 1 and 255
+	 * - FxP is within 25% of desired value.
+	 * Note: 25% is arbitrary value and may need some tweak.
+	 */
+	if (drm_dp_dpcd_readb(&intel_dp->aux,
+			      DP_EDP_PWMGEN_BIT_COUNT_CAP_MIN, &pn_min) != 1) {
+		DRM_DEBUG_KMS("Failed to read pwmgen bit count cap min\n");
+		return max_backlight;
+	}
+	if (drm_dp_dpcd_readb(&intel_dp->aux,
+			      DP_EDP_PWMGEN_BIT_COUNT_CAP_MAX, &pn_max) != 1) {
+		DRM_DEBUG_KMS("Failed to read pwmgen bit count cap max\n");
+		return max_backlight;
+	}
+	pn_min &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
+	pn_max &= DP_EDP_PWMGEN_BIT_COUNT_MASK;
+	fxp_min = DIV_ROUND_CLOSEST(fxp * 3, 4);
+	fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);
+	if (fxp_min < (1 << pn_min) || (255 << pn_max) < fxp_max) {
+		DRM_DEBUG_KMS("VBT defined backlight frequency out of range\n");
+		return max_backlight;
+	}
+	for (pn = pn_max; pn >= pn_min; pn--) {
+		f = clamp(DIV_ROUND_CLOSEST(fxp, 1 << pn), 1, 255);
+		fxp_actual = f << pn;
+		if (fxp_min <= fxp_actual && fxp_actual <= fxp_max)
+			break;
+	}
+	DRM_DEBUG_KMS("Using eDP pwmgen bit count of %d\n", pn);
+	if (drm_dp_dpcd_writeb(&intel_dp->aux,
+			       DP_EDP_PWMGEN_BIT_COUNT, pn) < 0) {
+		DRM_DEBUG_KMS("Failed to write aux pwmgen bit count\n");
+		return max_backlight;
+	}
+	panel->backlight.pwmgen_bit_count = pn;
+	max_backlight = (1 << pn) - 1;
+	return max_backlight;
+}
 static int intel_dp_aux_setup_backlight(struct intel_connector *connector,
 					enum pipe pipe)
 {
-	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
 	struct intel_panel *panel = &connector->panel;
if (intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_BYTE_COUNT) panel->backlight.max = intel_dp_aux_calc_max_backlight(connector);
panel->backlight.max = 0xFFFF; if (!panel->backlight.max)
else return -ENODEV;
panel->backlight.max = 0xFF;
panel->backlight.min = 0; panel->backlight.min = 0;
panel->backlight.level = intel_dp_aux_get_backlight(connector); panel->backlight.level = intel_dp_aux_get_backlight(connector);
panel->backlight.enabled = panel->backlight.level != 0; panel->backlight.enabled = panel->backlight.level != 0;
return 0; return 0;
@ -248,7 +311,7 @@ static int intel_dp_aux_setup_backlight(struct intel_connector *connector,
static bool static bool
intel_dp_aux_display_control_capable(struct intel_connector *connector) intel_dp_aux_display_control_capable(struct intel_connector *connector)
{ {
struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); struct intel_dp *intel_dp = intel_attached_dp(connector);
/* Check the eDP Display control capabilities registers to determine if /* Check the eDP Display control capabilities registers to determine if
* the panel can support backlight control over the aux channel * the panel can support backlight control over the aux channel
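As a side note on the intel_dp_aux_calc_max_backlight() hunk above: the F x P search it performs (pick the largest PWM-generator bit count Pn such that some divider F in 1..255 lands F << Pn within 25% of the value derived from the VBT backlight frequency) can be sketched in plain user-space C. This is an illustrative model, not the kernel code: `PWM_FREQ_BASE_HZ`, `pick_pwmgen()` and its signature are invented here, and the DPCD reads/writes are left out.

```c
#include <assert.h>

#define PWM_FREQ_BASE_HZ 27000000 /* DP_EDP_BACKLIGHT_FREQ_BASE_KHZ, in Hz */
#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

/* Returns the achievable max backlight value ((1 << pn) - 1), or 0 if the
 * requested PWM frequency cannot be reached with any (Pn, F) pair; on
 * success *pn_out and *f_out receive the chosen pair. */
static unsigned int pick_pwmgen(int pwm_freq_hz, int pn_min, int pn_max,
				int *pn_out, int *f_out)
{
	int fxp = DIV_ROUND_CLOSEST(PWM_FREQ_BASE_HZ, pwm_freq_hz);
	int fxp_min = DIV_ROUND_CLOSEST(fxp * 3, 4);	/* 25% tolerance, */
	int fxp_max = DIV_ROUND_CLOSEST(fxp * 5, 4);	/* as in the hunk  */
	int pn, f;

	/* No F in 1..255 can reach the target for any allowed Pn. */
	if (fxp_min < (1 << pn_min) || (255 << pn_max) < fxp_max)
		return 0;

	/* Prefer the highest Pn for the most brightness granularity. */
	for (pn = pn_max; pn >= pn_min; pn--) {
		f = DIV_ROUND_CLOSEST(fxp, 1 << pn);
		if (f < 1)
			f = 1;
		if (f > 255)
			f = 255;
		if (fxp_min <= (f << pn) && (f << pn) <= fxp_max) {
			*pn_out = pn;
			*f_out = f;
			return (1u << pn) - 1;
		}
	}
	return 0;
}
```

For a 200 Hz VBT frequency and caps of 8..16 bits this picks Pn = 16 with F = 2 (131072 against a desired F x P of 135000), giving a 16-bit brightness range.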

diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c

@@ -130,6 +130,7 @@ static bool intel_dp_link_max_vswing_reached(struct intel_dp *intel_dp)
 static bool
 intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	u8 voltage;
 	int voltage_tries, cr_tries, max_cr_tries;
 	bool max_vswing_reached = false;
@@ -143,9 +144,11 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
 			      &link_bw, &rate_select);

 	if (link_bw)
-		DRM_DEBUG_KMS("Using LINK_BW_SET value %02x\n", link_bw);
+		drm_dbg_kms(&i915->drm,
+			    "Using LINK_BW_SET value %02x\n", link_bw);
 	else
-		DRM_DEBUG_KMS("Using LINK_RATE_SET value %02x\n", rate_select);
+		drm_dbg_kms(&i915->drm,
+			    "Using LINK_RATE_SET value %02x\n", rate_select);

 	/* Write the link configuration data */
 	link_config[0] = link_bw;
@@ -169,7 +172,7 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
 	if (!intel_dp_reset_link_train(intel_dp,
 				       DP_TRAINING_PATTERN_1 |
 				       DP_LINK_SCRAMBLING_DISABLE)) {
-		DRM_ERROR("failed to enable link training\n");
+		drm_err(&i915->drm, "failed to enable link training\n");
 		return false;
 	}
@@ -193,22 +196,23 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
 		drm_dp_link_train_clock_recovery_delay(intel_dp->dpcd);

 		if (!intel_dp_get_link_status(intel_dp, link_status)) {
-			DRM_ERROR("failed to get link status\n");
+			drm_err(&i915->drm, "failed to get link status\n");
 			return false;
 		}

 		if (drm_dp_clock_recovery_ok(link_status, intel_dp->lane_count)) {
-			DRM_DEBUG_KMS("clock recovery OK\n");
+			drm_dbg_kms(&i915->drm, "clock recovery OK\n");
 			return true;
 		}

 		if (voltage_tries == 5) {
-			DRM_DEBUG_KMS("Same voltage tried 5 times\n");
+			drm_dbg_kms(&i915->drm,
+				    "Same voltage tried 5 times\n");
 			return false;
 		}

 		if (max_vswing_reached) {
-			DRM_DEBUG_KMS("Max Voltage Swing reached\n");
+			drm_dbg_kms(&i915->drm, "Max Voltage Swing reached\n");
 			return false;
 		}
@@ -217,7 +221,8 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
 		/* Update training set as requested by target */
 		intel_get_adjust_train(intel_dp, link_status);
 		if (!intel_dp_update_link_train(intel_dp)) {
-			DRM_ERROR("failed to update link training\n");
+			drm_err(&i915->drm,
+				"failed to update link training\n");
 			return false;
 		}
@@ -231,7 +236,8 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
 			max_vswing_reached = true;
 	}

-	DRM_ERROR("Failed clock recovery %d times, giving up!\n", max_cr_tries);
+	drm_err(&i915->drm,
+		"Failed clock recovery %d times, giving up!\n", max_cr_tries);
 	return false;
 }
@@ -256,9 +262,11 @@ static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
 		return DP_TRAINING_PATTERN_4;
 	} else if (intel_dp->link_rate == 810000) {
 		if (!source_tps4)
-			DRM_DEBUG_KMS("8.1 Gbps link rate without source HBR3/TPS4 support\n");
+			drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
+				    "8.1 Gbps link rate without source HBR3/TPS4 support\n");
 		if (!sink_tps4)
-			DRM_DEBUG_KMS("8.1 Gbps link rate without sink TPS4 support\n");
+			drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
+				    "8.1 Gbps link rate without sink TPS4 support\n");
 	}
 	/*
 	 * Intel platforms that support HBR2 also support TPS3. TPS3 support is
@@ -271,9 +279,11 @@ static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
 		return DP_TRAINING_PATTERN_3;
 	} else if (intel_dp->link_rate >= 540000) {
 		if (!source_tps3)
-			DRM_DEBUG_KMS(">=5.4/6.48 Gbps link rate without source HBR2/TPS3 support\n");
+			drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
+				    ">=5.4/6.48 Gbps link rate without source HBR2/TPS3 support\n");
 		if (!sink_tps3)
-			DRM_DEBUG_KMS(">=5.4/6.48 Gbps link rate without sink TPS3 support\n");
+			drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
+				    ">=5.4/6.48 Gbps link rate without sink TPS3 support\n");
 	}

 	return DP_TRAINING_PATTERN_2;
@@ -282,6 +292,7 @@ static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
 static bool
 intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 {
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	int tries;
 	u32 training_pattern;
 	u8 link_status[DP_LINK_STATUS_SIZE];
@@ -295,7 +306,7 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 	/* channel equalization */
 	if (!intel_dp_set_link_train(intel_dp,
 				     training_pattern)) {
-		DRM_ERROR("failed to start channel equalization\n");
+		drm_err(&i915->drm, "failed to start channel equalization\n");
 		return false;
 	}
@@ -303,7 +314,8 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 		drm_dp_link_train_channel_eq_delay(intel_dp->dpcd);

 		if (!intel_dp_get_link_status(intel_dp, link_status)) {
-			DRM_ERROR("failed to get link status\n");
+			drm_err(&i915->drm,
+				"failed to get link status\n");
 			break;
 		}
@@ -311,23 +323,25 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 		if (!drm_dp_clock_recovery_ok(link_status,
 					      intel_dp->lane_count)) {
 			intel_dp_dump_link_status(link_status);
-			DRM_DEBUG_KMS("Clock recovery check failed, cannot "
-				      "continue channel equalization\n");
+			drm_dbg_kms(&i915->drm,
+				    "Clock recovery check failed, cannot "
+				    "continue channel equalization\n");
 			break;
 		}

 		if (drm_dp_channel_eq_ok(link_status,
 					 intel_dp->lane_count)) {
 			channel_eq = true;
-			DRM_DEBUG_KMS("Channel EQ done. DP Training "
-				      "successful\n");
+			drm_dbg_kms(&i915->drm, "Channel EQ done. DP Training "
+				    "successful\n");
 			break;
 		}

 		/* Update training set as requested by target */
 		intel_get_adjust_train(intel_dp, link_status);
 		if (!intel_dp_update_link_train(intel_dp)) {
-			DRM_ERROR("failed to update link training\n");
+			drm_err(&i915->drm,
+				"failed to update link training\n");
 			break;
 		}
 	}
@@ -335,7 +349,8 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 	/* Try 5 times, else fail and try at lower BW */
 	if (tries == 5) {
 		intel_dp_dump_link_status(link_status);
-		DRM_DEBUG_KMS("Channel equalization failed 5 times\n");
+		drm_dbg_kms(&i915->drm,
+			    "Channel equalization failed 5 times\n");
 	}

 	intel_dp_set_idle_link_train(intel_dp);
@@ -362,17 +377,19 @@ intel_dp_start_link_train(struct intel_dp *intel_dp)
 	if (!intel_dp_link_training_channel_equalization(intel_dp))
 		goto failure_handling;

-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training Passed at Link Rate = %d, Lane count = %d",
-		      intel_connector->base.base.id,
-		      intel_connector->base.name,
-		      intel_dp->link_rate, intel_dp->lane_count);
+	drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
+		    "[CONNECTOR:%d:%s] Link Training Passed at Link Rate = %d, Lane count = %d",
+		    intel_connector->base.base.id,
+		    intel_connector->base.name,
+		    intel_dp->link_rate, intel_dp->lane_count);
 	return;

 failure_handling:
-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
-		      intel_connector->base.base.id,
-		      intel_connector->base.name,
-		      intel_dp->link_rate, intel_dp->lane_count);
+	drm_dbg_kms(&dp_to_i915(intel_dp)->drm,
+		    "[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
+		    intel_connector->base.base.id,
+		    intel_connector->base.name,
+		    intel_dp->link_rate, intel_dp->lane_count);

 	if (!intel_dp_get_link_train_fallback_values(intel_dp,
 						     intel_dp->link_rate,
 						     intel_dp->lane_count))
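The retry structure around intel_dp_start_link_train() in the hunk above — run clock recovery and channel equalization, and on failure fall back to the next lower (link rate, lane count) pair — can be modelled with a small stand-alone C sketch. Everything here is illustrative, not the driver's API: `link_params`, `train_with_fallback()` and the callback stand in for the clock-recovery/channel-EQ sequence and for intel_dp_get_link_train_fallback_values().

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct link_params {
	int rate_khz;	/* e.g. 540000 for HBR2 */
	int lanes;
};

/* try_train stands in for clock recovery + channel equalization on one
 * (rate, lanes) configuration; table lists the fallback values in the
 * order they would be tried. Returns the first configuration that trains,
 * or NULL once every fallback value is exhausted. */
static const struct link_params *
train_with_fallback(bool (*try_train)(const struct link_params *),
		    const struct link_params *table, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (try_train(&table[i]))
			return &table[i];	/* link training passed */
	return NULL;				/* nothing lower to try */
}
```

A sink that only trains up to HBR (270000 kHz) would be handed the second entry of a {HBR2, HBR, RBR} fallback table.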

diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c

@@ -352,8 +352,9 @@ static void intel_mst_post_disable_dp(struct intel_encoder *encoder,
 	intel_dp->active_mst_links--;
 	last_mst_stream = intel_dp->active_mst_links == 0;

-	WARN_ON(INTEL_GEN(dev_priv) >= 12 && last_mst_stream &&
-		!intel_dp_mst_is_master_trans(old_crtc_state));
+	drm_WARN_ON(&dev_priv->drm,
+		    INTEL_GEN(dev_priv) >= 12 && last_mst_stream &&
+		    !intel_dp_mst_is_master_trans(old_crtc_state));

 	intel_crtc_vblank_off(old_crtc_state);
@@ -361,9 +362,12 @@ static void intel_mst_post_disable_dp(struct intel_encoder *encoder,
 	drm_dp_update_payload_part2(&intel_dp->mst_mgr);

-	val = I915_READ(TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder));
+	val = intel_de_read(dev_priv,
+			    TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder));
 	val &= ~TRANS_DDI_DP_VC_PAYLOAD_ALLOC;
-	I915_WRITE(TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder), val);
+	intel_de_write(dev_priv,
+		       TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder),
+		       val);

 	if (intel_de_wait_for_set(dev_priv, intel_dp->regs.dp_tp_status,
 				  DP_TP_STATUS_ACT_SENT, 1))
@@ -437,8 +441,9 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
 	connector->encoder = encoder;
 	intel_mst->connector = connector;
 	first_mst_stream = intel_dp->active_mst_links == 0;

-	WARN_ON(INTEL_GEN(dev_priv) >= 12 && first_mst_stream &&
-		!intel_dp_mst_is_master_trans(pipe_config));
+	drm_WARN_ON(&dev_priv->drm,
+		    INTEL_GEN(dev_priv) >= 12 && first_mst_stream &&
+		    !intel_dp_mst_is_master_trans(pipe_config));

 	DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
@@ -459,8 +464,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
 		DRM_ERROR("failed to allocate vcpi\n");

 	intel_dp->active_mst_links++;
-	temp = I915_READ(intel_dp->regs.dp_tp_status);
-	I915_WRITE(intel_dp->regs.dp_tp_status, temp);
+	temp = intel_de_read(dev_priv, intel_dp->regs.dp_tp_status);
+	intel_de_write(dev_priv, intel_dp->regs.dp_tp_status, temp);

 	ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr);
@@ -475,6 +480,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
 	intel_ddi_enable_pipe_clock(pipe_config);

 	intel_ddi_set_dp_msa(pipe_config, conn_state);
+
+	intel_dp_set_m_n(pipe_config, M1_N1);
 }

 static void intel_mst_enable_dp(struct intel_encoder *encoder,
@@ -486,6 +493,12 @@ static void intel_mst_enable_dp(struct intel_encoder *encoder,
 	struct intel_dp *intel_dp = &intel_dig_port->dp;
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);

+	drm_WARN_ON(&dev_priv->drm, pipe_config->has_pch_encoder);
+
+	intel_enable_pipe(pipe_config);
+
+	intel_crtc_vblank_on(pipe_config);
+
 	DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);

 	if (intel_de_wait_for_set(dev_priv, intel_dp->regs.dp_tp_status,
@@ -632,9 +645,9 @@ static const struct drm_encoder_funcs intel_dp_mst_enc_funcs = {
 static bool intel_dp_mst_get_hw_state(struct intel_connector *connector)
 {
-	if (connector->encoder && connector->base.state->crtc) {
+	if (intel_attached_encoder(connector) && connector->base.state->crtc) {
 		enum pipe pipe;
-		if (!connector->encoder->get_hw_state(connector->encoder, &pipe))
+		if (!intel_attached_encoder(connector)->get_hw_state(intel_attached_encoder(connector), &pipe))
 			return false;
 		return true;
 	}

diff --git a/drivers/gpu/drm/i915/display/intel_dpio_phy.c b/drivers/gpu/drm/i915/display/intel_dpio_phy.c

@@ -259,7 +259,8 @@ void bxt_port_to_phy_channel(struct drm_i915_private *dev_priv, enum port port,
 		}
 	}

-	WARN(1, "PHY not found for PORT %c", port_name(port));
+	drm_WARN(&dev_priv->drm, 1, "PHY not found for PORT %c",
+		 port_name(port));
 	*phy = DPIO_PHY0;
 	*ch = DPIO_CH0;
 }
@@ -278,33 +279,34 @@ void bxt_ddi_phy_set_signal_level(struct drm_i915_private *dev_priv,
 	 * While we write to the group register to program all lanes at once we
 	 * can read only lane registers and we pick lanes 0/1 for that.
 	 */
-	val = I915_READ(BXT_PORT_PCS_DW10_LN01(phy, ch));
+	val = intel_de_read(dev_priv, BXT_PORT_PCS_DW10_LN01(phy, ch));
 	val &= ~(TX2_SWING_CALC_INIT | TX1_SWING_CALC_INIT);
-	I915_WRITE(BXT_PORT_PCS_DW10_GRP(phy, ch), val);
+	intel_de_write(dev_priv, BXT_PORT_PCS_DW10_GRP(phy, ch), val);

-	val = I915_READ(BXT_PORT_TX_DW2_LN0(phy, ch));
+	val = intel_de_read(dev_priv, BXT_PORT_TX_DW2_LN0(phy, ch));
 	val &= ~(MARGIN_000 | UNIQ_TRANS_SCALE);
 	val |= margin << MARGIN_000_SHIFT | scale << UNIQ_TRANS_SCALE_SHIFT;
-	I915_WRITE(BXT_PORT_TX_DW2_GRP(phy, ch), val);
+	intel_de_write(dev_priv, BXT_PORT_TX_DW2_GRP(phy, ch), val);

-	val = I915_READ(BXT_PORT_TX_DW3_LN0(phy, ch));
+	val = intel_de_read(dev_priv, BXT_PORT_TX_DW3_LN0(phy, ch));
 	val &= ~SCALE_DCOMP_METHOD;
 	if (enable)
 		val |= SCALE_DCOMP_METHOD;

 	if ((val & UNIQUE_TRANGE_EN_METHOD) && !(val & SCALE_DCOMP_METHOD))
-		DRM_ERROR("Disabled scaling while ouniqetrangenmethod was set");
+		drm_err(&dev_priv->drm,
+			"Disabled scaling while ouniqetrangenmethod was set");

-	I915_WRITE(BXT_PORT_TX_DW3_GRP(phy, ch), val);
+	intel_de_write(dev_priv, BXT_PORT_TX_DW3_GRP(phy, ch), val);

-	val = I915_READ(BXT_PORT_TX_DW4_LN0(phy, ch));
+	val = intel_de_read(dev_priv, BXT_PORT_TX_DW4_LN0(phy, ch));
 	val &= ~DE_EMPHASIS;
 	val |= deemphasis << DEEMPH_SHIFT;
-	I915_WRITE(BXT_PORT_TX_DW4_GRP(phy, ch), val);
+	intel_de_write(dev_priv, BXT_PORT_TX_DW4_GRP(phy, ch), val);

-	val = I915_READ(BXT_PORT_PCS_DW10_LN01(phy, ch));
+	val = intel_de_read(dev_priv, BXT_PORT_PCS_DW10_LN01(phy, ch));
 	val |= TX2_SWING_CALC_INIT | TX1_SWING_CALC_INIT;
-	I915_WRITE(BXT_PORT_PCS_DW10_GRP(phy, ch), val);
+	intel_de_write(dev_priv, BXT_PORT_PCS_DW10_GRP(phy, ch), val);
 }

 bool bxt_ddi_phy_is_enabled(struct drm_i915_private *dev_priv,
@@ -314,20 +316,20 @@ bool bxt_ddi_phy_is_enabled(struct drm_i915_private *dev_priv,
 	phy_info = bxt_get_phy_info(dev_priv, phy);

-	if (!(I915_READ(BXT_P_CR_GT_DISP_PWRON) & phy_info->pwron_mask))
+	if (!(intel_de_read(dev_priv, BXT_P_CR_GT_DISP_PWRON) & phy_info->pwron_mask))
 		return false;

-	if ((I915_READ(BXT_PORT_CL1CM_DW0(phy)) &
+	if ((intel_de_read(dev_priv, BXT_PORT_CL1CM_DW0(phy)) &
 	     (PHY_POWER_GOOD | PHY_RESERVED)) != PHY_POWER_GOOD) {
-		DRM_DEBUG_DRIVER("DDI PHY %d powered, but power hasn't settled\n",
-				 phy);
+		drm_dbg(&dev_priv->drm,
+			"DDI PHY %d powered, but power hasn't settled\n", phy);
 		return false;
 	}

-	if (!(I915_READ(BXT_PHY_CTL_FAMILY(phy)) & COMMON_RESET_DIS)) {
-		DRM_DEBUG_DRIVER("DDI PHY %d powered, but still in reset\n",
-				 phy);
+	if (!(intel_de_read(dev_priv, BXT_PHY_CTL_FAMILY(phy)) & COMMON_RESET_DIS)) {
+		drm_dbg(&dev_priv->drm,
+			"DDI PHY %d powered, but still in reset\n", phy);
 		return false;
 	}
@@ -337,7 +339,7 @@ bool bxt_ddi_phy_is_enabled(struct drm_i915_private *dev_priv,
 static u32 bxt_get_grc(struct drm_i915_private *dev_priv, enum dpio_phy phy)
 {
-	u32 val = I915_READ(BXT_PORT_REF_DW6(phy));
+	u32 val = intel_de_read(dev_priv, BXT_PORT_REF_DW6(phy));

 	return (val & GRC_CODE_MASK) >> GRC_CODE_SHIFT;
 }
@@ -347,7 +349,8 @@ static void bxt_phy_wait_grc_done(struct drm_i915_private *dev_priv,
 {
 	if (intel_de_wait_for_set(dev_priv, BXT_PORT_REF_DW3(phy),
 				  GRC_DONE, 10))
-		DRM_ERROR("timeout waiting for PHY%d GRC\n", phy);
+		drm_err(&dev_priv->drm, "timeout waiting for PHY%d GRC\n",
+			phy);
 }

 static void _bxt_ddi_phy_init(struct drm_i915_private *dev_priv,
@@ -364,18 +367,19 @@ static void _bxt_ddi_phy_init(struct drm_i915_private *dev_priv,
 			dev_priv->bxt_phy_grc = bxt_get_grc(dev_priv, phy);

 		if (bxt_ddi_phy_verify_state(dev_priv, phy)) {
-			DRM_DEBUG_DRIVER("DDI PHY %d already enabled, "
-					 "won't reprogram it\n", phy);
+			drm_dbg(&dev_priv->drm, "DDI PHY %d already enabled, "
+				"won't reprogram it\n", phy);
 			return;
 		}

-		DRM_DEBUG_DRIVER("DDI PHY %d enabled with invalid state, "
-				 "force reprogramming it\n", phy);
+		drm_dbg(&dev_priv->drm,
+			"DDI PHY %d enabled with invalid state, "
+			"force reprogramming it\n", phy);
 	}

-	val = I915_READ(BXT_P_CR_GT_DISP_PWRON);
+	val = intel_de_read(dev_priv, BXT_P_CR_GT_DISP_PWRON);
 	val |= phy_info->pwron_mask;
-	I915_WRITE(BXT_P_CR_GT_DISP_PWRON, val);
+	intel_de_write(dev_priv, BXT_P_CR_GT_DISP_PWRON, val);

 	/*
 	 * The PHY registers start out inaccessible and respond to reads with
@@ -390,29 +394,30 @@ static void _bxt_ddi_phy_init(struct drm_i915_private *dev_priv,
 				    PHY_RESERVED | PHY_POWER_GOOD,
 				    PHY_POWER_GOOD,
 				    1))
-		DRM_ERROR("timeout during PHY%d power on\n", phy);
+		drm_err(&dev_priv->drm, "timeout during PHY%d power on\n",
+			phy);

 	/* Program PLL Rcomp code offset */
-	val = I915_READ(BXT_PORT_CL1CM_DW9(phy));
+	val = intel_de_read(dev_priv, BXT_PORT_CL1CM_DW9(phy));
 	val &= ~IREF0RC_OFFSET_MASK;
 	val |= 0xE4 << IREF0RC_OFFSET_SHIFT;
-	I915_WRITE(BXT_PORT_CL1CM_DW9(phy), val);
+	intel_de_write(dev_priv, BXT_PORT_CL1CM_DW9(phy), val);

-	val = I915_READ(BXT_PORT_CL1CM_DW10(phy));
+	val = intel_de_read(dev_priv, BXT_PORT_CL1CM_DW10(phy));
 	val &= ~IREF1RC_OFFSET_MASK;
 	val |= 0xE4 << IREF1RC_OFFSET_SHIFT;
-	I915_WRITE(BXT_PORT_CL1CM_DW10(phy), val);
+	intel_de_write(dev_priv, BXT_PORT_CL1CM_DW10(phy), val);

 	/* Program power gating */
-	val = I915_READ(BXT_PORT_CL1CM_DW28(phy));
+	val = intel_de_read(dev_priv, BXT_PORT_CL1CM_DW28(phy));
 	val |= OCL1_POWER_DOWN_EN | DW28_OLDO_DYN_PWR_DOWN_EN |
 		SUS_CLK_CONFIG;
-	I915_WRITE(BXT_PORT_CL1CM_DW28(phy), val);
+	intel_de_write(dev_priv, BXT_PORT_CL1CM_DW28(phy), val);

 	if (phy_info->dual_channel) {
-		val = I915_READ(BXT_PORT_CL2CM_DW6(phy));
+		val = intel_de_read(dev_priv, BXT_PORT_CL2CM_DW6(phy));
 		val |= DW6_OLDO_DYN_PWR_DOWN_EN;
-		I915_WRITE(BXT_PORT_CL2CM_DW6(phy), val);
+		intel_de_write(dev_priv, BXT_PORT_CL2CM_DW6(phy), val);
 	}

 	if (phy_info->rcomp_phy != -1) {
@@ -430,19 +435,19 @@ static void _bxt_ddi_phy_init(struct drm_i915_private *dev_priv,
 		grc_code = val << GRC_CODE_FAST_SHIFT |
 			   val << GRC_CODE_SLOW_SHIFT |
 			   val;
-		I915_WRITE(BXT_PORT_REF_DW6(phy), grc_code);
+		intel_de_write(dev_priv, BXT_PORT_REF_DW6(phy), grc_code);

-		val = I915_READ(BXT_PORT_REF_DW8(phy));
+		val = intel_de_read(dev_priv, BXT_PORT_REF_DW8(phy));
 		val |= GRC_DIS | GRC_RDY_OVRD;
-		I915_WRITE(BXT_PORT_REF_DW8(phy), val);
+		intel_de_write(dev_priv, BXT_PORT_REF_DW8(phy), val);
 	}

 	if (phy_info->reset_delay)
 		udelay(phy_info->reset_delay);

-	val = I915_READ(BXT_PHY_CTL_FAMILY(phy));
+	val = intel_de_read(dev_priv, BXT_PHY_CTL_FAMILY(phy));
 	val |= COMMON_RESET_DIS;
-	I915_WRITE(BXT_PHY_CTL_FAMILY(phy), val);
+	intel_de_write(dev_priv, BXT_PHY_CTL_FAMILY(phy), val);
 }

 void bxt_ddi_phy_uninit(struct drm_i915_private *dev_priv, enum dpio_phy phy)
@@ -452,13 +457,13 @@ void bxt_ddi_phy_uninit(struct drm_i915_private *dev_priv, enum dpio_phy phy)
 	phy_info = bxt_get_phy_info(dev_priv, phy);

-	val = I915_READ(BXT_PHY_CTL_FAMILY(phy));
+	val = intel_de_read(dev_priv, BXT_PHY_CTL_FAMILY(phy));
 	val &= ~COMMON_RESET_DIS;
-	I915_WRITE(BXT_PHY_CTL_FAMILY(phy), val);
+	intel_de_write(dev_priv, BXT_PHY_CTL_FAMILY(phy), val);

-	val = I915_READ(BXT_P_CR_GT_DISP_PWRON);
+	val = intel_de_read(dev_priv, BXT_P_CR_GT_DISP_PWRON);
 	val &= ~phy_info->pwron_mask;
-	I915_WRITE(BXT_P_CR_GT_DISP_PWRON, val);
+	intel_de_write(dev_priv, BXT_P_CR_GT_DISP_PWRON, val);
 }

 void bxt_ddi_phy_init(struct drm_i915_private *dev_priv, enum dpio_phy phy)
@@ -496,7 +501,7 @@ __phy_reg_verify_state(struct drm_i915_private *dev_priv, enum dpio_phy phy,
 	va_list args;
 	u32 val;

-	val = I915_READ(reg);
+	val = intel_de_read(dev_priv, reg);
 	if ((val & mask) == expected)
 		return true;
@@ -504,7 +509,7 @@ __phy_reg_verify_state(struct drm_i915_private *dev_priv, enum dpio_phy phy,
 	vaf.fmt = reg_fmt;
 	vaf.va = &args;

-	DRM_DEBUG_DRIVER("DDI PHY %d reg %pV [%08x] state mismatch: "
+	drm_dbg(&dev_priv->drm, "DDI PHY %d reg %pV [%08x] state mismatch: "
 			 "current %08x, expected %08x (mask %08x)\n",
 			 phy, &vaf, reg.reg, val, (val & ~mask) | expected,
 			 mask);
@@ -599,7 +604,8 @@ void bxt_ddi_phy_set_lane_optim_mask(struct intel_encoder *encoder,
 	bxt_port_to_phy_channel(dev_priv, port, &phy, &ch);

 	for (lane = 0; lane < 4; lane++) {
-		u32 val = I915_READ(BXT_PORT_TX_DW14_LN(phy, ch, lane));
+		u32 val = intel_de_read(dev_priv,
+					BXT_PORT_TX_DW14_LN(phy, ch, lane));

 		/*
 		 * Note that on CHV this flag is called UPAR, but has
@@ -609,7 +615,8 @@ void bxt_ddi_phy_set_lane_optim_mask(struct intel_encoder *encoder,
 		if (lane_lat_optim_mask & BIT(lane))
 			val |= LATENCY_OPTIM;

-		I915_WRITE(BXT_PORT_TX_DW14_LN(phy, ch, lane), val);
+		intel_de_write(dev_priv, BXT_PORT_TX_DW14_LN(phy, ch, lane),
+			       val);
 	}
 }
@@ -627,7 +634,8 @@ bxt_ddi_phy_get_lane_lat_optim_mask(struct intel_encoder *encoder)
 	mask = 0;
 	for (lane = 0; lane < 4; lane++) {
-		u32 val = I915_READ(BXT_PORT_TX_DW14_LN(phy, ch, lane));
+		u32 val = intel_de_read(dev_priv,
+					BXT_PORT_TX_DW14_LN(phy, ch, lane));

 		if (val & LATENCY_OPTIM)
 			mask |= BIT(lane);
(diff for one file suppressed because it is too large)

diff --git a/drivers/gpu/drm/i915/display/intel_dsb.c b/drivers/gpu/drm/i915/display/intel_dsb.c

@ -40,7 +40,7 @@ static inline bool is_dsb_busy(struct intel_dsb *dsb)
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe; enum pipe pipe = crtc->pipe;
return DSB_STATUS & I915_READ(DSB_CTRL(pipe, dsb->id)); return DSB_STATUS & intel_de_read(dev_priv, DSB_CTRL(pipe, dsb->id));
} }
static inline bool intel_dsb_enable_engine(struct intel_dsb *dsb) static inline bool intel_dsb_enable_engine(struct intel_dsb *dsb)
@ -50,16 +50,16 @@ static inline bool intel_dsb_enable_engine(struct intel_dsb *dsb)
enum pipe pipe = crtc->pipe; enum pipe pipe = crtc->pipe;
u32 dsb_ctrl; u32 dsb_ctrl;
dsb_ctrl = I915_READ(DSB_CTRL(pipe, dsb->id)); dsb_ctrl = intel_de_read(dev_priv, DSB_CTRL(pipe, dsb->id));
if (DSB_STATUS & dsb_ctrl) { if (DSB_STATUS & dsb_ctrl) {
DRM_DEBUG_KMS("DSB engine is busy.\n"); DRM_DEBUG_KMS("DSB engine is busy.\n");
return false; return false;
} }
dsb_ctrl |= DSB_ENABLE; dsb_ctrl |= DSB_ENABLE;
I915_WRITE(DSB_CTRL(pipe, dsb->id), dsb_ctrl); intel_de_write(dev_priv, DSB_CTRL(pipe, dsb->id), dsb_ctrl);
POSTING_READ(DSB_CTRL(pipe, dsb->id)); intel_de_posting_read(dev_priv, DSB_CTRL(pipe, dsb->id));
return true; return true;
} }
@ -70,16 +70,16 @@ static inline bool intel_dsb_disable_engine(struct intel_dsb *dsb)
enum pipe pipe = crtc->pipe; enum pipe pipe = crtc->pipe;
u32 dsb_ctrl; u32 dsb_ctrl;
-	dsb_ctrl = I915_READ(DSB_CTRL(pipe, dsb->id));
+	dsb_ctrl = intel_de_read(dev_priv, DSB_CTRL(pipe, dsb->id));
 	if (DSB_STATUS & dsb_ctrl) {
 		DRM_DEBUG_KMS("DSB engine is busy.\n");
 		return false;
 	}
 	dsb_ctrl &= ~DSB_ENABLE;
-	I915_WRITE(DSB_CTRL(pipe, dsb->id), dsb_ctrl);
-	POSTING_READ(DSB_CTRL(pipe, dsb->id));
+	intel_de_write(dev_priv, DSB_CTRL(pipe, dsb->id), dsb_ctrl);
+	intel_de_posting_read(dev_priv, DSB_CTRL(pipe, dsb->id));
 	return true;
 }
@@ -165,7 +165,7 @@ void intel_dsb_put(struct intel_dsb *dsb)
 	if (!HAS_DSB(i915))
 		return;
-	if (WARN_ON(dsb->refcount == 0))
+	if (drm_WARN_ON(&i915->drm, dsb->refcount == 0))
 		return;
 	if (--dsb->refcount == 0) {
@@ -198,11 +198,11 @@ void intel_dsb_indexed_reg_write(struct intel_dsb *dsb, i915_reg_t reg,
 	u32 reg_val;
 	if (!buf) {
-		I915_WRITE(reg, val);
+		intel_de_write(dev_priv, reg, val);
 		return;
 	}
-	if (WARN_ON(dsb->free_pos >= DSB_BUF_SIZE)) {
+	if (drm_WARN_ON(&dev_priv->drm, dsb->free_pos >= DSB_BUF_SIZE)) {
 		DRM_DEBUG_KMS("DSB buffer overflow\n");
 		return;
 	}
@@ -272,11 +272,11 @@ void intel_dsb_reg_write(struct intel_dsb *dsb, i915_reg_t reg, u32 val)
 	u32 *buf = dsb->cmd_buf;
 	if (!buf) {
-		I915_WRITE(reg, val);
+		intel_de_write(dev_priv, reg, val);
 		return;
 	}
-	if (WARN_ON(dsb->free_pos >= DSB_BUF_SIZE)) {
+	if (drm_WARN_ON(&dev_priv->drm, dsb->free_pos >= DSB_BUF_SIZE)) {
 		DRM_DEBUG_KMS("DSB buffer overflow\n");
 		return;
 	}
@@ -313,7 +313,8 @@ void intel_dsb_commit(struct intel_dsb *dsb)
 		DRM_ERROR("HEAD_PTR write failed - dsb engine is busy.\n");
 		goto reset;
 	}
-	I915_WRITE(DSB_HEAD(pipe, dsb->id), i915_ggtt_offset(dsb->vma));
+	intel_de_write(dev_priv, DSB_HEAD(pipe, dsb->id),
+		       i915_ggtt_offset(dsb->vma));
 	tail = ALIGN(dsb->free_pos * 4, CACHELINE_BYTES);
 	if (tail > dsb->free_pos * 4)
@@ -326,7 +327,8 @@ void intel_dsb_commit(struct intel_dsb *dsb)
 	}
 	DRM_DEBUG_KMS("DSB execution started - head 0x%x, tail 0x%x\n",
 		      i915_ggtt_offset(dsb->vma), tail);
-	I915_WRITE(DSB_TAIL(pipe, dsb->id), i915_ggtt_offset(dsb->vma) + tail);
+	intel_de_write(dev_priv, DSB_TAIL(pipe, dsb->id),
+		       i915_ggtt_offset(dsb->vma) + tail);
 	if (wait_for(!is_dsb_busy(dsb), 1)) {
 		DRM_ERROR("Timed out waiting for DSB workload completion.\n");
 		goto reset;
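The mechanical change above swaps the implicit-`dev_priv` `I915_READ`/`I915_WRITE`/`POSTING_READ` macros for the explicit-device `intel_de_read`/`intel_de_write`/`intel_de_posting_read` helpers. A minimal userspace sketch of the same read-modify-write disable pattern, using hypothetical stub types (`struct fake_dev`, `de_read`, `de_write` stand in for the real i915 structures and accessors):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for struct drm_i915_private's register space. */
struct fake_dev {
	uint32_t regs[16];
};

#define DSB_CTRL   0u
#define DSB_ENABLE (1u << 31)

/* Explicit-device accessors, mirroring intel_de_read()/intel_de_write(). */
static uint32_t de_read(struct fake_dev *dev, unsigned int reg)
{
	return dev->regs[reg];
}

static void de_write(struct fake_dev *dev, unsigned int reg, uint32_t val)
{
	dev->regs[reg] = val;
}

/* Read-modify-write disable, as in the DSB engine teardown above. */
static void dsb_disable(struct fake_dev *dev)
{
	uint32_t ctrl = de_read(dev, DSB_CTRL);

	ctrl &= ~DSB_ENABLE;
	de_write(dev, DSB_CTRL, ctrl);
}
```

Passing the device explicitly is what lets `drm_WARN_ON` and `drm_dbg_kms` attribute messages to a specific device on multi-GPU systems, which is the point of this conversion series.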

diff --git a/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c b/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c

@@ -45,7 +45,7 @@
 static u32 dcs_get_backlight(struct intel_connector *connector)
 {
-	struct intel_encoder *encoder = connector->encoder;
+	struct intel_encoder *encoder = intel_attached_encoder(connector);
 	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
 	struct mipi_dsi_device *dsi_device;
 	u8 data = 0;
@@ -160,13 +160,13 @@ int intel_dsi_dcs_init_backlight_funcs(struct intel_connector *intel_connector)
 {
 	struct drm_device *dev = intel_connector->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct intel_encoder *encoder = intel_connector->encoder;
+	struct intel_encoder *encoder = intel_attached_encoder(intel_connector);
 	struct intel_panel *panel = &intel_connector->panel;
 	if (dev_priv->vbt.backlight.type != INTEL_BACKLIGHT_DSI_DCS)
 		return -ENODEV;
-	if (WARN_ON(encoder->type != INTEL_OUTPUT_DSI))
+	if (drm_WARN_ON(dev, encoder->type != INTEL_OUTPUT_DSI))
 		return -EINVAL;
 	panel->backlight.setup = dcs_setup_backlight;

diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c

@@ -136,7 +136,7 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
 	u16 len;
 	enum port port;
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&dev_priv->drm, "\n");
 	flags = *data++;
 	type = *data++;
@@ -158,7 +158,8 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
 	dsi_device = intel_dsi->dsi_hosts[port]->device;
 	if (!dsi_device) {
-		DRM_DEBUG_KMS("no dsi device for port %c\n", port_name(port));
+		drm_dbg_kms(&dev_priv->drm, "no dsi device for port %c\n",
+			    port_name(port));
 		goto out;
 	}
@@ -182,7 +183,8 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
 	case MIPI_DSI_GENERIC_READ_REQUEST_0_PARAM:
 	case MIPI_DSI_GENERIC_READ_REQUEST_1_PARAM:
 	case MIPI_DSI_GENERIC_READ_REQUEST_2_PARAM:
-		DRM_DEBUG_DRIVER("Generic Read not yet implemented or used\n");
+		drm_dbg(&dev_priv->drm,
+			"Generic Read not yet implemented or used\n");
 		break;
 	case MIPI_DSI_GENERIC_LONG_WRITE:
 		mipi_dsi_generic_write(dsi_device, data, len);
@@ -194,7 +196,8 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
 		mipi_dsi_dcs_write_buffer(dsi_device, data, 2);
 		break;
 	case MIPI_DSI_DCS_READ:
-		DRM_DEBUG_DRIVER("DCS Read not yet implemented or used\n");
+		drm_dbg(&dev_priv->drm,
+			"DCS Read not yet implemented or used\n");
 		break;
 	case MIPI_DSI_DCS_LONG_WRITE:
 		mipi_dsi_dcs_write_buffer(dsi_device, data, len);
@@ -212,9 +215,10 @@ out:
 static const u8 *mipi_exec_delay(struct intel_dsi *intel_dsi, const u8 *data)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
 	u32 delay = *((const u32 *) data);
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&i915->drm, "\n");
 	usleep_range(delay, delay + 10);
 	data += 4;
@@ -231,7 +235,8 @@ static void vlv_exec_gpio(struct drm_i915_private *dev_priv,
 	u8 port;
 	if (gpio_index >= ARRAY_SIZE(vlv_gpio_table)) {
-		DRM_DEBUG_KMS("unknown gpio index %u\n", gpio_index);
+		drm_dbg_kms(&dev_priv->drm, "unknown gpio index %u\n",
+			    gpio_index);
 		return;
 	}
@@ -244,10 +249,11 @@ static void vlv_exec_gpio(struct drm_i915_private *dev_priv,
 		if (gpio_source == 0) {
 			port = IOSF_PORT_GPIO_NC;
 		} else if (gpio_source == 1) {
-			DRM_DEBUG_KMS("SC gpio not supported\n");
+			drm_dbg_kms(&dev_priv->drm, "SC gpio not supported\n");
 			return;
 		} else {
-			DRM_DEBUG_KMS("unknown gpio source %u\n", gpio_source);
+			drm_dbg_kms(&dev_priv->drm,
+				    "unknown gpio source %u\n", gpio_source);
 			return;
 		}
 	}
@@ -291,13 +297,15 @@ static void chv_exec_gpio(struct drm_i915_private *dev_priv,
 	} else {
 		/* XXX: The spec is unclear about CHV GPIO on seq v2 */
 		if (gpio_source != 0) {
-			DRM_DEBUG_KMS("unknown gpio source %u\n", gpio_source);
+			drm_dbg_kms(&dev_priv->drm,
+				    "unknown gpio source %u\n", gpio_source);
 			return;
 		}
 		if (gpio_index >= CHV_GPIO_IDX_START_E) {
-			DRM_DEBUG_KMS("invalid gpio index %u for GPIO N\n",
-				      gpio_index);
+			drm_dbg_kms(&dev_priv->drm,
+				    "invalid gpio index %u for GPIO N\n",
+				    gpio_index);
 			return;
 		}
@@ -332,8 +340,9 @@ static void bxt_exec_gpio(struct drm_i915_private *dev_priv,
 				   GPIOD_OUT_HIGH);
 	if (IS_ERR_OR_NULL(gpio_desc)) {
-		DRM_ERROR("GPIO index %u request failed (%ld)\n",
-			  gpio_index, PTR_ERR(gpio_desc));
+		drm_err(&dev_priv->drm,
+			"GPIO index %u request failed (%ld)\n",
+			gpio_index, PTR_ERR(gpio_desc));
 		return;
 	}
@@ -346,7 +355,7 @@ static void bxt_exec_gpio(struct drm_i915_private *dev_priv,
 static void icl_exec_gpio(struct drm_i915_private *dev_priv,
 			  u8 gpio_source, u8 gpio_index, bool value)
 {
-	DRM_DEBUG_KMS("Skipping ICL GPIO element execution\n");
+	drm_dbg_kms(&dev_priv->drm, "Skipping ICL GPIO element execution\n");
 }
 static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
@@ -356,7 +365,7 @@ static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
 	u8 gpio_source, gpio_index = 0, gpio_number;
 	bool value;
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&dev_priv->drm, "\n");
 	if (dev_priv->vbt.dsi.seq_version >= 3)
 		gpio_index = *data++;
@@ -494,13 +503,16 @@ err_bus:
 static const u8 *mipi_exec_spi(struct intel_dsi *intel_dsi, const u8 *data)
 {
-	DRM_DEBUG_KMS("Skipping SPI element execution\n");
+	struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
+
+	drm_dbg_kms(&i915->drm, "Skipping SPI element execution\n");
 	return data + *(data + 5) + 6;
 }
 static const u8 *mipi_exec_pmic(struct intel_dsi *intel_dsi, const u8 *data)
 {
+	struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
 #ifdef CONFIG_PMIC_OPREGION
 	u32 value, mask, reg_address;
 	u16 i2c_address;
@@ -516,9 +528,10 @@ static const u8 *mipi_exec_pmic(struct intel_dsi *intel_dsi, const u8 *data)
 					       reg_address,
 					       value, mask);
 	if (ret)
-		DRM_ERROR("%s failed, error: %d\n", __func__, ret);
+		drm_err(&i915->drm, "%s failed, error: %d\n", __func__, ret);
 #else
-	DRM_ERROR("Your hardware requires CONFIG_PMIC_OPREGION and it is not set\n");
+	drm_err(&i915->drm,
+		"Your hardware requires CONFIG_PMIC_OPREGION and it is not set\n");
 #endif
 	return data + 15;
@@ -570,17 +583,18 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
 	const u8 *data;
 	fn_mipi_elem_exec mipi_elem_exec;
-	if (WARN_ON(seq_id >= ARRAY_SIZE(dev_priv->vbt.dsi.sequence)))
+	if (drm_WARN_ON(&dev_priv->drm,
+			seq_id >= ARRAY_SIZE(dev_priv->vbt.dsi.sequence)))
 		return;
 	data = dev_priv->vbt.dsi.sequence[seq_id];
 	if (!data)
 		return;
-	WARN_ON(*data != seq_id);
+	drm_WARN_ON(&dev_priv->drm, *data != seq_id);
-	DRM_DEBUG_KMS("Starting MIPI sequence %d - %s\n",
+	drm_dbg_kms(&dev_priv->drm, "Starting MIPI sequence %d - %s\n",
 		    seq_id, sequence_name(seq_id));
 	/* Skip Sequence Byte. */
 	data++;
@@ -612,18 +626,21 @@ static void intel_dsi_vbt_exec(struct intel_dsi *intel_dsi,
 			/* Consistency check if we have size. */
 			if (operation_size && data != next) {
-				DRM_ERROR("Inconsistent operation size\n");
+				drm_err(&dev_priv->drm,
+					"Inconsistent operation size\n");
 				return;
 			}
 		} else if (operation_size) {
 			/* We have size, skip. */
-			DRM_DEBUG_KMS("Unsupported MIPI operation byte %u\n",
-				      operation_byte);
+			drm_dbg_kms(&dev_priv->drm,
+				    "Unsupported MIPI operation byte %u\n",
+				    operation_byte);
 			data += operation_size;
 		} else {
 			/* No size, can't skip without parsing. */
-			DRM_ERROR("Unsupported MIPI operation byte %u\n",
-				  operation_byte);
+			drm_err(&dev_priv->drm,
+				"Unsupported MIPI operation byte %u\n",
+				operation_byte);
 			return;
 		}
 	}
@@ -658,40 +675,54 @@ void intel_dsi_msleep(struct intel_dsi *intel_dsi, int msec)
 void intel_dsi_log_params(struct intel_dsi *intel_dsi)
 {
-	DRM_DEBUG_KMS("Pclk %d\n", intel_dsi->pclk);
-	DRM_DEBUG_KMS("Pixel overlap %d\n", intel_dsi->pixel_overlap);
-	DRM_DEBUG_KMS("Lane count %d\n", intel_dsi->lane_count);
-	DRM_DEBUG_KMS("DPHY param reg 0x%x\n", intel_dsi->dphy_reg);
-	DRM_DEBUG_KMS("Video mode format %s\n",
-		      intel_dsi->video_mode_format == VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE ?
-		      "non-burst with sync pulse" :
-		      intel_dsi->video_mode_format == VIDEO_MODE_NON_BURST_WITH_SYNC_EVENTS ?
-		      "non-burst with sync events" :
-		      intel_dsi->video_mode_format == VIDEO_MODE_BURST ?
-		      "burst" : "<unknown>");
-	DRM_DEBUG_KMS("Burst mode ratio %d\n", intel_dsi->burst_mode_ratio);
-	DRM_DEBUG_KMS("Reset timer %d\n", intel_dsi->rst_timer_val);
-	DRM_DEBUG_KMS("Eot %s\n", enableddisabled(intel_dsi->eotp_pkt));
-	DRM_DEBUG_KMS("Clockstop %s\n", enableddisabled(!intel_dsi->clock_stop));
-	DRM_DEBUG_KMS("Mode %s\n", intel_dsi->operation_mode ? "command" : "video");
+	struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
+
+	drm_dbg_kms(&i915->drm, "Pclk %d\n", intel_dsi->pclk);
+	drm_dbg_kms(&i915->drm, "Pixel overlap %d\n",
+		    intel_dsi->pixel_overlap);
+	drm_dbg_kms(&i915->drm, "Lane count %d\n", intel_dsi->lane_count);
+	drm_dbg_kms(&i915->drm, "DPHY param reg 0x%x\n", intel_dsi->dphy_reg);
+	drm_dbg_kms(&i915->drm, "Video mode format %s\n",
+		    intel_dsi->video_mode_format == VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE ?
+		    "non-burst with sync pulse" :
+		    intel_dsi->video_mode_format == VIDEO_MODE_NON_BURST_WITH_SYNC_EVENTS ?
+		    "non-burst with sync events" :
+		    intel_dsi->video_mode_format == VIDEO_MODE_BURST ?
+		    "burst" : "<unknown>");
+	drm_dbg_kms(&i915->drm, "Burst mode ratio %d\n",
+		    intel_dsi->burst_mode_ratio);
+	drm_dbg_kms(&i915->drm, "Reset timer %d\n", intel_dsi->rst_timer_val);
+	drm_dbg_kms(&i915->drm, "Eot %s\n",
+		    enableddisabled(intel_dsi->eotp_pkt));
+	drm_dbg_kms(&i915->drm, "Clockstop %s\n",
+		    enableddisabled(!intel_dsi->clock_stop));
+	drm_dbg_kms(&i915->drm, "Mode %s\n",
+		    intel_dsi->operation_mode ? "command" : "video");
 	if (intel_dsi->dual_link == DSI_DUAL_LINK_FRONT_BACK)
-		DRM_DEBUG_KMS("Dual link: DSI_DUAL_LINK_FRONT_BACK\n");
+		drm_dbg_kms(&i915->drm,
+			    "Dual link: DSI_DUAL_LINK_FRONT_BACK\n");
 	else if (intel_dsi->dual_link == DSI_DUAL_LINK_PIXEL_ALT)
-		DRM_DEBUG_KMS("Dual link: DSI_DUAL_LINK_PIXEL_ALT\n");
+		drm_dbg_kms(&i915->drm,
+			    "Dual link: DSI_DUAL_LINK_PIXEL_ALT\n");
 	else
-		DRM_DEBUG_KMS("Dual link: NONE\n");
-	DRM_DEBUG_KMS("Pixel Format %d\n", intel_dsi->pixel_format);
-	DRM_DEBUG_KMS("TLPX %d\n", intel_dsi->escape_clk_div);
-	DRM_DEBUG_KMS("LP RX Timeout 0x%x\n", intel_dsi->lp_rx_timeout);
-	DRM_DEBUG_KMS("Turnaround Timeout 0x%x\n", intel_dsi->turn_arnd_val);
-	DRM_DEBUG_KMS("Init Count 0x%x\n", intel_dsi->init_count);
-	DRM_DEBUG_KMS("HS to LP Count 0x%x\n", intel_dsi->hs_to_lp_count);
-	DRM_DEBUG_KMS("LP Byte Clock %d\n", intel_dsi->lp_byte_clk);
-	DRM_DEBUG_KMS("DBI BW Timer 0x%x\n", intel_dsi->bw_timer);
-	DRM_DEBUG_KMS("LP to HS Clock Count 0x%x\n", intel_dsi->clk_lp_to_hs_count);
-	DRM_DEBUG_KMS("HS to LP Clock Count 0x%x\n", intel_dsi->clk_hs_to_lp_count);
-	DRM_DEBUG_KMS("BTA %s\n",
-		      enableddisabled(!(intel_dsi->video_frmt_cfg_bits & DISABLE_VIDEO_BTA)));
+		drm_dbg_kms(&i915->drm, "Dual link: NONE\n");
+	drm_dbg_kms(&i915->drm, "Pixel Format %d\n", intel_dsi->pixel_format);
+	drm_dbg_kms(&i915->drm, "TLPX %d\n", intel_dsi->escape_clk_div);
+	drm_dbg_kms(&i915->drm, "LP RX Timeout 0x%x\n",
+		    intel_dsi->lp_rx_timeout);
+	drm_dbg_kms(&i915->drm, "Turnaround Timeout 0x%x\n",
+		    intel_dsi->turn_arnd_val);
+	drm_dbg_kms(&i915->drm, "Init Count 0x%x\n", intel_dsi->init_count);
+	drm_dbg_kms(&i915->drm, "HS to LP Count 0x%x\n",
+		    intel_dsi->hs_to_lp_count);
+	drm_dbg_kms(&i915->drm, "LP Byte Clock %d\n", intel_dsi->lp_byte_clk);
+	drm_dbg_kms(&i915->drm, "DBI BW Timer 0x%x\n", intel_dsi->bw_timer);
+	drm_dbg_kms(&i915->drm, "LP to HS Clock Count 0x%x\n",
+		    intel_dsi->clk_lp_to_hs_count);
+	drm_dbg_kms(&i915->drm, "HS to LP Clock Count 0x%x\n",
+		    intel_dsi->clk_hs_to_lp_count);
+	drm_dbg_kms(&i915->drm, "BTA %s\n",
+		    enableddisabled(!(intel_dsi->video_frmt_cfg_bits & DISABLE_VIDEO_BTA)));
 }
 bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
@@ -704,7 +735,7 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
 	u16 burst_mode_ratio;
 	enum port port;
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&dev_priv->drm, "\n");
 	intel_dsi->eotp_pkt = mipi_config->eot_pkt_disabled ? 0 : 1;
 	intel_dsi->clock_stop = mipi_config->enable_clk_stop ? 1 : 0;
@@ -763,7 +794,8 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
 				mipi_config->target_burst_mode_freq = bitrate;
 			if (mipi_config->target_burst_mode_freq < bitrate) {
-				DRM_ERROR("Burst mode freq is less than computed\n");
+				drm_err(&dev_priv->drm,
+					"Burst mode freq is less than computed\n");
 				return false;
 			}
@@ -773,7 +805,8 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
 			intel_dsi->pclk = DIV_ROUND_UP(intel_dsi->pclk * burst_mode_ratio, 100);
 		} else {
-			DRM_ERROR("Burst mode target is not set\n");
+			drm_err(&dev_priv->drm,
+				"Burst mode target is not set\n");
 			return false;
 		}
 	} else
@@ -856,17 +889,20 @@ void intel_dsi_vbt_gpio_init(struct intel_dsi *intel_dsi, bool panel_is_on)
 		ret = pinctrl_register_mappings(soc_pwm_pinctrl_map,
 						ARRAY_SIZE(soc_pwm_pinctrl_map));
 		if (ret)
-			DRM_ERROR("Failed to register pwm0 pinmux mapping\n");
+			drm_err(&dev_priv->drm,
+				"Failed to register pwm0 pinmux mapping\n");
 		pinctrl = devm_pinctrl_get_select(dev->dev, "soc_pwm0");
 		if (IS_ERR(pinctrl))
-			DRM_ERROR("Failed to set pinmux to PWM\n");
+			drm_err(&dev_priv->drm,
+				"Failed to set pinmux to PWM\n");
 	}
 	if (want_panel_gpio) {
 		intel_dsi->gpio_panel = gpiod_get(dev->dev, "panel", flags);
 		if (IS_ERR(intel_dsi->gpio_panel)) {
-			DRM_ERROR("Failed to own gpio for panel control\n");
+			drm_err(&dev_priv->drm,
+				"Failed to own gpio for panel control\n");
 			intel_dsi->gpio_panel = NULL;
 		}
 	}
@@ -875,7 +911,8 @@ void intel_dsi_vbt_gpio_init(struct intel_dsi *intel_dsi, bool panel_is_on)
 		intel_dsi->gpio_backlight =
 			gpiod_get(dev->dev, "backlight", flags);
 		if (IS_ERR(intel_dsi->gpio_backlight)) {
-			DRM_ERROR("Failed to own gpio for backlight control\n");
+			drm_err(&dev_priv->drm,
+				"Failed to own gpio for backlight control\n");
 			intel_dsi->gpio_backlight = NULL;
 		}
 	}
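The `intel_dsi_vbt_exec` hunks above preserve the sequence parser's skip logic: when an element's operation byte is unknown but a size is present, the parser jumps `operation_size` bytes forward; without a size it must bail out. A standalone sketch of that size-prefixed walk (the element layout here is simplified and hypothetical, not the real VBT format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Count elements in a size-prefixed byte stream: each element is an
 * operation byte, a one-byte payload size, then "size" payload bytes.
 * An operation byte of 0 marks end-of-sequence in this sketch.
 */
static size_t count_elements(const uint8_t *data, size_t len)
{
	size_t n = 0, pos = 0;

	while (pos + 2 <= len) {
		uint8_t op = data[pos];
		uint8_t size = data[pos + 1];

		if (op == 0)
			break;          /* end-of-sequence marker */
		if (pos + 2 + size > len)
			break;          /* inconsistent operation size */
		pos += 2 + size;        /* skip without parsing the payload */
		n++;
	}
	return n;
}
```

The size prefix is exactly what lets the real driver log "Unsupported MIPI operation byte" and keep going, instead of aborting the whole sequence, when the size is absent.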

diff --git a/drivers/gpu/drm/i915/display/intel_dvo.c b/drivers/gpu/drm/i915/display/intel_dvo.c

@@ -44,6 +44,7 @@
 #define INTEL_DVO_CHIP_LVDS 1
 #define INTEL_DVO_CHIP_TMDS 2
 #define INTEL_DVO_CHIP_TVOUT 4
+#define INTEL_DVO_CHIP_LVDS_NO_FIXED 5
 #define SIL164_ADDR 0x38
 #define CH7xxx_ADDR 0x76
@@ -101,13 +102,13 @@ static const struct intel_dvo_device intel_dvo_devices[] = {
 		.dev_ops = &ch7017_ops,
 	},
 	{
-		.type = INTEL_DVO_CHIP_TMDS,
+		.type = INTEL_DVO_CHIP_LVDS_NO_FIXED,
 		.name = "ns2501",
 		.dvo_reg = DVOB,
 		.dvo_srcdim_reg = DVOB_SRCDIM,
 		.slave_addr = NS2501_ADDR,
 		.dev_ops = &ns2501_ops,
-	}
+	},
 };
 struct intel_dvo {
@@ -137,7 +138,7 @@ static bool intel_dvo_connector_get_hw_state(struct intel_connector *connector)
 	struct intel_dvo *intel_dvo = intel_attached_dvo(connector);
 	u32 tmp;
-	tmp = I915_READ(intel_dvo->dev.dvo_reg);
+	tmp = intel_de_read(dev_priv, intel_dvo->dev.dvo_reg);
 	if (!(tmp & DVO_ENABLE))
 		return false;
@@ -152,7 +153,7 @@ static bool intel_dvo_get_hw_state(struct intel_encoder *encoder,
 	struct intel_dvo *intel_dvo = enc_to_dvo(encoder);
 	u32 tmp;
-	tmp = I915_READ(intel_dvo->dev.dvo_reg);
+	tmp = intel_de_read(dev_priv, intel_dvo->dev.dvo_reg);
 	*pipe = (tmp & DVO_PIPE_SEL_MASK) >> DVO_PIPE_SEL_SHIFT;
@@ -168,7 +169,7 @@ static void intel_dvo_get_config(struct intel_encoder *encoder,
 	pipe_config->output_types |= BIT(INTEL_OUTPUT_DVO);
-	tmp = I915_READ(intel_dvo->dev.dvo_reg);
+	tmp = intel_de_read(dev_priv, intel_dvo->dev.dvo_reg);
 	if (tmp & DVO_HSYNC_ACTIVE_HIGH)
 		flags |= DRM_MODE_FLAG_PHSYNC;
 	else
@@ -190,11 +191,11 @@ static void intel_disable_dvo(struct intel_encoder *encoder,
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_dvo *intel_dvo = enc_to_dvo(encoder);
 	i915_reg_t dvo_reg = intel_dvo->dev.dvo_reg;
-	u32 temp = I915_READ(dvo_reg);
+	u32 temp = intel_de_read(dev_priv, dvo_reg);
 	intel_dvo->dev.dev_ops->dpms(&intel_dvo->dev, false);
-	I915_WRITE(dvo_reg, temp & ~DVO_ENABLE);
-	I915_READ(dvo_reg);
+	intel_de_write(dev_priv, dvo_reg, temp & ~DVO_ENABLE);
+	intel_de_read(dev_priv, dvo_reg);
 }
 static void intel_enable_dvo(struct intel_encoder *encoder,
@@ -204,14 +205,14 @@ static void intel_enable_dvo(struct intel_encoder *encoder,
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_dvo *intel_dvo = enc_to_dvo(encoder);
 	i915_reg_t dvo_reg = intel_dvo->dev.dvo_reg;
-	u32 temp = I915_READ(dvo_reg);
+	u32 temp = intel_de_read(dev_priv, dvo_reg);
 	intel_dvo->dev.dev_ops->mode_set(&intel_dvo->dev,
 					 &pipe_config->hw.mode,
 					 &pipe_config->hw.adjusted_mode);
-	I915_WRITE(dvo_reg, temp | DVO_ENABLE);
-	I915_READ(dvo_reg);
+	intel_de_write(dev_priv, dvo_reg, temp | DVO_ENABLE);
+	intel_de_read(dev_priv, dvo_reg);
 	intel_dvo->dev.dev_ops->dpms(&intel_dvo->dev, true);
 }
@@ -286,7 +287,7 @@ static void intel_dvo_pre_enable(struct intel_encoder *encoder,
 	i915_reg_t dvo_srcdim_reg = intel_dvo->dev.dvo_srcdim_reg;
 	/* Save the data order, since I don't know what it should be set to. */
-	dvo_val = I915_READ(dvo_reg) &
+	dvo_val = intel_de_read(dev_priv, dvo_reg) &
 		  (DVO_PRESERVE_MASK | DVO_DATA_ORDER_GBRG);
 	dvo_val |= DVO_DATA_ORDER_FP | DVO_BORDER_ENABLE |
 		   DVO_BLANK_ACTIVE_HIGH;
@@ -301,11 +302,10 @@ static void intel_dvo_pre_enable(struct intel_encoder *encoder,
 	/*I915_WRITE(DVOB_SRCDIM,
 	  (adjusted_mode->crtc_hdisplay << DVO_SRCDIM_HORIZONTAL_SHIFT) |
 	  (adjusted_mode->crtc_vdisplay << DVO_SRCDIM_VERTICAL_SHIFT));*/
-	I915_WRITE(dvo_srcdim_reg,
-		   (adjusted_mode->crtc_hdisplay << DVO_SRCDIM_HORIZONTAL_SHIFT) |
-		   (adjusted_mode->crtc_vdisplay << DVO_SRCDIM_VERTICAL_SHIFT));
+	intel_de_write(dev_priv, dvo_srcdim_reg,
+		       (adjusted_mode->crtc_hdisplay << DVO_SRCDIM_HORIZONTAL_SHIFT) | (adjusted_mode->crtc_vdisplay << DVO_SRCDIM_VERTICAL_SHIFT));
 	/*I915_WRITE(DVOB, dvo_val);*/
-	I915_WRITE(dvo_reg, dvo_val);
+	intel_de_write(dev_priv, dvo_reg, dvo_val);
 }
 static enum drm_connector_status
@@ -481,15 +481,16 @@ void intel_dvo_init(struct drm_i915_private *dev_priv)
 	 * initialize the device.
 	 */
 	for_each_pipe(dev_priv, pipe) {
-		dpll[pipe] = I915_READ(DPLL(pipe));
-		I915_WRITE(DPLL(pipe), dpll[pipe] | DPLL_DVO_2X_MODE);
+		dpll[pipe] = intel_de_read(dev_priv, DPLL(pipe));
+		intel_de_write(dev_priv, DPLL(pipe),
+			       dpll[pipe] | DPLL_DVO_2X_MODE);
 	}
 	dvoinit = dvo->dev_ops->init(&intel_dvo->dev, i2c);
 	/* restore the DVO 2x clock state to original */
 	for_each_pipe(dev_priv, pipe) {
-		I915_WRITE(DPLL(pipe), dpll[pipe]);
+		intel_de_write(dev_priv, DPLL(pipe), dpll[pipe]);
 	}
 	intel_gmbus_force_bit(i2c, false);
@@ -507,17 +508,21 @@ void intel_dvo_init(struct drm_i915_private *dev_priv)
 	intel_encoder->port = port;
 	intel_encoder->pipe_mask = ~0;
-	switch (dvo->type) {
-	case INTEL_DVO_CHIP_TMDS:
+	if (dvo->type != INTEL_DVO_CHIP_LVDS)
 		intel_encoder->cloneable = (1 << INTEL_OUTPUT_ANALOG) |
 			(1 << INTEL_OUTPUT_DVO);
+	switch (dvo->type) {
+	case INTEL_DVO_CHIP_TMDS:
+		intel_connector->polled = DRM_CONNECTOR_POLL_CONNECT |
+			DRM_CONNECTOR_POLL_DISCONNECT;
 		drm_connector_init(&dev_priv->drm, connector,
 				   &intel_dvo_connector_funcs,
 				   DRM_MODE_CONNECTOR_DVII);
 		encoder_type = DRM_MODE_ENCODER_TMDS;
 		break;
+	case INTEL_DVO_CHIP_LVDS_NO_FIXED:
 	case INTEL_DVO_CHIP_LVDS:
-		intel_encoder->cloneable = 0;
 		drm_connector_init(&dev_priv->drm, connector,
 				   &intel_dvo_connector_funcs,
 				   DRM_MODE_CONNECTOR_LVDS);
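`intel_dvo_init` above uses a save/modify/restore bracket: every pipe's DPLL value is saved, the DVO 2x clock bit is forced on while `dev_ops->init()` probes the chip, then each register is restored. A minimal sketch of that bracket pattern, with a hypothetical flat register array standing in for the per-pipe DPLL registers (not the real i915 layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_PIPES   3
#define DVO_2X_MODE (1u << 30)

/* Hypothetical per-pipe DPLL register file. */
static uint32_t dpll_reg[NUM_PIPES];

/*
 * Save/modify/restore bracket: force the DVO 2x clock on for every pipe
 * while the probe callback runs, then restore the original values, so
 * the probe leaves no lasting side effects on clocking state.
 */
static void probe_with_dvo_2x(void (*probe)(void))
{
	uint32_t saved[NUM_PIPES];
	int pipe;

	for (pipe = 0; pipe < NUM_PIPES; pipe++) {
		saved[pipe] = dpll_reg[pipe];
		dpll_reg[pipe] = saved[pipe] | DVO_2X_MODE;
	}

	if (probe)
		probe();

	for (pipe = 0; pipe < NUM_PIPES; pipe++)
		dpll_reg[pipe] = saved[pipe];
}
```

The bracket guarantees the transient clock change is invisible to later mode-setting, whatever the probe does.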

diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c

@@ -41,15 +41,11 @@
 #include <drm/drm_fourcc.h>
 #include "i915_drv.h"
+#include "i915_trace.h"
 #include "intel_display_types.h"
 #include "intel_fbc.h"
 #include "intel_frontbuffer.h"
-static inline bool fbc_supported(struct drm_i915_private *dev_priv)
-{
-	return HAS_FBC(dev_priv);
-}
 /*
  * In some platforms where the CRTC's x:0/y:0 coordinates doesn't match the
  * frontbuffer's x:0/y:0 coordinates we lie to the hardware about the plane's
@@ -97,12 +93,12 @@ static void i8xx_fbc_deactivate(struct drm_i915_private *dev_priv)
 	u32 fbc_ctl;
 	/* Disable compression */
-	fbc_ctl = I915_READ(FBC_CONTROL);
+	fbc_ctl = intel_de_read(dev_priv, FBC_CONTROL);
 	if ((fbc_ctl & FBC_CTL_EN) == 0)
 		return;
 	fbc_ctl &= ~FBC_CTL_EN;
-	I915_WRITE(FBC_CONTROL, fbc_ctl);
+	intel_de_write(dev_priv, FBC_CONTROL, fbc_ctl);
 	/* Wait for compressing bit to clear */
 	if (intel_de_wait_for_clear(dev_priv, FBC_STATUS,
@@ -132,7 +128,7 @@ static void i8xx_fbc_activate(struct drm_i915_private *dev_priv)
 	/* Clear old tags */
 	for (i = 0; i < (FBC_LL_SIZE / 32) + 1; i++)
-		I915_WRITE(FBC_TAG(i), 0);
+		intel_de_write(dev_priv, FBC_TAG(i), 0);
 	if (IS_GEN(dev_priv, 4)) {
 		u32 fbc_ctl2;
@@ -142,12 +138,13 @@ static void i8xx_fbc_activate(struct drm_i915_private *dev_priv)
 		fbc_ctl2 |= FBC_CTL_PLANE(params->crtc.i9xx_plane);
 		if (params->fence_id >= 0)
 			fbc_ctl2 |= FBC_CTL_CPU_FENCE;
-		I915_WRITE(FBC_CONTROL2, fbc_ctl2);
-		I915_WRITE(FBC_FENCE_OFF, params->crtc.fence_y_offset);
+		intel_de_write(dev_priv, FBC_CONTROL2, fbc_ctl2);
+		intel_de_write(dev_priv, FBC_FENCE_OFF,
+			       params->crtc.fence_y_offset);
 	}
 	/* enable it... */
-	fbc_ctl = I915_READ(FBC_CONTROL);
+	fbc_ctl = intel_de_read(dev_priv, FBC_CONTROL);
 	fbc_ctl &= 0x3fff << FBC_CTL_INTERVAL_SHIFT;
 	fbc_ctl |= FBC_CTL_EN | FBC_CTL_PERIODIC;
 	if (IS_I945GM(dev_priv))
@@ -155,12 +152,12 @@ static void i8xx_fbc_activate(struct drm_i915_private *dev_priv)
 	fbc_ctl |= (cfb_pitch & 0xff) << FBC_CTL_STRIDE_SHIFT;
 	if (params->fence_id >= 0)
 		fbc_ctl |= params->fence_id;
-	I915_WRITE(FBC_CONTROL, fbc_ctl);
+	intel_de_write(dev_priv, FBC_CONTROL, fbc_ctl);
 }
 static bool i8xx_fbc_is_active(struct drm_i915_private *dev_priv)
 {
-	return I915_READ(FBC_CONTROL) & FBC_CTL_EN;
+	return intel_de_read(dev_priv, FBC_CONTROL) & FBC_CTL_EN;
 }
 static void g4x_fbc_activate(struct drm_i915_private *dev_priv)
@@ -176,13 +173,14 @@ static void g4x_fbc_activate(struct drm_i915_private *dev_priv)
 	if (params->fence_id >= 0) {
 		dpfc_ctl |= DPFC_CTL_FENCE_EN | params->fence_id;
-		I915_WRITE(DPFC_FENCE_YOFF, params->crtc.fence_y_offset);
+		intel_de_write(dev_priv, DPFC_FENCE_YOFF,
+			       params->crtc.fence_y_offset);
 	} else {
-		I915_WRITE(DPFC_FENCE_YOFF, 0);
+		intel_de_write(dev_priv, DPFC_FENCE_YOFF, 0);
 	}
 	/* enable it... */
-	I915_WRITE(DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
+	intel_de_write(dev_priv, DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
 }
 static void g4x_fbc_deactivate(struct drm_i915_private *dev_priv)
@@ -190,23 +188,27 @@ static void g4x_fbc_deactivate(struct drm_i915_private *dev_priv)
 	u32 dpfc_ctl;
 	/* Disable compression */
-	dpfc_ctl = I915_READ(DPFC_CONTROL);
+	dpfc_ctl = intel_de_read(dev_priv, DPFC_CONTROL);
 	if (dpfc_ctl & DPFC_CTL_EN) {
 		dpfc_ctl &= ~DPFC_CTL_EN;
-		I915_WRITE(DPFC_CONTROL, dpfc_ctl);
+		intel_de_write(dev_priv, DPFC_CONTROL, dpfc_ctl);
 	}
 }
 static bool g4x_fbc_is_active(struct drm_i915_private *dev_priv)
 {
-	return I915_READ(DPFC_CONTROL) & DPFC_CTL_EN;
+	return intel_de_read(dev_priv, DPFC_CONTROL) & DPFC_CTL_EN;
 }
 /* This function forces a CFB recompression through the nuke operation. */
 static void intel_fbc_recompress(struct drm_i915_private *dev_priv)
 {
-	I915_WRITE(MSG_FBC_REND_STATE, FBC_REND_NUKE);
-	POSTING_READ(MSG_FBC_REND_STATE);
+	struct intel_fbc *fbc = &dev_priv->fbc;
+
+	trace_intel_fbc_nuke(fbc->crtc);
+
+	intel_de_write(dev_priv, MSG_FBC_REND_STATE, FBC_REND_NUKE);
+	intel_de_posting_read(dev_priv, MSG_FBC_REND_STATE);
 }
 static void ilk_fbc_activate(struct drm_i915_private *dev_priv)
@@ -237,22 +239,22 @@ static void ilk_fbc_activate(struct drm_i915_private *dev_priv)
 		if (IS_GEN(dev_priv, 5))
 			dpfc_ctl |= params->fence_id;
 		if (IS_GEN(dev_priv, 6)) {
-			I915_WRITE(SNB_DPFC_CTL_SA,
-				   SNB_CPU_FENCE_ENABLE |
-				   params->fence_id);
-			I915_WRITE(DPFC_CPU_FENCE_OFFSET,
-				   params->crtc.fence_y_offset);
+			intel_de_write(dev_priv, SNB_DPFC_CTL_SA,
+				       SNB_CPU_FENCE_ENABLE | params->fence_id);
+			intel_de_write(dev_priv, DPFC_CPU_FENCE_OFFSET,
+				       params->crtc.fence_y_offset);
 		}
 	} else {
 		if (IS_GEN(dev_priv, 6)) {
-			I915_WRITE(SNB_DPFC_CTL_SA, 0);
-			I915_WRITE(DPFC_CPU_FENCE_OFFSET, 0);
+			intel_de_write(dev_priv, SNB_DPFC_CTL_SA, 0);
+			intel_de_write(dev_priv, DPFC_CPU_FENCE_OFFSET, 0);
 		}
 	}
-	I915_WRITE(ILK_DPFC_FENCE_YOFF, params->crtc.fence_y_offset);
+	intel_de_write(dev_priv, ILK_DPFC_FENCE_YOFF,
+		       params->crtc.fence_y_offset);
 	/* enable it... */
-	I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
+	intel_de_write(dev_priv, ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
 	intel_fbc_recompress(dev_priv);
 }
@@ -262,16 +264,16 @@ static void ilk_fbc_deactivate(struct drm_i915_private *dev_priv)
 	u32 dpfc_ctl;
 	/* Disable compression */
-	dpfc_ctl = I915_READ(ILK_DPFC_CONTROL);
+	dpfc_ctl = intel_de_read(dev_priv, ILK_DPFC_CONTROL);
 	if (dpfc_ctl & DPFC_CTL_EN) {
 		dpfc_ctl &= ~DPFC_CTL_EN;
-		I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl);
+		intel_de_write(dev_priv, ILK_DPFC_CONTROL, dpfc_ctl);
 	}
 }
 static bool ilk_fbc_is_active(struct drm_i915_private *dev_priv)
 {
-	return I915_READ(ILK_DPFC_CONTROL) & DPFC_CTL_EN;
+	return intel_de_read(dev_priv, ILK_DPFC_CONTROL) & DPFC_CTL_EN;
 }
 static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
@@ -282,14 +284,14 @@ static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
 	/* Display WA #0529: skl, kbl, bxt. */
 	if (IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv)) {
-		u32 val = I915_READ(CHICKEN_MISC_4);
+		u32 val = intel_de_read(dev_priv, CHICKEN_MISC_4);
 		val &= ~(FBC_STRIDE_OVERRIDE | FBC_STRIDE_MASK);
 		if (params->gen9_wa_cfb_stride)
 			val |= FBC_STRIDE_OVERRIDE | params->gen9_wa_cfb_stride;
-		I915_WRITE(CHICKEN_MISC_4, val);
+		intel_de_write(dev_priv, CHICKEN_MISC_4, val);
 	}
 	dpfc_ctl = 0;
@@ -314,13 +316,13 @@ static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
 	if (params->fence_id >= 0) {
 		dpfc_ctl |= IVB_DPFC_CTL_FENCE_EN;
-		I915_WRITE(SNB_DPFC_CTL_SA,
-			   SNB_CPU_FENCE_ENABLE |
-			   params->fence_id);
+		intel_de_write(dev_priv, SNB_DPFC_CTL_SA,
+			       SNB_CPU_FENCE_ENABLE | params->fence_id);
+		intel_de_write(dev_priv, DPFC_CPU_FENCE_OFFSET,
I915_WRITE(DPFC_CPU_FENCE_OFFSET, params->crtc.fence_y_offset); params->crtc.fence_y_offset);
} else { } else {
I915_WRITE(SNB_DPFC_CTL_SA,0); intel_de_write(dev_priv, SNB_DPFC_CTL_SA, 0);
I915_WRITE(DPFC_CPU_FENCE_OFFSET, 0); intel_de_write(dev_priv, DPFC_CPU_FENCE_OFFSET, 0);
} }
if (dev_priv->fbc.false_color) if (dev_priv->fbc.false_color)
@ -328,21 +330,20 @@ static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
if (IS_IVYBRIDGE(dev_priv)) { if (IS_IVYBRIDGE(dev_priv)) {
/* WaFbcAsynchFlipDisableFbcQueue:ivb */ /* WaFbcAsynchFlipDisableFbcQueue:ivb */
I915_WRITE(ILK_DISPLAY_CHICKEN1, intel_de_write(dev_priv, ILK_DISPLAY_CHICKEN1,
I915_READ(ILK_DISPLAY_CHICKEN1) | intel_de_read(dev_priv, ILK_DISPLAY_CHICKEN1) | ILK_FBCQ_DIS);
ILK_FBCQ_DIS);
} else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { } else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
/* WaFbcAsynchFlipDisableFbcQueue:hsw,bdw */ /* WaFbcAsynchFlipDisableFbcQueue:hsw,bdw */
I915_WRITE(CHICKEN_PIPESL_1(params->crtc.pipe), intel_de_write(dev_priv, CHICKEN_PIPESL_1(params->crtc.pipe),
I915_READ(CHICKEN_PIPESL_1(params->crtc.pipe)) | intel_de_read(dev_priv, CHICKEN_PIPESL_1(params->crtc.pipe)) | HSW_FBCQ_DIS);
HSW_FBCQ_DIS);
} }
if (INTEL_GEN(dev_priv) >= 11) if (INTEL_GEN(dev_priv) >= 11)
/* Wa_1409120013:icl,ehl,tgl */ /* Wa_1409120013:icl,ehl,tgl */
I915_WRITE(ILK_DPFC_CHICKEN, ILK_DPFC_CHICKEN_COMP_DUMMY_PIXEL); intel_de_write(dev_priv, ILK_DPFC_CHICKEN,
ILK_DPFC_CHICKEN_COMP_DUMMY_PIXEL);
I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN); intel_de_write(dev_priv, ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
intel_fbc_recompress(dev_priv); intel_fbc_recompress(dev_priv);
} }
@@ -361,6 +362,8 @@ static void intel_fbc_hw_activate(struct drm_i915_private *dev_priv)
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
+	trace_intel_fbc_activate(fbc->crtc);
+
 	fbc->active = true;
 	fbc->activated = true;
 
@@ -378,6 +381,8 @@ static void intel_fbc_hw_deactivate(struct drm_i915_private *dev_priv)
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
+	trace_intel_fbc_deactivate(fbc->crtc);
+
 	fbc->active = false;
 
 	if (INTEL_GEN(dev_priv) >= 5)
@@ -407,7 +412,7 @@ static void intel_fbc_deactivate(struct drm_i915_private *dev_priv,
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	WARN_ON(!mutex_is_locked(&fbc->lock));
+	drm_WARN_ON(&dev_priv->drm, !mutex_is_locked(&fbc->lock));
 
 	if (fbc->active)
 		intel_fbc_hw_deactivate(dev_priv);
@@ -471,7 +476,8 @@ static int intel_fbc_alloc_cfb(struct drm_i915_private *dev_priv,
 	struct drm_mm_node *uninitialized_var(compressed_llb);
 	int ret;
 
-	WARN_ON(drm_mm_node_allocated(&fbc->compressed_fb));
+	drm_WARN_ON(&dev_priv->drm,
+		    drm_mm_node_allocated(&fbc->compressed_fb));
 
 	ret = find_compression_threshold(dev_priv, &fbc->compressed_fb,
 					 size, fb_cpp);
@@ -485,9 +491,11 @@ static int intel_fbc_alloc_cfb(struct drm_i915_private *dev_priv,
 	fbc->threshold = ret;
 
 	if (INTEL_GEN(dev_priv) >= 5)
-		I915_WRITE(ILK_DPFC_CB_BASE, fbc->compressed_fb.start);
+		intel_de_write(dev_priv, ILK_DPFC_CB_BASE,
+			       fbc->compressed_fb.start);
 	else if (IS_GM45(dev_priv)) {
-		I915_WRITE(DPFC_CB_BASE, fbc->compressed_fb.start);
+		intel_de_write(dev_priv, DPFC_CB_BASE,
+			       fbc->compressed_fb.start);
 	} else {
 		compressed_llb = kzalloc(sizeof(*compressed_llb), GFP_KERNEL);
 		if (!compressed_llb)
@@ -506,10 +514,10 @@ static int intel_fbc_alloc_cfb(struct drm_i915_private *dev_priv,
 		GEM_BUG_ON(range_overflows_t(u64, dev_priv->dsm.start,
 					     fbc->compressed_llb->start,
 					     U32_MAX));
-		I915_WRITE(FBC_CFB_BASE,
-			   dev_priv->dsm.start + fbc->compressed_fb.start);
-		I915_WRITE(FBC_LL_BASE,
-			   dev_priv->dsm.start + compressed_llb->start);
+		intel_de_write(dev_priv, FBC_CFB_BASE,
+			       dev_priv->dsm.start + fbc->compressed_fb.start);
+		intel_de_write(dev_priv, FBC_LL_BASE,
+			       dev_priv->dsm.start + compressed_llb->start);
 	}
 
 	DRM_DEBUG_KMS("reserved %llu bytes of contiguous stolen space for FBC, threshold: %d\n",
@@ -530,20 +538,22 @@ static void __intel_fbc_cleanup_cfb(struct drm_i915_private *dev_priv)
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (drm_mm_node_allocated(&fbc->compressed_fb))
-		i915_gem_stolen_remove_node(dev_priv, &fbc->compressed_fb);
+	if (!drm_mm_node_allocated(&fbc->compressed_fb))
+		return;
 
 	if (fbc->compressed_llb) {
 		i915_gem_stolen_remove_node(dev_priv, fbc->compressed_llb);
 		kfree(fbc->compressed_llb);
 	}
+
+	i915_gem_stolen_remove_node(dev_priv, &fbc->compressed_fb);
 }
 
 void intel_fbc_cleanup_cfb(struct drm_i915_private *dev_priv)
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (!fbc_supported(dev_priv))
+	if (!HAS_FBC(dev_priv))
 		return;
 
 	mutex_lock(&fbc->lock);
@@ -555,7 +565,7 @@ static bool stride_is_valid(struct drm_i915_private *dev_priv,
 			    unsigned int stride)
 {
 	/* This should have been caught earlier. */
-	if (WARN_ON_ONCE((stride & (64 - 1)) != 0))
+	if (drm_WARN_ON_ONCE(&dev_priv->drm, (stride & (64 - 1)) != 0))
 		return false;
 
 	/* Below are the additional FBC restrictions. */
@@ -663,8 +673,8 @@ static void intel_fbc_update_state_cache(struct intel_crtc *crtc,
 	cache->fb.format = fb->format;
 	cache->fb.stride = fb->pitches[0];
 
-	WARN_ON(plane_state->flags & PLANE_HAS_FENCE &&
-		!plane_state->vma->fence);
+	drm_WARN_ON(&dev_priv->drm, plane_state->flags & PLANE_HAS_FENCE &&
+		    !plane_state->vma->fence);
 
 	if (plane_state->flags & PLANE_HAS_FENCE &&
 	    plane_state->vma->fence)
@@ -867,16 +877,20 @@ static bool intel_fbc_can_flip_nuke(const struct intel_crtc_state *crtc_state)
 	return true;
 }
-bool intel_fbc_pre_update(struct intel_crtc *crtc,
-			  const struct intel_crtc_state *crtc_state,
-			  const struct intel_plane_state *plane_state)
+bool intel_fbc_pre_update(struct intel_atomic_state *state,
+			  struct intel_crtc *crtc)
 {
+	struct intel_plane *plane = to_intel_plane(crtc->base.primary);
+	const struct intel_crtc_state *crtc_state =
+		intel_atomic_get_new_crtc_state(state, crtc);
+	const struct intel_plane_state *plane_state =
+		intel_atomic_get_new_plane_state(state, plane);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	struct intel_fbc *fbc = &dev_priv->fbc;
 	const char *reason = "update pending";
 	bool need_vblank_wait = false;
 
-	if (!fbc_supported(dev_priv))
+	if (!plane->has_fbc || !plane_state)
 		return need_vblank_wait;
 
 	mutex_lock(&fbc->lock);
 
@@ -926,9 +940,9 @@ static void __intel_fbc_disable(struct drm_i915_private *dev_priv)
 	struct intel_fbc *fbc = &dev_priv->fbc;
 	struct intel_crtc *crtc = fbc->crtc;
 
-	WARN_ON(!mutex_is_locked(&fbc->lock));
-	WARN_ON(!fbc->crtc);
-	WARN_ON(fbc->active);
+	drm_WARN_ON(&dev_priv->drm, !mutex_is_locked(&fbc->lock));
+	drm_WARN_ON(&dev_priv->drm, !fbc->crtc);
+	drm_WARN_ON(&dev_priv->drm, fbc->active);
 
 	DRM_DEBUG_KMS("Disabling FBC on pipe %c\n", pipe_name(crtc->pipe));
 
@@ -942,7 +956,7 @@ static void __intel_fbc_post_update(struct intel_crtc *crtc)
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	WARN_ON(!mutex_is_locked(&fbc->lock));
+	drm_WARN_ON(&dev_priv->drm, !mutex_is_locked(&fbc->lock));
 
 	if (fbc->crtc != crtc)
 		return;
@@ -967,12 +981,16 @@ static void __intel_fbc_post_update(struct intel_crtc *crtc)
 		intel_fbc_deactivate(dev_priv, "frontbuffer write");
 }
 
-void intel_fbc_post_update(struct intel_crtc *crtc)
+void intel_fbc_post_update(struct intel_atomic_state *state,
+			   struct intel_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	struct intel_plane *plane = to_intel_plane(crtc->base.primary);
+	const struct intel_plane_state *plane_state =
+		intel_atomic_get_new_plane_state(state, plane);
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (!fbc_supported(dev_priv))
+	if (!plane->has_fbc || !plane_state)
 		return;
 
 	mutex_lock(&fbc->lock);
@@ -994,7 +1012,7 @@ void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (!fbc_supported(dev_priv))
+	if (!HAS_FBC(dev_priv))
 		return;
 
 	if (origin == ORIGIN_GTT || origin == ORIGIN_FLIP)
@@ -1015,7 +1033,7 @@ void intel_fbc_flush(struct drm_i915_private *dev_priv,
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (!fbc_supported(dev_priv))
+	if (!HAS_FBC(dev_priv))
 		return;
 
 	mutex_lock(&fbc->lock);
@@ -1099,24 +1117,26 @@ out:
 /**
  * intel_fbc_enable: tries to enable FBC on the CRTC
  * @crtc: the CRTC
- * @crtc_state: corresponding &drm_crtc_state for @crtc
- * @plane_state: corresponding &drm_plane_state for the primary plane of @crtc
+ * @state: corresponding &drm_crtc_state for @crtc
  *
  * This function checks if the given CRTC was chosen for FBC, then enables it if
  * possible. Notice that it doesn't activate FBC. It is valid to call
  * intel_fbc_enable multiple times for the same pipe without an
  * intel_fbc_disable in the middle, as long as it is deactivated.
  */
-void intel_fbc_enable(struct intel_crtc *crtc,
-		      const struct intel_crtc_state *crtc_state,
-		      const struct intel_plane_state *plane_state)
+void intel_fbc_enable(struct intel_atomic_state *state,
+		      struct intel_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	struct intel_plane *plane = to_intel_plane(crtc->base.primary);
+	const struct intel_crtc_state *crtc_state =
+		intel_atomic_get_new_crtc_state(state, crtc);
+	const struct intel_plane_state *plane_state =
+		intel_atomic_get_new_plane_state(state, plane);
 	struct intel_fbc *fbc = &dev_priv->fbc;
 	struct intel_fbc_state_cache *cache = &fbc->state_cache;
-	const struct drm_framebuffer *fb = plane_state->hw.fb;
 
-	if (!fbc_supported(dev_priv))
+	if (!plane->has_fbc || !plane_state)
 		return;
 
 	mutex_lock(&fbc->lock);
 
@@ -1129,7 +1149,7 @@ void intel_fbc_enable(struct intel_crtc *crtc,
 			__intel_fbc_disable(dev_priv);
 	}
 
-	WARN_ON(fbc->active);
+	drm_WARN_ON(&dev_priv->drm, fbc->active);
 
 	intel_fbc_update_state_cache(crtc, crtc_state, plane_state);
 
@@ -1139,14 +1159,14 @@ void intel_fbc_enable(struct intel_crtc *crtc,
 	if (intel_fbc_alloc_cfb(dev_priv,
 				intel_fbc_calculate_cfb_size(dev_priv, cache),
-				fb->format->cpp[0])) {
+				plane_state->hw.fb->format->cpp[0])) {
 		cache->plane.visible = false;
 		fbc->no_fbc_reason = "not enough stolen memory";
 		goto out;
 	}
 
 	if ((IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv)) &&
-	    fb->modifier != I915_FORMAT_MOD_X_TILED)
+	    plane_state->hw.fb->modifier != I915_FORMAT_MOD_X_TILED)
 		cache->gen9_wa_cfb_stride =
 			DIV_ROUND_UP(cache->plane.src_w, 32 * fbc->threshold) * 8;
 	else
@@ -1169,9 +1189,10 @@ out:
 void intel_fbc_disable(struct intel_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	struct intel_plane *plane = to_intel_plane(crtc->base.primary);
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (!fbc_supported(dev_priv))
+	if (!plane->has_fbc)
 		return;
 
 	mutex_lock(&fbc->lock);
@@ -1190,12 +1211,12 @@ void intel_fbc_global_disable(struct drm_i915_private *dev_priv)
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (!fbc_supported(dev_priv))
+	if (!HAS_FBC(dev_priv))
 		return;
 
 	mutex_lock(&fbc->lock);
 	if (fbc->crtc) {
-		WARN_ON(fbc->crtc->active);
+		drm_WARN_ON(&dev_priv->drm, fbc->crtc->active);
 		__intel_fbc_disable(dev_priv);
 	}
 	mutex_unlock(&fbc->lock);
@@ -1267,7 +1288,7 @@ void intel_fbc_handle_fifo_underrun_irq(struct drm_i915_private *dev_priv)
 {
 	struct intel_fbc *fbc = &dev_priv->fbc;
 
-	if (!fbc_supported(dev_priv))
+	if (!HAS_FBC(dev_priv))
 		return;
 
 	/* There's no guarantee that underrun_detected won't be set to true
@@ -1348,7 +1369,8 @@ void intel_fbc_init(struct drm_i915_private *dev_priv)
 	/* This value was pulled out of someone's hat */
 	if (INTEL_GEN(dev_priv) <= 4 && !IS_GM45(dev_priv))
-		I915_WRITE(FBC_CONTROL, 500 << FBC_CTL_INTERVAL_SHIFT);
+		intel_de_write(dev_priv, FBC_CONTROL,
+			       500 << FBC_CTL_INTERVAL_SHIFT);
 
 	/* We still don't have any sort of hardware state readout for FBC, so
 	 * deactivate it in case the BIOS activated it to make sure software
View file

@@ -19,14 +19,13 @@ struct intel_plane_state;
 void intel_fbc_choose_crtc(struct drm_i915_private *dev_priv,
 			   struct intel_atomic_state *state);
 bool intel_fbc_is_active(struct drm_i915_private *dev_priv);
-bool intel_fbc_pre_update(struct intel_crtc *crtc,
-			  const struct intel_crtc_state *crtc_state,
-			  const struct intel_plane_state *plane_state);
-void intel_fbc_post_update(struct intel_crtc *crtc);
+bool intel_fbc_pre_update(struct intel_atomic_state *state,
+			  struct intel_crtc *crtc);
+void intel_fbc_post_update(struct intel_atomic_state *state,
+			   struct intel_crtc *crtc);
 void intel_fbc_init(struct drm_i915_private *dev_priv);
-void intel_fbc_enable(struct intel_crtc *crtc,
-		      const struct intel_crtc_state *crtc_state,
-		      const struct intel_plane_state *plane_state);
+void intel_fbc_enable(struct intel_atomic_state *state,
+		      struct intel_crtc *crtc);
 void intel_fbc_disable(struct intel_crtc *crtc);
 void intel_fbc_global_disable(struct drm_i915_private *dev_priv);
 void intel_fbc_invalidate(struct drm_i915_private *dev_priv,

View file

@@ -191,7 +191,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
 		drm_framebuffer_put(&intel_fb->base);
 		intel_fb = ifbdev->fb = NULL;
 	}
-	if (!intel_fb || WARN_ON(!intel_fb_obj(&intel_fb->base))) {
+	if (!intel_fb || drm_WARN_ON(dev, !intel_fb_obj(&intel_fb->base))) {
 		DRM_DEBUG_KMS("no BIOS fb, allocating a new one\n");
 		ret = intelfb_alloc(helper, sizes);
 		if (ret)
@@ -410,9 +410,9 @@ static bool intel_fbdev_init_bios(struct drm_device *dev,
 		if (!crtc->state->active)
 			continue;
 
-		WARN(!crtc->primary->state->fb,
-		     "re-used BIOS config but lost an fb on crtc %d\n",
-		     crtc->base.id);
+		drm_WARN(dev, !crtc->primary->state->fb,
+			 "re-used BIOS config but lost an fb on crtc %d\n",
+			 crtc->base.id);
 	}
 
@@ -439,7 +439,8 @@ int intel_fbdev_init(struct drm_device *dev)
 	struct intel_fbdev *ifbdev;
 	int ret;
 
-	if (WARN_ON(!HAS_DISPLAY(dev_priv) || !INTEL_DISPLAY_ENABLED(dev_priv)))
+	if (drm_WARN_ON(dev, !HAS_DISPLAY(dev_priv) ||
+			!INTEL_DISPLAY_ENABLED(dev_priv)))
 		return -ENODEV;
 
 	ifbdev = kzalloc(sizeof(struct intel_fbdev), GFP_KERNEL);
@@ -569,7 +570,7 @@ void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous
 	 * to all the printk activity. Try to keep it out of the hot
 	 * path of resume if possible.
 	 */
-	WARN_ON(state != FBINFO_STATE_RUNNING);
+	drm_WARN_ON(dev, state != FBINFO_STATE_RUNNING);
 	if (!console_trylock()) {
 		/* Don't block our own workqueue as this can
 		 * be run in parallel with other i915.ko tasks.

View file

@@ -95,12 +95,12 @@ static void i9xx_check_fifo_underruns(struct intel_crtc *crtc)
 	lockdep_assert_held(&dev_priv->irq_lock);
 
-	if ((I915_READ(reg) & PIPE_FIFO_UNDERRUN_STATUS) == 0)
+	if ((intel_de_read(dev_priv, reg) & PIPE_FIFO_UNDERRUN_STATUS) == 0)
 		return;
 
 	enable_mask = i915_pipestat_enable_mask(dev_priv, crtc->pipe);
-	I915_WRITE(reg, enable_mask | PIPE_FIFO_UNDERRUN_STATUS);
-	POSTING_READ(reg);
+	intel_de_write(dev_priv, reg, enable_mask | PIPE_FIFO_UNDERRUN_STATUS);
+	intel_de_posting_read(dev_priv, reg);
 
 	trace_intel_cpu_fifo_underrun(dev_priv, crtc->pipe);
 	DRM_ERROR("pipe %c underrun\n", pipe_name(crtc->pipe));
@@ -118,10 +118,11 @@ static void i9xx_set_fifo_underrun_reporting(struct drm_device *dev,
 	if (enable) {
 		u32 enable_mask = i915_pipestat_enable_mask(dev_priv, pipe);
 
-		I915_WRITE(reg, enable_mask | PIPE_FIFO_UNDERRUN_STATUS);
-		POSTING_READ(reg);
+		intel_de_write(dev_priv, reg,
+			       enable_mask | PIPE_FIFO_UNDERRUN_STATUS);
+		intel_de_posting_read(dev_priv, reg);
 	} else {
-		if (old && I915_READ(reg) & PIPE_FIFO_UNDERRUN_STATUS)
+		if (old && intel_de_read(dev_priv, reg) & PIPE_FIFO_UNDERRUN_STATUS)
 			DRM_ERROR("pipe %c underrun\n", pipe_name(pipe));
 	}
 }
@@ -143,15 +144,15 @@ static void ivb_check_fifo_underruns(struct intel_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	enum pipe pipe = crtc->pipe;
-	u32 err_int = I915_READ(GEN7_ERR_INT);
+	u32 err_int = intel_de_read(dev_priv, GEN7_ERR_INT);
 
 	lockdep_assert_held(&dev_priv->irq_lock);
 
 	if ((err_int & ERR_INT_FIFO_UNDERRUN(pipe)) == 0)
 		return;
 
-	I915_WRITE(GEN7_ERR_INT, ERR_INT_FIFO_UNDERRUN(pipe));
-	POSTING_READ(GEN7_ERR_INT);
+	intel_de_write(dev_priv, GEN7_ERR_INT, ERR_INT_FIFO_UNDERRUN(pipe));
+	intel_de_posting_read(dev_priv, GEN7_ERR_INT);
 
 	trace_intel_cpu_fifo_underrun(dev_priv, pipe);
 	DRM_ERROR("fifo underrun on pipe %c\n", pipe_name(pipe));
@@ -163,7 +164,8 @@ static void ivb_set_fifo_underrun_reporting(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = to_i915(dev);
 
 	if (enable) {
-		I915_WRITE(GEN7_ERR_INT, ERR_INT_FIFO_UNDERRUN(pipe));
+		intel_de_write(dev_priv, GEN7_ERR_INT,
+			       ERR_INT_FIFO_UNDERRUN(pipe));
 
 		if (!ivb_can_enable_err_int(dev))
 			return;
@@ -173,7 +175,7 @@ static void ivb_set_fifo_underrun_reporting(struct drm_device *dev,
 		ilk_disable_display_irq(dev_priv, DE_ERR_INT_IVB);
 
 		if (old &&
-		    I915_READ(GEN7_ERR_INT) & ERR_INT_FIFO_UNDERRUN(pipe)) {
+		    intel_de_read(dev_priv, GEN7_ERR_INT) & ERR_INT_FIFO_UNDERRUN(pipe)) {
 			DRM_ERROR("uncleared fifo underrun on pipe %c\n",
 				  pipe_name(pipe));
 		}
@@ -209,15 +211,16 @@ static void cpt_check_pch_fifo_underruns(struct intel_crtc *crtc)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	enum pipe pch_transcoder = crtc->pipe;
-	u32 serr_int = I915_READ(SERR_INT);
+	u32 serr_int = intel_de_read(dev_priv, SERR_INT);
 
 	lockdep_assert_held(&dev_priv->irq_lock);
 
 	if ((serr_int & SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder)) == 0)
 		return;
 
-	I915_WRITE(SERR_INT, SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder));
-	POSTING_READ(SERR_INT);
+	intel_de_write(dev_priv, SERR_INT,
+		       SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder));
+	intel_de_posting_read(dev_priv, SERR_INT);
 
 	trace_intel_pch_fifo_underrun(dev_priv, pch_transcoder);
 	DRM_ERROR("pch fifo underrun on pch transcoder %c\n",
@@ -231,8 +234,8 @@ static void cpt_set_fifo_underrun_reporting(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = to_i915(dev);
 
 	if (enable) {
-		I915_WRITE(SERR_INT,
-			   SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder));
+		intel_de_write(dev_priv, SERR_INT,
+			       SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder));
 
 		if (!cpt_can_enable_serr_int(dev))
 			return;
@@ -241,7 +244,7 @@ static void cpt_set_fifo_underrun_reporting(struct drm_device *dev,
 	} else {
 		ibx_disable_display_interrupt(dev_priv, SDE_ERROR_CPT);
 
-		if (old && I915_READ(SERR_INT) &
+		if (old && intel_de_read(dev_priv, SERR_INT) &
 		    SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder)) {
 			DRM_ERROR("uncleared pch fifo underrun on pch transcoder %c\n",
 				  pipe_name(pch_transcoder));

View file

@@ -0,0 +1,223 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2020 Intel Corporation
*/
#include <linux/string.h>
#include "i915_drv.h"
#include "intel_atomic.h"
#include "intel_display_types.h"
#include "intel_global_state.h"
void intel_atomic_global_obj_init(struct drm_i915_private *dev_priv,
struct intel_global_obj *obj,
struct intel_global_state *state,
const struct intel_global_state_funcs *funcs)
{
memset(obj, 0, sizeof(*obj));
obj->state = state;
obj->funcs = funcs;
list_add_tail(&obj->head, &dev_priv->global_obj_list);
}
void intel_atomic_global_obj_cleanup(struct drm_i915_private *dev_priv)
{
struct intel_global_obj *obj, *next;
list_for_each_entry_safe(obj, next, &dev_priv->global_obj_list, head) {
list_del(&obj->head);
obj->funcs->atomic_destroy_state(obj, obj->state);
}
}
static void assert_global_state_write_locked(struct drm_i915_private *dev_priv)
{
struct intel_crtc *crtc;
for_each_intel_crtc(&dev_priv->drm, crtc)
drm_modeset_lock_assert_held(&crtc->base.mutex);
}
static bool modeset_lock_is_held(struct drm_modeset_acquire_ctx *ctx,
struct drm_modeset_lock *lock)
{
struct drm_modeset_lock *l;
list_for_each_entry(l, &ctx->locked, head) {
if (lock == l)
return true;
}
return false;
}
static void assert_global_state_read_locked(struct intel_atomic_state *state)
{
struct drm_modeset_acquire_ctx *ctx = state->base.acquire_ctx;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc *crtc;
for_each_intel_crtc(&dev_priv->drm, crtc) {
if (modeset_lock_is_held(ctx, &crtc->base.mutex))
return;
}
WARN(1, "Global state not read locked\n");
}
struct intel_global_state *
intel_atomic_get_global_obj_state(struct intel_atomic_state *state,
struct intel_global_obj *obj)
{
int index, num_objs, i;
size_t size;
struct __intel_global_objs_state *arr;
struct intel_global_state *obj_state;
for (i = 0; i < state->num_global_objs; i++)
if (obj == state->global_objs[i].ptr)
return state->global_objs[i].state;
assert_global_state_read_locked(state);
num_objs = state->num_global_objs + 1;
size = sizeof(*state->global_objs) * num_objs;
arr = krealloc(state->global_objs, size, GFP_KERNEL);
if (!arr)
return ERR_PTR(-ENOMEM);
state->global_objs = arr;
index = state->num_global_objs;
memset(&state->global_objs[index], 0, sizeof(*state->global_objs));
obj_state = obj->funcs->atomic_duplicate_state(obj);
if (!obj_state)
return ERR_PTR(-ENOMEM);
obj_state->changed = false;
state->global_objs[index].state = obj_state;
state->global_objs[index].old_state = obj->state;
state->global_objs[index].new_state = obj_state;
state->global_objs[index].ptr = obj;
obj_state->state = state;
state->num_global_objs = num_objs;
DRM_DEBUG_ATOMIC("Added new global object %p state %p to %p\n",
obj, obj_state, state);
return obj_state;
}
struct intel_global_state *
intel_atomic_get_old_global_obj_state(struct intel_atomic_state *state,
struct intel_global_obj *obj)
{
int i;
for (i = 0; i < state->num_global_objs; i++)
if (obj == state->global_objs[i].ptr)
return state->global_objs[i].old_state;
return NULL;
}
struct intel_global_state *
intel_atomic_get_new_global_obj_state(struct intel_atomic_state *state,
struct intel_global_obj *obj)
{
int i;
for (i = 0; i < state->num_global_objs; i++)
if (obj == state->global_objs[i].ptr)
return state->global_objs[i].new_state;
return NULL;
}
void intel_atomic_swap_global_state(struct intel_atomic_state *state)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_global_state *old_obj_state, *new_obj_state;
struct intel_global_obj *obj;
int i;
for_each_oldnew_global_obj_in_state(state, obj, old_obj_state,
new_obj_state, i) {
WARN_ON(obj->state != old_obj_state);
/*
* If the new state wasn't modified (and properly
* locked for write access) we throw it away.
*/
if (!new_obj_state->changed)
continue;
assert_global_state_write_locked(dev_priv);
old_obj_state->state = state;
new_obj_state->state = NULL;
state->global_objs[i].state = old_obj_state;
obj->state = new_obj_state;
}
}
void intel_atomic_clear_global_state(struct intel_atomic_state *state)
{
int i;
for (i = 0; i < state->num_global_objs; i++) {
struct intel_global_obj *obj = state->global_objs[i].ptr;
obj->funcs->atomic_destroy_state(obj,
state->global_objs[i].state);
state->global_objs[i].ptr = NULL;
state->global_objs[i].state = NULL;
state->global_objs[i].old_state = NULL;
state->global_objs[i].new_state = NULL;
}
state->num_global_objs = 0;
}
int intel_atomic_lock_global_state(struct intel_global_state *obj_state)
{
struct intel_atomic_state *state = obj_state->state;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc *crtc;
for_each_intel_crtc(&dev_priv->drm, crtc) {
int ret;
ret = drm_modeset_lock(&crtc->base.mutex,
state->base.acquire_ctx);
if (ret)
return ret;
}
obj_state->changed = true;
return 0;
}
int intel_atomic_serialize_global_state(struct intel_global_state *obj_state)
{
struct intel_atomic_state *state = obj_state->state;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc *crtc;
for_each_intel_crtc(&dev_priv->drm, crtc) {
struct intel_crtc_state *crtc_state;
crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
}
obj_state->changed = true;
return 0;
}
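The lifecycle implemented by `intel_atomic_get_global_obj_state` and `intel_atomic_swap_global_state` above — duplicate an object's state on first access within a transaction, mutate the copy, then swap it in as current while keeping the old state for teardown — can be sketched in isolation. This is a minimal standalone model with hypothetical simplified types (`txn`, `global_obj`, `obj_state` are illustrative stand-ins, not the driver's real structs):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified model: an object owns a "current" state;
 * a transaction duplicates it on first access, mutates the copy, and
 * a swap makes the copy current while the transaction takes over the
 * old state for later teardown. */
struct obj_state { int value; };

struct global_obj { struct obj_state *state; };

struct txn {
	struct global_obj *ptr;      /* object touched by this transaction */
	struct obj_state *old_state; /* state before the transaction */
	struct obj_state *new_state; /* duplicated, mutable copy */
};

/* Duplicate-on-first-access: return the cached copy if this object's
 * state was already duplicated within the transaction. */
static struct obj_state *get_obj_state(struct txn *txn, struct global_obj *obj)
{
	struct obj_state *dup;

	if (txn->ptr == obj)
		return txn->new_state;

	dup = malloc(sizeof(*dup));
	if (!dup)
		return NULL;
	memcpy(dup, obj->state, sizeof(*dup));

	txn->ptr = obj;
	txn->old_state = obj->state;
	txn->new_state = dup;
	return dup;
}

/* Swap: the duplicated state becomes the object's current state. */
static void swap_state(struct txn *txn)
{
	txn->ptr->state = txn->new_state;
}
```

The point mirrored from the code above is that the object keeps seeing its old state until the explicit swap, so a failed transaction can simply discard its copies.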

View file

@@ -0,0 +1,87 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2020 Intel Corporation
 */

#ifndef __INTEL_GLOBAL_STATE_H__
#define __INTEL_GLOBAL_STATE_H__

#include <linux/list.h>

struct drm_i915_private;
struct intel_atomic_state;
struct intel_global_obj;
struct intel_global_state;

struct intel_global_state_funcs {
	struct intel_global_state *(*atomic_duplicate_state)(struct intel_global_obj *obj);
	void (*atomic_destroy_state)(struct intel_global_obj *obj,
				     struct intel_global_state *state);
};

struct intel_global_obj {
	struct list_head head;
	struct intel_global_state *state;
	const struct intel_global_state_funcs *funcs;
};

#define intel_for_each_global_obj(obj, dev_priv) \
	list_for_each_entry(obj, &(dev_priv)->global_obj_list, head)

#define for_each_new_global_obj_in_state(__state, obj, new_obj_state, __i) \
	for ((__i) = 0; \
	     (__i) < (__state)->num_global_objs && \
		     ((obj) = (__state)->global_objs[__i].ptr, \
		      (new_obj_state) = (__state)->global_objs[__i].new_state, 1); \
	     (__i)++) \
		for_each_if(obj)

#define for_each_old_global_obj_in_state(__state, obj, new_obj_state, __i) \
	for ((__i) = 0; \
	     (__i) < (__state)->num_global_objs && \
		     ((obj) = (__state)->global_objs[__i].ptr, \
		      (new_obj_state) = (__state)->global_objs[__i].old_state, 1); \
	     (__i)++) \
		for_each_if(obj)

#define for_each_oldnew_global_obj_in_state(__state, obj, old_obj_state, new_obj_state, __i) \
	for ((__i) = 0; \
	     (__i) < (__state)->num_global_objs && \
		     ((obj) = (__state)->global_objs[__i].ptr, \
		      (old_obj_state) = (__state)->global_objs[__i].old_state, \
		      (new_obj_state) = (__state)->global_objs[__i].new_state, 1); \
	     (__i)++) \
		for_each_if(obj)

struct intel_global_state {
	struct intel_atomic_state *state;
	bool changed;
};

struct __intel_global_objs_state {
	struct intel_global_obj *ptr;
	struct intel_global_state *state, *old_state, *new_state;
};

void intel_atomic_global_obj_init(struct drm_i915_private *dev_priv,
				  struct intel_global_obj *obj,
				  struct intel_global_state *state,
				  const struct intel_global_state_funcs *funcs);
void intel_atomic_global_obj_cleanup(struct drm_i915_private *dev_priv);

struct intel_global_state *
intel_atomic_get_global_obj_state(struct intel_atomic_state *state,
				  struct intel_global_obj *obj);
struct intel_global_state *
intel_atomic_get_old_global_obj_state(struct intel_atomic_state *state,
				      struct intel_global_obj *obj);
struct intel_global_state *
intel_atomic_get_new_global_obj_state(struct intel_atomic_state *state,
				      struct intel_global_obj *obj);

void intel_atomic_swap_global_state(struct intel_atomic_state *state);
void intel_atomic_clear_global_state(struct intel_atomic_state *state);
int intel_atomic_lock_global_state(struct intel_global_state *obj_state);
int intel_atomic_serialize_global_state(struct intel_global_state *obj_state);

#endif


@@ -143,8 +143,8 @@ to_intel_gmbus(struct i2c_adapter *i2c)
 void
 intel_gmbus_reset(struct drm_i915_private *dev_priv)
 {
-	I915_WRITE(GMBUS0, 0);
-	I915_WRITE(GMBUS4, 0);
+	intel_de_write(dev_priv, GMBUS0, 0);
+	intel_de_write(dev_priv, GMBUS4, 0);
 }
 
 static void pnv_gmbus_clock_gating(struct drm_i915_private *dev_priv,
@@ -153,12 +153,12 @@ static void pnv_gmbus_clock_gating(struct drm_i915_private *dev_priv,
 	u32 val;
 
 	/* When using bit bashing for I2C, this bit needs to be set to 1 */
-	val = I915_READ(DSPCLK_GATE_D);
+	val = intel_de_read(dev_priv, DSPCLK_GATE_D);
 	if (!enable)
 		val |= PNV_GMBUSUNIT_CLOCK_GATE_DISABLE;
 	else
 		val &= ~PNV_GMBUSUNIT_CLOCK_GATE_DISABLE;
-	I915_WRITE(DSPCLK_GATE_D, val);
+	intel_de_write(dev_priv, DSPCLK_GATE_D, val);
 }
 
 static void pch_gmbus_clock_gating(struct drm_i915_private *dev_priv,
@@ -166,12 +166,12 @@ static void pch_gmbus_clock_gating(struct drm_i915_private *dev_priv,
 {
 	u32 val;
 
-	val = I915_READ(SOUTH_DSPCLK_GATE_D);
+	val = intel_de_read(dev_priv, SOUTH_DSPCLK_GATE_D);
 	if (!enable)
 		val |= PCH_GMBUSUNIT_CLOCK_GATE_DISABLE;
 	else
 		val &= ~PCH_GMBUSUNIT_CLOCK_GATE_DISABLE;
-	I915_WRITE(SOUTH_DSPCLK_GATE_D, val);
+	intel_de_write(dev_priv, SOUTH_DSPCLK_GATE_D, val);
 }
 
 static void bxt_gmbus_clock_gating(struct drm_i915_private *dev_priv,
@@ -179,12 +179,12 @@ static void bxt_gmbus_clock_gating(struct drm_i915_private *dev_priv,
 {
 	u32 val;
 
-	val = I915_READ(GEN9_CLKGATE_DIS_4);
+	val = intel_de_read(dev_priv, GEN9_CLKGATE_DIS_4);
 	if (!enable)
 		val |= BXT_GMBUS_GATING_DIS;
 	else
 		val &= ~BXT_GMBUS_GATING_DIS;
-	I915_WRITE(GEN9_CLKGATE_DIS_4, val);
+	intel_de_write(dev_priv, GEN9_CLKGATE_DIS_4, val);
 }
 
 static u32 get_reserved(struct intel_gmbus *bus)
@@ -337,14 +337,16 @@ static int gmbus_wait(struct drm_i915_private *dev_priv, u32 status, u32 irq_en)
 		irq_en = 0;
 
 	add_wait_queue(&dev_priv->gmbus_wait_queue, &wait);
-	I915_WRITE_FW(GMBUS4, irq_en);
+	intel_de_write_fw(dev_priv, GMBUS4, irq_en);
 
 	status |= GMBUS_SATOER;
-	ret = wait_for_us((gmbus2 = I915_READ_FW(GMBUS2)) & status, 2);
+	ret = wait_for_us((gmbus2 = intel_de_read_fw(dev_priv, GMBUS2)) & status,
+			  2);
 	if (ret)
-		ret = wait_for((gmbus2 = I915_READ_FW(GMBUS2)) & status, 50);
+		ret = wait_for((gmbus2 = intel_de_read_fw(dev_priv, GMBUS2)) & status,
+			       50);
 
-	I915_WRITE_FW(GMBUS4, 0);
+	intel_de_write_fw(dev_priv, GMBUS4, 0);
 	remove_wait_queue(&dev_priv->gmbus_wait_queue, &wait);
 
 	if (gmbus2 & GMBUS_SATOER)
@@ -366,13 +368,13 @@ gmbus_wait_idle(struct drm_i915_private *dev_priv)
 	irq_enable = GMBUS_IDLE_EN;
 
 	add_wait_queue(&dev_priv->gmbus_wait_queue, &wait);
-	I915_WRITE_FW(GMBUS4, irq_enable);
+	intel_de_write_fw(dev_priv, GMBUS4, irq_enable);
 
 	ret = intel_wait_for_register_fw(&dev_priv->uncore,
 					 GMBUS2, GMBUS_ACTIVE, 0,
 					 10);
 
-	I915_WRITE_FW(GMBUS4, 0);
+	intel_de_write_fw(dev_priv, GMBUS4, 0);
 	remove_wait_queue(&dev_priv->gmbus_wait_queue, &wait);
 
 	return ret;
@@ -404,15 +406,12 @@ gmbus_xfer_read_chunk(struct drm_i915_private *dev_priv,
 			len++;
 		}
 		size = len % 256 + 256;
-		I915_WRITE_FW(GMBUS0, gmbus0_reg | GMBUS_BYTE_CNT_OVERRIDE);
+		intel_de_write_fw(dev_priv, GMBUS0,
+				  gmbus0_reg | GMBUS_BYTE_CNT_OVERRIDE);
 	}
 
-	I915_WRITE_FW(GMBUS1,
-		      gmbus1_index |
-		      GMBUS_CYCLE_WAIT |
-		      (size << GMBUS_BYTE_COUNT_SHIFT) |
-		      (addr << GMBUS_SLAVE_ADDR_SHIFT) |
-		      GMBUS_SLAVE_READ | GMBUS_SW_RDY);
+	intel_de_write_fw(dev_priv, GMBUS1,
+			  gmbus1_index | GMBUS_CYCLE_WAIT | (size << GMBUS_BYTE_COUNT_SHIFT) | (addr << GMBUS_SLAVE_ADDR_SHIFT) | GMBUS_SLAVE_READ | GMBUS_SW_RDY);
 	while (len) {
 		int ret;
 		u32 val, loop = 0;
@@ -421,7 +420,7 @@ gmbus_xfer_read_chunk(struct drm_i915_private *dev_priv,
 		if (ret)
 			return ret;
 
-		val = I915_READ_FW(GMBUS3);
+		val = intel_de_read_fw(dev_priv, GMBUS3);
 		do {
 			if (extra_byte_added && len == 1)
 				break;
@@ -432,7 +431,7 @@ gmbus_xfer_read_chunk(struct drm_i915_private *dev_priv,
 
 		if (burst_read && len == size - 4)
 			/* Reset the override bit */
-			I915_WRITE_FW(GMBUS0, gmbus0_reg);
+			intel_de_write_fw(dev_priv, GMBUS0, gmbus0_reg);
 	}
 
 	return 0;
@@ -489,12 +488,9 @@ gmbus_xfer_write_chunk(struct drm_i915_private *dev_priv,
 		len -= 1;
 	}
 
-	I915_WRITE_FW(GMBUS3, val);
-	I915_WRITE_FW(GMBUS1,
-		      gmbus1_index | GMBUS_CYCLE_WAIT |
-		      (chunk_size << GMBUS_BYTE_COUNT_SHIFT) |
-		      (addr << GMBUS_SLAVE_ADDR_SHIFT) |
-		      GMBUS_SLAVE_WRITE | GMBUS_SW_RDY);
+	intel_de_write_fw(dev_priv, GMBUS3, val);
+	intel_de_write_fw(dev_priv, GMBUS1,
+			  gmbus1_index | GMBUS_CYCLE_WAIT | (chunk_size << GMBUS_BYTE_COUNT_SHIFT) | (addr << GMBUS_SLAVE_ADDR_SHIFT) | GMBUS_SLAVE_WRITE | GMBUS_SW_RDY);
 	while (len) {
 		int ret;
@@ -503,7 +499,7 @@ gmbus_xfer_write_chunk(struct drm_i915_private *dev_priv,
 			val |= *buf++ << (8 * loop);
 		} while (--len && ++loop < 4);
 
-		I915_WRITE_FW(GMBUS3, val);
+		intel_de_write_fw(dev_priv, GMBUS3, val);
 		ret = gmbus_wait(dev_priv, GMBUS_HW_RDY, GMBUS_HW_RDY_EN);
 		if (ret)
@@ -568,7 +564,7 @@ gmbus_index_xfer(struct drm_i915_private *dev_priv, struct i2c_msg *msgs,
 	/* GMBUS5 holds 16-bit index */
 	if (gmbus5)
-		I915_WRITE_FW(GMBUS5, gmbus5);
+		intel_de_write_fw(dev_priv, GMBUS5, gmbus5);
 
 	if (msgs[1].flags & I2C_M_RD)
 		ret = gmbus_xfer_read(dev_priv, &msgs[1], gmbus0_reg,
@@ -578,7 +574,7 @@ gmbus_index_xfer(struct drm_i915_private *dev_priv, struct i2c_msg *msgs,
 	/* Clear GMBUS5 after each index transfer */
 	if (gmbus5)
-		I915_WRITE_FW(GMBUS5, 0);
+		intel_de_write_fw(dev_priv, GMBUS5, 0);
 
 	return ret;
 }
@@ -601,7 +597,7 @@ do_gmbus_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs, int num,
 		pch_gmbus_clock_gating(dev_priv, false);
 
 retry:
-	I915_WRITE_FW(GMBUS0, gmbus0_source | bus->reg0);
+	intel_de_write_fw(dev_priv, GMBUS0, gmbus0_source | bus->reg0);
 
 	for (; i < num; i += inc) {
 		inc = 1;
@@ -629,7 +625,7 @@ retry:
 	 * a STOP on the very first cycle. To simplify the code we
 	 * unconditionally generate the STOP condition with an additional gmbus
 	 * cycle. */
-	I915_WRITE_FW(GMBUS1, GMBUS_CYCLE_STOP | GMBUS_SW_RDY);
+	intel_de_write_fw(dev_priv, GMBUS1, GMBUS_CYCLE_STOP | GMBUS_SW_RDY);
 
 	/* Mark the GMBUS interface as disabled after waiting for idle.
 	 * We will re-enable it at the start of the next xfer,
@@ -640,7 +636,7 @@ retry:
 			  adapter->name);
 		ret = -ETIMEDOUT;
 	}
-	I915_WRITE_FW(GMBUS0, 0);
+	intel_de_write_fw(dev_priv, GMBUS0, 0);
 	ret = ret ?: i;
 	goto out;
@@ -669,9 +665,9 @@ clear_err:
 	 * of resetting the GMBUS controller and so clearing the
 	 * BUS_ERROR raised by the slave's NAK.
 	 */
-	I915_WRITE_FW(GMBUS1, GMBUS_SW_CLR_INT);
-	I915_WRITE_FW(GMBUS1, 0);
-	I915_WRITE_FW(GMBUS0, 0);
+	intel_de_write_fw(dev_priv, GMBUS1, GMBUS_SW_CLR_INT);
+	intel_de_write_fw(dev_priv, GMBUS1, 0);
+	intel_de_write_fw(dev_priv, GMBUS0, 0);
 
 	DRM_DEBUG_KMS("GMBUS [%s] NAK for addr: %04x %c(%d)\n",
 		      adapter->name, msgs[i].addr,
@@ -694,7 +690,7 @@ clear_err:
 timeout:
 	DRM_DEBUG_KMS("GMBUS [%s] timed out, falling back to bit banging on pin %d\n",
 		      bus->adapter.name, bus->reg0 & 0xff);
-	I915_WRITE_FW(GMBUS0, 0);
+	intel_de_write_fw(dev_priv, GMBUS0, 0);
 
 	/*
 	 * Hardware may not support GMBUS over these pins? Try GPIO bitbanging
@@ -908,7 +904,8 @@ err:
 struct i2c_adapter *intel_gmbus_get_adapter(struct drm_i915_private *dev_priv,
 					    unsigned int pin)
 {
-	if (WARN_ON(!intel_gmbus_is_valid_pin(dev_priv, pin)))
+	if (drm_WARN_ON(&dev_priv->drm,
+			!intel_gmbus_is_valid_pin(dev_priv, pin)))
 		return NULL;
 
 	return &dev_priv->gmbus[pin].adapter;

(large file diff suppressed)


@@ -14,6 +14,8 @@ struct drm_connector;
 struct drm_connector_state;
 struct drm_i915_private;
 struct intel_connector;
+struct intel_crtc_state;
+struct intel_encoder;
 struct intel_hdcp_shim;
 enum port;
 enum transcoder;
@@ -26,6 +28,9 @@ int intel_hdcp_init(struct intel_connector *connector,
 int intel_hdcp_enable(struct intel_connector *connector,
 		      enum transcoder cpu_transcoder, u8 content_type);
 int intel_hdcp_disable(struct intel_connector *connector);
+void intel_hdcp_update_pipe(struct intel_encoder *encoder,
+			    const struct intel_crtc_state *crtc_state,
+			    const struct drm_connector_state *conn_state);
 bool is_hdcp_supported(struct drm_i915_private *dev_priv, enum port port);
 bool intel_hdcp_capable(struct intel_connector *connector);
 bool intel_hdcp2_capable(struct intel_connector *connector);

(large file diff suppressed)


@@ -120,6 +120,20 @@ enum hpd_pin intel_hpd_pin_default(struct drm_i915_private *dev_priv,
 #define HPD_STORM_REENABLE_DELAY (2 * 60 * 1000)
 #define HPD_RETRY_DELAY 1000
 
+static enum hpd_pin
+intel_connector_hpd_pin(struct intel_connector *connector)
+{
+	struct intel_encoder *encoder = intel_attached_encoder(connector);
+
+	/*
+	 * MST connectors get their encoder attached dynamically
+	 * so need to make sure we have an encoder here. But since
+	 * MST encoders have their hpd_pin set to HPD_NONE we don't
+	 * have to special case them beyond that.
+	 */
+	return encoder ? encoder->hpd_pin : HPD_NONE;
+}
+
 /**
  * intel_hpd_irq_storm_detect - gather stats and detect HPD IRQ storm on a pin
  * @dev_priv: private driver data pointer
@@ -185,37 +199,31 @@ static void
 intel_hpd_irq_storm_switch_to_polling(struct drm_i915_private *dev_priv)
 {
 	struct drm_device *dev = &dev_priv->drm;
-	struct intel_connector *intel_connector;
-	struct intel_encoder *intel_encoder;
-	struct drm_connector *connector;
 	struct drm_connector_list_iter conn_iter;
-	enum hpd_pin pin;
+	struct intel_connector *connector;
 	bool hpd_disabled = false;
 
 	lockdep_assert_held(&dev_priv->irq_lock);
 
 	drm_connector_list_iter_begin(dev, &conn_iter);
-	drm_for_each_connector_iter(connector, &conn_iter) {
-		if (connector->polled != DRM_CONNECTOR_POLL_HPD)
+	for_each_intel_connector_iter(connector, &conn_iter) {
+		enum hpd_pin pin;
+
+		if (connector->base.polled != DRM_CONNECTOR_POLL_HPD)
 			continue;
 
-		intel_connector = to_intel_connector(connector);
-		intel_encoder = intel_connector->encoder;
-		if (!intel_encoder)
-			continue;
+		pin = intel_connector_hpd_pin(connector);
 
-		pin = intel_encoder->hpd_pin;
 		if (pin == HPD_NONE ||
 		    dev_priv->hotplug.stats[pin].state != HPD_MARK_DISABLED)
 			continue;
 
 		DRM_INFO("HPD interrupt storm detected on connector %s: "
 			 "switching from hotplug detection to polling\n",
-			 connector->name);
+			 connector->base.name);
 
 		dev_priv->hotplug.stats[pin].state = HPD_DISABLED;
-		connector->polled = DRM_CONNECTOR_POLL_CONNECT
-			| DRM_CONNECTOR_POLL_DISCONNECT;
+		connector->base.polled = DRM_CONNECTOR_POLL_CONNECT |
+			DRM_CONNECTOR_POLL_DISCONNECT;
 		hpd_disabled = true;
 	}
 	drm_connector_list_iter_end(&conn_iter);
@@ -234,40 +242,37 @@ static void intel_hpd_irq_storm_reenable_work(struct work_struct *work)
 		container_of(work, typeof(*dev_priv),
 			     hotplug.reenable_work.work);
 	struct drm_device *dev = &dev_priv->drm;
+	struct drm_connector_list_iter conn_iter;
+	struct intel_connector *connector;
 	intel_wakeref_t wakeref;
 	enum hpd_pin pin;
 
 	wakeref = intel_runtime_pm_get(&dev_priv->runtime_pm);
 
 	spin_lock_irq(&dev_priv->irq_lock);
-	for_each_hpd_pin(pin) {
-		struct drm_connector *connector;
-		struct drm_connector_list_iter conn_iter;
 
-		if (dev_priv->hotplug.stats[pin].state != HPD_DISABLED)
+	drm_connector_list_iter_begin(dev, &conn_iter);
+	for_each_intel_connector_iter(connector, &conn_iter) {
+		pin = intel_connector_hpd_pin(connector);
+		if (pin == HPD_NONE ||
+		    dev_priv->hotplug.stats[pin].state != HPD_DISABLED)
 			continue;
 
-		dev_priv->hotplug.stats[pin].state = HPD_ENABLED;
+		if (connector->base.polled != connector->polled)
+			DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n",
+					 connector->base.name);
 
-		drm_connector_list_iter_begin(dev, &conn_iter);
-		drm_for_each_connector_iter(connector, &conn_iter) {
-			struct intel_connector *intel_connector = to_intel_connector(connector);
-
-			/* Don't check MST ports, they don't have pins */
-			if (!intel_connector->mst_port &&
-			    intel_connector->encoder->hpd_pin == pin) {
-				if (connector->polled != intel_connector->polled)
-					DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n",
-							 connector->name);
-				connector->polled = intel_connector->polled;
-				if (!connector->polled)
-					connector->polled = DRM_CONNECTOR_POLL_HPD;
-			}
-		}
-		drm_connector_list_iter_end(&conn_iter);
+		connector->base.polled = connector->polled;
 	}
+	drm_connector_list_iter_end(&conn_iter);
+
+	for_each_hpd_pin(pin) {
+		if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED)
+			dev_priv->hotplug.stats[pin].state = HPD_ENABLED;
+	}
 
 	if (dev_priv->display_irqs_enabled && dev_priv->display.hpd_irq_setup)
 		dev_priv->display.hpd_irq_setup(dev_priv);
+
 	spin_unlock_irq(&dev_priv->irq_lock);
 
 	intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
@@ -281,7 +286,7 @@ intel_encoder_hotplug(struct intel_encoder *encoder,
 	struct drm_device *dev = connector->base.dev;
 	enum drm_connector_status old_status;
 
-	WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
+	drm_WARN_ON(dev, !mutex_is_locked(&dev->mode_config.mutex));
 	old_status = connector->base.status;
 
 	connector->base.status =
@@ -361,10 +366,8 @@ static void i915_hotplug_work_func(struct work_struct *work)
 		container_of(work, struct drm_i915_private,
 			     hotplug.hotplug_work.work);
 	struct drm_device *dev = &dev_priv->drm;
-	struct intel_connector *intel_connector;
-	struct intel_encoder *intel_encoder;
-	struct drm_connector *connector;
 	struct drm_connector_list_iter conn_iter;
+	struct intel_connector *connector;
 	u32 changed = 0, retry = 0;
 	u32 hpd_event_bits;
 	u32 hpd_retry_bits;
@@ -385,21 +388,24 @@ static void i915_hotplug_work_func(struct work_struct *work)
 	spin_unlock_irq(&dev_priv->irq_lock);
 
 	drm_connector_list_iter_begin(dev, &conn_iter);
-	drm_for_each_connector_iter(connector, &conn_iter) {
+	for_each_intel_connector_iter(connector, &conn_iter) {
+		enum hpd_pin pin;
 		u32 hpd_bit;
 
-		intel_connector = to_intel_connector(connector);
-		if (!intel_connector->encoder)
+		pin = intel_connector_hpd_pin(connector);
+		if (pin == HPD_NONE)
 			continue;
 
-		intel_encoder = intel_connector->encoder;
-		hpd_bit = BIT(intel_encoder->hpd_pin);
+		hpd_bit = BIT(pin);
 		if ((hpd_event_bits | hpd_retry_bits) & hpd_bit) {
-			DRM_DEBUG_KMS("Connector %s (pin %i) received hotplug event.\n",
-				      connector->name, intel_encoder->hpd_pin);
+			struct intel_encoder *encoder =
+				intel_attached_encoder(connector);
 
-			switch (intel_encoder->hotplug(intel_encoder,
-						       intel_connector,
-						       hpd_event_bits & hpd_bit)) {
+			DRM_DEBUG_KMS("Connector %s (pin %i) received hotplug event.\n",
+				      connector->base.name, pin);
+
+			switch (encoder->hotplug(encoder, connector,
+						 hpd_event_bits & hpd_bit)) {
 			case INTEL_HOTPLUG_UNCHANGED:
 				break;
 			case INTEL_HOTPLUG_CHANGED:
@@ -509,8 +515,9 @@ void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
 			 * hotplug bits itself. So only WARN about unexpected
 			 * interrupts on saner platforms.
 			 */
-			WARN_ONCE(!HAS_GMCH(dev_priv),
-				  "Received HPD interrupt on pin %d although disabled\n", pin);
+			drm_WARN_ONCE(&dev_priv->drm, !HAS_GMCH(dev_priv),
+				      "Received HPD interrupt on pin %d although disabled\n",
+				      pin);
 			continue;
 		}
@@ -601,8 +608,8 @@ static void i915_hpd_poll_init_work(struct work_struct *work)
 		container_of(work, struct drm_i915_private,
 			     hotplug.poll_init_work);
 	struct drm_device *dev = &dev_priv->drm;
-	struct drm_connector *connector;
 	struct drm_connector_list_iter conn_iter;
+	struct intel_connector *connector;
 	bool enabled;
 
 	mutex_lock(&dev->mode_config.mutex);
@@ -610,23 +617,18 @@ static void i915_hpd_poll_init_work(struct work_struct *work)
 	enabled = READ_ONCE(dev_priv->hotplug.poll_enabled);
 
 	drm_connector_list_iter_begin(dev, &conn_iter);
-	drm_for_each_connector_iter(connector, &conn_iter) {
-		struct intel_connector *intel_connector =
-			to_intel_connector(connector);
-		connector->polled = intel_connector->polled;
+	for_each_intel_connector_iter(connector, &conn_iter) {
+		enum hpd_pin pin;
 
-		/* MST has a dynamic intel_connector->encoder and it's reprobing
-		 * is all handled by the MST helpers. */
-		if (intel_connector->mst_port)
+		pin = intel_connector_hpd_pin(connector);
+		if (pin == HPD_NONE)
 			continue;
 
-		if (!connector->polled && I915_HAS_HOTPLUG(dev_priv) &&
-		    intel_connector->encoder->hpd_pin > HPD_NONE) {
-			connector->polled = enabled ?
-				DRM_CONNECTOR_POLL_CONNECT |
-				DRM_CONNECTOR_POLL_DISCONNECT :
-				DRM_CONNECTOR_POLL_HPD;
-		}
+		connector->base.polled = connector->polled;
+
+		if (enabled && connector->base.polled == DRM_CONNECTOR_POLL_HPD)
+			connector->base.polled = DRM_CONNECTOR_POLL_CONNECT |
+				DRM_CONNECTOR_POLL_DISCONNECT;
 	}
 	drm_connector_list_iter_end(&conn_iter);


@@ -71,6 +71,7 @@
 #include <drm/intel_lpe_audio.h>
 
 #include "i915_drv.h"
+#include "intel_de.h"
 #include "intel_lpe_audio.h"
 
 #define HAS_LPE_AUDIO(dev_priv) ((dev_priv)->lpe_audio.platdev != NULL)
@@ -166,7 +167,7 @@ static int lpe_audio_irq_init(struct drm_i915_private *dev_priv)
 {
 	int irq = dev_priv->lpe_audio.irq;
 
-	WARN_ON(!intel_irqs_enabled(dev_priv));
+	drm_WARN_ON(&dev_priv->drm, !intel_irqs_enabled(dev_priv));
 	irq_set_chip_and_handler_name(irq,
 				      &lpe_audio_irqchip,
 				      handle_simple_irq,
@@ -230,7 +231,8 @@ static int lpe_audio_setup(struct drm_i915_private *dev_priv)
 	/* enable chicken bit; at least this is required for Dell Wyse 3040
 	 * with DP outputs (but only sometimes by some reason!)
 	 */
-	I915_WRITE(VLV_AUD_CHICKEN_BIT_REG, VLV_CHICKEN_BIT_DBG_ENABLE);
+	intel_de_write(dev_priv, VLV_AUD_CHICKEN_BIT_REG,
+		       VLV_CHICKEN_BIT_DBG_ENABLE);
 
 	return 0;
 err_free_irq:
@@ -334,7 +336,7 @@ void intel_lpe_audio_notify(struct drm_i915_private *dev_priv,
 
 	spin_lock_irqsave(&pdata->lpe_audio_slock, irqflags);
 
-	audio_enable = I915_READ(VLV_AUD_PORT_EN_DBG(port));
+	audio_enable = intel_de_read(dev_priv, VLV_AUD_PORT_EN_DBG(port));
 
 	if (eld != NULL) {
 		memcpy(ppdata->eld, eld, HDMI_MAX_ELD_BYTES);
@@ -343,8 +345,8 @@ void intel_lpe_audio_notify(struct drm_i915_private *dev_priv,
 		ppdata->dp_output = dp_output;
 
 		/* Unmute the amp for both DP and HDMI */
-		I915_WRITE(VLV_AUD_PORT_EN_DBG(port),
-			   audio_enable & ~VLV_AMP_MUTE);
+		intel_de_write(dev_priv, VLV_AUD_PORT_EN_DBG(port),
+			       audio_enable & ~VLV_AMP_MUTE);
 	} else {
 		memset(ppdata->eld, 0, HDMI_MAX_ELD_BYTES);
 		ppdata->pipe = -1;
@@ -352,8 +354,8 @@ void intel_lpe_audio_notify(struct drm_i915_private *dev_priv,
 		ppdata->dp_output = false;
 
 		/* Mute the amp for both DP and HDMI */
-		I915_WRITE(VLV_AUD_PORT_EN_DBG(port),
-			   audio_enable | VLV_AMP_MUTE);
+		intel_de_write(dev_priv, VLV_AUD_PORT_EN_DBG(port),
+			       audio_enable | VLV_AMP_MUTE);
 	}
 
 	if (pdata->notify_audio_lpe)


@@ -85,7 +85,7 @@ bool intel_lvds_port_enabled(struct drm_i915_private *dev_priv,
 {
 	u32 val;
 
-	val = I915_READ(lvds_reg);
+	val = intel_de_read(dev_priv, lvds_reg);
 
 	/* asserts want to know the pipe even if the port is disabled */
 	if (HAS_PCH_CPT(dev_priv))
@@ -125,7 +125,7 @@ static void intel_lvds_get_config(struct intel_encoder *encoder,
 
 	pipe_config->output_types |= BIT(INTEL_OUTPUT_LVDS);
 
-	tmp = I915_READ(lvds_encoder->reg);
+	tmp = intel_de_read(dev_priv, lvds_encoder->reg);
 	if (tmp & LVDS_HSYNC_POLARITY)
 		flags |= DRM_MODE_FLAG_NHSYNC;
 	else
@@ -143,7 +143,7 @@ static void intel_lvds_get_config(struct intel_encoder *encoder,
 
 	/* gen2/3 store dither state in pfit control, needs to match */
 	if (INTEL_GEN(dev_priv) < 4) {
-		tmp = I915_READ(PFIT_CONTROL);
+		tmp = intel_de_read(dev_priv, PFIT_CONTROL);
 		pipe_config->gmch_pfit.control |= tmp & PANEL_8TO6_DITHER_ENABLE;
 	}
@@ -156,18 +156,18 @@ static void intel_lvds_pps_get_hw_state(struct drm_i915_private *dev_priv,
 {
 	u32 val;
 
-	pps->powerdown_on_reset = I915_READ(PP_CONTROL(0)) & PANEL_POWER_RESET;
+	pps->powerdown_on_reset = intel_de_read(dev_priv, PP_CONTROL(0)) & PANEL_POWER_RESET;
 
-	val = I915_READ(PP_ON_DELAYS(0));
+	val = intel_de_read(dev_priv, PP_ON_DELAYS(0));
 	pps->port = REG_FIELD_GET(PANEL_PORT_SELECT_MASK, val);
 	pps->t1_t2 = REG_FIELD_GET(PANEL_POWER_UP_DELAY_MASK, val);
 	pps->t5 = REG_FIELD_GET(PANEL_LIGHT_ON_DELAY_MASK, val);
 
-	val = I915_READ(PP_OFF_DELAYS(0));
+	val = intel_de_read(dev_priv, PP_OFF_DELAYS(0));
 	pps->t3 = REG_FIELD_GET(PANEL_POWER_DOWN_DELAY_MASK, val);
 	pps->tx = REG_FIELD_GET(PANEL_LIGHT_OFF_DELAY_MASK, val);
 
-	val = I915_READ(PP_DIVISOR(0));
+	val = intel_de_read(dev_priv, PP_DIVISOR(0));
 	pps->divider = REG_FIELD_GET(PP_REFERENCE_DIVIDER_MASK, val);
 	val = REG_FIELD_GET(PANEL_POWER_CYCLE_DELAY_MASK, val);
 
 	/*
@@ -203,25 +203,21 @@ static void intel_lvds_pps_init_hw(struct drm_i915_private *dev_priv,
 {
 	u32 val;
 
-	val = I915_READ(PP_CONTROL(0));
-	WARN_ON((val & PANEL_UNLOCK_MASK) != PANEL_UNLOCK_REGS);
+	val = intel_de_read(dev_priv, PP_CONTROL(0));
+	drm_WARN_ON(&dev_priv->drm,
+		    (val & PANEL_UNLOCK_MASK) != PANEL_UNLOCK_REGS);
 	if (pps->powerdown_on_reset)
 		val |= PANEL_POWER_RESET;
-	I915_WRITE(PP_CONTROL(0), val);
+	intel_de_write(dev_priv, PP_CONTROL(0), val);
 
-	I915_WRITE(PP_ON_DELAYS(0),
-		   REG_FIELD_PREP(PANEL_PORT_SELECT_MASK, pps->port) |
-		   REG_FIELD_PREP(PANEL_POWER_UP_DELAY_MASK, pps->t1_t2) |
-		   REG_FIELD_PREP(PANEL_LIGHT_ON_DELAY_MASK, pps->t5));
+	intel_de_write(dev_priv, PP_ON_DELAYS(0),
		       REG_FIELD_PREP(PANEL_PORT_SELECT_MASK, pps->port) | REG_FIELD_PREP(PANEL_POWER_UP_DELAY_MASK, pps->t1_t2) | REG_FIELD_PREP(PANEL_LIGHT_ON_DELAY_MASK, pps->t5));
 
-	I915_WRITE(PP_OFF_DELAYS(0),
-		   REG_FIELD_PREP(PANEL_POWER_DOWN_DELAY_MASK, pps->t3) |
-		   REG_FIELD_PREP(PANEL_LIGHT_OFF_DELAY_MASK, pps->tx));
+	intel_de_write(dev_priv, PP_OFF_DELAYS(0),
		       REG_FIELD_PREP(PANEL_POWER_DOWN_DELAY_MASK, pps->t3) | REG_FIELD_PREP(PANEL_LIGHT_OFF_DELAY_MASK, pps->tx));
 
-	I915_WRITE(PP_DIVISOR(0),
-		   REG_FIELD_PREP(PP_REFERENCE_DIVIDER_MASK, pps->divider) |
-		   REG_FIELD_PREP(PANEL_POWER_CYCLE_DELAY_MASK,
-				  DIV_ROUND_UP(pps->t4, 1000) + 1));
+	intel_de_write(dev_priv, PP_DIVISOR(0),
		       REG_FIELD_PREP(PP_REFERENCE_DIVIDER_MASK, pps->divider) | REG_FIELD_PREP(PANEL_POWER_CYCLE_DELAY_MASK, DIV_ROUND_UP(pps->t4, 1000) + 1));
 }
 
 static void intel_pre_enable_lvds(struct intel_encoder *encoder,
@@ -299,7 +295,7 @@ static void intel_pre_enable_lvds(struct intel_encoder *encoder,
 	if (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC)
 		temp |= LVDS_VSYNC_POLARITY;
 
-	I915_WRITE(lvds_encoder->reg, temp);
+	intel_de_write(dev_priv, lvds_encoder->reg, temp);
 }
 
 /*
@@ -313,10 +309,12 @@ static void intel_enable_lvds(struct intel_encoder *encoder,
 	struct intel_lvds_encoder *lvds_encoder = to_lvds_encoder(&encoder->base);
 	struct drm_i915_private *dev_priv = to_i915(dev);
 
-	I915_WRITE(lvds_encoder->reg, I915_READ(lvds_encoder->reg) | LVDS_PORT_EN);
+	intel_de_write(dev_priv, lvds_encoder->reg,
+		       intel_de_read(dev_priv, lvds_encoder->reg) | LVDS_PORT_EN);
 
-	I915_WRITE(PP_CONTROL(0), I915_READ(PP_CONTROL(0)) | PANEL_POWER_ON);
-	POSTING_READ(lvds_encoder->reg);
+	intel_de_write(dev_priv, PP_CONTROL(0),
+		       intel_de_read(dev_priv, PP_CONTROL(0)) | PANEL_POWER_ON);
+	intel_de_posting_read(dev_priv, lvds_encoder->reg);
 
 	if (intel_de_wait_for_set(dev_priv, PP_STATUS(0), PP_ON, 5000))
 		DRM_ERROR("timed out waiting for panel to power on\n");
@@ -331,12 +329,14 @@ static void intel_disable_lvds(struct intel_encoder *encoder,
 	struct intel_lvds_encoder *lvds_encoder = to_lvds_encoder(&encoder->base);
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 
-	I915_WRITE(PP_CONTROL(0), I915_READ(PP_CONTROL(0)) & ~PANEL_POWER_ON);
+	intel_de_write(dev_priv, PP_CONTROL(0),
+		       intel_de_read(dev_priv, PP_CONTROL(0)) & ~PANEL_POWER_ON);
 	if (intel_de_wait_for_clear(dev_priv, PP_STATUS(0), PP_ON, 1000))
 		DRM_ERROR("timed out waiting for panel to power off\n");
 
-	I915_WRITE(lvds_encoder->reg, I915_READ(lvds_encoder->reg) & ~LVDS_PORT_EN);
-	POSTING_READ(lvds_encoder->reg);
+	intel_de_write(dev_priv, lvds_encoder->reg,
+		       intel_de_read(dev_priv, lvds_encoder->reg) & ~LVDS_PORT_EN);
+	intel_de_posting_read(dev_priv, lvds_encoder->reg);
 }
 
 static void gmch_disable_lvds(struct intel_encoder *encoder,
@@ -791,7 +791,7 @@ static bool compute_is_dual_link_lvds(struct intel_lvds_encoder *lvds_encoder)
 	 * we need to check "the value to be set" in VBT when LVDS
	 * register is uninitialized.
	 */
-	val = I915_READ(lvds_encoder->reg);
+	val = intel_de_read(dev_priv, lvds_encoder->reg);
 	if (HAS_PCH_CPT(dev_priv))
 		val &= ~(LVDS_DETECTED | LVDS_PIPE_SEL_MASK_CPT);
 	else
@@ -827,8 +827,8 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
 
 	/* Skip init on machines we know falsely report LVDS */
 	if (dmi_check_system(intel_no_lvds)) {
-		WARN(!dev_priv->vbt.int_lvds_support,
-		     "Useless DMI match. Internal LVDS support disabled by VBT\n");
+		drm_WARN(dev, !dev_priv->vbt.int_lvds_support,
+			 "Useless DMI match. Internal LVDS support disabled by VBT\n");
return; return;
} }
@ -842,7 +842,7 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
else else
lvds_reg = LVDS; lvds_reg = LVDS;
lvds = I915_READ(lvds_reg); lvds = intel_de_read(dev_priv, lvds_reg);
if (HAS_PCH_SPLIT(dev_priv)) { if (HAS_PCH_SPLIT(dev_priv)) {
if ((lvds & LVDS_DETECTED) == 0) if ((lvds & LVDS_DETECTED) == 0)

diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c

@@ -35,6 +35,7 @@
 #include "display/intel_panel.h"

 #include "i915_drv.h"
+#include "intel_acpi.h"
 #include "intel_display_types.h"
 #include "intel_opregion.h"
@@ -242,29 +243,6 @@ struct opregion_asle_ext {
 #define SWSCI_SBCB_POST_VBE_PM		SWSCI_FUNCTION_CODE(SWSCI_SBCB, 19)
 #define SWSCI_SBCB_ENABLE_DISABLE_AUDIO	SWSCI_FUNCTION_CODE(SWSCI_SBCB, 21)

-/*
- * ACPI Specification, Revision 5.0, Appendix B.3.2 _DOD (Enumerate All Devices
- * Attached to the Display Adapter).
- */
-#define ACPI_DISPLAY_INDEX_SHIFT		0
-#define ACPI_DISPLAY_INDEX_MASK			(0xf << 0)
-#define ACPI_DISPLAY_PORT_ATTACHMENT_SHIFT	4
-#define ACPI_DISPLAY_PORT_ATTACHMENT_MASK	(0xf << 4)
-#define ACPI_DISPLAY_TYPE_SHIFT			8
-#define ACPI_DISPLAY_TYPE_MASK			(0xf << 8)
-#define ACPI_DISPLAY_TYPE_OTHER			(0 << 8)
-#define ACPI_DISPLAY_TYPE_VGA			(1 << 8)
-#define ACPI_DISPLAY_TYPE_TV			(2 << 8)
-#define ACPI_DISPLAY_TYPE_EXTERNAL_DIGITAL	(3 << 8)
-#define ACPI_DISPLAY_TYPE_INTERNAL_DIGITAL	(4 << 8)
-#define ACPI_VENDOR_SPECIFIC_SHIFT		12
-#define ACPI_VENDOR_SPECIFIC_MASK		(0xf << 12)
-#define ACPI_BIOS_CAN_DETECT			(1 << 16)
-#define ACPI_DEPENDS_ON_VGA			(1 << 17)
-#define ACPI_PIPE_ID_SHIFT			18
-#define ACPI_PIPE_ID_MASK			(7 << 18)
-#define ACPI_DEVICE_ID_SCHEME			(1 << 31)
-
 #define MAX_DSLP	1500

 static int swsci(struct drm_i915_private *dev_priv,
@@ -311,7 +289,7 @@ static int swsci(struct drm_i915_private *dev_priv,
 	/* The spec tells us to do this, but we are the only user... */
 	scic = swsci->scic;
 	if (scic & SWSCI_SCIC_INDICATOR) {
-		DRM_DEBUG_DRIVER("SWSCI request already in progress\n");
+		drm_dbg(&dev_priv->drm, "SWSCI request already in progress\n");
 		return -EBUSY;
 	}
@@ -335,7 +313,7 @@ static int swsci(struct drm_i915_private *dev_priv,
 	/* Poll for the result. */
 #define C (((scic = swsci->scic) & SWSCI_SCIC_INDICATOR) == 0)
 	if (wait_for(C, dslp)) {
-		DRM_DEBUG_DRIVER("SWSCI request timed out\n");
+		drm_dbg(&dev_priv->drm, "SWSCI request timed out\n");
 		return -ETIMEDOUT;
 	}
@@ -344,7 +322,7 @@ static int swsci(struct drm_i915_private *dev_priv,
 	/* Note: scic == 0 is an error! */
 	if (scic != SWSCI_SCIC_EXIT_STATUS_SUCCESS) {
-		DRM_DEBUG_DRIVER("SWSCI request error %u\n", scic);
+		drm_dbg(&dev_priv->drm, "SWSCI request error %u\n", scic);
 		return -EIO;
 	}
@@ -403,8 +381,9 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
 		type = DISPLAY_TYPE_INTERNAL_FLAT_PANEL;
 		break;
 	default:
-		WARN_ONCE(1, "unsupported intel_encoder type %d\n",
-			  intel_encoder->type);
+		drm_WARN_ONCE(&dev_priv->drm, 1,
+			      "unsupported intel_encoder type %d\n",
+			      intel_encoder->type);
 		return -EINVAL;
 	}
@@ -448,10 +427,11 @@ static u32 asle_set_backlight(struct drm_i915_private *dev_priv, u32 bclp)
 	struct opregion_asle *asle = dev_priv->opregion.asle;
 	struct drm_device *dev = &dev_priv->drm;

-	DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp);
+	drm_dbg(&dev_priv->drm, "bclp = 0x%08x\n", bclp);

 	if (acpi_video_get_backlight_type() == acpi_backlight_native) {
-		DRM_DEBUG_KMS("opregion backlight request ignored\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "opregion backlight request ignored\n");
 		return 0;
 	}
@@ -468,7 +448,8 @@ static u32 asle_set_backlight(struct drm_i915_private *dev_priv, u32 bclp)
 	 * Update backlight on all connectors that support backlight (usually
 	 * only one).
 	 */
-	DRM_DEBUG_KMS("updating opregion backlight %d/255\n", bclp);
+	drm_dbg_kms(&dev_priv->drm, "updating opregion backlight %d/255\n",
+		    bclp);
 	drm_connector_list_iter_begin(dev, &conn_iter);
 	for_each_intel_connector_iter(connector, &conn_iter)
 		intel_panel_set_backlight_acpi(connector->base.state, bclp, 255);
@@ -485,13 +466,13 @@ static u32 asle_set_als_illum(struct drm_i915_private *dev_priv, u32 alsi)
 {
 	/* alsi is the current ALS reading in lux. 0 indicates below sensor
 	   range, 0xffff indicates above sensor range. 1-0xfffe are valid */
-	DRM_DEBUG_DRIVER("Illum is not supported\n");
+	drm_dbg(&dev_priv->drm, "Illum is not supported\n");
 	return ASLC_ALS_ILLUM_FAILED;
 }

 static u32 asle_set_pwm_freq(struct drm_i915_private *dev_priv, u32 pfmb)
 {
-	DRM_DEBUG_DRIVER("PWM freq is not supported\n");
+	drm_dbg(&dev_priv->drm, "PWM freq is not supported\n");
 	return ASLC_PWM_FREQ_FAILED;
 }
@@ -499,30 +480,36 @@ static u32 asle_set_pfit(struct drm_i915_private *dev_priv, u32 pfit)
 {
 	/* Panel fitting is currently controlled by the X code, so this is a
 	   noop until modesetting support works fully */
-	DRM_DEBUG_DRIVER("Pfit is not supported\n");
+	drm_dbg(&dev_priv->drm, "Pfit is not supported\n");
 	return ASLC_PFIT_FAILED;
 }

 static u32 asle_set_supported_rotation_angles(struct drm_i915_private *dev_priv, u32 srot)
 {
-	DRM_DEBUG_DRIVER("SROT is not supported\n");
+	drm_dbg(&dev_priv->drm, "SROT is not supported\n");
 	return ASLC_ROTATION_ANGLES_FAILED;
 }

 static u32 asle_set_button_array(struct drm_i915_private *dev_priv, u32 iuer)
 {
 	if (!iuer)
-		DRM_DEBUG_DRIVER("Button array event is not supported (nothing)\n");
+		drm_dbg(&dev_priv->drm,
+			"Button array event is not supported (nothing)\n");
 	if (iuer & ASLE_IUER_ROTATION_LOCK_BTN)
-		DRM_DEBUG_DRIVER("Button array event is not supported (rotation lock)\n");
+		drm_dbg(&dev_priv->drm,
+			"Button array event is not supported (rotation lock)\n");
 	if (iuer & ASLE_IUER_VOLUME_DOWN_BTN)
-		DRM_DEBUG_DRIVER("Button array event is not supported (volume down)\n");
+		drm_dbg(&dev_priv->drm,
+			"Button array event is not supported (volume down)\n");
 	if (iuer & ASLE_IUER_VOLUME_UP_BTN)
-		DRM_DEBUG_DRIVER("Button array event is not supported (volume up)\n");
+		drm_dbg(&dev_priv->drm,
+			"Button array event is not supported (volume up)\n");
 	if (iuer & ASLE_IUER_WINDOWS_BTN)
-		DRM_DEBUG_DRIVER("Button array event is not supported (windows)\n");
+		drm_dbg(&dev_priv->drm,
+			"Button array event is not supported (windows)\n");
 	if (iuer & ASLE_IUER_POWER_BTN)
-		DRM_DEBUG_DRIVER("Button array event is not supported (power)\n");
+		drm_dbg(&dev_priv->drm,
+			"Button array event is not supported (power)\n");

 	return ASLC_BUTTON_ARRAY_FAILED;
 }
@@ -530,9 +517,11 @@ static u32 asle_set_button_array(struct drm_i915_private *dev_priv, u32 iuer)
 static u32 asle_set_convertible(struct drm_i915_private *dev_priv, u32 iuer)
 {
 	if (iuer & ASLE_IUER_CONVERTIBLE)
-		DRM_DEBUG_DRIVER("Convertible is not supported (clamshell)\n");
+		drm_dbg(&dev_priv->drm,
+			"Convertible is not supported (clamshell)\n");
 	else
-		DRM_DEBUG_DRIVER("Convertible is not supported (slate)\n");
+		drm_dbg(&dev_priv->drm,
+			"Convertible is not supported (slate)\n");

 	return ASLC_CONVERTIBLE_FAILED;
 }
@@ -540,16 +529,17 @@ static u32 asle_set_convertible(struct drm_i915_private *dev_priv, u32 iuer)
 static u32 asle_set_docking(struct drm_i915_private *dev_priv, u32 iuer)
 {
 	if (iuer & ASLE_IUER_DOCKING)
-		DRM_DEBUG_DRIVER("Docking is not supported (docked)\n");
+		drm_dbg(&dev_priv->drm, "Docking is not supported (docked)\n");
 	else
-		DRM_DEBUG_DRIVER("Docking is not supported (undocked)\n");
+		drm_dbg(&dev_priv->drm,
+			"Docking is not supported (undocked)\n");

 	return ASLC_DOCKING_FAILED;
 }

 static u32 asle_isct_state(struct drm_i915_private *dev_priv)
 {
-	DRM_DEBUG_DRIVER("ISCT is not supported\n");
+	drm_dbg(&dev_priv->drm, "ISCT is not supported\n");
 	return ASLC_ISCT_STATE_FAILED;
 }
@@ -569,8 +559,8 @@ static void asle_work(struct work_struct *work)
 	aslc_req = asle->aslc;

 	if (!(aslc_req & ASLC_REQ_MSK)) {
-		DRM_DEBUG_DRIVER("No request on ASLC interrupt 0x%08x\n",
-				 aslc_req);
+		drm_dbg(&dev_priv->drm,
+			"No request on ASLC interrupt 0x%08x\n", aslc_req);
 		return;
 	}
@@ -662,54 +652,12 @@ static void set_did(struct intel_opregion *opregion, int i, u32 val)
 	}
 }

-static u32 acpi_display_type(struct intel_connector *connector)
-{
-	u32 display_type;
-
-	switch (connector->base.connector_type) {
-	case DRM_MODE_CONNECTOR_VGA:
-	case DRM_MODE_CONNECTOR_DVIA:
-		display_type = ACPI_DISPLAY_TYPE_VGA;
-		break;
-	case DRM_MODE_CONNECTOR_Composite:
-	case DRM_MODE_CONNECTOR_SVIDEO:
-	case DRM_MODE_CONNECTOR_Component:
-	case DRM_MODE_CONNECTOR_9PinDIN:
-	case DRM_MODE_CONNECTOR_TV:
-		display_type = ACPI_DISPLAY_TYPE_TV;
-		break;
-	case DRM_MODE_CONNECTOR_DVII:
-	case DRM_MODE_CONNECTOR_DVID:
-	case DRM_MODE_CONNECTOR_DisplayPort:
-	case DRM_MODE_CONNECTOR_HDMIA:
-	case DRM_MODE_CONNECTOR_HDMIB:
-		display_type = ACPI_DISPLAY_TYPE_EXTERNAL_DIGITAL;
-		break;
-	case DRM_MODE_CONNECTOR_LVDS:
-	case DRM_MODE_CONNECTOR_eDP:
-	case DRM_MODE_CONNECTOR_DSI:
-		display_type = ACPI_DISPLAY_TYPE_INTERNAL_DIGITAL;
-		break;
-	case DRM_MODE_CONNECTOR_Unknown:
-	case DRM_MODE_CONNECTOR_VIRTUAL:
-		display_type = ACPI_DISPLAY_TYPE_OTHER;
-		break;
-	default:
-		MISSING_CASE(connector->base.connector_type);
-		display_type = ACPI_DISPLAY_TYPE_OTHER;
-		break;
-	}
-
-	return display_type;
-}
-
 static void intel_didl_outputs(struct drm_i915_private *dev_priv)
 {
 	struct intel_opregion *opregion = &dev_priv->opregion;
 	struct intel_connector *connector;
 	struct drm_connector_list_iter conn_iter;
 	int i = 0, max_outputs;
-	int display_index[16] = {};

 	/*
 	 * In theory, did2, the extended didl, gets added at opregion version
@@ -721,29 +669,22 @@ static void intel_didl_outputs(struct drm_i915_private *dev_priv)
 	max_outputs = ARRAY_SIZE(opregion->acpi->didl) +
 		ARRAY_SIZE(opregion->acpi->did2);

+	intel_acpi_device_id_update(dev_priv);
+
 	drm_connector_list_iter_begin(&dev_priv->drm, &conn_iter);
 	for_each_intel_connector_iter(connector, &conn_iter) {
-		u32 device_id, type;
-
-		device_id = acpi_display_type(connector);
-
-		/* Use display type specific display index. */
-		type = (device_id & ACPI_DISPLAY_TYPE_MASK)
-			>> ACPI_DISPLAY_TYPE_SHIFT;
-		device_id |= display_index[type]++ << ACPI_DISPLAY_INDEX_SHIFT;
-
-		connector->acpi_device_id = device_id;
 		if (i < max_outputs)
-			set_did(opregion, i, device_id);
+			set_did(opregion, i, connector->acpi_device_id);
 		i++;
 	}
 	drm_connector_list_iter_end(&conn_iter);

-	DRM_DEBUG_KMS("%d outputs detected\n", i);
+	drm_dbg_kms(&dev_priv->drm, "%d outputs detected\n", i);

 	if (i > max_outputs)
-		DRM_ERROR("More than %d outputs in connector list\n",
-			  max_outputs);
+		drm_err(&dev_priv->drm,
+			"More than %d outputs in connector list\n",
+			max_outputs);

 	/* If fewer than max outputs, the list must be null terminated */
 	if (i < max_outputs)
@@ -823,7 +764,9 @@ static void swsci_setup(struct drm_i915_private *dev_priv)
 		if (requested_callbacks) {
 			u32 req = opregion->swsci_sbcb_sub_functions;
 			if ((req & tmp) != req)
-				DRM_DEBUG_DRIVER("SWSCI BIOS requested (%08x) SBCB callbacks that are not supported (%08x)\n", req, tmp);
+				drm_dbg(&dev_priv->drm,
+					"SWSCI BIOS requested (%08x) SBCB callbacks that are not supported (%08x)\n",
+					req, tmp);
 			/* XXX: for now, trust the requested callbacks */
 			/* opregion->swsci_sbcb_sub_functions &= tmp; */
 		} else {
@@ -831,9 +774,10 @@ static void swsci_setup(struct drm_i915_private *dev_priv)
 		}
 	}

-	DRM_DEBUG_DRIVER("SWSCI GBDA callbacks %08x, SBCB callbacks %08x\n",
-			 opregion->swsci_gbda_sub_functions,
-			 opregion->swsci_sbcb_sub_functions);
+	drm_dbg(&dev_priv->drm,
+		"SWSCI GBDA callbacks %08x, SBCB callbacks %08x\n",
+		opregion->swsci_gbda_sub_functions,
+		opregion->swsci_sbcb_sub_functions);
 }

 static int intel_no_opregion_vbt_callback(const struct dmi_system_id *id)
@@ -867,15 +811,17 @@ static int intel_load_vbt_firmware(struct drm_i915_private *dev_priv)
 	ret = request_firmware(&fw, name, &dev_priv->drm.pdev->dev);
 	if (ret) {
-		DRM_ERROR("Requesting VBT firmware \"%s\" failed (%d)\n",
-			  name, ret);
+		drm_err(&dev_priv->drm,
+			"Requesting VBT firmware \"%s\" failed (%d)\n",
+			name, ret);
 		return ret;
 	}

 	if (intel_bios_is_valid_vbt(fw->data, fw->size)) {
 		opregion->vbt_firmware = kmemdup(fw->data, fw->size, GFP_KERNEL);
 		if (opregion->vbt_firmware) {
-			DRM_DEBUG_KMS("Found valid VBT firmware \"%s\"\n", name);
+			drm_dbg_kms(&dev_priv->drm,
+				    "Found valid VBT firmware \"%s\"\n", name);
 			opregion->vbt = opregion->vbt_firmware;
 			opregion->vbt_size = fw->size;
 			ret = 0;
@@ -883,7 +829,8 @@ static int intel_load_vbt_firmware(struct drm_i915_private *dev_priv)
 			ret = -ENOMEM;
 		}
 	} else {
-		DRM_DEBUG_KMS("Invalid VBT firmware \"%s\"\n", name);
+		drm_dbg_kms(&dev_priv->drm, "Invalid VBT firmware \"%s\"\n",
+			    name);
 		ret = -EINVAL;
 	}
@@ -910,9 +857,10 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
 	BUILD_BUG_ON(sizeof(struct opregion_asle_ext) != 0x400);

 	pci_read_config_dword(pdev, ASLS, &asls);
-	DRM_DEBUG_DRIVER("graphic opregion physical addr: 0x%x\n", asls);
+	drm_dbg(&dev_priv->drm, "graphic opregion physical addr: 0x%x\n",
+		asls);
 	if (asls == 0) {
-		DRM_DEBUG_DRIVER("ACPI OpRegion not supported!\n");
+		drm_dbg(&dev_priv->drm, "ACPI OpRegion not supported!\n");
 		return -ENOTSUPP;
 	}
@@ -925,21 +873,21 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
 	memcpy(buf, base, sizeof(buf));

 	if (memcmp(buf, OPREGION_SIGNATURE, 16)) {
-		DRM_DEBUG_DRIVER("opregion signature mismatch\n");
+		drm_dbg(&dev_priv->drm, "opregion signature mismatch\n");
 		err = -EINVAL;
 		goto err_out;
 	}
 	opregion->header = base;
 	opregion->lid_state = base + ACPI_CLID;

-	DRM_DEBUG_DRIVER("ACPI OpRegion version %u.%u.%u\n",
-			 opregion->header->over.major,
-			 opregion->header->over.minor,
-			 opregion->header->over.revision);
+	drm_dbg(&dev_priv->drm, "ACPI OpRegion version %u.%u.%u\n",
+		opregion->header->over.major,
+		opregion->header->over.minor,
+		opregion->header->over.revision);

 	mboxes = opregion->header->mboxes;
 	if (mboxes & MBOX_ACPI) {
-		DRM_DEBUG_DRIVER("Public ACPI methods supported\n");
+		drm_dbg(&dev_priv->drm, "Public ACPI methods supported\n");
 		opregion->acpi = base + OPREGION_ACPI_OFFSET;
 		/*
 		 * Indicate we handle monitor hotplug events ourselves so we do
@@ -951,20 +899,20 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
 	}

 	if (mboxes & MBOX_SWSCI) {
-		DRM_DEBUG_DRIVER("SWSCI supported\n");
+		drm_dbg(&dev_priv->drm, "SWSCI supported\n");
 		opregion->swsci = base + OPREGION_SWSCI_OFFSET;
 		swsci_setup(dev_priv);
 	}

 	if (mboxes & MBOX_ASLE) {
-		DRM_DEBUG_DRIVER("ASLE supported\n");
+		drm_dbg(&dev_priv->drm, "ASLE supported\n");
 		opregion->asle = base + OPREGION_ASLE_OFFSET;

 		opregion->asle->ardy = ASLE_ARDY_NOT_READY;
 	}

 	if (mboxes & MBOX_ASLE_EXT)
-		DRM_DEBUG_DRIVER("ASLE extension supported\n");
+		drm_dbg(&dev_priv->drm, "ASLE extension supported\n");

 	if (intel_load_vbt_firmware(dev_priv) == 0)
 		goto out;
@@ -984,7 +932,7 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
 		 */
 		if (opregion->header->over.major > 2 ||
 		    opregion->header->over.minor >= 1) {
-			WARN_ON(rvda < OPREGION_SIZE);
+			drm_WARN_ON(&dev_priv->drm, rvda < OPREGION_SIZE);

 			rvda += asls;
 		}
@@ -995,12 +943,14 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
 		vbt = opregion->rvda;
 		vbt_size = opregion->asle->rvds;
 		if (intel_bios_is_valid_vbt(vbt, vbt_size)) {
-			DRM_DEBUG_KMS("Found valid VBT in ACPI OpRegion (RVDA)\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "Found valid VBT in ACPI OpRegion (RVDA)\n");
 			opregion->vbt = vbt;
 			opregion->vbt_size = vbt_size;
 			goto out;
 		} else {
-			DRM_DEBUG_KMS("Invalid VBT in ACPI OpRegion (RVDA)\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "Invalid VBT in ACPI OpRegion (RVDA)\n");
 			memunmap(opregion->rvda);
 			opregion->rvda = NULL;
 		}
@@ -1018,11 +968,13 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
 		OPREGION_ASLE_EXT_OFFSET : OPREGION_SIZE;
 	vbt_size -= OPREGION_VBT_OFFSET;
 	if (intel_bios_is_valid_vbt(vbt, vbt_size)) {
-		DRM_DEBUG_KMS("Found valid VBT in ACPI OpRegion (Mailbox #4)\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Found valid VBT in ACPI OpRegion (Mailbox #4)\n");
 		opregion->vbt = vbt;
 		opregion->vbt_size = vbt_size;
 	} else {
-		DRM_DEBUG_KMS("Invalid VBT in ACPI OpRegion (Mailbox #4)\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Invalid VBT in ACPI OpRegion (Mailbox #4)\n");
 	}

 out:
@@ -1058,20 +1010,22 @@ intel_opregion_get_panel_type(struct drm_i915_private *dev_priv)
 	ret = swsci(dev_priv, SWSCI_GBDA_PANEL_DETAILS, 0x0, &panel_details);
 	if (ret) {
-		DRM_DEBUG_KMS("Failed to get panel details from OpRegion (%d)\n",
-			      ret);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Failed to get panel details from OpRegion (%d)\n",
+			    ret);
 		return ret;
 	}

 	ret = (panel_details >> 8) & 0xff;
 	if (ret > 0x10) {
-		DRM_DEBUG_KMS("Invalid OpRegion panel type 0x%x\n", ret);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Invalid OpRegion panel type 0x%x\n", ret);
 		return -EINVAL;
 	}

 	/* fall back to VBT panel type? */
 	if (ret == 0x0) {
-		DRM_DEBUG_KMS("No panel type in OpRegion\n");
+		drm_dbg_kms(&dev_priv->drm, "No panel type in OpRegion\n");
 		return -ENODEV;
 	}
@@ -1081,7 +1035,8 @@ intel_opregion_get_panel_type(struct drm_i915_private *dev_priv)
 	 * via a quirk list :(
 	 */
 	if (!dmi_check_system(intel_use_opregion_panel_type)) {
-		DRM_DEBUG_KMS("Ignoring OpRegion panel type (%d)\n", ret - 1);
+		drm_dbg_kms(&dev_priv->drm,
+			    "Ignoring OpRegion panel type (%d)\n", ret - 1);
 		return -ENODEV;
 	}

diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c

@@ -204,9 +204,10 @@ static void i830_overlay_clock_gating(struct drm_i915_private *dev_priv,
 	/* WA_OVERLAY_CLKGATE:alm */
 	if (enable)
-		I915_WRITE(DSPCLK_GATE_D, 0);
+		intel_de_write(dev_priv, DSPCLK_GATE_D, 0);
 	else
-		I915_WRITE(DSPCLK_GATE_D, OVRUNIT_CLOCK_GATE_DISABLE);
+		intel_de_write(dev_priv, DSPCLK_GATE_D,
+			       OVRUNIT_CLOCK_GATE_DISABLE);

 	/* WA_DISABLE_L2CACHE_CLOCK_GATING:alm */
 	pci_bus_read_config_byte(pdev->bus,
@@ -247,7 +248,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 	struct i915_request *rq;
 	u32 *cs;

-	WARN_ON(overlay->active);
+	drm_WARN_ON(&dev_priv->drm, overlay->active);

 	rq = alloc_request(overlay, NULL);
 	if (IS_ERR(rq))
@@ -315,13 +316,13 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 	u32 flip_addr = overlay->flip_addr;
 	u32 tmp, *cs;

-	WARN_ON(!overlay->active);
+	drm_WARN_ON(&dev_priv->drm, !overlay->active);

 	if (load_polyphase_filter)
 		flip_addr |= OFC_UPDATE;

 	/* check for underruns */
-	tmp = I915_READ(DOVSTA);
+	tmp = intel_de_read(dev_priv, DOVSTA);
 	if (tmp & (1 << 17))
 		DRM_DEBUG("overlay underrun, DOVSTA: %x\n", tmp);
@@ -456,7 +457,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 	if (!overlay->old_vma)
 		return 0;

-	if (!(I915_READ(GEN2_ISR) & I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT)) {
+	if (!(intel_de_read(dev_priv, GEN2_ISR) & I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT)) {
 		intel_overlay_release_old_vid_tail(overlay);
 		return 0;
 	}
@@ -759,7 +760,8 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 	struct i915_vma *vma;
 	int ret, tmp_width;

-	WARN_ON(!drm_modeset_is_locked(&dev_priv->drm.mode_config.connection_mutex));
+	drm_WARN_ON(&dev_priv->drm,
+		    !drm_modeset_is_locked(&dev_priv->drm.mode_config.connection_mutex));

 	ret = intel_overlay_release_old_vid(overlay);
 	if (ret != 0)
@@ -857,7 +859,8 @@ int intel_overlay_switch_off(struct intel_overlay *overlay)
 	struct drm_i915_private *dev_priv = overlay->i915;
 	int ret;

-	WARN_ON(!drm_modeset_is_locked(&dev_priv->drm.mode_config.connection_mutex));
+	drm_WARN_ON(&dev_priv->drm,
+		    !drm_modeset_is_locked(&dev_priv->drm.mode_config.connection_mutex));

 	ret = intel_overlay_recover_from_interrupt(overlay);
 	if (ret != 0)
@@ -891,7 +894,7 @@ static int check_overlay_possible_on_crtc(struct intel_overlay *overlay,
 static void update_pfit_vscale_ratio(struct intel_overlay *overlay)
 {
 	struct drm_i915_private *dev_priv = overlay->i915;
-	u32 pfit_control = I915_READ(PFIT_CONTROL);
+	u32 pfit_control = intel_de_read(dev_priv, PFIT_CONTROL);
 	u32 ratio;

 	/* XXX: This is not the same logic as in the xorg driver, but more in
@@ -899,12 +902,12 @@ static void update_pfit_vscale_ratio(struct intel_overlay *overlay)
 	 */
 	if (INTEL_GEN(dev_priv) >= 4) {
 		/* on i965 use the PGM reg to read out the autoscaler values */
-		ratio = I915_READ(PFIT_PGM_RATIOS) >> PFIT_VERT_SCALE_SHIFT_965;
+		ratio = intel_de_read(dev_priv, PFIT_PGM_RATIOS) >> PFIT_VERT_SCALE_SHIFT_965;
 	} else {
 		if (pfit_control & VERT_AUTO_SCALE)
-			ratio = I915_READ(PFIT_AUTO_RATIOS);
+			ratio = intel_de_read(dev_priv, PFIT_AUTO_RATIOS);
 		else
-			ratio = I915_READ(PFIT_PGM_RATIOS);
+			ratio = intel_de_read(dev_priv, PFIT_PGM_RATIOS);
 		ratio >>= PFIT_VERT_SCALE_SHIFT;
 	}
@@ -1239,12 +1242,12 @@ int intel_overlay_attrs_ioctl(struct drm_device *dev, void *data,
 		attrs->saturation = overlay->saturation;

 		if (!IS_GEN(dev_priv, 2)) {
-			attrs->gamma0 = I915_READ(OGAMC0);
-			attrs->gamma1 = I915_READ(OGAMC1);
-			attrs->gamma2 = I915_READ(OGAMC2);
-			attrs->gamma3 = I915_READ(OGAMC3);
-			attrs->gamma4 = I915_READ(OGAMC4);
-			attrs->gamma5 = I915_READ(OGAMC5);
+			attrs->gamma0 = intel_de_read(dev_priv, OGAMC0);
+			attrs->gamma1 = intel_de_read(dev_priv, OGAMC1);
+			attrs->gamma2 = intel_de_read(dev_priv, OGAMC2);
+			attrs->gamma3 = intel_de_read(dev_priv, OGAMC3);
+			attrs->gamma4 = intel_de_read(dev_priv, OGAMC4);
+			attrs->gamma5 = intel_de_read(dev_priv, OGAMC5);
 		}
 	} else {
 		if (attrs->brightness < -128 || attrs->brightness > 127)
@@ -1274,12 +1277,12 @@ int intel_overlay_attrs_ioctl(struct drm_device *dev, void *data,
 			if (ret)
 				goto out_unlock;

-			I915_WRITE(OGAMC0, attrs->gamma0);
-			I915_WRITE(OGAMC1, attrs->gamma1);
-			I915_WRITE(OGAMC2, attrs->gamma2);
-			I915_WRITE(OGAMC3, attrs->gamma3);
-			I915_WRITE(OGAMC4, attrs->gamma4);
-			I915_WRITE(OGAMC5, attrs->gamma5);
+			intel_de_write(dev_priv, OGAMC0, attrs->gamma0);
+			intel_de_write(dev_priv, OGAMC1, attrs->gamma1);
+			intel_de_write(dev_priv, OGAMC2, attrs->gamma2);
+			intel_de_write(dev_priv, OGAMC3, attrs->gamma3);
+			intel_de_write(dev_priv, OGAMC4, attrs->gamma4);
+			intel_de_write(dev_priv, OGAMC5, attrs->gamma5);
 		}
 	}

 	overlay->color_key_enabled = (attrs->flags & I915_OVERLAY_DISABLE_DEST_COLORKEY) == 0;
@@ -1389,7 +1392,7 @@ void intel_overlay_cleanup(struct drm_i915_private *dev_priv)
 	 * Furthermore modesetting teardown happens beforehand so the
 	 * hardware should be off already.
 	 */
-	WARN_ON(overlay->active);
+	drm_WARN_ON(&dev_priv->drm, overlay->active);

 	i915_gem_object_put(overlay->reg_bo);
 	i915_active_fini(&overlay->last_flip);
@@ -1419,8 +1422,8 @@ intel_overlay_capture_error_state(struct drm_i915_private *dev_priv)
 	if (error == NULL)
 		return NULL;

-	error->dovsta = I915_READ(DOVSTA);
-	error->isr = I915_READ(GEN2_ISR);
+	error->dovsta = intel_de_read(dev_priv, DOVSTA);
+	error->isr = intel_de_read(dev_priv, GEN2_ISR);
 	error->base = overlay->flip_addr;

 	memcpy_fromio(&error->regs, overlay->regs, sizeof(error->regs));

diff --git a/drivers/gpu/drm/i915/display/intel_panel.c b/drivers/gpu/drm/i915/display/intel_panel.c

@@ -96,8 +96,9 @@ intel_panel_edid_downclock_mode(struct intel_connector *connector,
 	if (!downclock_mode)
 		return NULL;
-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] using downclock mode from EDID: ",
-		      connector->base.base.id, connector->base.name);
+	drm_dbg_kms(&dev_priv->drm,
+		    "[CONNECTOR:%d:%s] using downclock mode from EDID: ",
+		    connector->base.base.id, connector->base.name);
 	drm_mode_debug_printmodeline(downclock_mode);
 	return downclock_mode;
@@ -122,8 +123,9 @@ intel_panel_edid_fixed_mode(struct intel_connector *connector)
 	if (!fixed_mode)
 		return NULL;
-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] using preferred mode from EDID: ",
-		      connector->base.base.id, connector->base.name);
+	drm_dbg_kms(&dev_priv->drm,
+		    "[CONNECTOR:%d:%s] using preferred mode from EDID: ",
+		    connector->base.base.id, connector->base.name);
 	drm_mode_debug_printmodeline(fixed_mode);
 	return fixed_mode;
@@ -138,8 +140,9 @@ intel_panel_edid_fixed_mode(struct intel_connector *connector)
 	fixed_mode->type |= DRM_MODE_TYPE_PREFERRED;
-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] using first mode from EDID: ",
-		      connector->base.base.id, connector->base.name);
+	drm_dbg_kms(&dev_priv->drm,
+		    "[CONNECTOR:%d:%s] using first mode from EDID: ",
+		    connector->base.base.id, connector->base.name);
 	drm_mode_debug_printmodeline(fixed_mode);
 	return fixed_mode;
@@ -162,8 +165,8 @@ intel_panel_vbt_fixed_mode(struct intel_connector *connector)
 	fixed_mode->type |= DRM_MODE_TYPE_PREFERRED;
-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] using mode from VBT: ",
-		      connector->base.base.id, connector->base.name);
+	drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s] using mode from VBT: ",
+		    connector->base.base.id, connector->base.name);
 	drm_mode_debug_printmodeline(fixed_mode);
 	info->width_mm = fixed_mode->width_mm;
@@ -423,15 +426,15 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc,
 		}
 		break;
 	default:
-		WARN(1, "bad panel fit mode: %d\n", fitting_mode);
+		drm_WARN(&dev_priv->drm, 1, "bad panel fit mode: %d\n",
+			 fitting_mode);
 		return;
 	}
 	/* 965+ wants fuzzy fitting */
 	/* FIXME: handle multiple panels by failing gracefully */
 	if (INTEL_GEN(dev_priv) >= 4)
-		pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) |
-				 PFIT_FILTER_FUZZY);
+		pfit_control |= PFIT_PIPE(intel_crtc->pipe) | PFIT_FILTER_FUZZY;
 out:
 	if ((pfit_control & PFIT_ENABLE) == 0) {
@@ -520,7 +523,7 @@ static u32 intel_panel_compute_brightness(struct intel_connector *connector,
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 	struct intel_panel *panel = &connector->panel;
-	WARN_ON(panel->backlight.max == 0);
+	drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0);
 	if (i915_modparams.invert_brightness < 0)
 		return val;
@@ -537,14 +540,14 @@ static u32 lpt_get_backlight(struct intel_connector *connector)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
-	return I915_READ(BLC_PWM_PCH_CTL2) & BACKLIGHT_DUTY_CYCLE_MASK;
+	return intel_de_read(dev_priv, BLC_PWM_PCH_CTL2) & BACKLIGHT_DUTY_CYCLE_MASK;
 }
 static u32 pch_get_backlight(struct intel_connector *connector)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
-	return I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
+	return intel_de_read(dev_priv, BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
 }
 static u32 i9xx_get_backlight(struct intel_connector *connector)
@@ -553,7 +556,7 @@ static u32 i9xx_get_backlight(struct intel_connector *connector)
 	struct intel_panel *panel = &connector->panel;
 	u32 val;
-	val = I915_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
+	val = intel_de_read(dev_priv, BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
 	if (INTEL_GEN(dev_priv) < 4)
 		val >>= 1;
@@ -569,10 +572,10 @@ static u32 i9xx_get_backlight(struct intel_connector *connector)
 static u32 _vlv_get_backlight(struct drm_i915_private *dev_priv, enum pipe pipe)
 {
-	if (WARN_ON(pipe != PIPE_A && pipe != PIPE_B))
+	if (drm_WARN_ON(&dev_priv->drm, pipe != PIPE_A && pipe != PIPE_B))
 		return 0;
-	return I915_READ(VLV_BLC_PWM_CTL(pipe)) & BACKLIGHT_DUTY_CYCLE_MASK;
+	return intel_de_read(dev_priv, VLV_BLC_PWM_CTL(pipe)) & BACKLIGHT_DUTY_CYCLE_MASK;
 }
 static u32 vlv_get_backlight(struct intel_connector *connector)
@@ -588,7 +591,8 @@ static u32 bxt_get_backlight(struct intel_connector *connector)
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 	struct intel_panel *panel = &connector->panel;
-	return I915_READ(BXT_BLC_PWM_DUTY(panel->backlight.controller));
+	return intel_de_read(dev_priv,
+			     BXT_BLC_PWM_DUTY(panel->backlight.controller));
 }
 static u32 pwm_get_backlight(struct intel_connector *connector)
@@ -605,8 +609,8 @@ static void lpt_set_backlight(const struct drm_connector_state *conn_state, u32
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
-	u32 val = I915_READ(BLC_PWM_PCH_CTL2) & ~BACKLIGHT_DUTY_CYCLE_MASK;
-	I915_WRITE(BLC_PWM_PCH_CTL2, val | level);
+	u32 val = intel_de_read(dev_priv, BLC_PWM_PCH_CTL2) & ~BACKLIGHT_DUTY_CYCLE_MASK;
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL2, val | level);
 }
 static void pch_set_backlight(const struct drm_connector_state *conn_state, u32 level)
@@ -615,8 +619,8 @@ static void pch_set_backlight(const struct drm_connector_state *conn_state, u32
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 	u32 tmp;
-	tmp = I915_READ(BLC_PWM_CPU_CTL) & ~BACKLIGHT_DUTY_CYCLE_MASK;
-	I915_WRITE(BLC_PWM_CPU_CTL, tmp | level);
+	tmp = intel_de_read(dev_priv, BLC_PWM_CPU_CTL) & ~BACKLIGHT_DUTY_CYCLE_MASK;
+	intel_de_write(dev_priv, BLC_PWM_CPU_CTL, tmp | level);
 }
 static void i9xx_set_backlight(const struct drm_connector_state *conn_state, u32 level)
@@ -626,7 +630,7 @@ static void i9xx_set_backlight(const struct drm_connector_state *conn_state, u32
 	struct intel_panel *panel = &connector->panel;
 	u32 tmp, mask;
-	WARN_ON(panel->backlight.max == 0);
+	drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0);
 	if (panel->backlight.combination_mode) {
 		u8 lbpc;
@@ -643,8 +647,8 @@ static void i9xx_set_backlight(const struct drm_connector_state *conn_state, u32
 		mask = BACKLIGHT_DUTY_CYCLE_MASK_PNV;
 	}
-	tmp = I915_READ(BLC_PWM_CTL) & ~mask;
-	I915_WRITE(BLC_PWM_CTL, tmp | level);
+	tmp = intel_de_read(dev_priv, BLC_PWM_CTL) & ~mask;
+	intel_de_write(dev_priv, BLC_PWM_CTL, tmp | level);
 }
 static void vlv_set_backlight(const struct drm_connector_state *conn_state, u32 level)
@@ -654,8 +658,8 @@ static void vlv_set_backlight(const struct drm_connector_state *conn_state, u32
 	enum pipe pipe = to_intel_crtc(conn_state->crtc)->pipe;
 	u32 tmp;
-	tmp = I915_READ(VLV_BLC_PWM_CTL(pipe)) & ~BACKLIGHT_DUTY_CYCLE_MASK;
-	I915_WRITE(VLV_BLC_PWM_CTL(pipe), tmp | level);
+	tmp = intel_de_read(dev_priv, VLV_BLC_PWM_CTL(pipe)) & ~BACKLIGHT_DUTY_CYCLE_MASK;
+	intel_de_write(dev_priv, VLV_BLC_PWM_CTL(pipe), tmp | level);
 }
 static void bxt_set_backlight(const struct drm_connector_state *conn_state, u32 level)
@@ -664,7 +668,8 @@ static void bxt_set_backlight(const struct drm_connector_state *conn_state, u32
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 	struct intel_panel *panel = &connector->panel;
-	I915_WRITE(BXT_BLC_PWM_DUTY(panel->backlight.controller), level);
+	intel_de_write(dev_priv,
+		       BXT_BLC_PWM_DUTY(panel->backlight.controller), level);
 }
 static void pwm_set_backlight(const struct drm_connector_state *conn_state, u32 level)
@@ -709,7 +714,7 @@ void intel_panel_set_backlight_acpi(const struct drm_connector_state *conn_state
 	mutex_lock(&dev_priv->backlight_lock);
-	WARN_ON(panel->backlight.max == 0);
+	drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0);
 	hw_level = clamp_user_to_hw(connector, user_level, user_max);
 	panel->backlight.level = hw_level;
@@ -742,14 +747,16 @@ static void lpt_disable_backlight(const struct drm_connector_state *old_conn_sta
 	 * This needs rework if we need to add support for CPU PWM on PCH split
 	 * platforms.
 	 */
-	tmp = I915_READ(BLC_PWM_CPU_CTL2);
+	tmp = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2);
 	if (tmp & BLM_PWM_ENABLE) {
-		DRM_DEBUG_KMS("cpu backlight was enabled, disabling\n");
-		I915_WRITE(BLC_PWM_CPU_CTL2, tmp & ~BLM_PWM_ENABLE);
+		drm_dbg_kms(&dev_priv->drm,
+			    "cpu backlight was enabled, disabling\n");
+		intel_de_write(dev_priv, BLC_PWM_CPU_CTL2,
+			       tmp & ~BLM_PWM_ENABLE);
 	}
-	tmp = I915_READ(BLC_PWM_PCH_CTL1);
-	I915_WRITE(BLC_PWM_PCH_CTL1, tmp & ~BLM_PCH_PWM_ENABLE);
+	tmp = intel_de_read(dev_priv, BLC_PWM_PCH_CTL1);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL1, tmp & ~BLM_PCH_PWM_ENABLE);
 }
 static void pch_disable_backlight(const struct drm_connector_state *old_conn_state)
@@ -760,11 +767,11 @@ static void pch_disable_backlight(const struct drm_connector_state *old_conn_sta
 	intel_panel_actually_set_backlight(old_conn_state, 0);
-	tmp = I915_READ(BLC_PWM_CPU_CTL2);
-	I915_WRITE(BLC_PWM_CPU_CTL2, tmp & ~BLM_PWM_ENABLE);
+	tmp = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2);
+	intel_de_write(dev_priv, BLC_PWM_CPU_CTL2, tmp & ~BLM_PWM_ENABLE);
-	tmp = I915_READ(BLC_PWM_PCH_CTL1);
-	I915_WRITE(BLC_PWM_PCH_CTL1, tmp & ~BLM_PCH_PWM_ENABLE);
+	tmp = intel_de_read(dev_priv, BLC_PWM_PCH_CTL1);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL1, tmp & ~BLM_PCH_PWM_ENABLE);
 }
 static void i9xx_disable_backlight(const struct drm_connector_state *old_conn_state)
@@ -779,8 +786,8 @@ static void i965_disable_backlight(const struct drm_connector_state *old_conn_st
 	intel_panel_actually_set_backlight(old_conn_state, 0);
-	tmp = I915_READ(BLC_PWM_CTL2);
-	I915_WRITE(BLC_PWM_CTL2, tmp & ~BLM_PWM_ENABLE);
+	tmp = intel_de_read(dev_priv, BLC_PWM_CTL2);
+	intel_de_write(dev_priv, BLC_PWM_CTL2, tmp & ~BLM_PWM_ENABLE);
 }
 static void vlv_disable_backlight(const struct drm_connector_state *old_conn_state)
@@ -792,8 +799,9 @@ static void vlv_disable_backlight(const struct drm_connector_state *old_conn_sta
 	intel_panel_actually_set_backlight(old_conn_state, 0);
-	tmp = I915_READ(VLV_BLC_PWM_CTL2(pipe));
-	I915_WRITE(VLV_BLC_PWM_CTL2(pipe), tmp & ~BLM_PWM_ENABLE);
+	tmp = intel_de_read(dev_priv, VLV_BLC_PWM_CTL2(pipe));
+	intel_de_write(dev_priv, VLV_BLC_PWM_CTL2(pipe),
+		       tmp & ~BLM_PWM_ENABLE);
 }
 static void bxt_disable_backlight(const struct drm_connector_state *old_conn_state)
@@ -805,14 +813,15 @@ static void bxt_disable_backlight(const struct drm_connector_state *old_conn_sta
 	intel_panel_actually_set_backlight(old_conn_state, 0);
-	tmp = I915_READ(BXT_BLC_PWM_CTL(panel->backlight.controller));
-	I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller),
-		   tmp & ~BXT_BLC_PWM_ENABLE);
+	tmp = intel_de_read(dev_priv,
+			    BXT_BLC_PWM_CTL(panel->backlight.controller));
+	intel_de_write(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller),
+		       tmp & ~BXT_BLC_PWM_ENABLE);
 	if (panel->backlight.controller == 1) {
-		val = I915_READ(UTIL_PIN_CTL);
+		val = intel_de_read(dev_priv, UTIL_PIN_CTL);
 		val &= ~UTIL_PIN_ENABLE;
-		I915_WRITE(UTIL_PIN_CTL, val);
+		intel_de_write(dev_priv, UTIL_PIN_CTL, val);
 	}
 }
@@ -825,9 +834,10 @@ static void cnp_disable_backlight(const struct drm_connector_state *old_conn_sta
 	intel_panel_actually_set_backlight(old_conn_state, 0);
-	tmp = I915_READ(BXT_BLC_PWM_CTL(panel->backlight.controller));
-	I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller),
-		   tmp & ~BXT_BLC_PWM_ENABLE);
+	tmp = intel_de_read(dev_priv,
+			    BXT_BLC_PWM_CTL(panel->backlight.controller));
+	intel_de_write(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller),
+		       tmp & ~BXT_BLC_PWM_ENABLE);
 }
 static void pwm_disable_backlight(const struct drm_connector_state *old_conn_state)
@@ -857,7 +867,8 @@ void intel_panel_disable_backlight(const struct drm_connector_state *old_conn_st
 	 * another client is not activated.
 	 */
 	if (dev_priv->drm.switch_power_state == DRM_SWITCH_POWER_CHANGING) {
-		DRM_DEBUG_DRIVER("Skipping backlight disable on vga switch\n");
+		drm_dbg(&dev_priv->drm,
+			"Skipping backlight disable on vga switch\n");
 		return;
 	}
@@ -879,31 +890,31 @@ static void lpt_enable_backlight(const struct intel_crtc_state *crtc_state,
 	struct intel_panel *panel = &connector->panel;
 	u32 pch_ctl1, pch_ctl2, schicken;
-	pch_ctl1 = I915_READ(BLC_PWM_PCH_CTL1);
+	pch_ctl1 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL1);
 	if (pch_ctl1 & BLM_PCH_PWM_ENABLE) {
-		DRM_DEBUG_KMS("pch backlight already enabled\n");
+		drm_dbg_kms(&dev_priv->drm, "pch backlight already enabled\n");
 		pch_ctl1 &= ~BLM_PCH_PWM_ENABLE;
-		I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1);
+		intel_de_write(dev_priv, BLC_PWM_PCH_CTL1, pch_ctl1);
 	}
 	if (HAS_PCH_LPT(dev_priv)) {
-		schicken = I915_READ(SOUTH_CHICKEN2);
+		schicken = intel_de_read(dev_priv, SOUTH_CHICKEN2);
 		if (panel->backlight.alternate_pwm_increment)
 			schicken |= LPT_PWM_GRANULARITY;
 		else
 			schicken &= ~LPT_PWM_GRANULARITY;
-		I915_WRITE(SOUTH_CHICKEN2, schicken);
+		intel_de_write(dev_priv, SOUTH_CHICKEN2, schicken);
 	} else {
-		schicken = I915_READ(SOUTH_CHICKEN1);
+		schicken = intel_de_read(dev_priv, SOUTH_CHICKEN1);
 		if (panel->backlight.alternate_pwm_increment)
 			schicken |= SPT_PWM_GRANULARITY;
 		else
 			schicken &= ~SPT_PWM_GRANULARITY;
-		I915_WRITE(SOUTH_CHICKEN1, schicken);
+		intel_de_write(dev_priv, SOUTH_CHICKEN1, schicken);
 	}
 	pch_ctl2 = panel->backlight.max << 16;
-	I915_WRITE(BLC_PWM_PCH_CTL2, pch_ctl2);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL2, pch_ctl2);
 	pch_ctl1 = 0;
 	if (panel->backlight.active_low_pwm)
@@ -913,9 +924,10 @@ static void lpt_enable_backlight(const struct intel_crtc_state *crtc_state,
 	if (HAS_PCH_LPT(dev_priv))
 		pch_ctl1 |= BLM_PCH_OVERRIDE_ENABLE;
-	I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1);
-	POSTING_READ(BLC_PWM_PCH_CTL1);
-	I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1 | BLM_PCH_PWM_ENABLE);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL1, pch_ctl1);
+	intel_de_posting_read(dev_priv, BLC_PWM_PCH_CTL1);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL1,
+		       pch_ctl1 | BLM_PCH_PWM_ENABLE);
 	/* This won't stick until the above enable. */
 	intel_panel_actually_set_backlight(conn_state, panel->backlight.level);
@@ -930,41 +942,42 @@ static void pch_enable_backlight(const struct intel_crtc_state *crtc_state,
 	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
 	u32 cpu_ctl2, pch_ctl1, pch_ctl2;
-	cpu_ctl2 = I915_READ(BLC_PWM_CPU_CTL2);
+	cpu_ctl2 = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2);
 	if (cpu_ctl2 & BLM_PWM_ENABLE) {
-		DRM_DEBUG_KMS("cpu backlight already enabled\n");
+		drm_dbg_kms(&dev_priv->drm, "cpu backlight already enabled\n");
 		cpu_ctl2 &= ~BLM_PWM_ENABLE;
-		I915_WRITE(BLC_PWM_CPU_CTL2, cpu_ctl2);
+		intel_de_write(dev_priv, BLC_PWM_CPU_CTL2, cpu_ctl2);
 	}
-	pch_ctl1 = I915_READ(BLC_PWM_PCH_CTL1);
+	pch_ctl1 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL1);
 	if (pch_ctl1 & BLM_PCH_PWM_ENABLE) {
-		DRM_DEBUG_KMS("pch backlight already enabled\n");
+		drm_dbg_kms(&dev_priv->drm, "pch backlight already enabled\n");
 		pch_ctl1 &= ~BLM_PCH_PWM_ENABLE;
-		I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1);
+		intel_de_write(dev_priv, BLC_PWM_PCH_CTL1, pch_ctl1);
 	}
 	if (cpu_transcoder == TRANSCODER_EDP)
 		cpu_ctl2 = BLM_TRANSCODER_EDP;
 	else
 		cpu_ctl2 = BLM_PIPE(cpu_transcoder);
-	I915_WRITE(BLC_PWM_CPU_CTL2, cpu_ctl2);
-	POSTING_READ(BLC_PWM_CPU_CTL2);
-	I915_WRITE(BLC_PWM_CPU_CTL2, cpu_ctl2 | BLM_PWM_ENABLE);
+	intel_de_write(dev_priv, BLC_PWM_CPU_CTL2, cpu_ctl2);
+	intel_de_posting_read(dev_priv, BLC_PWM_CPU_CTL2);
+	intel_de_write(dev_priv, BLC_PWM_CPU_CTL2, cpu_ctl2 | BLM_PWM_ENABLE);
 	/* This won't stick until the above enable. */
 	intel_panel_actually_set_backlight(conn_state, panel->backlight.level);
 	pch_ctl2 = panel->backlight.max << 16;
-	I915_WRITE(BLC_PWM_PCH_CTL2, pch_ctl2);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL2, pch_ctl2);
 	pch_ctl1 = 0;
 	if (panel->backlight.active_low_pwm)
 		pch_ctl1 |= BLM_PCH_POLARITY;
-	I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1);
-	POSTING_READ(BLC_PWM_PCH_CTL1);
-	I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1 | BLM_PCH_PWM_ENABLE);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL1, pch_ctl1);
+	intel_de_posting_read(dev_priv, BLC_PWM_PCH_CTL1);
+	intel_de_write(dev_priv, BLC_PWM_PCH_CTL1,
+		       pch_ctl1 | BLM_PCH_PWM_ENABLE);
 }
 static void i9xx_enable_backlight(const struct intel_crtc_state *crtc_state,
@@ -975,10 +988,10 @@ static void i9xx_enable_backlight(const struct intel_crtc_state *crtc_state,
 	struct intel_panel *panel = &connector->panel;
 	u32 ctl, freq;
-	ctl = I915_READ(BLC_PWM_CTL);
+	ctl = intel_de_read(dev_priv, BLC_PWM_CTL);
 	if (ctl & BACKLIGHT_DUTY_CYCLE_MASK_PNV) {
-		DRM_DEBUG_KMS("backlight already enabled\n");
-		I915_WRITE(BLC_PWM_CTL, 0);
+		drm_dbg_kms(&dev_priv->drm, "backlight already enabled\n");
+		intel_de_write(dev_priv, BLC_PWM_CTL, 0);
 	}
 	freq = panel->backlight.max;
@@ -991,8 +1004,8 @@ static void i9xx_enable_backlight(const struct intel_crtc_state *crtc_state,
 	if (IS_PINEVIEW(dev_priv) && panel->backlight.active_low_pwm)
 		ctl |= BLM_POLARITY_PNV;
-	I915_WRITE(BLC_PWM_CTL, ctl);
-	POSTING_READ(BLC_PWM_CTL);
+	intel_de_write(dev_priv, BLC_PWM_CTL, ctl);
+	intel_de_posting_read(dev_priv, BLC_PWM_CTL);
 	/* XXX: combine this into above write? */
 	intel_panel_actually_set_backlight(conn_state, panel->backlight.level);
@@ -1003,7 +1016,7 @@ static void i9xx_enable_backlight(const struct intel_crtc_state *crtc_state,
 	 * that has backlight.
 	 */
 	if (IS_GEN(dev_priv, 2))
-		I915_WRITE(BLC_HIST_CTL, BLM_HISTOGRAM_ENABLE);
+		intel_de_write(dev_priv, BLC_HIST_CTL, BLM_HISTOGRAM_ENABLE);
 }
 static void i965_enable_backlight(const struct intel_crtc_state *crtc_state,
@@ -1015,11 +1028,11 @@ static void i965_enable_backlight(const struct intel_crtc_state *crtc_state,
 	enum pipe pipe = to_intel_crtc(conn_state->crtc)->pipe;
 	u32 ctl, ctl2, freq;
-	ctl2 = I915_READ(BLC_PWM_CTL2);
+	ctl2 = intel_de_read(dev_priv, BLC_PWM_CTL2);
 	if (ctl2 & BLM_PWM_ENABLE) {
-		DRM_DEBUG_KMS("backlight already enabled\n");
+		drm_dbg_kms(&dev_priv->drm, "backlight already enabled\n");
 		ctl2 &= ~BLM_PWM_ENABLE;
-		I915_WRITE(BLC_PWM_CTL2, ctl2);
+		intel_de_write(dev_priv, BLC_PWM_CTL2, ctl2);
 	}
 	freq = panel->backlight.max;
@@ -1027,16 +1040,16 @@ static void i965_enable_backlight(const struct intel_crtc_state *crtc_state,
 		freq /= 0xff;
 	ctl = freq << 16;
-	I915_WRITE(BLC_PWM_CTL, ctl);
+	intel_de_write(dev_priv, BLC_PWM_CTL, ctl);
 	ctl2 = BLM_PIPE(pipe);
 	if (panel->backlight.combination_mode)
 		ctl2 |= BLM_COMBINATION_MODE;
 	if (panel->backlight.active_low_pwm)
 		ctl2 |= BLM_POLARITY_I965;
-	I915_WRITE(BLC_PWM_CTL2, ctl2);
-	POSTING_READ(BLC_PWM_CTL2);
-	I915_WRITE(BLC_PWM_CTL2, ctl2 | BLM_PWM_ENABLE);
+	intel_de_write(dev_priv, BLC_PWM_CTL2, ctl2);
+	intel_de_posting_read(dev_priv, BLC_PWM_CTL2);
+	intel_de_write(dev_priv, BLC_PWM_CTL2, ctl2 | BLM_PWM_ENABLE);
 	intel_panel_actually_set_backlight(conn_state, panel->backlight.level);
 }
@@ -1050,15 +1063,15 @@ static void vlv_enable_backlight(const struct intel_crtc_state *crtc_state,
 	enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
 	u32 ctl, ctl2;
-	ctl2 = I915_READ(VLV_BLC_PWM_CTL2(pipe));
+	ctl2 = intel_de_read(dev_priv, VLV_BLC_PWM_CTL2(pipe));
 	if (ctl2 & BLM_PWM_ENABLE) {
-		DRM_DEBUG_KMS("backlight already enabled\n");
+		drm_dbg_kms(&dev_priv->drm, "backlight already enabled\n");
 		ctl2 &= ~BLM_PWM_ENABLE;
-		I915_WRITE(VLV_BLC_PWM_CTL2(pipe), ctl2);
+		intel_de_write(dev_priv, VLV_BLC_PWM_CTL2(pipe), ctl2);
 	}
 	ctl = panel->backlight.max << 16;
-	I915_WRITE(VLV_BLC_PWM_CTL(pipe), ctl);
+	intel_de_write(dev_priv, VLV_BLC_PWM_CTL(pipe), ctl);
 	/* XXX: combine this into above write? */
 	intel_panel_actually_set_backlight(conn_state, panel->backlight.level);
@@ -1066,9 +1079,10 @@ static void vlv_enable_backlight(const struct intel_crtc_state *crtc_state,
 	ctl2 = 0;
 	if (panel->backlight.active_low_pwm)
 		ctl2 |= BLM_POLARITY_I965;
-	I915_WRITE(VLV_BLC_PWM_CTL2(pipe), ctl2);
-	POSTING_READ(VLV_BLC_PWM_CTL2(pipe));
-	I915_WRITE(VLV_BLC_PWM_CTL2(pipe), ctl2 | BLM_PWM_ENABLE);
+	intel_de_write(dev_priv, VLV_BLC_PWM_CTL2(pipe), ctl2);
+	intel_de_posting_read(dev_priv, VLV_BLC_PWM_CTL2(pipe));
+	intel_de_write(dev_priv, VLV_BLC_PWM_CTL2(pipe),
+		       ctl2 | BLM_PWM_ENABLE);
 }
 static void bxt_enable_backlight(const struct intel_crtc_state *crtc_state,
@ -1082,30 +1096,34 @@ static void bxt_enable_backlight(const struct intel_crtc_state *crtc_state,
/* Controller 1 uses the utility pin. */ /* Controller 1 uses the utility pin. */
if (panel->backlight.controller == 1) { if (panel->backlight.controller == 1) {
val = I915_READ(UTIL_PIN_CTL); val = intel_de_read(dev_priv, UTIL_PIN_CTL);
if (val & UTIL_PIN_ENABLE) { if (val & UTIL_PIN_ENABLE) {
DRM_DEBUG_KMS("util pin already enabled\n"); drm_dbg_kms(&dev_priv->drm,
"util pin already enabled\n");
val &= ~UTIL_PIN_ENABLE; val &= ~UTIL_PIN_ENABLE;
I915_WRITE(UTIL_PIN_CTL, val); intel_de_write(dev_priv, UTIL_PIN_CTL, val);
} }
val = 0; val = 0;
if (panel->backlight.util_pin_active_low) if (panel->backlight.util_pin_active_low)
val |= UTIL_PIN_POLARITY; val |= UTIL_PIN_POLARITY;
I915_WRITE(UTIL_PIN_CTL, val | UTIL_PIN_PIPE(pipe) | intel_de_write(dev_priv, UTIL_PIN_CTL,
UTIL_PIN_MODE_PWM | UTIL_PIN_ENABLE); val | UTIL_PIN_PIPE(pipe) | UTIL_PIN_MODE_PWM | UTIL_PIN_ENABLE);
} }
pwm_ctl = I915_READ(BXT_BLC_PWM_CTL(panel->backlight.controller)); pwm_ctl = intel_de_read(dev_priv,
BXT_BLC_PWM_CTL(panel->backlight.controller));
if (pwm_ctl & BXT_BLC_PWM_ENABLE) { if (pwm_ctl & BXT_BLC_PWM_ENABLE) {
DRM_DEBUG_KMS("backlight already enabled\n"); drm_dbg_kms(&dev_priv->drm, "backlight already enabled\n");
pwm_ctl &= ~BXT_BLC_PWM_ENABLE; pwm_ctl &= ~BXT_BLC_PWM_ENABLE;
I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller), intel_de_write(dev_priv,
pwm_ctl); BXT_BLC_PWM_CTL(panel->backlight.controller),
pwm_ctl);
} }
I915_WRITE(BXT_BLC_PWM_FREQ(panel->backlight.controller), intel_de_write(dev_priv,
panel->backlight.max); BXT_BLC_PWM_FREQ(panel->backlight.controller),
panel->backlight.max);
intel_panel_actually_set_backlight(conn_state, panel->backlight.level); intel_panel_actually_set_backlight(conn_state, panel->backlight.level);
@ -1113,10 +1131,12 @@ static void bxt_enable_backlight(const struct intel_crtc_state *crtc_state,
if (panel->backlight.active_low_pwm) if (panel->backlight.active_low_pwm)
pwm_ctl |= BXT_BLC_PWM_POLARITY; pwm_ctl |= BXT_BLC_PWM_POLARITY;
I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller), pwm_ctl); intel_de_write(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller),
POSTING_READ(BXT_BLC_PWM_CTL(panel->backlight.controller)); pwm_ctl);
I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller), intel_de_posting_read(dev_priv,
pwm_ctl | BXT_BLC_PWM_ENABLE); BXT_BLC_PWM_CTL(panel->backlight.controller));
intel_de_write(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller),
pwm_ctl | BXT_BLC_PWM_ENABLE);
} }
static void cnp_enable_backlight(const struct intel_crtc_state *crtc_state, static void cnp_enable_backlight(const struct intel_crtc_state *crtc_state,
@ -1127,16 +1147,19 @@ static void cnp_enable_backlight(const struct intel_crtc_state *crtc_state,
struct intel_panel *panel = &connector->panel; struct intel_panel *panel = &connector->panel;
u32 pwm_ctl; u32 pwm_ctl;
pwm_ctl = I915_READ(BXT_BLC_PWM_CTL(panel->backlight.controller)); pwm_ctl = intel_de_read(dev_priv,
BXT_BLC_PWM_CTL(panel->backlight.controller));
if (pwm_ctl & BXT_BLC_PWM_ENABLE) { if (pwm_ctl & BXT_BLC_PWM_ENABLE) {
DRM_DEBUG_KMS("backlight already enabled\n"); drm_dbg_kms(&dev_priv->drm, "backlight already enabled\n");
pwm_ctl &= ~BXT_BLC_PWM_ENABLE; pwm_ctl &= ~BXT_BLC_PWM_ENABLE;
I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller), intel_de_write(dev_priv,
pwm_ctl); BXT_BLC_PWM_CTL(panel->backlight.controller),
pwm_ctl);
} }
I915_WRITE(BXT_BLC_PWM_FREQ(panel->backlight.controller), intel_de_write(dev_priv,
panel->backlight.max); BXT_BLC_PWM_FREQ(panel->backlight.controller),
panel->backlight.max);
intel_panel_actually_set_backlight(conn_state, panel->backlight.level); intel_panel_actually_set_backlight(conn_state, panel->backlight.level);
@ -1144,10 +1167,12 @@ static void cnp_enable_backlight(const struct intel_crtc_state *crtc_state,
if (panel->backlight.active_low_pwm) if (panel->backlight.active_low_pwm)
pwm_ctl |= BXT_BLC_PWM_POLARITY; pwm_ctl |= BXT_BLC_PWM_POLARITY;
I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller), pwm_ctl); intel_de_write(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller),
POSTING_READ(BXT_BLC_PWM_CTL(panel->backlight.controller)); pwm_ctl);
I915_WRITE(BXT_BLC_PWM_CTL(panel->backlight.controller), intel_de_posting_read(dev_priv,
pwm_ctl | BXT_BLC_PWM_ENABLE); BXT_BLC_PWM_CTL(panel->backlight.controller));
intel_de_write(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller),
pwm_ctl | BXT_BLC_PWM_ENABLE);
} }
static void pwm_enable_backlight(const struct intel_crtc_state *crtc_state, static void pwm_enable_backlight(const struct intel_crtc_state *crtc_state,
@ -1194,7 +1219,7 @@ void intel_panel_enable_backlight(const struct intel_crtc_state *crtc_state,
if (!panel->backlight.present) if (!panel->backlight.present)
return; return;
DRM_DEBUG_KMS("pipe %c\n", pipe_name(pipe)); drm_dbg_kms(&dev_priv->drm, "pipe %c\n", pipe_name(pipe));
mutex_lock(&dev_priv->backlight_lock); mutex_lock(&dev_priv->backlight_lock);
@ -1219,7 +1244,7 @@ static u32 intel_panel_get_backlight(struct intel_connector *connector)
mutex_unlock(&dev_priv->backlight_lock); mutex_unlock(&dev_priv->backlight_lock);
DRM_DEBUG_DRIVER("get backlight PWM = %d\n", val); drm_dbg(&dev_priv->drm, "get backlight PWM = %d\n", val);
return val; return val;
} }
@ -1237,7 +1262,7 @@ static void intel_panel_set_backlight(const struct drm_connector_state *conn_sta
mutex_lock(&dev_priv->backlight_lock); mutex_lock(&dev_priv->backlight_lock);
WARN_ON(panel->backlight.max == 0); drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0);
hw_level = scale_user_to_hw(connector, user_level, user_max); hw_level = scale_user_to_hw(connector, user_level, user_max);
panel->backlight.level = hw_level; panel->backlight.level = hw_level;
@ -1380,7 +1405,8 @@ static u32 cnp_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
{ {
struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
return DIV_ROUND_CLOSEST(KHz(dev_priv->rawclk_freq), pwm_freq_hz); return DIV_ROUND_CLOSEST(KHz(RUNTIME_INFO(dev_priv)->rawclk_freq),
pwm_freq_hz);
} }
/* /*
@@ -1441,7 +1467,8 @@ static u32 pch_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
-	return DIV_ROUND_CLOSEST(KHz(dev_priv->rawclk_freq), pwm_freq_hz * 128);
+	return DIV_ROUND_CLOSEST(KHz(RUNTIME_INFO(dev_priv)->rawclk_freq),
+				 pwm_freq_hz * 128);
 }
 /*
@@ -1458,7 +1485,7 @@ static u32 i9xx_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 	int clock;
 	if (IS_PINEVIEW(dev_priv))
-		clock = KHz(dev_priv->rawclk_freq);
+		clock = KHz(RUNTIME_INFO(dev_priv)->rawclk_freq);
 	else
 		clock = KHz(dev_priv->cdclk.hw.cdclk);
@@ -1476,7 +1503,7 @@ static u32 i965_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 	int clock;
 	if (IS_G4X(dev_priv))
-		clock = KHz(dev_priv->rawclk_freq);
+		clock = KHz(RUNTIME_INFO(dev_priv)->rawclk_freq);
 	else
 		clock = KHz(dev_priv->cdclk.hw.cdclk);
@@ -1493,14 +1520,14 @@ static u32 vlv_hz_to_pwm(struct intel_connector *connector, u32 pwm_freq_hz)
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 	int mul, clock;
-	if ((I915_READ(CBR1_VLV) & CBR_PWM_CLOCK_MUX_SELECT) == 0) {
+	if ((intel_de_read(dev_priv, CBR1_VLV) & CBR_PWM_CLOCK_MUX_SELECT) == 0) {
 		if (IS_CHERRYVIEW(dev_priv))
 			clock = KHz(19200);
 		else
 			clock = MHz(25);
 		mul = 16;
 	} else {
-		clock = KHz(dev_priv->rawclk_freq);
+		clock = KHz(RUNTIME_INFO(dev_priv)->rawclk_freq);
 		mul = 128;
 	}
@@ -1515,22 +1542,26 @@ static u32 get_backlight_max_vbt(struct intel_connector *connector)
 	u32 pwm;
 	if (!panel->backlight.hz_to_pwm) {
-		DRM_DEBUG_KMS("backlight frequency conversion not supported\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "backlight frequency conversion not supported\n");
 		return 0;
 	}
 	if (pwm_freq_hz) {
-		DRM_DEBUG_KMS("VBT defined backlight frequency %u Hz\n",
-			      pwm_freq_hz);
+		drm_dbg_kms(&dev_priv->drm,
+			    "VBT defined backlight frequency %u Hz\n",
+			    pwm_freq_hz);
 	} else {
 		pwm_freq_hz = 200;
-		DRM_DEBUG_KMS("default backlight frequency %u Hz\n",
-			      pwm_freq_hz);
+		drm_dbg_kms(&dev_priv->drm,
+			    "default backlight frequency %u Hz\n",
+			    pwm_freq_hz);
 	}
 	pwm = panel->backlight.hz_to_pwm(connector, pwm_freq_hz);
 	if (!pwm) {
-		DRM_DEBUG_KMS("backlight frequency conversion failed\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "backlight frequency conversion failed\n");
 		return 0;
 	}
@@ -1546,7 +1577,7 @@ static u32 get_backlight_min_vbt(struct intel_connector *connector)
 	struct intel_panel *panel = &connector->panel;
 	int min;
-	WARN_ON(panel->backlight.max == 0);
+	drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0);
 	/*
 	 * XXX: If the vbt value is 255, it makes min equal to max, which leads
@@ -1557,8 +1588,9 @@ static u32 get_backlight_min_vbt(struct intel_connector *connector)
 	 */
 	min = clamp_t(int, dev_priv->vbt.backlight.min_brightness, 0, 64);
 	if (min != dev_priv->vbt.backlight.min_brightness) {
-		DRM_DEBUG_KMS("clamping VBT min backlight %d/255 to %d/255\n",
-			      dev_priv->vbt.backlight.min_brightness, min);
+		drm_dbg_kms(&dev_priv->drm,
+			    "clamping VBT min backlight %d/255 to %d/255\n",
+			    dev_priv->vbt.backlight.min_brightness, min);
 	}
 	/* vbt value is a coefficient in range [0..255] */
@@ -1573,18 +1605,18 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus
 	bool alt, cpu_mode;
 	if (HAS_PCH_LPT(dev_priv))
-		alt = I915_READ(SOUTH_CHICKEN2) & LPT_PWM_GRANULARITY;
+		alt = intel_de_read(dev_priv, SOUTH_CHICKEN2) & LPT_PWM_GRANULARITY;
 	else
-		alt = I915_READ(SOUTH_CHICKEN1) & SPT_PWM_GRANULARITY;
+		alt = intel_de_read(dev_priv, SOUTH_CHICKEN1) & SPT_PWM_GRANULARITY;
 	panel->backlight.alternate_pwm_increment = alt;
-	pch_ctl1 = I915_READ(BLC_PWM_PCH_CTL1);
+	pch_ctl1 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL1);
 	panel->backlight.active_low_pwm = pch_ctl1 & BLM_PCH_POLARITY;
-	pch_ctl2 = I915_READ(BLC_PWM_PCH_CTL2);
+	pch_ctl2 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL2);
 	panel->backlight.max = pch_ctl2 >> 16;
-	cpu_ctl2 = I915_READ(BLC_PWM_CPU_CTL2);
+	cpu_ctl2 = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2);
 	if (!panel->backlight.max)
 		panel->backlight.max = get_backlight_max_vbt(connector);
@@ -1608,13 +1640,16 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus
 			       panel->backlight.max);
 	if (cpu_mode) {
-		DRM_DEBUG_KMS("CPU backlight register was enabled, switching to PCH override\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "CPU backlight register was enabled, switching to PCH override\n");
 		/* Write converted CPU PWM value to PCH override register */
 		lpt_set_backlight(connector->base.state, panel->backlight.level);
-		I915_WRITE(BLC_PWM_PCH_CTL1, pch_ctl1 | BLM_PCH_OVERRIDE_ENABLE);
+		intel_de_write(dev_priv, BLC_PWM_PCH_CTL1,
+			       pch_ctl1 | BLM_PCH_OVERRIDE_ENABLE);
-		I915_WRITE(BLC_PWM_CPU_CTL2, cpu_ctl2 & ~BLM_PWM_ENABLE);
+		intel_de_write(dev_priv, BLC_PWM_CPU_CTL2,
+			       cpu_ctl2 & ~BLM_PWM_ENABLE);
 	}
 	return 0;
@@ -1626,10 +1661,10 @@ static int pch_setup_backlight(struct intel_connector *connector, enum pipe unus
 	struct intel_panel *panel = &connector->panel;
 	u32 cpu_ctl2, pch_ctl1, pch_ctl2, val;
-	pch_ctl1 = I915_READ(BLC_PWM_PCH_CTL1);
+	pch_ctl1 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL1);
 	panel->backlight.active_low_pwm = pch_ctl1 & BLM_PCH_POLARITY;
-	pch_ctl2 = I915_READ(BLC_PWM_PCH_CTL2);
+	pch_ctl2 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL2);
 	panel->backlight.max = pch_ctl2 >> 16;
 	if (!panel->backlight.max)
@@ -1645,7 +1680,7 @@ static int pch_setup_backlight(struct intel_connector *connector, enum pipe unus
 	panel->backlight.level = clamp(val, panel->backlight.min,
 				       panel->backlight.max);
-	cpu_ctl2 = I915_READ(BLC_PWM_CPU_CTL2);
+	cpu_ctl2 = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2);
 	panel->backlight.enabled = (cpu_ctl2 & BLM_PWM_ENABLE) &&
 		(pch_ctl1 & BLM_PCH_PWM_ENABLE);
@@ -1658,7 +1693,7 @@ static int i9xx_setup_backlight(struct intel_connector *connector, enum pipe unu
 	struct intel_panel *panel = &connector->panel;
 	u32 ctl, val;
-	ctl = I915_READ(BLC_PWM_CTL);
+	ctl = intel_de_read(dev_priv, BLC_PWM_CTL);
 	if (IS_GEN(dev_priv, 2) || IS_I915GM(dev_priv) || IS_I945GM(dev_priv))
 		panel->backlight.combination_mode = ctl & BLM_LEGACY_MODE;
@@ -1697,11 +1732,11 @@ static int i965_setup_backlight(struct intel_connector *connector, enum pipe unu
 	struct intel_panel *panel = &connector->panel;
 	u32 ctl, ctl2, val;
-	ctl2 = I915_READ(BLC_PWM_CTL2);
+	ctl2 = intel_de_read(dev_priv, BLC_PWM_CTL2);
 	panel->backlight.combination_mode = ctl2 & BLM_COMBINATION_MODE;
 	panel->backlight.active_low_pwm = ctl2 & BLM_POLARITY_I965;
-	ctl = I915_READ(BLC_PWM_CTL);
+	ctl = intel_de_read(dev_priv, BLC_PWM_CTL);
 	panel->backlight.max = ctl >> 16;
 	if (!panel->backlight.max)
@@ -1731,13 +1766,13 @@ static int vlv_setup_backlight(struct intel_connector *connector, enum pipe pipe
 	struct intel_panel *panel = &connector->panel;
 	u32 ctl, ctl2, val;
-	if (WARN_ON(pipe != PIPE_A && pipe != PIPE_B))
+	if (drm_WARN_ON(&dev_priv->drm, pipe != PIPE_A && pipe != PIPE_B))
 		return -ENODEV;
-	ctl2 = I915_READ(VLV_BLC_PWM_CTL2(pipe));
+	ctl2 = intel_de_read(dev_priv, VLV_BLC_PWM_CTL2(pipe));
 	panel->backlight.active_low_pwm = ctl2 & BLM_POLARITY_I965;
-	ctl = I915_READ(VLV_BLC_PWM_CTL(pipe));
+	ctl = intel_de_read(dev_priv, VLV_BLC_PWM_CTL(pipe));
 	panel->backlight.max = ctl >> 16;
 	if (!panel->backlight.max)
@@ -1767,18 +1802,20 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused)
 	panel->backlight.controller = dev_priv->vbt.backlight.controller;
-	pwm_ctl = I915_READ(BXT_BLC_PWM_CTL(panel->backlight.controller));
+	pwm_ctl = intel_de_read(dev_priv,
+				BXT_BLC_PWM_CTL(panel->backlight.controller));
 	/* Controller 1 uses the utility pin. */
 	if (panel->backlight.controller == 1) {
-		val = I915_READ(UTIL_PIN_CTL);
+		val = intel_de_read(dev_priv, UTIL_PIN_CTL);
 		panel->backlight.util_pin_active_low =
 					val & UTIL_PIN_POLARITY;
 	}
 	panel->backlight.active_low_pwm = pwm_ctl & BXT_BLC_PWM_POLARITY;
 	panel->backlight.max =
-		I915_READ(BXT_BLC_PWM_FREQ(panel->backlight.controller));
+		intel_de_read(dev_priv,
+			      BXT_BLC_PWM_FREQ(panel->backlight.controller));
 	if (!panel->backlight.max)
 		panel->backlight.max = get_backlight_max_vbt(connector);
@@ -1812,11 +1849,13 @@ cnp_setup_backlight(struct intel_connector *connector, enum pipe unused)
 	 */
 	panel->backlight.controller = 0;
-	pwm_ctl = I915_READ(BXT_BLC_PWM_CTL(panel->backlight.controller));
+	pwm_ctl = intel_de_read(dev_priv,
+				BXT_BLC_PWM_CTL(panel->backlight.controller));
 	panel->backlight.active_low_pwm = pwm_ctl & BXT_BLC_PWM_POLARITY;
 	panel->backlight.max =
-		I915_READ(BXT_BLC_PWM_FREQ(panel->backlight.controller));
+		intel_de_read(dev_priv,
+			      BXT_BLC_PWM_FREQ(panel->backlight.controller));
 	if (!panel->backlight.max)
 		panel->backlight.max = get_backlight_max_vbt(connector);
@@ -1855,7 +1894,8 @@ static int pwm_setup_backlight(struct intel_connector *connector,
 	}
 	if (IS_ERR(panel->backlight.pwm)) {
-		DRM_ERROR("Failed to get the %s PWM chip\n", desc);
+		drm_err(&dev_priv->drm, "Failed to get the %s PWM chip\n",
+			desc);
 		panel->backlight.pwm = NULL;
 		return -ENODEV;
 	}
@@ -1869,7 +1909,7 @@ static int pwm_setup_backlight(struct intel_connector *connector,
 	retval = pwm_config(panel->backlight.pwm, CRC_PMIC_PWM_PERIOD_NS,
 			    CRC_PMIC_PWM_PERIOD_NS);
 	if (retval < 0) {
-		DRM_ERROR("Failed to configure the pwm chip\n");
+		drm_err(&dev_priv->drm, "Failed to configure the pwm chip\n");
 		pwm_put(panel->backlight.pwm);
 		panel->backlight.pwm = NULL;
 		return retval;
@@ -1882,7 +1922,8 @@ static int pwm_setup_backlight(struct intel_connector *connector,
 			     CRC_PMIC_PWM_PERIOD_NS);
 	panel->backlight.enabled = panel->backlight.level != 0;
-	DRM_INFO("Using %s PWM for LCD backlight control\n", desc);
+	drm_info(&dev_priv->drm, "Using %s PWM for LCD backlight control\n",
+		 desc);
 	return 0;
 }
@@ -1913,15 +1954,17 @@ int intel_panel_setup_backlight(struct drm_connector *connector, enum pipe pipe)
 	if (!dev_priv->vbt.backlight.present) {
 		if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) {
-			DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "no backlight present per VBT, but present per quirk\n");
 		} else {
-			DRM_DEBUG_KMS("no backlight present per VBT\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "no backlight present per VBT\n");
 			return 0;
 		}
 	}
 	/* ensure intel_panel has been initialized first */
-	if (WARN_ON(!panel->backlight.setup))
+	if (drm_WARN_ON(&dev_priv->drm, !panel->backlight.setup))
 		return -ENODEV;
 	/* set level and max in panel struct */
@@ -1930,17 +1973,19 @@ int intel_panel_setup_backlight(struct drm_connector *connector, enum pipe pipe)
 	mutex_unlock(&dev_priv->backlight_lock);
 	if (ret) {
-		DRM_DEBUG_KMS("failed to setup backlight for connector %s\n",
-			      connector->name);
+		drm_dbg_kms(&dev_priv->drm,
+			    "failed to setup backlight for connector %s\n",
+			    connector->name);
 		return ret;
 	}
 	panel->backlight.present = true;
-	DRM_DEBUG_KMS("Connector %s backlight initialized, %s, brightness %u/%u\n",
-		      connector->name,
-		      enableddisabled(panel->backlight.enabled),
-		      panel->backlight.level, panel->backlight.max);
+	drm_dbg_kms(&dev_priv->drm,
+		    "Connector %s backlight initialized, %s, brightness %u/%u\n",
+		    connector->name,
+		    enableddisabled(panel->backlight.enabled),
+		    panel->backlight.level, panel->backlight.max);
 	return 0;
 }


@@ -110,8 +110,8 @@ static int i9xx_pipe_crc_auto_source(struct drm_i915_private *dev_priv,
 		*source = INTEL_PIPE_CRC_SOURCE_DP_D;
 		break;
 	default:
-		WARN(1, "nonexisting DP port %c\n",
-		     port_name(dig_port->base.port));
+		drm_WARN(dev, 1, "nonexisting DP port %c\n",
+			 port_name(dig_port->base.port));
 		break;
 	}
 	break;
@@ -172,7 +172,7 @@ static int vlv_pipe_crc_ctl_reg(struct drm_i915_private *dev_priv,
 	 *   - DisplayPort scrambling: used for EMI reduction
 	 */
 	if (need_stable_symbols) {
-		u32 tmp = I915_READ(PORT_DFT2_G4X);
+		u32 tmp = intel_de_read(dev_priv, PORT_DFT2_G4X);
 		tmp |= DC_BALANCE_RESET_VLV;
 		switch (pipe) {
@@ -188,7 +188,7 @@ static int vlv_pipe_crc_ctl_reg(struct drm_i915_private *dev_priv,
 		default:
 			return -EINVAL;
 		}
-		I915_WRITE(PORT_DFT2_G4X, tmp);
+		intel_de_write(dev_priv, PORT_DFT2_G4X, tmp);
 	}
 	return 0;
@@ -237,7 +237,7 @@ static int i9xx_pipe_crc_ctl_reg(struct drm_i915_private *dev_priv,
 static void vlv_undo_pipe_scramble_reset(struct drm_i915_private *dev_priv,
 					 enum pipe pipe)
 {
-	u32 tmp = I915_READ(PORT_DFT2_G4X);
+	u32 tmp = intel_de_read(dev_priv, PORT_DFT2_G4X);
 	switch (pipe) {
 	case PIPE_A:
@@ -254,7 +254,7 @@ static void vlv_undo_pipe_scramble_reset(struct drm_i915_private *dev_priv,
 	}
 	if (!(tmp & PIPE_SCRAMBLE_RESET_MASK))
 		tmp &= ~DC_BALANCE_RESET_VLV;
-	I915_WRITE(PORT_DFT2_G4X, tmp);
+	intel_de_write(dev_priv, PORT_DFT2_G4X, tmp);
 }
 static int ilk_pipe_crc_ctl_reg(enum intel_pipe_crc_source *source,
@@ -328,7 +328,8 @@ put_state:
 	drm_atomic_state_put(state);
 unlock:
-	WARN(ret, "Toggling workaround to %i returns %i\n", enable, ret);
+	drm_WARN(&dev_priv->drm, ret,
+		 "Toggling workaround to %i returns %i\n", enable, ret);
 	drm_modeset_drop_locks(&ctx);
 	drm_modeset_acquire_fini(&ctx);
 }
@@ -570,7 +571,7 @@ int intel_crtc_verify_crc_source(struct drm_crtc *crtc, const char *source_name,
 	enum intel_pipe_crc_source source;
 	if (display_crc_ctl_parse_source(source_name, &source) < 0) {
-		DRM_DEBUG_DRIVER("unknown source %s\n", source_name);
+		drm_dbg(&dev_priv->drm, "unknown source %s\n", source_name);
 		return -EINVAL;
 	}
@@ -595,14 +596,15 @@ int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name)
 	bool enable;
 	if (display_crc_ctl_parse_source(source_name, &source) < 0) {
-		DRM_DEBUG_DRIVER("unknown source %s\n", source_name);
+		drm_dbg(&dev_priv->drm, "unknown source %s\n", source_name);
 		return -EINVAL;
 	}
 	power_domain = POWER_DOMAIN_PIPE(crtc->index);
 	wakeref = intel_display_power_get_if_enabled(dev_priv, power_domain);
 	if (!wakeref) {
-		DRM_DEBUG_KMS("Trying to capture CRC while pipe is off\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Trying to capture CRC while pipe is off\n");
 		return -EIO;
 	}
@@ -615,8 +617,8 @@ int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name)
 		goto out;
 	pipe_crc->source = source;
-	I915_WRITE(PIPE_CRC_CTL(crtc->index), val);
-	POSTING_READ(PIPE_CRC_CTL(crtc->index));
+	intel_de_write(dev_priv, PIPE_CRC_CTL(crtc->index), val);
+	intel_de_posting_read(dev_priv, PIPE_CRC_CTL(crtc->index));
 	if (!source) {
 		if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
@@ -650,8 +652,8 @@ void intel_crtc_enable_pipe_crc(struct intel_crtc *intel_crtc)
 	/* Don't need pipe_crc->lock here, IRQs are not generated. */
 	pipe_crc->skipped = 0;
-	I915_WRITE(PIPE_CRC_CTL(crtc->index), val);
-	POSTING_READ(PIPE_CRC_CTL(crtc->index));
+	intel_de_write(dev_priv, PIPE_CRC_CTL(crtc->index), val);
+	intel_de_posting_read(dev_priv, PIPE_CRC_CTL(crtc->index));
 }
 void intel_crtc_disable_pipe_crc(struct intel_crtc *intel_crtc)
@@ -665,7 +667,7 @@ void intel_crtc_disable_pipe_crc(struct intel_crtc *intel_crtc)
 	pipe_crc->skipped = INT_MIN;
 	spin_unlock_irq(&pipe_crc->lock);
-	I915_WRITE(PIPE_CRC_CTL(crtc->index), 0);
-	POSTING_READ(PIPE_CRC_CTL(crtc->index));
+	intel_de_write(dev_priv, PIPE_CRC_CTL(crtc->index), 0);
+	intel_de_posting_read(dev_priv, PIPE_CRC_CTL(crtc->index));
 	intel_synchronize_irq(dev_priv);
 }


@@ -59,11 +59,28 @@
  * get called by the frontbuffer tracking code. Note that because of locking
  * issues the self-refresh re-enable code is done from a work queue, which
  * must be correctly synchronized/cancelled when shutting down the pipe."
+ *
+ * DC3CO (DC3 clock off)
+ *
+ * On top of PSR2, GEN12 adds an intermediate power savings state that turns
+ * the clock off automatically during PSR2 idle state.
+ * The smaller overhead of DC3co entry/exit vs. the overhead of PSR2 deep sleep
+ * entry/exit allows the HW to enter a low-power state even when page flipping
+ * periodically (for instance a 30fps video playback scenario).
+ *
+ * Every time a flip occurs PSR2 will get out of deep sleep state (if it was),
+ * so DC3CO is enabled and tgl_dc3co_disable_work is scheduled to run after 6
+ * frames; if no other flip occurs and the function above is executed, DC3CO is
+ * disabled and PSR2 is configured to enter deep sleep, resetting again in case
+ * of another flip.
+ * Front buffer modifications do not trigger DC3CO activation on purpose as it
+ * would bring a lot of complexity and most of the modern systems will only
+ * use page flips.
  */
-static bool psr_global_enabled(u32 debug)
+static bool psr_global_enabled(struct drm_i915_private *i915)
 {
-	switch (debug & I915_PSR_DEBUG_MODE_MASK) {
+	switch (i915->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
 	case I915_PSR_DEBUG_DEFAULT:
 		return i915_modparams.enable_psr;
 	case I915_PSR_DEBUG_DISABLE:
@@ -77,8 +94,8 @@ static bool intel_psr2_enabled(struct drm_i915_private *dev_priv,
 			       const struct intel_crtc_state *crtc_state)
 {
 	/* Cannot enable DSC and PSR2 simultaneously */
-	WARN_ON(crtc_state->dsc.compression_enable &&
-		crtc_state->has_psr2);
+	drm_WARN_ON(&dev_priv->drm, crtc_state->dsc.compression_enable &&
+		    crtc_state->has_psr2);
 	switch (dev_priv->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
 	case I915_PSR_DEBUG_DISABLE:
@@ -114,10 +131,10 @@ static void psr_irq_control(struct drm_i915_private *dev_priv)
 		       EDP_PSR_PRE_ENTRY(trans_shift);
 	/* Warning: it is masking/setting reserved bits too */
-	val = I915_READ(imr_reg);
+	val = intel_de_read(dev_priv, imr_reg);
 	val &= ~EDP_PSR_TRANS_MASK(trans_shift);
 	val |= ~mask;
-	I915_WRITE(imr_reg, val);
+	intel_de_write(dev_priv, imr_reg, val);
 }
 static void psr_event_print(u32 val, bool psr2_enabled)
@@ -174,20 +191,24 @@ void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir)
 	if (psr_iir & EDP_PSR_PRE_ENTRY(trans_shift)) {
 		dev_priv->psr.last_entry_attempt = time_ns;
-		DRM_DEBUG_KMS("[transcoder %s] PSR entry attempt in 2 vblanks\n",
-			      transcoder_name(cpu_transcoder));
+		drm_dbg_kms(&dev_priv->drm,
+			    "[transcoder %s] PSR entry attempt in 2 vblanks\n",
+			    transcoder_name(cpu_transcoder));
 	}
 	if (psr_iir & EDP_PSR_POST_EXIT(trans_shift)) {
 		dev_priv->psr.last_exit = time_ns;
-		DRM_DEBUG_KMS("[transcoder %s] PSR exit completed\n",
-			      transcoder_name(cpu_transcoder));
+		drm_dbg_kms(&dev_priv->drm,
+			    "[transcoder %s] PSR exit completed\n",
+			    transcoder_name(cpu_transcoder));
 		if (INTEL_GEN(dev_priv) >= 9) {
-			u32 val = I915_READ(PSR_EVENT(cpu_transcoder));
+			u32 val = intel_de_read(dev_priv,
+						PSR_EVENT(cpu_transcoder));
 			bool psr2_enabled = dev_priv->psr.psr2_enabled;
-			I915_WRITE(PSR_EVENT(cpu_transcoder), val);
+			intel_de_write(dev_priv, PSR_EVENT(cpu_transcoder),
+				       val);
 			psr_event_print(val, psr2_enabled);
 		}
 	}
@@ -195,7 +216,7 @@ void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir)
 	if (psr_iir & EDP_PSR_ERROR(trans_shift)) {
 		u32 val;
-		DRM_WARN("[transcoder %s] PSR aux error\n",
-			 transcoder_name(cpu_transcoder));
+		drm_warn(&dev_priv->drm, "[transcoder %s] PSR aux error\n",
+			 transcoder_name(cpu_transcoder));
 		dev_priv->psr.irq_aux_error = true;
@@ -208,9 +229,9 @@ void intel_psr_irq_handler(struct drm_i915_private *dev_priv, u32 psr_iir)
 	 * again so we don't care about unmask the interruption
 	 * or unset irq_aux_error.
 	 */
-	val = I915_READ(imr_reg);
+	val = intel_de_read(dev_priv, imr_reg);
 	val |= EDP_PSR_ERROR(trans_shift);
-	I915_WRITE(imr_reg, val);
+	intel_de_write(dev_priv, imr_reg, val);
 	schedule_work(&dev_priv->psr.work);
 }
@@ -270,7 +291,8 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp)
 		to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
 	if (dev_priv->psr.dp) {
-		DRM_WARN("More than one eDP panel found, PSR support should be extended\n");
+		drm_warn(&dev_priv->drm,
+			 "More than one eDP panel found, PSR support should be extended\n");
 		return;
 	}
@@ -279,16 +301,18 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp)
 	if (!intel_dp->psr_dpcd[0])
 		return;
-	DRM_DEBUG_KMS("eDP panel supports PSR version %x\n",
-		      intel_dp->psr_dpcd[0]);
+	drm_dbg_kms(&dev_priv->drm, "eDP panel supports PSR version %x\n",
+		    intel_dp->psr_dpcd[0]);
 	if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_PSR)) {
-		DRM_DEBUG_KMS("PSR support not currently available for this panel\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR support not currently available for this panel\n");
 		return;
 	}
 	if (!(intel_dp->edp_dpcd[1] & DP_EDP_SET_POWER_CAP)) {
-		DRM_DEBUG_KMS("Panel lacks power state control, PSR cannot be enabled\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Panel lacks power state control, PSR cannot be enabled\n");
 		return;
 	}
@@ -316,8 +340,8 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp)
 	 * GTC first.
 	 */
 	dev_priv->psr.sink_psr2_support = y_req && alpm;
-	DRM_DEBUG_KMS("PSR2 %ssupported\n",
-		      dev_priv->psr.sink_psr2_support ? "" : "not ");
+	drm_dbg_kms(&dev_priv->drm, "PSR2 %ssupported\n",
+		    dev_priv->psr.sink_psr2_support ? "" : "not ");
 	if (dev_priv->psr.sink_psr2_support) {
 		dev_priv->psr.colorimetry_support =
@@ -380,8 +404,9 @@ static void hsw_psr_setup_aux(struct intel_dp *intel_dp)
 	BUILD_BUG_ON(sizeof(aux_msg) > 20);
 	for (i = 0; i < sizeof(aux_msg); i += 4)
-		I915_WRITE(EDP_PSR_AUX_DATA(dev_priv->psr.transcoder, i >> 2),
-			   intel_dp_pack_aux(&aux_msg[i], sizeof(aux_msg) - i));
+		intel_de_write(dev_priv,
+			       EDP_PSR_AUX_DATA(dev_priv->psr.transcoder, i >> 2),
+			       intel_dp_pack_aux(&aux_msg[i], sizeof(aux_msg) - i));
 	aux_clock_divider = intel_dp->get_aux_clock_divider(intel_dp, 0);
@@ -391,7 +416,8 @@ static void hsw_psr_setup_aux(struct intel_dp *intel_dp)
 	/* Select only valid bits for SRD_AUX_CTL */
 	aux_ctl &= psr_aux_mask;
-	I915_WRITE(EDP_PSR_AUX_CTL(dev_priv->psr.transcoder), aux_ctl);
+	intel_de_write(dev_priv, EDP_PSR_AUX_CTL(dev_priv->psr.transcoder),
+		       aux_ctl);
 }
 static void intel_psr_enable_sink(struct intel_dp *intel_dp)
@@ -454,22 +480,30 @@ static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp)
 	return val;
 }
+static u8 psr_compute_idle_frames(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	int idle_frames;
+	/* Let's use 6 as the minimum to cover all known cases including the
+	 * off-by-one issue that HW has in some cases.
+	 */
+	idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
+	idle_frames = max(idle_frames, dev_priv->psr.sink_sync_latency + 1);
+	if (drm_WARN_ON(&dev_priv->drm, idle_frames > 0xf))
+		idle_frames = 0xf;
+	return idle_frames;
+}
 static void hsw_activate_psr1(struct intel_dp *intel_dp)
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 	u32 max_sleep_time = 0x1f;
 	u32 val = EDP_PSR_ENABLE;
-	/* Let's use 6 as the minimum to cover all known cases including the
-	 * off-by-one issue that HW has in some cases.
-	 */
-	int idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
-	/* sink_sync_latency of 8 means source has to wait for more than 8
-	 * frames, we'll go with 9 frames for now
-	 */
-	idle_frames = max(idle_frames, dev_priv->psr.sink_sync_latency + 1);
-	val |= idle_frames << EDP_PSR_IDLE_FRAME_SHIFT;
+	val |= psr_compute_idle_frames(intel_dp) << EDP_PSR_IDLE_FRAME_SHIFT;
 	val |= max_sleep_time << EDP_PSR_MAX_SLEEP_TIME_SHIFT;
 	if (IS_HASWELL(dev_priv))
@@ -483,9 +517,9 @@ static void hsw_activate_psr1(struct intel_dp *intel_dp)
 	if (INTEL_GEN(dev_priv) >= 8)
 		val |= EDP_PSR_CRC_ENABLE;
-	val |= (I915_READ(EDP_PSR_CTL(dev_priv->psr.transcoder)) &
+	val |= (intel_de_read(dev_priv, EDP_PSR_CTL(dev_priv->psr.transcoder)) &
 		EDP_PSR_RESTORE_PSR_ACTIVE_CTX_MASK);
-	I915_WRITE(EDP_PSR_CTL(dev_priv->psr.transcoder), val);
+	intel_de_write(dev_priv, EDP_PSR_CTL(dev_priv->psr.transcoder), val);
 }
 static void hsw_activate_psr2(struct intel_dp *intel_dp)
@@ -493,13 +527,7 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 	u32 val;
-	/* Let's use 6 as the minimum to cover all known cases including the
-	 * off-by-one issue that HW has in some cases.
-	 */
-	int idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
-	idle_frames = max(idle_frames, dev_priv->psr.sink_sync_latency + 1);
-	val = idle_frames << EDP_PSR2_IDLE_FRAME_SHIFT;
+	val = psr_compute_idle_frames(intel_dp) << EDP_PSR2_IDLE_FRAME_SHIFT;
 	val |= EDP_PSR2_ENABLE | EDP_SU_TRACK_ENABLE;
 	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
@@ -521,9 +549,9 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
 	 * PSR2 HW is incorrectly using EDP_PSR_TP1_TP3_SEL and BSpec is
 	 * recommending keep this bit unset while PSR2 is enabled.
 	 */
-	I915_WRITE(EDP_PSR_CTL(dev_priv->psr.transcoder), 0);
-	I915_WRITE(EDP_PSR2_CTL(dev_priv->psr.transcoder), val);
+	intel_de_write(dev_priv, EDP_PSR_CTL(dev_priv->psr.transcoder), 0);
+	intel_de_write(dev_priv, EDP_PSR2_CTL(dev_priv->psr.transcoder), val);
 }
 static bool
@@ -552,10 +580,10 @@ static void psr2_program_idle_frames(struct drm_i915_private *dev_priv,
 	u32 val;
 	idle_frames <<= EDP_PSR2_IDLE_FRAME_SHIFT;
-	val = I915_READ(EDP_PSR2_CTL(dev_priv->psr.transcoder));
+	val = intel_de_read(dev_priv, EDP_PSR2_CTL(dev_priv->psr.transcoder));
 	val &= ~EDP_PSR2_IDLE_FRAME_MASK;
 	val |= idle_frames;
-	I915_WRITE(EDP_PSR2_CTL(dev_priv->psr.transcoder), val);
+	intel_de_write(dev_priv, EDP_PSR2_CTL(dev_priv->psr.transcoder), val);
 }
 static void tgl_psr2_enable_dc3co(struct drm_i915_private *dev_priv)
@@ -566,29 +594,22 @@ static void tgl_psr2_enable_dc3co(struct drm_i915_private *dev_priv)
 static void tgl_psr2_disable_dc3co(struct drm_i915_private *dev_priv)
 {
-	int idle_frames;
+	struct intel_dp *intel_dp = dev_priv->psr.dp;
 	intel_display_power_set_target_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6);
-	/*
-	 * Restore PSR2 idle frame let's use 6 as the minimum to cover all known
-	 * cases including the off-by-one issue that HW has in some cases.
-	 */
-	idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
-	idle_frames = max(idle_frames, dev_priv->psr.sink_sync_latency + 1);
-	psr2_program_idle_frames(dev_priv, idle_frames);
+	psr2_program_idle_frames(dev_priv, psr_compute_idle_frames(intel_dp));
 }
-static void tgl_dc5_idle_thread(struct work_struct *work)
+static void tgl_dc3co_disable_work(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
-		container_of(work, typeof(*dev_priv), psr.idle_work.work);
+		container_of(work, typeof(*dev_priv), psr.dc3co_work.work);
 	mutex_lock(&dev_priv->psr.lock);
 	/* If delayed work is pending, it is not idle */
-	if (delayed_work_pending(&dev_priv->psr.idle_work))
+	if (delayed_work_pending(&dev_priv->psr.dc3co_work))
 		goto unlock;
-	DRM_DEBUG_KMS("DC5/6 idle thread\n");
 	tgl_psr2_disable_dc3co(dev_priv);
 unlock:
 	mutex_unlock(&dev_priv->psr.lock);
@@ -599,11 +620,41 @@ static void tgl_disallow_dc3co_on_psr2_exit(struct drm_i915_private *dev_priv)
 	if (!dev_priv->psr.dc3co_enabled)
 		return;
-	cancel_delayed_work(&dev_priv->psr.idle_work);
+	cancel_delayed_work(&dev_priv->psr.dc3co_work);
 	/* Before PSR2 exit disallow dc3co*/
 	tgl_psr2_disable_dc3co(dev_priv);
 }
+static void
+tgl_dc3co_exitline_compute_config(struct intel_dp *intel_dp,
+				  struct intel_crtc_state *crtc_state)
+{
+	const u32 crtc_vdisplay = crtc_state->uapi.adjusted_mode.crtc_vdisplay;
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	u32 exit_scanlines;
+	if (!(dev_priv->csr.allowed_dc_mask & DC_STATE_EN_DC3CO))
+		return;
+	/* B.Specs:49196 DC3CO only works with pipeA and DDIA.*/
+	if (to_intel_crtc(crtc_state->uapi.crtc)->pipe != PIPE_A ||
+	    dig_port->base.port != PORT_A)
+		return;
+	/*
+	 * DC3CO Exit time 200us B.Spec 49196
+	 * PSR2 transcoder Early Exit scanlines = ROUNDUP(200 / line time) + 1
*/
exit_scanlines =
intel_usecs_to_scanlines(&crtc_state->uapi.adjusted_mode, 200) + 1;
if (drm_WARN_ON(&dev_priv->drm, exit_scanlines > crtc_vdisplay))
return;
crtc_state->dc3co_exitline = crtc_vdisplay - exit_scanlines;
}
static bool intel_psr2_config_valid(struct intel_dp *intel_dp, static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state) struct intel_crtc_state *crtc_state)
{ {
@@ -616,8 +667,9 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
 		return false;
 
 	if (!transcoder_has_psr2(dev_priv, crtc_state->cpu_transcoder)) {
-		DRM_DEBUG_KMS("PSR2 not supported in transcoder %s\n",
-			      transcoder_name(crtc_state->cpu_transcoder));
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR2 not supported in transcoder %s\n",
+			    transcoder_name(crtc_state->cpu_transcoder));
 		return false;
 	}
 
@@ -627,7 +679,8 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
 	 * over PSR2.
 	 */
 	if (crtc_state->dsc.compression_enable) {
-		DRM_DEBUG_KMS("PSR2 cannot be enabled since DSC is enabled\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR2 cannot be enabled since DSC is enabled\n");
 		return false;
 	}
 
@@ -646,15 +699,17 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
 	}
 
 	if (crtc_hdisplay > psr_max_h || crtc_vdisplay > psr_max_v) {
-		DRM_DEBUG_KMS("PSR2 not enabled, resolution %dx%d > max supported %dx%d\n",
-			      crtc_hdisplay, crtc_vdisplay,
-			      psr_max_h, psr_max_v);
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR2 not enabled, resolution %dx%d > max supported %dx%d\n",
+			    crtc_hdisplay, crtc_vdisplay,
+			    psr_max_h, psr_max_v);
 		return false;
 	}
 
 	if (crtc_state->pipe_bpp > max_bpp) {
-		DRM_DEBUG_KMS("PSR2 not enabled, pipe bpp %d > max supported %d\n",
-			      crtc_state->pipe_bpp, max_bpp);
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR2 not enabled, pipe bpp %d > max supported %d\n",
+			    crtc_state->pipe_bpp, max_bpp);
 		return false;
 	}
 
@@ -665,16 +720,19 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
 	 * x granularity.
 	 */
 	if (crtc_hdisplay % dev_priv->psr.su_x_granularity) {
-		DRM_DEBUG_KMS("PSR2 not enabled, hdisplay(%d) not multiple of %d\n",
-			      crtc_hdisplay, dev_priv->psr.su_x_granularity);
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR2 not enabled, hdisplay(%d) not multiple of %d\n",
+			    crtc_hdisplay, dev_priv->psr.su_x_granularity);
 		return false;
 	}
 
 	if (crtc_state->crc_enabled) {
-		DRM_DEBUG_KMS("PSR2 not enabled because it would inhibit pipe CRC calculation\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR2 not enabled because it would inhibit pipe CRC calculation\n");
 		return false;
 	}
 
+	tgl_dc3co_exitline_compute_config(intel_dp, crtc_state);
 	return true;
 }
 
@@ -700,31 +758,36 @@ void intel_psr_compute_config(struct intel_dp *intel_dp,
 	 * hardcoded to PORT_A
 	 */
 	if (dig_port->base.port != PORT_A) {
-		DRM_DEBUG_KMS("PSR condition failed: Port not supported\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR condition failed: Port not supported\n");
 		return;
 	}
 
 	if (dev_priv->psr.sink_not_reliable) {
-		DRM_DEBUG_KMS("PSR sink implementation is not reliable\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR sink implementation is not reliable\n");
 		return;
 	}
 
 	if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) {
-		DRM_DEBUG_KMS("PSR condition failed: Interlaced mode enabled\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR condition failed: Interlaced mode enabled\n");
 		return;
 	}
 
 	psr_setup_time = drm_dp_psr_setup_time(intel_dp->psr_dpcd);
 	if (psr_setup_time < 0) {
-		DRM_DEBUG_KMS("PSR condition failed: Invalid PSR setup time (0x%02x)\n",
-			      intel_dp->psr_dpcd[1]);
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR condition failed: Invalid PSR setup time (0x%02x)\n",
+			    intel_dp->psr_dpcd[1]);
 		return;
 	}
 
 	if (intel_usecs_to_scanlines(adjusted_mode, psr_setup_time) >
 	    adjusted_mode->crtc_vtotal - adjusted_mode->crtc_vdisplay - 1) {
-		DRM_DEBUG_KMS("PSR condition failed: PSR setup time (%d us) too long\n",
-			      psr_setup_time);
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR condition failed: PSR setup time (%d us) too long\n",
+			    psr_setup_time);
 		return;
 	}
@@ -737,10 +800,12 @@ static void intel_psr_activate(struct intel_dp *intel_dp)
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 
 	if (transcoder_has_psr2(dev_priv, dev_priv->psr.transcoder))
-		WARN_ON(I915_READ(EDP_PSR2_CTL(dev_priv->psr.transcoder)) & EDP_PSR2_ENABLE);
+		drm_WARN_ON(&dev_priv->drm,
+			    intel_de_read(dev_priv, EDP_PSR2_CTL(dev_priv->psr.transcoder)) & EDP_PSR2_ENABLE);
 
-	WARN_ON(I915_READ(EDP_PSR_CTL(dev_priv->psr.transcoder)) & EDP_PSR_ENABLE);
-	WARN_ON(dev_priv->psr.active);
+	drm_WARN_ON(&dev_priv->drm,
+		    intel_de_read(dev_priv, EDP_PSR_CTL(dev_priv->psr.transcoder)) & EDP_PSR_ENABLE);
+	drm_WARN_ON(&dev_priv->drm, dev_priv->psr.active);
 	lockdep_assert_held(&dev_priv->psr.lock);
 
 	/* psr1 and psr2 are mutually exclusive.*/
@@ -768,11 +833,11 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp,
 	if (dev_priv->psr.psr2_enabled && (IS_GEN(dev_priv, 9) &&
 					   !IS_GEMINILAKE(dev_priv))) {
 		i915_reg_t reg = CHICKEN_TRANS(cpu_transcoder);
-		u32 chicken = I915_READ(reg);
+		u32 chicken = intel_de_read(dev_priv, reg);
 
 		chicken |= PSR2_VSC_ENABLE_PROG_HEADER |
 			   PSR2_ADD_VERTICAL_LINE_COUNT;
-		I915_WRITE(reg, chicken);
+		intel_de_write(dev_priv, reg, chicken);
 	}
 
 	/*
@@ -789,9 +854,24 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp,
 	if (INTEL_GEN(dev_priv) < 11)
 		mask |= EDP_PSR_DEBUG_MASK_DISP_REG_WRITE;
 
-	I915_WRITE(EDP_PSR_DEBUG(dev_priv->psr.transcoder), mask);
+	intel_de_write(dev_priv, EDP_PSR_DEBUG(dev_priv->psr.transcoder),
+		       mask);
 
 	psr_irq_control(dev_priv);
+
+	if (crtc_state->dc3co_exitline) {
+		u32 val;
+
+		/*
+		 * TODO: if future platforms supports DC3CO in more than one
+		 * transcoder, EXITLINE will need to be unset when disabling PSR
+		 */
+		val = intel_de_read(dev_priv, EXITLINE(cpu_transcoder));
+		val &= ~EXITLINE_MASK;
+		val |= crtc_state->dc3co_exitline << EXITLINE_SHIFT;
+		val |= EXITLINE_ENABLE;
+		intel_de_write(dev_priv, EXITLINE(cpu_transcoder), val);
+	}
 }
 
 static void intel_psr_enable_locked(struct drm_i915_private *dev_priv,
@@ -800,14 +880,16 @@ static void intel_psr_enable_locked(struct drm_i915_private *dev_priv,
 	struct intel_dp *intel_dp = dev_priv->psr.dp;
 	u32 val;
 
-	WARN_ON(dev_priv->psr.enabled);
+	drm_WARN_ON(&dev_priv->drm, dev_priv->psr.enabled);
 
 	dev_priv->psr.psr2_enabled = intel_psr2_enabled(dev_priv, crtc_state);
 	dev_priv->psr.busy_frontbuffer_bits = 0;
 	dev_priv->psr.pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
 	dev_priv->psr.dc3co_enabled = !!crtc_state->dc3co_exitline;
-	dev_priv->psr.dc3co_exit_delay = intel_get_frame_time_us(crtc_state);
 	dev_priv->psr.transcoder = crtc_state->cpu_transcoder;
+	/* DC5/DC6 requires at least 6 idle frames */
+	val = usecs_to_jiffies(intel_get_frame_time_us(crtc_state) * 6);
+	dev_priv->psr.dc3co_exit_delay = val;
 
 	/*
 	 * If a PSR error happened and the driver is reloaded, the EDP_PSR_IIR
@@ -818,20 +900,22 @@ static void intel_psr_enable_locked(struct drm_i915_private *dev_priv,
 	 * to avoid any rendering problems.
 	 */
 	if (INTEL_GEN(dev_priv) >= 12) {
-		val = I915_READ(TRANS_PSR_IIR(dev_priv->psr.transcoder));
+		val = intel_de_read(dev_priv,
+				    TRANS_PSR_IIR(dev_priv->psr.transcoder));
 		val &= EDP_PSR_ERROR(0);
 	} else {
-		val = I915_READ(EDP_PSR_IIR);
+		val = intel_de_read(dev_priv, EDP_PSR_IIR);
 		val &= EDP_PSR_ERROR(dev_priv->psr.transcoder);
 	}
 	if (val) {
 		dev_priv->psr.sink_not_reliable = true;
-		DRM_DEBUG_KMS("PSR interruption error set, not enabling PSR\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR interruption error set, not enabling PSR\n");
 		return;
 	}
 
-	DRM_DEBUG_KMS("Enabling PSR%s\n",
-		      dev_priv->psr.psr2_enabled ? "2" : "1");
+	drm_dbg_kms(&dev_priv->drm, "Enabling PSR%s\n",
+		    dev_priv->psr.psr2_enabled ? "2" : "1");
 	intel_psr_setup_vsc(intel_dp, crtc_state);
 	intel_psr_enable_sink(intel_dp);
 	intel_psr_enable_source(intel_dp, crtc_state);
@@ -852,18 +936,20 @@ void intel_psr_enable(struct intel_dp *intel_dp,
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 
+	if (!CAN_PSR(dev_priv) || dev_priv->psr.dp != intel_dp)
+		return;
+
+	dev_priv->psr.force_mode_changed = false;
+
 	if (!crtc_state->has_psr)
 		return;
 
-	if (WARN_ON(!CAN_PSR(dev_priv)))
-		return;
-
-	WARN_ON(dev_priv->drrs.dp);
+	drm_WARN_ON(&dev_priv->drm, dev_priv->drrs.dp);
 
 	mutex_lock(&dev_priv->psr.lock);
 
-	if (!psr_global_enabled(dev_priv->psr.debug)) {
-		DRM_DEBUG_KMS("PSR disabled by flag\n");
+	if (!psr_global_enabled(dev_priv)) {
+		drm_dbg_kms(&dev_priv->drm, "PSR disabled by flag\n");
 		goto unlock;
 	}
@@ -879,27 +965,33 @@ static void intel_psr_exit(struct drm_i915_private *dev_priv)
 	if (!dev_priv->psr.active) {
 		if (transcoder_has_psr2(dev_priv, dev_priv->psr.transcoder)) {
-			val = I915_READ(EDP_PSR2_CTL(dev_priv->psr.transcoder));
-			WARN_ON(val & EDP_PSR2_ENABLE);
+			val = intel_de_read(dev_priv,
+					    EDP_PSR2_CTL(dev_priv->psr.transcoder));
+			drm_WARN_ON(&dev_priv->drm, val & EDP_PSR2_ENABLE);
 		}
 
-		val = I915_READ(EDP_PSR_CTL(dev_priv->psr.transcoder));
-		WARN_ON(val & EDP_PSR_ENABLE);
+		val = intel_de_read(dev_priv,
+				    EDP_PSR_CTL(dev_priv->psr.transcoder));
+		drm_WARN_ON(&dev_priv->drm, val & EDP_PSR_ENABLE);
 
 		return;
 	}
 
 	if (dev_priv->psr.psr2_enabled) {
 		tgl_disallow_dc3co_on_psr2_exit(dev_priv);
-		val = I915_READ(EDP_PSR2_CTL(dev_priv->psr.transcoder));
-		WARN_ON(!(val & EDP_PSR2_ENABLE));
+		val = intel_de_read(dev_priv,
+				    EDP_PSR2_CTL(dev_priv->psr.transcoder));
+		drm_WARN_ON(&dev_priv->drm, !(val & EDP_PSR2_ENABLE));
 		val &= ~EDP_PSR2_ENABLE;
-		I915_WRITE(EDP_PSR2_CTL(dev_priv->psr.transcoder), val);
+		intel_de_write(dev_priv,
+			       EDP_PSR2_CTL(dev_priv->psr.transcoder), val);
 	} else {
-		val = I915_READ(EDP_PSR_CTL(dev_priv->psr.transcoder));
-		WARN_ON(!(val & EDP_PSR_ENABLE));
+		val = intel_de_read(dev_priv,
+				    EDP_PSR_CTL(dev_priv->psr.transcoder));
+		drm_WARN_ON(&dev_priv->drm, !(val & EDP_PSR_ENABLE));
 		val &= ~EDP_PSR_ENABLE;
-		I915_WRITE(EDP_PSR_CTL(dev_priv->psr.transcoder), val);
+		intel_de_write(dev_priv,
+			       EDP_PSR_CTL(dev_priv->psr.transcoder), val);
 	}
 	dev_priv->psr.active = false;
 }
@@ -915,8 +1007,8 @@ static void intel_psr_disable_locked(struct intel_dp *intel_dp)
 	if (!dev_priv->psr.enabled)
 		return;
 
-	DRM_DEBUG_KMS("Disabling PSR%s\n",
-		      dev_priv->psr.psr2_enabled ? "2" : "1");
+	drm_dbg_kms(&dev_priv->drm, "Disabling PSR%s\n",
+		    dev_priv->psr.psr2_enabled ? "2" : "1");
 
 	intel_psr_exit(dev_priv);
@@ -931,7 +1023,7 @@ static void intel_psr_disable_locked(struct intel_dp *intel_dp)
 	/* Wait till PSR is idle */
 	if (intel_de_wait_for_clear(dev_priv, psr_status,
 				    psr_status_mask, 2000))
-		DRM_ERROR("Timed out waiting PSR idle state\n");
+		drm_err(&dev_priv->drm, "Timed out waiting PSR idle state\n");
 
 	/* Disable PSR on Sink */
 	drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, 0);
@@ -957,7 +1049,7 @@ void intel_psr_disable(struct intel_dp *intel_dp,
 	if (!old_crtc_state->has_psr)
 		return;
 
-	if (WARN_ON(!CAN_PSR(dev_priv)))
+	if (drm_WARN_ON(&dev_priv->drm, !CAN_PSR(dev_priv)))
 		return;
 
 	mutex_lock(&dev_priv->psr.lock);
@@ -966,7 +1058,7 @@ void intel_psr_disable(struct intel_dp *intel_dp,
 	mutex_unlock(&dev_priv->psr.lock);
 	cancel_work_sync(&dev_priv->psr.work);
-	cancel_delayed_work_sync(&dev_priv->psr.idle_work);
+	cancel_delayed_work_sync(&dev_priv->psr.dc3co_work);
 }
 
 static void psr_force_hw_tracking_exit(struct drm_i915_private *dev_priv)
@@ -981,7 +1073,7 @@ static void psr_force_hw_tracking_exit(struct drm_i915_private *dev_priv)
 		 * but it makes more sense write to the current active
 		 * pipe.
 		 */
-		I915_WRITE(CURSURFLIVE(dev_priv->psr.pipe), 0);
+		intel_de_write(dev_priv, CURSURFLIVE(dev_priv->psr.pipe), 0);
 	else
 		/*
 		 * A write to CURSURFLIVE do not cause HW tracking to exit PSR
@@ -1009,9 +1101,11 @@ void intel_psr_update(struct intel_dp *intel_dp,
 	if (!CAN_PSR(dev_priv) || READ_ONCE(psr->dp) != intel_dp)
 		return;
 
+	dev_priv->psr.force_mode_changed = false;
+
 	mutex_lock(&dev_priv->psr.lock);
 
-	enable = crtc_state->has_psr && psr_global_enabled(psr->debug);
+	enable = crtc_state->has_psr && psr_global_enabled(dev_priv);
 	psr2_enable = intel_psr2_enabled(dev_priv, crtc_state);
 
 	if (enable == psr->enabled && psr2_enable == psr->psr2_enabled) {
@@ -1099,7 +1193,8 @@ static bool __psr_wait_for_idle_locked(struct drm_i915_private *dev_priv)
 	err = intel_de_wait_for_clear(dev_priv, reg, mask, 50);
 	if (err)
-		DRM_ERROR("Timed out waiting for PSR Idle for re-enable\n");
+		drm_err(&dev_priv->drm,
+			"Timed out waiting for PSR Idle for re-enable\n");
 
 	/* After the unlocked wait, verify that PSR is still wanted! */
 	mutex_lock(&dev_priv->psr.lock);
@@ -1163,7 +1258,7 @@ int intel_psr_debug_set(struct drm_i915_private *dev_priv, u64 val)
 	if (val & ~(I915_PSR_DEBUG_IRQ | I915_PSR_DEBUG_MODE_MASK) ||
 	    mode > I915_PSR_DEBUG_FORCE_PSR1) {
-		DRM_DEBUG_KMS("Invalid debug mask %llx\n", val);
+		drm_dbg_kms(&dev_priv->drm, "Invalid debug mask %llx\n", val);
 		return -EINVAL;
 	}
@@ -1275,14 +1370,12 @@ void intel_psr_invalidate(struct drm_i915_private *dev_priv,
  * When we will be completely rely on PSR2 S/W tracking in future,
  * intel_psr_flush() will invalidate and flush the PSR for ORIGIN_FLIP
  * event also therefore tgl_dc3co_flush() require to be changed
- * accrodingly in future.
+ * accordingly in future.
  */
 static void
 tgl_dc3co_flush(struct drm_i915_private *dev_priv,
 		unsigned int frontbuffer_bits, enum fb_op_origin origin)
 {
-	u32 delay;
-
 	mutex_lock(&dev_priv->psr.lock);
 
 	if (!dev_priv->psr.dc3co_enabled)
@@ -1300,10 +1393,8 @@ tgl_dc3co_flush(struct drm_i915_private *dev_priv,
 		goto unlock;
 
 	tgl_psr2_enable_dc3co(dev_priv);
-	/* DC5/DC6 required idle frames = 6 */
-	delay = 6 * dev_priv->psr.dc3co_exit_delay;
-	mod_delayed_work(system_wq, &dev_priv->psr.idle_work,
-			 usecs_to_jiffies(delay));
+	mod_delayed_work(system_wq, &dev_priv->psr.dc3co_work,
+			 dev_priv->psr.dc3co_exit_delay);
 
 unlock:
 	mutex_unlock(&dev_priv->psr.lock);
@@ -1387,7 +1478,7 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
 		dev_priv->psr.link_standby = dev_priv->vbt.psr.full_link;
 
 	INIT_WORK(&dev_priv->psr.work, intel_psr_work);
-	INIT_DELAYED_WORK(&dev_priv->psr.idle_work, tgl_dc5_idle_thread);
+	INIT_DELAYED_WORK(&dev_priv->psr.dc3co_work, tgl_dc3co_disable_work);
 	mutex_init(&dev_priv->psr.lock);
 }
@@ -1423,14 +1514,15 @@ static void psr_alpm_check(struct intel_dp *intel_dp)
 	r = drm_dp_dpcd_readb(aux, DP_RECEIVER_ALPM_STATUS, &val);
 	if (r != 1) {
-		DRM_ERROR("Error reading ALPM status\n");
+		drm_err(&dev_priv->drm, "Error reading ALPM status\n");
 		return;
 	}
 
 	if (val & DP_ALPM_LOCK_TIMEOUT_ERROR) {
 		intel_psr_disable_locked(intel_dp);
 		psr->sink_not_reliable = true;
-		DRM_DEBUG_KMS("ALPM lock timeout error, disabling PSR\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "ALPM lock timeout error, disabling PSR\n");
 
 		/* Clearing error */
 		drm_dp_dpcd_writeb(aux, DP_RECEIVER_ALPM_STATUS, val);
@@ -1446,14 +1538,15 @@ static void psr_capability_changed_check(struct intel_dp *intel_dp)
 	r = drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_ESI, &val);
 	if (r != 1) {
-		DRM_ERROR("Error reading DP_PSR_ESI\n");
+		drm_err(&dev_priv->drm, "Error reading DP_PSR_ESI\n");
 		return;
 	}
 
 	if (val & DP_PSR_CAPS_CHANGE) {
 		intel_psr_disable_locked(intel_dp);
 		psr->sink_not_reliable = true;
-		DRM_DEBUG_KMS("Sink PSR capability changed, disabling PSR\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Sink PSR capability changed, disabling PSR\n");
 
 		/* Clearing it */
 		drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ESI, val);
@@ -1478,7 +1571,8 @@ void intel_psr_short_pulse(struct intel_dp *intel_dp)
 		goto exit;
 
 	if (psr_get_status_and_error_status(intel_dp, &status, &error_status)) {
-		DRM_ERROR("Error reading PSR status or error status\n");
+		drm_err(&dev_priv->drm,
+			"Error reading PSR status or error status\n");
 		goto exit;
 	}
@@ -1488,17 +1582,22 @@ void intel_psr_short_pulse(struct intel_dp *intel_dp)
 	}
 
 	if (status == DP_PSR_SINK_INTERNAL_ERROR && !error_status)
-		DRM_DEBUG_KMS("PSR sink internal error, disabling PSR\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR sink internal error, disabling PSR\n");
 	if (error_status & DP_PSR_RFB_STORAGE_ERROR)
-		DRM_DEBUG_KMS("PSR RFB storage error, disabling PSR\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR RFB storage error, disabling PSR\n");
 	if (error_status & DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR)
-		DRM_DEBUG_KMS("PSR VSC SDP uncorrectable error, disabling PSR\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR VSC SDP uncorrectable error, disabling PSR\n");
 	if (error_status & DP_PSR_LINK_CRC_ERROR)
-		DRM_DEBUG_KMS("PSR Link CRC error, disabling PSR\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "PSR Link CRC error, disabling PSR\n");
 
 	if (error_status & ~errors)
-		DRM_ERROR("PSR_ERROR_STATUS unhandled errors %x\n",
-			  error_status & ~errors);
+		drm_err(&dev_priv->drm,
+			"PSR_ERROR_STATUS unhandled errors %x\n",
+			error_status & ~errors);
 
 	/* clear status register */
 	drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ERROR_STATUS, error_status);
@@ -1534,16 +1633,29 @@ void intel_psr_atomic_check(struct drm_connector *connector,
 	struct drm_crtc_state *crtc_state;
 
 	if (!CAN_PSR(dev_priv) || !new_state->crtc ||
-	    dev_priv->psr.initially_probed)
+	    !dev_priv->psr.force_mode_changed)
 		return;
 
 	intel_connector = to_intel_connector(connector);
-	dig_port = enc_to_dig_port(intel_connector->encoder);
+	dig_port = enc_to_dig_port(intel_attached_encoder(intel_connector));
 	if (dev_priv->psr.dp != &dig_port->dp)
 		return;
 
 	crtc_state = drm_atomic_get_new_crtc_state(new_state->state,
 						   new_state->crtc);
 	crtc_state->mode_changed = true;
-	dev_priv->psr.initially_probed = true;
+}
+
+void intel_psr_set_force_mode_changed(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv;
+
+	if (!intel_dp)
+		return;
+
+	dev_priv = dp_to_i915(intel_dp);
+	if (!CAN_PSR(dev_priv) || intel_dp != dev_priv->psr.dp)
+		return;
+
+	dev_priv->psr.force_mode_changed = true;
 }


@@ -40,5 +40,6 @@ bool intel_psr_enabled(struct intel_dp *intel_dp);
 void intel_psr_atomic_check(struct drm_connector *connector,
 			    struct drm_connector_state *old_state,
 			    struct drm_connector_state *new_state);
+void intel_psr_set_force_mode_changed(struct intel_dp *intel_dp);
 
 #endif /* __INTEL_PSR_H__ */


@@ -14,7 +14,7 @@
 static void quirk_ssc_force_disable(struct drm_i915_private *i915)
 {
 	i915->quirks |= QUIRK_LVDS_SSC_DISABLE;
-	DRM_INFO("applying lvds SSC disable quirk\n");
+	drm_info(&i915->drm, "applying lvds SSC disable quirk\n");
 }
 
 /*
@@ -24,14 +24,14 @@ static void quirk_ssc_force_disable(struct drm_i915_private *i915)
 static void quirk_invert_brightness(struct drm_i915_private *i915)
 {
 	i915->quirks |= QUIRK_INVERT_BRIGHTNESS;
-	DRM_INFO("applying inverted panel brightness quirk\n");
+	drm_info(&i915->drm, "applying inverted panel brightness quirk\n");
 }
 
 /* Some VBT's incorrectly indicate no backlight is present */
 static void quirk_backlight_present(struct drm_i915_private *i915)
 {
 	i915->quirks |= QUIRK_BACKLIGHT_PRESENT;
-	DRM_INFO("applying backlight present quirk\n");
+	drm_info(&i915->drm, "applying backlight present quirk\n");
 }
 
 /* Toshiba Satellite P50-C-18C requires T12 delay to be min 800ms
@@ -40,7 +40,7 @@ static void quirk_backlight_present(struct drm_i915_private *i915)
 static void quirk_increase_t12_delay(struct drm_i915_private *i915)
 {
 	i915->quirks |= QUIRK_INCREASE_T12_DELAY;
-	DRM_INFO("Applying T12 delay quirk\n");
+	drm_info(&i915->drm, "Applying T12 delay quirk\n");
 }
 
 /*
@@ -50,7 +50,7 @@ static void quirk_increase_t12_delay(struct drm_i915_private *i915)
 static void quirk_increase_ddi_disabled_time(struct drm_i915_private *i915)
 {
 	i915->quirks |= QUIRK_INCREASE_DDI_DISABLED_TIME;
-	DRM_INFO("Applying Increase DDI Disabled quirk\n");
+	drm_info(&i915->drm, "Applying Increase DDI Disabled quirk\n");
 }
 
 struct intel_quirk {


@@ -217,23 +217,23 @@ static void intel_sdvo_write_sdvox(struct intel_sdvo *intel_sdvo, u32 val)
 	int i;
 
 	if (HAS_PCH_SPLIT(dev_priv)) {
-		I915_WRITE(intel_sdvo->sdvo_reg, val);
-		POSTING_READ(intel_sdvo->sdvo_reg);
+		intel_de_write(dev_priv, intel_sdvo->sdvo_reg, val);
+		intel_de_posting_read(dev_priv, intel_sdvo->sdvo_reg);
 		/*
 		 * HW workaround, need to write this twice for issue
 		 * that may result in first write getting masked.
 		 */
 		if (HAS_PCH_IBX(dev_priv)) {
-			I915_WRITE(intel_sdvo->sdvo_reg, val);
-			POSTING_READ(intel_sdvo->sdvo_reg);
+			intel_de_write(dev_priv, intel_sdvo->sdvo_reg, val);
+			intel_de_posting_read(dev_priv, intel_sdvo->sdvo_reg);
 		}
 		return;
 	}
 
 	if (intel_sdvo->port == PORT_B)
-		cval = I915_READ(GEN3_SDVOC);
+		cval = intel_de_read(dev_priv, GEN3_SDVOC);
 	else
-		bval = I915_READ(GEN3_SDVOB);
+		bval = intel_de_read(dev_priv, GEN3_SDVOB);
 
 	/*
 	 * Write the registers twice for luck. Sometimes,
@@ -241,11 +241,11 @@ static void intel_sdvo_write_sdvox(struct intel_sdvo *intel_sdvo, u32 val)
 	 * The BIOS does this too. Yay, magic
 	 */
 	for (i = 0; i < 2; i++) {
-		I915_WRITE(GEN3_SDVOB, bval);
-		POSTING_READ(GEN3_SDVOB);
-		I915_WRITE(GEN3_SDVOC, cval);
-		POSTING_READ(GEN3_SDVOC);
+		intel_de_write(dev_priv, GEN3_SDVOB, bval);
+		intel_de_posting_read(dev_priv, GEN3_SDVOB);
+		intel_de_write(dev_priv, GEN3_SDVOC, cval);
+		intel_de_posting_read(dev_priv, GEN3_SDVOC);
 	}
 }
@@ -414,12 +414,10 @@ static void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
 {
 	const char *cmd_name;
 	int i, pos = 0;
-#define BUF_LEN 256
-	char buffer[BUF_LEN];
+	char buffer[64];
 
 #define BUF_PRINT(args...) \
-	pos += snprintf(buffer + pos, max_t(int, BUF_LEN - pos, 0), args)
+	pos += snprintf(buffer + pos, max_t(int, sizeof(buffer) - pos, 0), args)
 
 	for (i = 0; i < args_len; i++) {
 		BUF_PRINT("%02X ", ((u8 *)args)[i]);
@@ -433,9 +431,9 @@ static void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
 		BUF_PRINT("(%s)", cmd_name);
 	else
 		BUF_PRINT("(%02X)", cmd);
-	BUG_ON(pos >= BUF_LEN - 1);
+
+	WARN_ON(pos >= sizeof(buffer) - 1);
 #undef BUF_PRINT
-#undef BUF_LEN
 
 	DRM_DEBUG_KMS("%s: W: %02X %s\n", SDVO_NAME(intel_sdvo), cmd, buffer);
 }
@@ -540,8 +538,7 @@ static bool intel_sdvo_read_response(struct intel_sdvo *intel_sdvo,
 	u8 retry = 15; /* 5 quick checks, followed by 10 long checks */
 	u8 status;
 	int i, pos = 0;
-#define BUF_LEN 256
-	char buffer[BUF_LEN];
+	char buffer[64];
 
 	buffer[0] = '\0';
@@ -581,7 +578,7 @@ static bool intel_sdvo_read_response(struct intel_sdvo *intel_sdvo,
 	}
 
 #define BUF_PRINT(args...) \
-	pos += snprintf(buffer + pos, max_t(int, BUF_LEN - pos, 0), args)
+	pos += snprintf(buffer + pos, max_t(int, sizeof(buffer) - pos, 0), args)
 
 	cmd_status = sdvo_cmd_status(status);
 	if (cmd_status)
@@ -600,9 +597,9 @@ static bool intel_sdvo_read_response(struct intel_sdvo *intel_sdvo,
 			goto log_fail;
 
 		BUF_PRINT(" %02X", ((u8 *)response)[i]);
 	}
-	BUG_ON(pos >= BUF_LEN - 1);
+
+	WARN_ON(pos >= sizeof(buffer) - 1);
 #undef BUF_PRINT
-#undef BUF_LEN
 
 	DRM_DEBUG_KMS("%s: R: %s\n", SDVO_NAME(intel_sdvo), buffer);
 	return true;
@@ -1267,6 +1264,13 @@ static void i9xx_adjust_sdvo_tv_clock(struct intel_crtc_state *pipe_config)
 	pipe_config->clock_set = true;
 }
 
+static bool intel_has_hdmi_sink(struct intel_sdvo *sdvo,
+				const struct drm_connector_state *conn_state)
+{
+	return sdvo->has_hdmi_monitor &&
+		READ_ONCE(to_intel_digital_connector_state(conn_state)->force_audio) != HDMI_AUDIO_OFF_DVI;
+}
+
 static int intel_sdvo_compute_config(struct intel_encoder *encoder,
 				     struct intel_crtc_state *pipe_config,
 				     struct drm_connector_state *conn_state)
@@ -1322,12 +1326,15 @@ static int intel_sdvo_compute_config(struct intel_encoder *encoder,
 	pipe_config->pixel_multiplier =
 		intel_sdvo_get_pixel_multiplier(adjusted_mode);
 
-	if (intel_sdvo_state->base.force_audio != HDMI_AUDIO_OFF_DVI)
-		pipe_config->has_hdmi_sink = intel_sdvo->has_hdmi_monitor;
+	pipe_config->has_hdmi_sink = intel_has_hdmi_sink(intel_sdvo, conn_state);
 
-	if (intel_sdvo_state->base.force_audio == HDMI_AUDIO_ON ||
-	    (intel_sdvo_state->base.force_audio == HDMI_AUDIO_AUTO && intel_sdvo->has_hdmi_audio))
-		pipe_config->has_audio = true;
+	if (pipe_config->has_hdmi_sink) {
+		if (intel_sdvo_state->base.force_audio == HDMI_AUDIO_AUTO)
+			pipe_config->has_audio = intel_sdvo->has_hdmi_audio;
+		else
+			pipe_config->has_audio =
+				intel_sdvo_state->base.force_audio == HDMI_AUDIO_ON;
+	}
 
 	if (intel_sdvo_state->base.broadcast_rgb == INTEL_BROADCAST_RGB_AUTO) {
 		/*
@@ -1470,7 +1477,8 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
 	else
 		intel_sdvo_get_dtd_from_mode(&output_dtd, mode);
 	if (!intel_sdvo_set_output_timing(intel_sdvo, &output_dtd))
-		DRM_INFO("Setting output timings on %s failed\n",
+		drm_info(&dev_priv->drm,
+			 "Setting output timings on %s failed\n",
 			 SDVO_NAME(intel_sdvo));
 
 	/* Set the input timing to the screen. Assume always input 0. */
@@ -1494,12 +1502,14 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
 	if (IS_TV(intel_sdvo_connector) || IS_LVDS(intel_sdvo_connector))
 		input_dtd.part2.sdvo_flags = intel_sdvo->dtd_sdvo_flags;
 	if (!intel_sdvo_set_input_timing(intel_sdvo, &input_dtd))
-		DRM_INFO("Setting input timings on %s failed\n",
+		drm_info(&dev_priv->drm,
+			 "Setting input timings on %s failed\n",
 			 SDVO_NAME(intel_sdvo));
 
 	switch (crtc_state->pixel_multiplier) {
 	default:
-		WARN(1, "unknown pixel multiplier specified\n");
+		drm_WARN(&dev_priv->drm, 1,
+			 "unknown pixel multiplier specified\n");
 		/* fall through */
 	case 1: rate = SDVO_CLOCK_RATE_MULT_1X; break;
 	case 2: rate = SDVO_CLOCK_RATE_MULT_2X; break;
@@ -1518,7 +1528,7 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
 		if (INTEL_GEN(dev_priv) < 5)
 			sdvox |= SDVO_BORDER_ENABLE;
 	} else {
-		sdvox = I915_READ(intel_sdvo->sdvo_reg);
+		sdvox = intel_de_read(dev_priv, intel_sdvo->sdvo_reg);
 		if (intel_sdvo->port == PORT_B)
 			sdvox &= SDVOB_PRESERVE_MASK;
 		else
@@ -1564,7 +1574,7 @@ bool intel_sdvo_port_enabled(struct drm_i915_private *dev_priv,
 {
 	u32 val;
 
-	val = I915_READ(sdvo_reg);
+	val = intel_de_read(dev_priv, sdvo_reg);
 
 	/* asserts want to know the pipe even if the port is disabled */
 	if (HAS_PCH_CPT(dev_priv))
pipe_config->output_types |= BIT(INTEL_OUTPUT_SDVO); pipe_config->output_types |= BIT(INTEL_OUTPUT_SDVO);
sdvox = I915_READ(intel_sdvo->sdvo_reg); sdvox = intel_de_read(dev_priv, intel_sdvo->sdvo_reg);
ret = intel_sdvo_get_input_timing(intel_sdvo, &dtd); ret = intel_sdvo_get_input_timing(intel_sdvo, &dtd);
if (!ret) { if (!ret) {
@ -1615,7 +1625,7 @@ static void intel_sdvo_get_config(struct intel_encoder *encoder,
* Some sdvo encoders are not spec compliant and don't * Some sdvo encoders are not spec compliant and don't
* implement the mandatory get_timings function. * implement the mandatory get_timings function.
*/ */
DRM_DEBUG_DRIVER("failed to retrieve SDVO DTD\n"); drm_dbg(&dev_priv->drm, "failed to retrieve SDVO DTD\n");
pipe_config->quirks |= PIPE_CONFIG_QUIRK_MODE_SYNC_FLAGS; pipe_config->quirks |= PIPE_CONFIG_QUIRK_MODE_SYNC_FLAGS;
} else { } else {
if (dtd.part2.dtd_flags & DTD_FLAG_HSYNC_POSITIVE) if (dtd.part2.dtd_flags & DTD_FLAG_HSYNC_POSITIVE)
@ -1667,9 +1677,10 @@ static void intel_sdvo_get_config(struct intel_encoder *encoder,
} }
} }
WARN(encoder_pixel_multiplier != pipe_config->pixel_multiplier, drm_WARN(dev,
"SDVO pixel multiplier mismatch, port: %i, encoder: %i\n", encoder_pixel_multiplier != pipe_config->pixel_multiplier,
pipe_config->pixel_multiplier, encoder_pixel_multiplier); "SDVO pixel multiplier mismatch, port: %i, encoder: %i\n",
pipe_config->pixel_multiplier, encoder_pixel_multiplier);
if (sdvox & HDMI_COLOR_RANGE_16_235) if (sdvox & HDMI_COLOR_RANGE_16_235)
pipe_config->limited_color_range = true; pipe_config->limited_color_range = true;
@@ -1734,7 +1745,7 @@ static void intel_disable_sdvo(struct intel_encoder *encoder,
 		intel_sdvo_set_encoder_power_state(intel_sdvo,
 						   DRM_MODE_DPMS_OFF);
 
-	temp = I915_READ(intel_sdvo->sdvo_reg);
+	temp = intel_de_read(dev_priv, intel_sdvo->sdvo_reg);
 
 	temp &= ~SDVO_ENABLE;
 	intel_sdvo_write_sdvox(intel_sdvo, temp);
@@ -1791,7 +1802,7 @@ static void intel_enable_sdvo(struct intel_encoder *encoder,
 	int i;
 	bool success;
 
-	temp = I915_READ(intel_sdvo->sdvo_reg);
+	temp = intel_de_read(dev_priv, intel_sdvo->sdvo_reg);
 	temp |= SDVO_ENABLE;
 	intel_sdvo_write_sdvox(intel_sdvo, temp);
@@ -1806,8 +1817,9 @@ static void intel_enable_sdvo(struct intel_encoder *encoder,
 	 * a given it the status is a success, we succeeded.
 	 */
 	if (success && !input1) {
-		DRM_DEBUG_KMS("First %s output reported failure to "
-			      "sync\n", SDVO_NAME(intel_sdvo));
+		drm_dbg_kms(&dev_priv->drm,
+			    "First %s output reported failure to "
+			    "sync\n", SDVO_NAME(intel_sdvo));
 	}
 
 	if (0)
@@ -2219,8 +2231,8 @@ static void intel_sdvo_get_lvds_modes(struct drm_connector *connector)
 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
 	struct drm_display_mode *newmode;
 
-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
-		      connector->base.id, connector->name);
+	drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s]\n",
+		    connector->base.id, connector->name);
 
 	/*
 	 * Fetch modes from VBT. For SDVO prefer the VBT mode since some
@@ -2709,6 +2721,7 @@ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
 		 * Some SDVO devices have one-shot hotplug interrupts.
 		 * Ensure that they get re-enabled when an interrupt happens.
 		 */
+		intel_connector->polled = DRM_CONNECTOR_POLL_HPD;
 		intel_encoder->hotplug = intel_sdvo_hotplug;
 		intel_sdvo_enable_hotplug(intel_encoder);
 	} else {
@@ -3229,9 +3242,9 @@ static void assert_sdvo_port_valid(const struct drm_i915_private *dev_priv,
 				   enum port port)
 {
 	if (HAS_PCH_SPLIT(dev_priv))
-		WARN_ON(port != PORT_B);
+		drm_WARN_ON(&dev_priv->drm, port != PORT_B);
 	else
-		WARN_ON(port != PORT_B && port != PORT_C);
+		drm_WARN_ON(&dev_priv->drm, port != PORT_B && port != PORT_C);
 }
 
 bool intel_sdvo_init(struct drm_i915_private *dev_priv,
@@ -3269,8 +3282,9 @@ bool intel_sdvo_init(struct drm_i915_private *dev_priv,
 		u8 byte;
 
 		if (!intel_sdvo_read_byte(intel_sdvo, i, &byte)) {
-			DRM_DEBUG_KMS("No SDVO device found on %s\n",
-				      SDVO_NAME(intel_sdvo));
+			drm_dbg_kms(&dev_priv->drm,
+				    "No SDVO device found on %s\n",
+				    SDVO_NAME(intel_sdvo));
 			goto err;
 		}
 	}
@@ -3293,8 +3307,9 @@ bool intel_sdvo_init(struct drm_i915_private *dev_priv,
 	if (intel_sdvo_output_setup(intel_sdvo,
 				    intel_sdvo->caps.output_flags) != true) {
-		DRM_DEBUG_KMS("SDVO output failed to setup on %s\n",
-			      SDVO_NAME(intel_sdvo));
+		drm_dbg_kms(&dev_priv->drm,
+			    "SDVO output failed to setup on %s\n",
+			    SDVO_NAME(intel_sdvo));
 		/* Output_setup can leave behind connectors! */
 		goto err_output;
 	}
@@ -3331,7 +3346,7 @@ bool intel_sdvo_init(struct drm_i915_private *dev_priv,
 			      &intel_sdvo->pixel_clock_max))
 		goto err_output;
 
-	DRM_DEBUG_KMS("%s device VID/DID: %02X:%02X.%02X, "
+	drm_dbg_kms(&dev_priv->drm, "%s device VID/DID: %02X:%02X.%02X, "
 		      "clock range %dMHz - %dMHz, "
 		      "input 1: %c, input 2: %c, "
 		      "output 1: %c, output 2: %c\n",


@@ -104,7 +104,7 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state)
 	if (min <= 0 || max <= 0)
 		goto irq_disable;
 
-	if (WARN_ON(drm_crtc_vblank_get(&crtc->base)))
+	if (drm_WARN_ON(&dev_priv->drm, drm_crtc_vblank_get(&crtc->base)))
 		goto irq_disable;
 
 	/*
@@ -113,8 +113,9 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state)
 	 * re-entry as well.
 	 */
 	if (intel_psr_wait_for_idle(new_crtc_state, &psr_status))
-		DRM_ERROR("PSR idle timed out 0x%x, atomic update may fail\n",
-			  psr_status);
+		drm_err(&dev_priv->drm,
+			"PSR idle timed out 0x%x, atomic update may fail\n",
+			psr_status);
 
 	local_irq_disable();
@@ -135,8 +136,9 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state)
 			break;
 
 		if (!timeout) {
-			DRM_ERROR("Potential atomic update failure on pipe %c\n",
-				  pipe_name(crtc->pipe));
+			drm_err(&dev_priv->drm,
+				"Potential atomic update failure on pipe %c\n",
+				pipe_name(crtc->pipe));
 			break;
 		}
@@ -204,7 +206,8 @@ void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state)
 	 * event outside of the critical section - the spinlock might spin for a
 	 * while ... */
 	if (new_crtc_state->uapi.event) {
-		WARN_ON(drm_crtc_vblank_get(&crtc->base) != 0);
+		drm_WARN_ON(&dev_priv->drm,
+			    drm_crtc_vblank_get(&crtc->base) != 0);
 
 		spin_lock(&crtc->base.dev->event_lock);
 		drm_crtc_arm_vblank_event(&crtc->base,
@@ -221,17 +224,20 @@ void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state)
 	if (crtc->debug.start_vbl_count &&
 	    crtc->debug.start_vbl_count != end_vbl_count) {
-		DRM_ERROR("Atomic update failure on pipe %c (start=%u end=%u) time %lld us, min %d, max %d, scanline start %d, end %d\n",
-			  pipe_name(pipe), crtc->debug.start_vbl_count,
-			  end_vbl_count,
-			  ktime_us_delta(end_vbl_time, crtc->debug.start_vbl_time),
-			  crtc->debug.min_vbl, crtc->debug.max_vbl,
-			  crtc->debug.scanline_start, scanline_end);
+		drm_err(&dev_priv->drm,
+			"Atomic update failure on pipe %c (start=%u end=%u) time %lld us, min %d, max %d, scanline start %d, end %d\n",
+			pipe_name(pipe), crtc->debug.start_vbl_count,
+			end_vbl_count,
+			ktime_us_delta(end_vbl_time,
+				       crtc->debug.start_vbl_time),
+			crtc->debug.min_vbl, crtc->debug.max_vbl,
+			crtc->debug.scanline_start, scanline_end);
 	}
 #ifdef CONFIG_DRM_I915_DEBUG_VBLANK_EVADE
 	else if (ktime_us_delta(end_vbl_time, crtc->debug.start_vbl_time) >
 		 VBLANK_EVASION_TIME_US)
-		DRM_WARN("Atomic update on pipe (%c) took %lld us, max time under evasion is %u us\n",
+		drm_warn(&dev_priv->drm,
+			 "Atomic update on pipe (%c) took %lld us, max time under evasion is %u us\n",
 			 pipe_name(pipe),
 			 ktime_us_delta(end_vbl_time, crtc->debug.start_vbl_time),
 			 VBLANK_EVASION_TIME_US);
@@ -434,14 +440,16 @@ skl_program_scaler(struct intel_plane *plane,
 		uv_rgb_vphase = skl_scaler_calc_phase(1, vscale, false);
 	}
 
-	I915_WRITE_FW(SKL_PS_CTRL(pipe, scaler_id),
-		      PS_SCALER_EN | PS_PLANE_SEL(plane->id) | scaler->mode);
-	I915_WRITE_FW(SKL_PS_VPHASE(pipe, scaler_id),
-		      PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase));
-	I915_WRITE_FW(SKL_PS_HPHASE(pipe, scaler_id),
-		      PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase));
-	I915_WRITE_FW(SKL_PS_WIN_POS(pipe, scaler_id), (crtc_x << 16) | crtc_y);
-	I915_WRITE_FW(SKL_PS_WIN_SZ(pipe, scaler_id), (crtc_w << 16) | crtc_h);
+	intel_de_write_fw(dev_priv, SKL_PS_CTRL(pipe, scaler_id),
+			  PS_SCALER_EN | PS_PLANE_SEL(plane->id) | scaler->mode);
+	intel_de_write_fw(dev_priv, SKL_PS_VPHASE(pipe, scaler_id),
+			  PS_Y_PHASE(y_vphase) | PS_UV_RGB_PHASE(uv_rgb_vphase));
+	intel_de_write_fw(dev_priv, SKL_PS_HPHASE(pipe, scaler_id),
+			  PS_Y_PHASE(y_hphase) | PS_UV_RGB_PHASE(uv_rgb_hphase));
+	intel_de_write_fw(dev_priv, SKL_PS_WIN_POS(pipe, scaler_id),
+			  (crtc_x << 16) | crtc_y);
+	intel_de_write_fw(dev_priv, SKL_PS_WIN_SZ(pipe, scaler_id),
+			  (crtc_w << 16) | crtc_h);
 }
 
 /* Preoffset values for YUV to RGB Conversion */
@@ -547,28 +555,37 @@ icl_program_input_csc(struct intel_plane *plane,
 	else
 		csc = input_csc_matrix_lr[plane_state->hw.color_encoding];
 
-	I915_WRITE_FW(PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0), ROFF(csc[0]) |
-		      GOFF(csc[1]));
-	I915_WRITE_FW(PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1), BOFF(csc[2]));
-	I915_WRITE_FW(PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2), ROFF(csc[3]) |
-		      GOFF(csc[4]));
-	I915_WRITE_FW(PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3), BOFF(csc[5]));
-	I915_WRITE_FW(PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4), ROFF(csc[6]) |
-		      GOFF(csc[7]));
-	I915_WRITE_FW(PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5), BOFF(csc[8]));
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0),
+			  ROFF(csc[0]) | GOFF(csc[1]));
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1),
+			  BOFF(csc[2]));
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2),
+			  ROFF(csc[3]) | GOFF(csc[4]));
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3),
+			  BOFF(csc[5]));
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4),
+			  ROFF(csc[6]) | GOFF(csc[7]));
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5),
+			  BOFF(csc[8]));
 
-	I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
-		      PREOFF_YUV_TO_RGB_HI);
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
+			  PREOFF_YUV_TO_RGB_HI);
 	if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
-		I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), 0);
+		intel_de_write_fw(dev_priv,
+				  PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+				  0);
 	else
-		I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
-			      PREOFF_YUV_TO_RGB_ME);
-	I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
-		      PREOFF_YUV_TO_RGB_LO);
-	I915_WRITE_FW(PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0);
-	I915_WRITE_FW(PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0);
-	I915_WRITE_FW(PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0);
+		intel_de_write_fw(dev_priv,
+				  PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+				  PREOFF_YUV_TO_RGB_ME);
+	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
+			  PREOFF_YUV_TO_RGB_LO);
+	intel_de_write_fw(dev_priv,
+			  PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0);
+	intel_de_write_fw(dev_priv,
+			  PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0);
+	intel_de_write_fw(dev_priv,
+			  PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0);
 }
 
 static void
@@ -623,44 +640,49 @@ skl_program_plane(struct intel_plane *plane,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
-	I915_WRITE_FW(PLANE_STRIDE(pipe, plane_id), stride);
-	I915_WRITE_FW(PLANE_POS(pipe, plane_id), (crtc_y << 16) | crtc_x);
-	I915_WRITE_FW(PLANE_SIZE(pipe, plane_id), (src_h << 16) | src_w);
+	intel_de_write_fw(dev_priv, PLANE_STRIDE(pipe, plane_id), stride);
+	intel_de_write_fw(dev_priv, PLANE_POS(pipe, plane_id),
+			  (crtc_y << 16) | crtc_x);
+	intel_de_write_fw(dev_priv, PLANE_SIZE(pipe, plane_id),
+			  (src_h << 16) | src_w);
 
 	if (INTEL_GEN(dev_priv) < 12)
 		aux_dist |= aux_stride;
-	I915_WRITE_FW(PLANE_AUX_DIST(pipe, plane_id), aux_dist);
+	intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id), aux_dist);
 
 	if (icl_is_hdr_plane(dev_priv, plane_id))
-		I915_WRITE_FW(PLANE_CUS_CTL(pipe, plane_id), plane_state->cus_ctl);
+		intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id),
+				  plane_state->cus_ctl);
 
 	if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
-		I915_WRITE_FW(PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl);
+		intel_de_write_fw(dev_priv, PLANE_COLOR_CTL(pipe, plane_id),
+				  plane_color_ctl);
 
 	if (fb->format->is_yuv && icl_is_hdr_plane(dev_priv, plane_id))
 		icl_program_input_csc(plane, crtc_state, plane_state);
 
 	skl_write_plane_wm(plane, crtc_state);
 
-	I915_WRITE_FW(PLANE_KEYVAL(pipe, plane_id), key->min_value);
-	I915_WRITE_FW(PLANE_KEYMSK(pipe, plane_id), keymsk);
-	I915_WRITE_FW(PLANE_KEYMAX(pipe, plane_id), keymax);
+	intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id),
+			  key->min_value);
+	intel_de_write_fw(dev_priv, PLANE_KEYMSK(pipe, plane_id), keymsk);
+	intel_de_write_fw(dev_priv, PLANE_KEYMAX(pipe, plane_id), keymax);
 
-	I915_WRITE_FW(PLANE_OFFSET(pipe, plane_id), (y << 16) | x);
+	intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id),
+			  (y << 16) | x);
 
 	if (INTEL_GEN(dev_priv) < 11)
-		I915_WRITE_FW(PLANE_AUX_OFFSET(pipe, plane_id),
-			      (plane_state->color_plane[1].y << 16) |
-			      plane_state->color_plane[1].x);
+		intel_de_write_fw(dev_priv, PLANE_AUX_OFFSET(pipe, plane_id),
+				  (plane_state->color_plane[1].y << 16) | plane_state->color_plane[1].x);
 
 	/*
 	 * The control register self-arms if the plane was previously
 	 * disabled. Try to make the plane enable atomic by writing
 	 * the control register just before the surface register.
 	 */
-	I915_WRITE_FW(PLANE_CTL(pipe, plane_id), plane_ctl);
-	I915_WRITE_FW(PLANE_SURF(pipe, plane_id),
-		      intel_plane_ggtt_offset(plane_state) + surf_addr);
+	intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl);
+	intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id),
+			  intel_plane_ggtt_offset(plane_state) + surf_addr);
 
 	if (plane_state->scaler_id >= 0)
 		skl_program_scaler(plane, crtc_state, plane_state);
@@ -693,12 +715,12 @@ skl_disable_plane(struct intel_plane *plane,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
 	if (icl_is_hdr_plane(dev_priv, plane_id))
-		I915_WRITE_FW(PLANE_CUS_CTL(pipe, plane_id), 0);
+		intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id), 0);
 
 	skl_write_plane_wm(plane, crtc_state);
 
-	I915_WRITE_FW(PLANE_CTL(pipe, plane_id), 0);
-	I915_WRITE_FW(PLANE_SURF(pipe, plane_id), 0);
+	intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0);
+	intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0);
 
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
 }
@@ -718,7 +740,7 @@ skl_plane_get_hw_state(struct intel_plane *plane,
 	if (!wakeref)
 		return false;
 
-	ret = I915_READ(PLANE_CTL(plane->pipe, plane_id)) & PLANE_CTL_ENABLE;
+	ret = intel_de_read(dev_priv, PLANE_CTL(plane->pipe, plane_id)) & PLANE_CTL_ENABLE;
 
 	*pipe = plane->pipe;
@@ -774,23 +796,36 @@ chv_update_csc(const struct intel_plane_state *plane_state)
 	if (!fb->format->is_yuv)
 		return;
 
-	I915_WRITE_FW(SPCSCYGOFF(plane_id), SPCSC_OOFF(0) | SPCSC_IOFF(0));
-	I915_WRITE_FW(SPCSCCBOFF(plane_id), SPCSC_OOFF(0) | SPCSC_IOFF(0));
-	I915_WRITE_FW(SPCSCCROFF(plane_id), SPCSC_OOFF(0) | SPCSC_IOFF(0));
+	intel_de_write_fw(dev_priv, SPCSCYGOFF(plane_id),
+			  SPCSC_OOFF(0) | SPCSC_IOFF(0));
+	intel_de_write_fw(dev_priv, SPCSCCBOFF(plane_id),
+			  SPCSC_OOFF(0) | SPCSC_IOFF(0));
+	intel_de_write_fw(dev_priv, SPCSCCROFF(plane_id),
+			  SPCSC_OOFF(0) | SPCSC_IOFF(0));
 
-	I915_WRITE_FW(SPCSCC01(plane_id), SPCSC_C1(csc[1]) | SPCSC_C0(csc[0]));
-	I915_WRITE_FW(SPCSCC23(plane_id), SPCSC_C1(csc[3]) | SPCSC_C0(csc[2]));
-	I915_WRITE_FW(SPCSCC45(plane_id), SPCSC_C1(csc[5]) | SPCSC_C0(csc[4]));
-	I915_WRITE_FW(SPCSCC67(plane_id), SPCSC_C1(csc[7]) | SPCSC_C0(csc[6]));
-	I915_WRITE_FW(SPCSCC8(plane_id), SPCSC_C0(csc[8]));
+	intel_de_write_fw(dev_priv, SPCSCC01(plane_id),
+			  SPCSC_C1(csc[1]) | SPCSC_C0(csc[0]));
+	intel_de_write_fw(dev_priv, SPCSCC23(plane_id),
+			  SPCSC_C1(csc[3]) | SPCSC_C0(csc[2]));
+	intel_de_write_fw(dev_priv, SPCSCC45(plane_id),
+			  SPCSC_C1(csc[5]) | SPCSC_C0(csc[4]));
+	intel_de_write_fw(dev_priv, SPCSCC67(plane_id),
+			  SPCSC_C1(csc[7]) | SPCSC_C0(csc[6]));
+	intel_de_write_fw(dev_priv, SPCSCC8(plane_id), SPCSC_C0(csc[8]));
 
-	I915_WRITE_FW(SPCSCYGICLAMP(plane_id), SPCSC_IMAX(1023) | SPCSC_IMIN(0));
-	I915_WRITE_FW(SPCSCCBICLAMP(plane_id), SPCSC_IMAX(512) | SPCSC_IMIN(-512));
-	I915_WRITE_FW(SPCSCCRICLAMP(plane_id), SPCSC_IMAX(512) | SPCSC_IMIN(-512));
+	intel_de_write_fw(dev_priv, SPCSCYGICLAMP(plane_id),
+			  SPCSC_IMAX(1023) | SPCSC_IMIN(0));
+	intel_de_write_fw(dev_priv, SPCSCCBICLAMP(plane_id),
+			  SPCSC_IMAX(512) | SPCSC_IMIN(-512));
+	intel_de_write_fw(dev_priv, SPCSCCRICLAMP(plane_id),
+			  SPCSC_IMAX(512) | SPCSC_IMIN(-512));
 
-	I915_WRITE_FW(SPCSCYGOCLAMP(plane_id), SPCSC_OMAX(1023) | SPCSC_OMIN(0));
-	I915_WRITE_FW(SPCSCCBOCLAMP(plane_id), SPCSC_OMAX(1023) | SPCSC_OMIN(0));
-	I915_WRITE_FW(SPCSCCROCLAMP(plane_id), SPCSC_OMAX(1023) | SPCSC_OMIN(0));
+	intel_de_write_fw(dev_priv, SPCSCYGOCLAMP(plane_id),
+			  SPCSC_OMAX(1023) | SPCSC_OMIN(0));
+	intel_de_write_fw(dev_priv, SPCSCCBOCLAMP(plane_id),
+			  SPCSC_OMAX(1023) | SPCSC_OMIN(0));
+	intel_de_write_fw(dev_priv, SPCSCCROCLAMP(plane_id),
+			  SPCSC_OMAX(1023) | SPCSC_OMIN(0));
 }
 
 #define SIN_0 0
@@ -829,10 +864,10 @@ vlv_update_clrc(const struct intel_plane_state *plane_state)
 	}
 
 	/* FIXME these register are single buffered :( */
-	I915_WRITE_FW(SPCLRC0(pipe, plane_id),
-		      SP_CONTRAST(contrast) | SP_BRIGHTNESS(brightness));
-	I915_WRITE_FW(SPCLRC1(pipe, plane_id),
-		      SP_SH_SIN(sh_sin) | SP_SH_COS(sh_cos));
+	intel_de_write_fw(dev_priv, SPCLRC0(pipe, plane_id),
+			  SP_CONTRAST(contrast) | SP_BRIGHTNESS(brightness));
+	intel_de_write_fw(dev_priv, SPCLRC1(pipe, plane_id),
+			  SP_SH_SIN(sh_sin) | SP_SH_COS(sh_cos));
 }
 
 static void
@@ -1019,10 +1054,8 @@ static void vlv_update_gamma(const struct intel_plane_state *plane_state)
 	/* FIXME these register are single buffered :( */
 	/* The two end points are implicit (0.0 and 1.0) */
 	for (i = 1; i < 8 - 1; i++)
-		I915_WRITE_FW(SPGAMC(pipe, plane_id, i - 1),
-			      gamma[i] << 16 |
-			      gamma[i] << 8 |
-			      gamma[i]);
+		intel_de_write_fw(dev_priv, SPGAMC(pipe, plane_id, i - 1),
+				  gamma[i] << 16 | gamma[i] << 8 | gamma[i]);
 }
 
 static void
@@ -1055,32 +1088,37 @@ vlv_update_plane(struct intel_plane *plane,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
-	I915_WRITE_FW(SPSTRIDE(pipe, plane_id),
-		      plane_state->color_plane[0].stride);
-	I915_WRITE_FW(SPPOS(pipe, plane_id), (crtc_y << 16) | crtc_x);
-	I915_WRITE_FW(SPSIZE(pipe, plane_id), (crtc_h << 16) | crtc_w);
-	I915_WRITE_FW(SPCONSTALPHA(pipe, plane_id), 0);
+	intel_de_write_fw(dev_priv, SPSTRIDE(pipe, plane_id),
+			  plane_state->color_plane[0].stride);
+	intel_de_write_fw(dev_priv, SPPOS(pipe, plane_id),
+			  (crtc_y << 16) | crtc_x);
+	intel_de_write_fw(dev_priv, SPSIZE(pipe, plane_id),
+			  (crtc_h << 16) | crtc_w);
+	intel_de_write_fw(dev_priv, SPCONSTALPHA(pipe, plane_id), 0);
 
 	if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B)
 		chv_update_csc(plane_state);
 
 	if (key->flags) {
-		I915_WRITE_FW(SPKEYMINVAL(pipe, plane_id), key->min_value);
-		I915_WRITE_FW(SPKEYMSK(pipe, plane_id), key->channel_mask);
-		I915_WRITE_FW(SPKEYMAXVAL(pipe, plane_id), key->max_value);
+		intel_de_write_fw(dev_priv, SPKEYMINVAL(pipe, plane_id),
+				  key->min_value);
+		intel_de_write_fw(dev_priv, SPKEYMSK(pipe, plane_id),
+				  key->channel_mask);
+		intel_de_write_fw(dev_priv, SPKEYMAXVAL(pipe, plane_id),
+				  key->max_value);
 	}
 
-	I915_WRITE_FW(SPLINOFF(pipe, plane_id), linear_offset);
-	I915_WRITE_FW(SPTILEOFF(pipe, plane_id), (y << 16) | x);
+	intel_de_write_fw(dev_priv, SPLINOFF(pipe, plane_id), linear_offset);
+	intel_de_write_fw(dev_priv, SPTILEOFF(pipe, plane_id), (y << 16) | x);
 
 	/*
 	 * The control register self-arms if the plane was previously
 	 * disabled. Try to make the plane enable atomic by writing
 	 * the control register just before the surface register.
 	 */
-	I915_WRITE_FW(SPCNTR(pipe, plane_id), sprctl);
-	I915_WRITE_FW(SPSURF(pipe, plane_id),
-		      intel_plane_ggtt_offset(plane_state) + sprsurf_offset);
+	intel_de_write_fw(dev_priv, SPCNTR(pipe, plane_id), sprctl);
+	intel_de_write_fw(dev_priv, SPSURF(pipe, plane_id),
+			  intel_plane_ggtt_offset(plane_state) + sprsurf_offset);
 
 	vlv_update_clrc(plane_state);
 	vlv_update_gamma(plane_state);
@@ -1099,8 +1137,8 @@ vlv_disable_plane(struct intel_plane *plane,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
-	I915_WRITE_FW(SPCNTR(pipe, plane_id), 0);
-	I915_WRITE_FW(SPSURF(pipe, plane_id), 0);
+	intel_de_write_fw(dev_priv, SPCNTR(pipe, plane_id), 0);
+	intel_de_write_fw(dev_priv, SPSURF(pipe, plane_id), 0);
 
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
 }
@@ -1120,7 +1158,7 @@ vlv_plane_get_hw_state(struct intel_plane *plane,
 	if (!wakeref)
 		return false;
 
-	ret = I915_READ(SPCNTR(plane->pipe, plane_id)) & SP_ENABLE;
+	ret = intel_de_read(dev_priv, SPCNTR(plane->pipe, plane_id)) & SP_ENABLE;
 
 	*pipe = plane->pipe;
@@ -1424,19 +1462,17 @@ static void ivb_update_gamma(const struct intel_plane_state *plane_state)
 	/* FIXME these register are single buffered :( */
 	for (i = 0; i < 16; i++)
-		I915_WRITE_FW(SPRGAMC(pipe, i),
-			      gamma[i] << 20 |
-			      gamma[i] << 10 |
-			      gamma[i]);
+		intel_de_write_fw(dev_priv, SPRGAMC(pipe, i),
+				  gamma[i] << 20 | gamma[i] << 10 | gamma[i]);
 
-	I915_WRITE_FW(SPRGAMC16(pipe, 0), gamma[i]);
-	I915_WRITE_FW(SPRGAMC16(pipe, 1), gamma[i]);
-	I915_WRITE_FW(SPRGAMC16(pipe, 2), gamma[i]);
+	intel_de_write_fw(dev_priv, SPRGAMC16(pipe, 0), gamma[i]);
+	intel_de_write_fw(dev_priv, SPRGAMC16(pipe, 1), gamma[i]);
+	intel_de_write_fw(dev_priv, SPRGAMC16(pipe, 2), gamma[i]);
 	i++;
 
-	I915_WRITE_FW(SPRGAMC17(pipe, 0), gamma[i]);
-	I915_WRITE_FW(SPRGAMC17(pipe, 1), gamma[i]);
-	I915_WRITE_FW(SPRGAMC17(pipe, 2), gamma[i]);
+	intel_de_write_fw(dev_priv, SPRGAMC17(pipe, 0), gamma[i]);
+	intel_de_write_fw(dev_priv, SPRGAMC17(pipe, 1), gamma[i]);
+	intel_de_write_fw(dev_priv, SPRGAMC17(pipe, 2), gamma[i]);
 	i++;
 }
@@ -1476,25 +1512,27 @@ ivb_update_plane(struct intel_plane *plane,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
-	I915_WRITE_FW(SPRSTRIDE(pipe), plane_state->color_plane[0].stride);
-	I915_WRITE_FW(SPRPOS(pipe), (crtc_y << 16) | crtc_x);
-	I915_WRITE_FW(SPRSIZE(pipe), (crtc_h << 16) | crtc_w);
+	intel_de_write_fw(dev_priv, SPRSTRIDE(pipe),
+			  plane_state->color_plane[0].stride);
+	intel_de_write_fw(dev_priv, SPRPOS(pipe), (crtc_y << 16) | crtc_x);
+	intel_de_write_fw(dev_priv, SPRSIZE(pipe), (crtc_h << 16) | crtc_w);
 	if (IS_IVYBRIDGE(dev_priv))
-		I915_WRITE_FW(SPRSCALE(pipe), sprscale);
+		intel_de_write_fw(dev_priv, SPRSCALE(pipe), sprscale);
 
 	if (key->flags) {
-		I915_WRITE_FW(SPRKEYVAL(pipe), key->min_value);
-		I915_WRITE_FW(SPRKEYMSK(pipe), key->channel_mask);
-		I915_WRITE_FW(SPRKEYMAX(pipe), key->max_value);
+		intel_de_write_fw(dev_priv, SPRKEYVAL(pipe), key->min_value);
+		intel_de_write_fw(dev_priv, SPRKEYMSK(pipe),
+				  key->channel_mask);
+		intel_de_write_fw(dev_priv, SPRKEYMAX(pipe), key->max_value);
 	}
 
 	/* HSW consolidates SPRTILEOFF and SPRLINOFF into a single SPROFFSET
 	 * register */
 	if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
-		I915_WRITE_FW(SPROFFSET(pipe), (y << 16) | x);
+		intel_de_write_fw(dev_priv, SPROFFSET(pipe), (y << 16) | x);
 	} else {
-		I915_WRITE_FW(SPRLINOFF(pipe), linear_offset);
-		I915_WRITE_FW(SPRTILEOFF(pipe), (y << 16) | x);
+		intel_de_write_fw(dev_priv, SPRLINOFF(pipe), linear_offset);
+		intel_de_write_fw(dev_priv, SPRTILEOFF(pipe), (y << 16) | x);
 	}
 
 	/*
@@ -1502,9 +1540,9 @@ ivb_update_plane(struct intel_plane *plane,
 	 * disabled. Try to make the plane enable atomic by writing
 	 * the control register just before the surface register.
 	 */
-	I915_WRITE_FW(SPRCTL(pipe), sprctl);
-	I915_WRITE_FW(SPRSURF(pipe),
-		      intel_plane_ggtt_offset(plane_state) + sprsurf_offset);
+	intel_de_write_fw(dev_priv, SPRCTL(pipe), sprctl);
+	intel_de_write_fw(dev_priv, SPRSURF(pipe),
+			  intel_plane_ggtt_offset(plane_state) + sprsurf_offset);
 
 	ivb_update_gamma(plane_state);
@@ -1521,11 +1559,11 @@ ivb_disable_plane(struct intel_plane *plane,
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
-	I915_WRITE_FW(SPRCTL(pipe), 0);
+	intel_de_write_fw(dev_priv, SPRCTL(pipe), 0);
 	/* Disable the scaler */
 	if (IS_IVYBRIDGE(dev_priv))
-		I915_WRITE_FW(SPRSCALE(pipe), 0);
-	I915_WRITE_FW(SPRSURF(pipe), 0);
+		intel_de_write_fw(dev_priv, SPRSCALE(pipe), 0);
+	intel_de_write_fw(dev_priv, SPRSURF(pipe), 0);
 
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
 }
@@ -1544,7 +1582,7 @@ ivb_plane_get_hw_state(struct intel_plane *plane,
 	if (!wakeref)
 		return false;
 
-	ret = I915_READ(SPRCTL(plane->pipe)) & SPRITE_ENABLE;
+	ret = intel_de_read(dev_priv, SPRCTL(plane->pipe)) & SPRITE_ENABLE;
 
 	*pipe = plane->pipe;
@ -1710,10 +1748,8 @@ static void g4x_update_gamma(const struct intel_plane_state *plane_state)
/* FIXME these register are single buffered :( */ /* FIXME these register are single buffered :( */
/* The two end points are implicit (0.0 and 1.0) */ /* The two end points are implicit (0.0 and 1.0) */
for (i = 1; i < 8 - 1; i++) for (i = 1; i < 8 - 1; i++)
I915_WRITE_FW(DVSGAMC_G4X(pipe, i - 1), intel_de_write_fw(dev_priv, DVSGAMC_G4X(pipe, i - 1),
gamma[i] << 16 | gamma[i] << 16 | gamma[i] << 8 | gamma[i]);
gamma[i] << 8 |
gamma[i]);
} }
static void ilk_sprite_linear_gamma(u16 gamma[17]) static void ilk_sprite_linear_gamma(u16 gamma[17])
@ -1741,14 +1777,12 @@ static void ilk_update_gamma(const struct intel_plane_state *plane_state)
/* FIXME these register are single buffered :( */ /* FIXME these register are single buffered :( */
for (i = 0; i < 16; i++) for (i = 0; i < 16; i++)
I915_WRITE_FW(DVSGAMC_ILK(pipe, i), intel_de_write_fw(dev_priv, DVSGAMC_ILK(pipe, i),
gamma[i] << 20 | gamma[i] << 20 | gamma[i] << 10 | gamma[i]);
gamma[i] << 10 |
gamma[i]);
I915_WRITE_FW(DVSGAMCMAX_ILK(pipe, 0), gamma[i]); intel_de_write_fw(dev_priv, DVSGAMCMAX_ILK(pipe, 0), gamma[i]);
I915_WRITE_FW(DVSGAMCMAX_ILK(pipe, 1), gamma[i]); intel_de_write_fw(dev_priv, DVSGAMCMAX_ILK(pipe, 1), gamma[i]);
I915_WRITE_FW(DVSGAMCMAX_ILK(pipe, 2), gamma[i]); intel_de_write_fw(dev_priv, DVSGAMCMAX_ILK(pipe, 2), gamma[i]);
i++; i++;
} }
@ -1788,28 +1822,30 @@ g4x_update_plane(struct intel_plane *plane,
spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
I915_WRITE_FW(DVSSTRIDE(pipe), plane_state->color_plane[0].stride); intel_de_write_fw(dev_priv, DVSSTRIDE(pipe),
I915_WRITE_FW(DVSPOS(pipe), (crtc_y << 16) | crtc_x); plane_state->color_plane[0].stride);
I915_WRITE_FW(DVSSIZE(pipe), (crtc_h << 16) | crtc_w); intel_de_write_fw(dev_priv, DVSPOS(pipe), (crtc_y << 16) | crtc_x);
I915_WRITE_FW(DVSSCALE(pipe), dvsscale); intel_de_write_fw(dev_priv, DVSSIZE(pipe), (crtc_h << 16) | crtc_w);
intel_de_write_fw(dev_priv, DVSSCALE(pipe), dvsscale);
if (key->flags) { if (key->flags) {
I915_WRITE_FW(DVSKEYVAL(pipe), key->min_value); intel_de_write_fw(dev_priv, DVSKEYVAL(pipe), key->min_value);
I915_WRITE_FW(DVSKEYMSK(pipe), key->channel_mask); intel_de_write_fw(dev_priv, DVSKEYMSK(pipe),
I915_WRITE_FW(DVSKEYMAX(pipe), key->max_value); key->channel_mask);
intel_de_write_fw(dev_priv, DVSKEYMAX(pipe), key->max_value);
} }
I915_WRITE_FW(DVSLINOFF(pipe), linear_offset); intel_de_write_fw(dev_priv, DVSLINOFF(pipe), linear_offset);
I915_WRITE_FW(DVSTILEOFF(pipe), (y << 16) | x); intel_de_write_fw(dev_priv, DVSTILEOFF(pipe), (y << 16) | x);
/* /*
* The control register self-arms if the plane was previously * The control register self-arms if the plane was previously
* disabled. Try to make the plane enable atomic by writing * disabled. Try to make the plane enable atomic by writing
* the control register just before the surface register. * the control register just before the surface register.
*/ */
I915_WRITE_FW(DVSCNTR(pipe), dvscntr); intel_de_write_fw(dev_priv, DVSCNTR(pipe), dvscntr);
I915_WRITE_FW(DVSSURF(pipe), intel_de_write_fw(dev_priv, DVSSURF(pipe),
intel_plane_ggtt_offset(plane_state) + dvssurf_offset); intel_plane_ggtt_offset(plane_state) + dvssurf_offset);
if (IS_G4X(dev_priv)) if (IS_G4X(dev_priv))
g4x_update_gamma(plane_state); g4x_update_gamma(plane_state);
@ -1829,10 +1865,10 @@ g4x_disable_plane(struct intel_plane *plane,
spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
I915_WRITE_FW(DVSCNTR(pipe), 0); intel_de_write_fw(dev_priv, DVSCNTR(pipe), 0);
/* Disable the scaler */ /* Disable the scaler */
I915_WRITE_FW(DVSSCALE(pipe), 0); intel_de_write_fw(dev_priv, DVSSCALE(pipe), 0);
I915_WRITE_FW(DVSSURF(pipe), 0); intel_de_write_fw(dev_priv, DVSSURF(pipe), 0);
spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags); spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
} }
@ -1851,7 +1887,7 @@ g4x_plane_get_hw_state(struct intel_plane *plane,
if (!wakeref) if (!wakeref)
return false; return false;
ret = I915_READ(DVSCNTR(plane->pipe)) & DVS_ENABLE; ret = intel_de_read(dev_priv, DVSCNTR(plane->pipe)) & DVS_ENABLE;
*pipe = plane->pipe; *pipe = plane->pipe;
@ -1999,7 +2035,8 @@ int chv_plane_check_rotation(const struct intel_plane_state *plane_state)
if (IS_CHERRYVIEW(dev_priv) && if (IS_CHERRYVIEW(dev_priv) &&
rotation & DRM_MODE_ROTATE_180 && rotation & DRM_MODE_ROTATE_180 &&
rotation & DRM_MODE_REFLECT_X) { rotation & DRM_MODE_REFLECT_X) {
DRM_DEBUG_KMS("Cannot rotate and reflect at the same time\n"); drm_dbg_kms(&dev_priv->drm,
"Cannot rotate and reflect at the same time\n");
return -EINVAL; return -EINVAL;
} }
@ -2054,21 +2091,24 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
if (rotation & ~(DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_180) && if (rotation & ~(DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_180) &&
is_ccs_modifier(fb->modifier)) { is_ccs_modifier(fb->modifier)) {
DRM_DEBUG_KMS("RC support only with 0/180 degree rotation (%x)\n", drm_dbg_kms(&dev_priv->drm,
rotation); "RC support only with 0/180 degree rotation (%x)\n",
rotation);
return -EINVAL; return -EINVAL;
} }
if (rotation & DRM_MODE_REFLECT_X && if (rotation & DRM_MODE_REFLECT_X &&
fb->modifier == DRM_FORMAT_MOD_LINEAR) { fb->modifier == DRM_FORMAT_MOD_LINEAR) {
DRM_DEBUG_KMS("horizontal flip is not supported with linear surface formats\n"); drm_dbg_kms(&dev_priv->drm,
"horizontal flip is not supported with linear surface formats\n");
return -EINVAL; return -EINVAL;
} }
if (drm_rotation_90_or_270(rotation)) { if (drm_rotation_90_or_270(rotation)) {
if (fb->modifier != I915_FORMAT_MOD_Y_TILED && if (fb->modifier != I915_FORMAT_MOD_Y_TILED &&
fb->modifier != I915_FORMAT_MOD_Yf_TILED) { fb->modifier != I915_FORMAT_MOD_Yf_TILED) {
DRM_DEBUG_KMS("Y/Yf tiling required for 90/270!\n"); drm_dbg_kms(&dev_priv->drm,
"Y/Yf tiling required for 90/270!\n");
return -EINVAL; return -EINVAL;
} }
@ -2091,9 +2131,10 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
case DRM_FORMAT_Y216: case DRM_FORMAT_Y216:
case DRM_FORMAT_XVYU12_16161616: case DRM_FORMAT_XVYU12_16161616:
case DRM_FORMAT_XVYU16161616: case DRM_FORMAT_XVYU16161616:
DRM_DEBUG_KMS("Unsupported pixel format %s for 90/270!\n", drm_dbg_kms(&dev_priv->drm,
drm_get_format_name(fb->format->format, "Unsupported pixel format %s for 90/270!\n",
&format_name)); drm_get_format_name(fb->format->format,
&format_name));
return -EINVAL; return -EINVAL;
default: default:
break; break;
@ -2109,7 +2150,8 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS || fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS || fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS)) { fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS)) {
DRM_DEBUG_KMS("Y/Yf tiling not supported in IF-ID mode\n"); drm_dbg_kms(&dev_priv->drm,
"Y/Yf tiling not supported in IF-ID mode\n");
return -EINVAL; return -EINVAL;
} }
@ -2136,10 +2178,11 @@ static int skl_plane_check_dst_coordinates(const struct intel_crtc_state *crtc_s
*/ */
if ((IS_GEMINILAKE(dev_priv) || IS_CANNONLAKE(dev_priv)) && if ((IS_GEMINILAKE(dev_priv) || IS_CANNONLAKE(dev_priv)) &&
(crtc_x + crtc_w < 4 || crtc_x > pipe_src_w - 4)) { (crtc_x + crtc_w < 4 || crtc_x > pipe_src_w - 4)) {
DRM_DEBUG_KMS("requested plane X %s position %d invalid (valid range %d-%d)\n", drm_dbg_kms(&dev_priv->drm,
crtc_x + crtc_w < 4 ? "end" : "start", "requested plane X %s position %d invalid (valid range %d-%d)\n",
crtc_x + crtc_w < 4 ? crtc_x + crtc_w : crtc_x, crtc_x + crtc_w < 4 ? "end" : "start",
4, pipe_src_w - 4); crtc_x + crtc_w < 4 ? crtc_x + crtc_w : crtc_x,
4, pipe_src_w - 4);
return -ERANGE; return -ERANGE;
} }


@@ -61,7 +61,7 @@ u32 intel_tc_port_get_lane_mask(struct intel_digital_port *dig_port)
 	lane_mask = intel_uncore_read(uncore,
 				      PORT_TX_DFLEXDPSP(dig_port->tc_phy_fia));
-	WARN_ON(lane_mask == 0xffffffff);
+	drm_WARN_ON(&i915->drm, lane_mask == 0xffffffff);
 	lane_mask &= DP_LANE_ASSIGNMENT_MASK(dig_port->tc_phy_fia_idx);
 	return lane_mask >> DP_LANE_ASSIGNMENT_SHIFT(dig_port->tc_phy_fia_idx);
@@ -76,7 +76,7 @@ u32 intel_tc_port_get_pin_assignment_mask(struct intel_digital_port *dig_port)
 	pin_mask = intel_uncore_read(uncore,
 				     PORT_TX_DFLEXPA1(dig_port->tc_phy_fia));
-	WARN_ON(pin_mask == 0xffffffff);
+	drm_WARN_ON(&i915->drm, pin_mask == 0xffffffff);
 	return (pin_mask & DP_PIN_ASSIGNMENT_MASK(dig_port->tc_phy_fia_idx)) >>
 	       DP_PIN_ASSIGNMENT_SHIFT(dig_port->tc_phy_fia_idx);
@@ -120,7 +120,8 @@ void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
 	struct intel_uncore *uncore = &i915->uncore;
 	u32 val;
-	WARN_ON(lane_reversal && dig_port->tc_mode != TC_PORT_LEGACY);
+	drm_WARN_ON(&i915->drm,
+		    lane_reversal && dig_port->tc_mode != TC_PORT_LEGACY);
 	val = intel_uncore_read(uncore,
 				PORT_TX_DFLEXDPMLE1(dig_port->tc_phy_fia));
@@ -181,8 +182,9 @@ static u32 tc_port_live_status_mask(struct intel_digital_port *dig_port)
 				PORT_TX_DFLEXDPSP(dig_port->tc_phy_fia));
 	if (val == 0xffffffff) {
-		DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, nothing connected\n",
-			      dig_port->tc_port_name);
+		drm_dbg_kms(&i915->drm,
+			    "Port %s: PHY in TCCOLD, nothing connected\n",
+			    dig_port->tc_port_name);
 		return mask;
 	}
@@ -195,7 +197,7 @@ static u32 tc_port_live_status_mask(struct intel_digital_port *dig_port)
 		mask |= BIT(TC_PORT_LEGACY);
 	/* The sink can be connected only in a single mode. */
-	if (!WARN_ON(hweight32(mask) > 1))
+	if (!drm_WARN_ON(&i915->drm, hweight32(mask) > 1))
 		tc_port_fixup_legacy_flag(dig_port, mask);
 	return mask;
@@ -210,8 +212,9 @@ static bool icl_tc_phy_status_complete(struct intel_digital_port *dig_port)
 	val = intel_uncore_read(uncore,
 				PORT_TX_DFLEXDPPMS(dig_port->tc_phy_fia));
 	if (val == 0xffffffff) {
-		DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, assuming not complete\n",
-			      dig_port->tc_port_name);
+		drm_dbg_kms(&i915->drm,
+			    "Port %s: PHY in TCCOLD, assuming not complete\n",
+			    dig_port->tc_port_name);
 		return false;
 	}
@@ -228,8 +231,9 @@ static bool icl_tc_phy_set_safe_mode(struct intel_digital_port *dig_port,
 	val = intel_uncore_read(uncore,
 				PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia));
 	if (val == 0xffffffff) {
-		DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, can't set safe-mode to %s\n",
-			      dig_port->tc_port_name,
-			      enableddisabled(enable));
+		drm_dbg_kms(&i915->drm,
+			    "Port %s: PHY in TCCOLD, can't set safe-mode to %s\n",
+			    dig_port->tc_port_name,
+			    enableddisabled(enable));
 		return false;
@@ -243,8 +247,9 @@ static bool icl_tc_phy_set_safe_mode(struct intel_digital_port *dig_port,
 			   PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia), val);
 	if (enable && wait_for(!icl_tc_phy_status_complete(dig_port), 10))
-		DRM_DEBUG_KMS("Port %s: PHY complete clear timed out\n",
-			      dig_port->tc_port_name);
+		drm_dbg_kms(&i915->drm,
+			    "Port %s: PHY complete clear timed out\n",
+			    dig_port->tc_port_name);
 	return true;
 }
@@ -258,8 +263,9 @@ static bool icl_tc_phy_is_in_safe_mode(struct intel_digital_port *dig_port)
 	val = intel_uncore_read(uncore,
 				PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia));
 	if (val == 0xffffffff) {
-		DRM_DEBUG_KMS("Port %s: PHY in TCCOLD, assume safe mode\n",
-			      dig_port->tc_port_name);
+		drm_dbg_kms(&i915->drm,
+			    "Port %s: PHY in TCCOLD, assume safe mode\n",
+			    dig_port->tc_port_name);
 		return true;
 	}
@@ -409,16 +415,17 @@ static void intel_tc_port_reset_mode(struct intel_digital_port *dig_port,
 	enum tc_port_mode old_tc_mode = dig_port->tc_mode;
 	intel_display_power_flush_work(i915);
-	WARN_ON(intel_display_power_is_enabled(i915,
-					       intel_aux_power_domain(dig_port)));
+	drm_WARN_ON(&i915->drm,
+		    intel_display_power_is_enabled(i915,
+						   intel_aux_power_domain(dig_port)));
 	icl_tc_phy_disconnect(dig_port);
 	icl_tc_phy_connect(dig_port, required_lanes);
-	DRM_DEBUG_KMS("Port %s: TC port mode reset (%s -> %s)\n",
-		      dig_port->tc_port_name,
-		      tc_port_mode_name(old_tc_mode),
-		      tc_port_mode_name(dig_port->tc_mode));
+	drm_dbg_kms(&i915->drm, "Port %s: TC port mode reset (%s -> %s)\n",
+		    dig_port->tc_port_name,
+		    tc_port_mode_name(old_tc_mode),
+		    tc_port_mode_name(dig_port->tc_mode));
 }
 static void
@@ -503,7 +510,7 @@ static void __intel_tc_port_lock(struct intel_digital_port *dig_port,
 	    intel_tc_port_needs_reset(dig_port))
 		intel_tc_port_reset_mode(dig_port, required_lanes);
-	WARN_ON(dig_port->tc_lock_wakeref);
+	drm_WARN_ON(&i915->drm, dig_port->tc_lock_wakeref);
 	dig_port->tc_lock_wakeref = wakeref;
 }
@@ -550,7 +557,7 @@ void intel_tc_port_init(struct intel_digital_port *dig_port, bool is_legacy)
 	enum port port = dig_port->base.port;
 	enum tc_port tc_port = intel_port_to_tc(i915, port);
-	if (WARN_ON(tc_port == PORT_TC_NONE))
+	if (drm_WARN_ON(&i915->drm, tc_port == PORT_TC_NONE))
 		return;
 	snprintf(dig_port->tc_port_name, sizeof(dig_port->tc_port_name),


@@ -907,7 +907,7 @@ static bool
 intel_tv_get_hw_state(struct intel_encoder *encoder, enum pipe *pipe)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
-	u32 tmp = I915_READ(TV_CTL);
+	u32 tmp = intel_de_read(dev_priv, TV_CTL);
 	*pipe = (tmp & TV_ENC_PIPE_SEL_MASK) >> TV_ENC_PIPE_SEL_SHIFT;
@@ -926,7 +926,8 @@ intel_enable_tv(struct intel_encoder *encoder,
 	intel_wait_for_vblank(dev_priv,
 			      to_intel_crtc(pipe_config->uapi.crtc)->pipe);
-	I915_WRITE(TV_CTL, I915_READ(TV_CTL) | TV_ENC_ENABLE);
+	intel_de_write(dev_priv, TV_CTL,
+		       intel_de_read(dev_priv, TV_CTL) | TV_ENC_ENABLE);
 }
 static void
@@ -937,7 +938,8 @@ intel_disable_tv(struct intel_encoder *encoder,
 	struct drm_device *dev = encoder->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	I915_WRITE(TV_CTL, I915_READ(TV_CTL) & ~TV_ENC_ENABLE);
+	intel_de_write(dev_priv, TV_CTL,
+		       intel_de_read(dev_priv, TV_CTL) & ~TV_ENC_ENABLE);
 }
 static const struct tv_mode *intel_tv_mode_find(const struct drm_connector_state *conn_state)
@@ -1095,11 +1097,11 @@ intel_tv_get_config(struct intel_encoder *encoder,
 	pipe_config->output_types |= BIT(INTEL_OUTPUT_TVOUT);
-	tv_ctl = I915_READ(TV_CTL);
-	hctl1 = I915_READ(TV_H_CTL_1);
-	hctl3 = I915_READ(TV_H_CTL_3);
-	vctl1 = I915_READ(TV_V_CTL_1);
-	vctl2 = I915_READ(TV_V_CTL_2);
+	tv_ctl = intel_de_read(dev_priv, TV_CTL);
+	hctl1 = intel_de_read(dev_priv, TV_H_CTL_1);
+	hctl3 = intel_de_read(dev_priv, TV_H_CTL_3);
+	vctl1 = intel_de_read(dev_priv, TV_V_CTL_1);
+	vctl2 = intel_de_read(dev_priv, TV_V_CTL_2);
 	tv_mode.htotal = (hctl1 & TV_HTOTAL_MASK) >> TV_HTOTAL_SHIFT;
 	tv_mode.hsync_end = (hctl1 & TV_HSYNC_END_MASK) >> TV_HSYNC_END_SHIFT;
@@ -1134,17 +1136,17 @@ intel_tv_get_config(struct intel_encoder *encoder,
 		break;
 	}
-	tmp = I915_READ(TV_WIN_POS);
+	tmp = intel_de_read(dev_priv, TV_WIN_POS);
 	xpos = tmp >> 16;
 	ypos = tmp & 0xffff;
-	tmp = I915_READ(TV_WIN_SIZE);
+	tmp = intel_de_read(dev_priv, TV_WIN_SIZE);
 	xsize = tmp >> 16;
 	ysize = tmp & 0xffff;
 	intel_tv_mode_to_mode(&mode, &tv_mode);
-	DRM_DEBUG_KMS("TV mode:\n");
+	drm_dbg_kms(&dev_priv->drm, "TV mode:\n");
 	drm_mode_debug_printmodeline(&mode);
 	intel_tv_scale_mode_horiz(&mode, hdisplay,
@@ -1200,7 +1202,7 @@ intel_tv_compute_config(struct intel_encoder *encoder,
 	pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;
-	DRM_DEBUG_KMS("forcing bpc to 8 for TV\n");
+	drm_dbg_kms(&dev_priv->drm, "forcing bpc to 8 for TV\n");
 	pipe_config->pipe_bpp = 8*3;
 	pipe_config->port_clock = tv_mode->clock;
@@ -1215,7 +1217,8 @@ intel_tv_compute_config(struct intel_encoder *encoder,
 		extra = adjusted_mode->crtc_vdisplay - vdisplay;
 		if (extra < 0) {
-			DRM_DEBUG_KMS("No vertical scaling for >1024 pixel wide modes\n");
+			drm_dbg_kms(&dev_priv->drm,
+				    "No vertical scaling for >1024 pixel wide modes\n");
 			return -EINVAL;
 		}
@@ -1248,7 +1251,7 @@ intel_tv_compute_config(struct intel_encoder *encoder,
 		tv_conn_state->bypass_vfilter = false;
 	}
-	DRM_DEBUG_KMS("TV mode:\n");
+	drm_dbg_kms(&dev_priv->drm, "TV mode:\n");
 	drm_mode_debug_printmodeline(adjusted_mode);
 	/*
@@ -1380,16 +1383,16 @@ set_tv_mode_timings(struct drm_i915_private *dev_priv,
 	vctl7 = (tv_mode->vburst_start_f4 << TV_VBURST_START_F4_SHIFT) |
 		(tv_mode->vburst_end_f4 << TV_VBURST_END_F4_SHIFT);
-	I915_WRITE(TV_H_CTL_1, hctl1);
-	I915_WRITE(TV_H_CTL_2, hctl2);
-	I915_WRITE(TV_H_CTL_3, hctl3);
-	I915_WRITE(TV_V_CTL_1, vctl1);
-	I915_WRITE(TV_V_CTL_2, vctl2);
-	I915_WRITE(TV_V_CTL_3, vctl3);
-	I915_WRITE(TV_V_CTL_4, vctl4);
-	I915_WRITE(TV_V_CTL_5, vctl5);
-	I915_WRITE(TV_V_CTL_6, vctl6);
-	I915_WRITE(TV_V_CTL_7, vctl7);
+	intel_de_write(dev_priv, TV_H_CTL_1, hctl1);
+	intel_de_write(dev_priv, TV_H_CTL_2, hctl2);
+	intel_de_write(dev_priv, TV_H_CTL_3, hctl3);
+	intel_de_write(dev_priv, TV_V_CTL_1, vctl1);
+	intel_de_write(dev_priv, TV_V_CTL_2, vctl2);
+	intel_de_write(dev_priv, TV_V_CTL_3, vctl3);
+	intel_de_write(dev_priv, TV_V_CTL_4, vctl4);
+	intel_de_write(dev_priv, TV_V_CTL_5, vctl5);
+	intel_de_write(dev_priv, TV_V_CTL_6, vctl6);
+	intel_de_write(dev_priv, TV_V_CTL_7, vctl7);
 }
 static void set_color_conversion(struct drm_i915_private *dev_priv,
@@ -1398,18 +1401,18 @@ static void set_color_conversion(struct drm_i915_private *dev_priv,
 	if (!color_conversion)
 		return;
-	I915_WRITE(TV_CSC_Y, (color_conversion->ry << 16) |
-		   color_conversion->gy);
-	I915_WRITE(TV_CSC_Y2, (color_conversion->by << 16) |
-		   color_conversion->ay);
-	I915_WRITE(TV_CSC_U, (color_conversion->ru << 16) |
-		   color_conversion->gu);
-	I915_WRITE(TV_CSC_U2, (color_conversion->bu << 16) |
-		   color_conversion->au);
-	I915_WRITE(TV_CSC_V, (color_conversion->rv << 16) |
-		   color_conversion->gv);
-	I915_WRITE(TV_CSC_V2, (color_conversion->bv << 16) |
-		   color_conversion->av);
+	intel_de_write(dev_priv, TV_CSC_Y,
+		       (color_conversion->ry << 16) | color_conversion->gy);
+	intel_de_write(dev_priv, TV_CSC_Y2,
+		       (color_conversion->by << 16) | color_conversion->ay);
+	intel_de_write(dev_priv, TV_CSC_U,
+		       (color_conversion->ru << 16) | color_conversion->gu);
+	intel_de_write(dev_priv, TV_CSC_U2,
+		       (color_conversion->bu << 16) | color_conversion->au);
+	intel_de_write(dev_priv, TV_CSC_V,
+		       (color_conversion->rv << 16) | color_conversion->gv);
+	intel_de_write(dev_priv, TV_CSC_V2,
+		       (color_conversion->bv << 16) | color_conversion->av);
 }
 static void intel_tv_pre_enable(struct intel_encoder *encoder,
@@ -1434,7 +1437,7 @@ static void intel_tv_pre_enable(struct intel_encoder *encoder,
 	if (!tv_mode)
 		return;	/* can't happen (mode_prepare prevents this) */
-	tv_ctl = I915_READ(TV_CTL);
+	tv_ctl = intel_de_read(dev_priv, TV_CTL);
 	tv_ctl &= TV_CTL_SAVE;
 	switch (intel_tv->type) {
@@ -1511,21 +1514,20 @@ static void intel_tv_pre_enable(struct intel_encoder *encoder,
 	set_tv_mode_timings(dev_priv, tv_mode, burst_ena);
-	I915_WRITE(TV_SC_CTL_1, scctl1);
-	I915_WRITE(TV_SC_CTL_2, scctl2);
-	I915_WRITE(TV_SC_CTL_3, scctl3);
+	intel_de_write(dev_priv, TV_SC_CTL_1, scctl1);
+	intel_de_write(dev_priv, TV_SC_CTL_2, scctl2);
+	intel_de_write(dev_priv, TV_SC_CTL_3, scctl3);
 	set_color_conversion(dev_priv, color_conversion);
 	if (INTEL_GEN(dev_priv) >= 4)
-		I915_WRITE(TV_CLR_KNOBS, 0x00404000);
+		intel_de_write(dev_priv, TV_CLR_KNOBS, 0x00404000);
 	else
-		I915_WRITE(TV_CLR_KNOBS, 0x00606000);
+		intel_de_write(dev_priv, TV_CLR_KNOBS, 0x00606000);
 	if (video_levels)
-		I915_WRITE(TV_CLR_LEVEL,
-			   ((video_levels->black << TV_BLACK_LEVEL_SHIFT) |
-			    (video_levels->blank << TV_BLANK_LEVEL_SHIFT)));
+		intel_de_write(dev_priv, TV_CLR_LEVEL,
+			       ((video_levels->black << TV_BLACK_LEVEL_SHIFT) | (video_levels->blank << TV_BLANK_LEVEL_SHIFT)));
 	assert_pipe_disabled(dev_priv, pipe_config->cpu_transcoder);
@@ -1533,7 +1535,7 @@ static void intel_tv_pre_enable(struct intel_encoder *encoder,
 	tv_filter_ctl = TV_AUTO_SCALE;
 	if (tv_conn_state->bypass_vfilter)
 		tv_filter_ctl |= TV_V_FILTER_BYPASS;
-	I915_WRITE(TV_FILTER_CTL_1, tv_filter_ctl);
+	intel_de_write(dev_priv, TV_FILTER_CTL_1, tv_filter_ctl);
 	xsize = tv_mode->hblank_start - tv_mode->hblank_end;
 	ysize = intel_tv_mode_vdisplay(tv_mode);
@@ -1544,20 +1546,25 @@ static void intel_tv_pre_enable(struct intel_encoder *encoder,
 		 conn_state->tv.margins.right);
 	ysize -= (tv_conn_state->margins.top +
 		  tv_conn_state->margins.bottom);
-	I915_WRITE(TV_WIN_POS, (xpos<<16)|ypos);
-	I915_WRITE(TV_WIN_SIZE, (xsize<<16)|ysize);
+	intel_de_write(dev_priv, TV_WIN_POS, (xpos << 16) | ypos);
+	intel_de_write(dev_priv, TV_WIN_SIZE, (xsize << 16) | ysize);
 	j = 0;
 	for (i = 0; i < 60; i++)
-		I915_WRITE(TV_H_LUMA(i), tv_mode->filter_table[j++]);
+		intel_de_write(dev_priv, TV_H_LUMA(i),
+			       tv_mode->filter_table[j++]);
 	for (i = 0; i < 60; i++)
-		I915_WRITE(TV_H_CHROMA(i), tv_mode->filter_table[j++]);
+		intel_de_write(dev_priv, TV_H_CHROMA(i),
+			       tv_mode->filter_table[j++]);
 	for (i = 0; i < 43; i++)
-		I915_WRITE(TV_V_LUMA(i), tv_mode->filter_table[j++]);
+		intel_de_write(dev_priv, TV_V_LUMA(i),
+			       tv_mode->filter_table[j++]);
 	for (i = 0; i < 43; i++)
-		I915_WRITE(TV_V_CHROMA(i), tv_mode->filter_table[j++]);
-	I915_WRITE(TV_DAC, I915_READ(TV_DAC) & TV_DAC_SAVE);
-	I915_WRITE(TV_CTL, tv_ctl);
+		intel_de_write(dev_priv, TV_V_CHROMA(i),
+			       tv_mode->filter_table[j++]);
+	intel_de_write(dev_priv, TV_DAC,
+		       intel_de_read(dev_priv, TV_DAC) & TV_DAC_SAVE);
+	intel_de_write(dev_priv, TV_CTL, tv_ctl);
 }
 static int
@@ -1581,8 +1588,8 @@ intel_tv_detect_type(struct intel_tv *intel_tv,
 		spin_unlock_irq(&dev_priv->irq_lock);
 	}
-	save_tv_dac = tv_dac = I915_READ(TV_DAC);
-	save_tv_ctl = tv_ctl = I915_READ(TV_CTL);
+	save_tv_dac = tv_dac = intel_de_read(dev_priv, TV_DAC);
+	save_tv_ctl = tv_ctl = intel_de_read(dev_priv, TV_CTL);
 	/* Poll for TV detection */
 	tv_ctl &= ~(TV_ENC_ENABLE | TV_ENC_PIPE_SEL_MASK | TV_TEST_MODE_MASK);
@@ -1608,15 +1615,15 @@ intel_tv_detect_type(struct intel_tv *intel_tv,
 	tv_dac &= ~(TVDAC_STATE_CHG_EN | TVDAC_A_SENSE_CTL |
 		    TVDAC_B_SENSE_CTL | TVDAC_C_SENSE_CTL);
-	I915_WRITE(TV_CTL, tv_ctl);
-	I915_WRITE(TV_DAC, tv_dac);
-	POSTING_READ(TV_DAC);
+	intel_de_write(dev_priv, TV_CTL, tv_ctl);
+	intel_de_write(dev_priv, TV_DAC, tv_dac);
+	intel_de_posting_read(dev_priv, TV_DAC);
 	intel_wait_for_vblank(dev_priv, intel_crtc->pipe);
 	type = -1;
-	tv_dac = I915_READ(TV_DAC);
-	DRM_DEBUG_KMS("TV detected: %x, %x\n", tv_ctl, tv_dac);
+	tv_dac = intel_de_read(dev_priv, TV_DAC);
+	drm_dbg_kms(&dev_priv->drm, "TV detected: %x, %x\n", tv_ctl, tv_dac);
 	/*
 	 *  A B C
 	 *  0 1 1 Composite
@@ -1624,22 +1631,25 @@ intel_tv_detect_type(struct intel_tv *intel_tv,
 	 *  0 0 0 Component
 	 */
 	if ((tv_dac & TVDAC_SENSE_MASK) == (TVDAC_B_SENSE | TVDAC_C_SENSE)) {
-		DRM_DEBUG_KMS("Detected Composite TV connection\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Detected Composite TV connection\n");
 		type = DRM_MODE_CONNECTOR_Composite;
 	} else if ((tv_dac & (TVDAC_A_SENSE|TVDAC_B_SENSE)) == TVDAC_A_SENSE) {
-		DRM_DEBUG_KMS("Detected S-Video TV connection\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Detected S-Video TV connection\n");
 		type = DRM_MODE_CONNECTOR_SVIDEO;
 	} else if ((tv_dac & TVDAC_SENSE_MASK) == 0) {
-		DRM_DEBUG_KMS("Detected Component TV connection\n");
+		drm_dbg_kms(&dev_priv->drm,
+			    "Detected Component TV connection\n");
 		type = DRM_MODE_CONNECTOR_Component;
 	} else {
-		DRM_DEBUG_KMS("Unrecognised TV connection\n");
+		drm_dbg_kms(&dev_priv->drm, "Unrecognised TV connection\n");
 		type = -1;
 	}
-	I915_WRITE(TV_DAC, save_tv_dac & ~TVDAC_STATE_CHG_EN);
-	I915_WRITE(TV_CTL, save_tv_ctl);
-	POSTING_READ(TV_CTL);
+	intel_de_write(dev_priv, TV_DAC, save_tv_dac & ~TVDAC_STATE_CHG_EN);
+	intel_de_write(dev_priv, TV_CTL, save_tv_ctl);
+	intel_de_posting_read(dev_priv, TV_CTL);
 	/* For unknown reasons the hw barfs if we don't do this vblank wait. */
 	intel_wait_for_vblank(dev_priv, intel_crtc->pipe);
@@ -1794,7 +1804,7 @@ intel_tv_get_modes(struct drm_connector *connector)
 		 */
 		intel_tv_mode_to_mode(mode, tv_mode);
 		if (count == 0) {
-			DRM_DEBUG_KMS("TV mode:\n");
+			drm_dbg_kms(&dev_priv->drm, "TV mode:\n");
 			drm_mode_debug_printmodeline(mode);
 		}
 		intel_tv_scale_mode_horiz(mode, input->w, 0, 0);
@@ -1870,11 +1880,11 @@ intel_tv_init(struct drm_i915_private *dev_priv)
 	int i, initial_mode = 0;
 	struct drm_connector_state *state;
-	if ((I915_READ(TV_CTL) & TV_FUSE_STATE_MASK) == TV_FUSE_STATE_DISABLED)
+	if ((intel_de_read(dev_priv, TV_CTL) & TV_FUSE_STATE_MASK) == TV_FUSE_STATE_DISABLED)
 		return;
 	if (!intel_bios_is_tv_present(dev_priv)) {
-		DRM_DEBUG_KMS("Integrated TV is not present.\n");
+		drm_dbg_kms(&dev_priv->drm, "Integrated TV is not present.\n");
 		return;
 	}
@@ -1882,15 +1892,15 @@ intel_tv_init(struct drm_i915_private *dev_priv)
 	 * Sanity check the TV output by checking to see if the
 	 * DAC register holds a value
 	 */
-	save_tv_dac = I915_READ(TV_DAC);
-	I915_WRITE(TV_DAC, save_tv_dac | TVDAC_STATE_CHG_EN);
-	tv_dac_on = I915_READ(TV_DAC);
-	I915_WRITE(TV_DAC, save_tv_dac & ~TVDAC_STATE_CHG_EN);
-	tv_dac_off = I915_READ(TV_DAC);
-	I915_WRITE(TV_DAC, save_tv_dac);
+	save_tv_dac = intel_de_read(dev_priv, TV_DAC);
+	intel_de_write(dev_priv, TV_DAC, save_tv_dac | TVDAC_STATE_CHG_EN);
+	tv_dac_on = intel_de_read(dev_priv, TV_DAC);
+	intel_de_write(dev_priv, TV_DAC, save_tv_dac & ~TVDAC_STATE_CHG_EN);
+	tv_dac_off = intel_de_read(dev_priv, TV_DAC);
+	intel_de_write(dev_priv, TV_DAC, save_tv_dac);
 	/*
 	 * If the register does not hold the state change enable


@@ -111,7 +111,7 @@ enum bdb_block_id {
 	BDB_LVDS_LFP_DATA_PTRS = 41,
 	BDB_LVDS_LFP_DATA = 42,
 	BDB_LVDS_BACKLIGHT = 43,
-	BDB_LVDS_POWER = 44,
+	BDB_LFP_POWER = 44,
 	BDB_MIPI_CONFIG = 52,
 	BDB_MIPI_SEQUENCE = 53,
 	BDB_COMPRESSION_PARAMETERS = 56,


@ -374,7 +374,7 @@ static bool is_pipe_dsc(const struct intel_crtc_state *crtc_state)
return false; return false;
/* There's no pipe A DSC engine on ICL */ /* There's no pipe A DSC engine on ICL */
WARN_ON(crtc->pipe == PIPE_A); drm_WARN_ON(&i915->drm, crtc->pipe == PIPE_A);
return true; return true;
} }
@@ -518,119 +518,149 @@ static void intel_dsc_pps_configure(struct intel_encoder *encoder,
 		pps_val |= DSC_422_ENABLE;
 	if (vdsc_cfg->vbr_enable)
 		pps_val |= DSC_VBR_ENABLE;
-	DRM_INFO("PPS0 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS0 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_0, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_0,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_0, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_0,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_0(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_0(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_0(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_0(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_1 registers */
 	pps_val = 0;
 	pps_val |= DSC_BPP(vdsc_cfg->bits_per_pixel);
-	DRM_INFO("PPS1 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS1 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_1, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_1,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_1, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_1,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_1(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_1(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_2 registers */
 	pps_val = 0;
 	pps_val |= DSC_PIC_HEIGHT(vdsc_cfg->pic_height) |
 		   DSC_PIC_WIDTH(vdsc_cfg->pic_width / num_vdsc_instances);
-	DRM_INFO("PPS2 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS2 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_2, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_2,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_2, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_2,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_2(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_2(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_2(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_2(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_3 registers */
 	pps_val = 0;
 	pps_val |= DSC_SLICE_HEIGHT(vdsc_cfg->slice_height) |
 		   DSC_SLICE_WIDTH(vdsc_cfg->slice_width);
-	DRM_INFO("PPS3 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS3 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_3, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_3,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_3, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_3,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_3(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_3(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_3(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_3(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_4 registers */
 	pps_val = 0;
 	pps_val |= DSC_INITIAL_XMIT_DELAY(vdsc_cfg->initial_xmit_delay) |
 		   DSC_INITIAL_DEC_DELAY(vdsc_cfg->initial_dec_delay);
-	DRM_INFO("PPS4 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS4 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_4, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_4,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_4, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_4,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_4(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_4(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_4(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_4(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_5 registers */
 	pps_val = 0;
 	pps_val |= DSC_SCALE_INC_INT(vdsc_cfg->scale_increment_interval) |
 		   DSC_SCALE_DEC_INT(vdsc_cfg->scale_decrement_interval);
-	DRM_INFO("PPS5 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS5 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_5, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_5,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_5, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_5,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_5(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_5(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_5(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_5(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_6 registers */
@@ -639,80 +669,100 @@ static void intel_dsc_pps_configure(struct intel_encoder *encoder,
 		   DSC_FIRST_LINE_BPG_OFFSET(vdsc_cfg->first_line_bpg_offset) |
 		   DSC_FLATNESS_MIN_QP(vdsc_cfg->flatness_min_qp) |
 		   DSC_FLATNESS_MAX_QP(vdsc_cfg->flatness_max_qp);
-	DRM_INFO("PPS6 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS6 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_6, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_6,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_6, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_6,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_6(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_6(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_6(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_6(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_7 registers */
 	pps_val = 0;
 	pps_val |= DSC_SLICE_BPG_OFFSET(vdsc_cfg->slice_bpg_offset) |
 		   DSC_NFL_BPG_OFFSET(vdsc_cfg->nfl_bpg_offset);
-	DRM_INFO("PPS7 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS7 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_7, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_7,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_7, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_7,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_7(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_7(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_7(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_7(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_8 registers */
 	pps_val = 0;
 	pps_val |= DSC_FINAL_OFFSET(vdsc_cfg->final_offset) |
 		   DSC_INITIAL_OFFSET(vdsc_cfg->initial_offset);
-	DRM_INFO("PPS8 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS8 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_8, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_8,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_8, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_8,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_8(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_8(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_8(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_8(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_9 registers */
 	pps_val = 0;
 	pps_val |= DSC_RC_MODEL_SIZE(DSC_RC_MODEL_SIZE_CONST) |
 		   DSC_RC_EDGE_FACTOR(DSC_RC_EDGE_FACTOR_CONST);
-	DRM_INFO("PPS9 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS9 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_9, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_9,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_9, pps_val);
+			intel_de_write(dev_priv, DSCC_PICTURE_PARAMETER_SET_9,
+				       pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_9(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_9(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_9(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_9(pipe),
+				       pps_val);
 	}
 	/* Populate PICTURE_PARAMETER_SET_10 registers */
@@ -721,20 +771,25 @@ static void intel_dsc_pps_configure(struct intel_encoder *encoder,
 		   DSC_RC_QUANT_INC_LIMIT1(vdsc_cfg->rc_quant_incr_limit1) |
 		   DSC_RC_TARGET_OFF_HIGH(DSC_RC_TGT_OFFSET_HI_CONST) |
 		   DSC_RC_TARGET_OFF_LOW(DSC_RC_TGT_OFFSET_LO_CONST);
-	DRM_INFO("PPS10 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS10 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_10, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_10,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_10, pps_val);
+			intel_de_write(dev_priv,
+				       DSCC_PICTURE_PARAMETER_SET_10, pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_10(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_10(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_10(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_10(pipe),
+				       pps_val);
 	}
 	/* Populate Picture parameter set 16 */
@@ -744,20 +799,25 @@ static void intel_dsc_pps_configure(struct intel_encoder *encoder,
 				       vdsc_cfg->slice_width) |
 		   DSC_SLICE_ROW_PER_FRAME(vdsc_cfg->pic_height /
 					   vdsc_cfg->slice_height);
-	DRM_INFO("PPS16 = 0x%08x\n", pps_val);
+	drm_info(&dev_priv->drm, "PPS16 = 0x%08x\n", pps_val);
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_16, pps_val);
+		intel_de_write(dev_priv, DSCA_PICTURE_PARAMETER_SET_16,
+			       pps_val);
 		/*
 		 * If 2 VDSC instances are needed, configure PPS for second
 		 * VDSC
 		 */
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(DSCC_PICTURE_PARAMETER_SET_16, pps_val);
+			intel_de_write(dev_priv,
+				       DSCC_PICTURE_PARAMETER_SET_16, pps_val);
 	} else {
-		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_16(pipe), pps_val);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_PICTURE_PARAMETER_SET_16(pipe),
+			       pps_val);
 		if (crtc_state->dsc.dsc_split)
-			I915_WRITE(ICL_DSC1_PICTURE_PARAMETER_SET_16(pipe),
-				   pps_val);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_PICTURE_PARAMETER_SET_16(pipe),
+				       pps_val);
 	}
 	/* Populate the RC_BUF_THRESH registers */
@@ -766,42 +826,50 @@ static void intel_dsc_pps_configure(struct intel_encoder *encoder,
 		rc_buf_thresh_dword[i / 4] |=
 			(u32)(vdsc_cfg->rc_buf_thresh[i] <<
 			      BITS_PER_BYTE * (i % 4));
-		DRM_INFO(" RC_BUF_THRESH%d = 0x%08x\n", i,
+		drm_info(&dev_priv->drm, " RC_BUF_THRESH%d = 0x%08x\n", i,
 			 rc_buf_thresh_dword[i / 4]);
 	}
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_RC_BUF_THRESH_0, rc_buf_thresh_dword[0]);
-		I915_WRITE(DSCA_RC_BUF_THRESH_0_UDW, rc_buf_thresh_dword[1]);
-		I915_WRITE(DSCA_RC_BUF_THRESH_1, rc_buf_thresh_dword[2]);
-		I915_WRITE(DSCA_RC_BUF_THRESH_1_UDW, rc_buf_thresh_dword[3]);
+		intel_de_write(dev_priv, DSCA_RC_BUF_THRESH_0,
+			       rc_buf_thresh_dword[0]);
+		intel_de_write(dev_priv, DSCA_RC_BUF_THRESH_0_UDW,
+			       rc_buf_thresh_dword[1]);
+		intel_de_write(dev_priv, DSCA_RC_BUF_THRESH_1,
+			       rc_buf_thresh_dword[2]);
+		intel_de_write(dev_priv, DSCA_RC_BUF_THRESH_1_UDW,
+			       rc_buf_thresh_dword[3]);
 		if (crtc_state->dsc.dsc_split) {
-			I915_WRITE(DSCC_RC_BUF_THRESH_0,
-				   rc_buf_thresh_dword[0]);
-			I915_WRITE(DSCC_RC_BUF_THRESH_0_UDW,
-				   rc_buf_thresh_dword[1]);
-			I915_WRITE(DSCC_RC_BUF_THRESH_1,
-				   rc_buf_thresh_dword[2]);
-			I915_WRITE(DSCC_RC_BUF_THRESH_1_UDW,
-				   rc_buf_thresh_dword[3]);
+			intel_de_write(dev_priv, DSCC_RC_BUF_THRESH_0,
+				       rc_buf_thresh_dword[0]);
+			intel_de_write(dev_priv, DSCC_RC_BUF_THRESH_0_UDW,
+				       rc_buf_thresh_dword[1]);
+			intel_de_write(dev_priv, DSCC_RC_BUF_THRESH_1,
+				       rc_buf_thresh_dword[2]);
+			intel_de_write(dev_priv, DSCC_RC_BUF_THRESH_1_UDW,
+				       rc_buf_thresh_dword[3]);
 		}
 	} else {
-		I915_WRITE(ICL_DSC0_RC_BUF_THRESH_0(pipe),
-			   rc_buf_thresh_dword[0]);
-		I915_WRITE(ICL_DSC0_RC_BUF_THRESH_0_UDW(pipe),
-			   rc_buf_thresh_dword[1]);
-		I915_WRITE(ICL_DSC0_RC_BUF_THRESH_1(pipe),
-			   rc_buf_thresh_dword[2]);
-		I915_WRITE(ICL_DSC0_RC_BUF_THRESH_1_UDW(pipe),
-			   rc_buf_thresh_dword[3]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_BUF_THRESH_0(pipe),
+			       rc_buf_thresh_dword[0]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_BUF_THRESH_0_UDW(pipe),
+			       rc_buf_thresh_dword[1]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_BUF_THRESH_1(pipe),
+			       rc_buf_thresh_dword[2]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_BUF_THRESH_1_UDW(pipe),
+			       rc_buf_thresh_dword[3]);
 		if (crtc_state->dsc.dsc_split) {
-			I915_WRITE(ICL_DSC1_RC_BUF_THRESH_0(pipe),
-				   rc_buf_thresh_dword[0]);
-			I915_WRITE(ICL_DSC1_RC_BUF_THRESH_0_UDW(pipe),
-				   rc_buf_thresh_dword[1]);
-			I915_WRITE(ICL_DSC1_RC_BUF_THRESH_1(pipe),
-				   rc_buf_thresh_dword[2]);
-			I915_WRITE(ICL_DSC1_RC_BUF_THRESH_1_UDW(pipe),
-				   rc_buf_thresh_dword[3]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_BUF_THRESH_0(pipe),
+				       rc_buf_thresh_dword[0]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_BUF_THRESH_0_UDW(pipe),
+				       rc_buf_thresh_dword[1]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_BUF_THRESH_1(pipe),
+				       rc_buf_thresh_dword[2]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_BUF_THRESH_1_UDW(pipe),
+				       rc_buf_thresh_dword[3]);
 		}
 	}
@@ -815,78 +883,94 @@ static void intel_dsc_pps_configure(struct intel_encoder *encoder,
 				 RC_MAX_QP_SHIFT) |
 				(vdsc_cfg->rc_range_params[i].range_min_qp <<
 				 RC_MIN_QP_SHIFT)) << 16 * (i % 2));
-		DRM_INFO(" RC_RANGE_PARAM_%d = 0x%08x\n", i,
+		drm_info(&dev_priv->drm, " RC_RANGE_PARAM_%d = 0x%08x\n", i,
 			 rc_range_params_dword[i / 2]);
 	}
 	if (!is_pipe_dsc(crtc_state)) {
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_0,
-			   rc_range_params_dword[0]);
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_0_UDW,
-			   rc_range_params_dword[1]);
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_1,
-			   rc_range_params_dword[2]);
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_1_UDW,
-			   rc_range_params_dword[3]);
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_2,
-			   rc_range_params_dword[4]);
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_2_UDW,
-			   rc_range_params_dword[5]);
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_3,
-			   rc_range_params_dword[6]);
-		I915_WRITE(DSCA_RC_RANGE_PARAMETERS_3_UDW,
-			   rc_range_params_dword[7]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_0,
+			       rc_range_params_dword[0]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_0_UDW,
+			       rc_range_params_dword[1]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_1,
+			       rc_range_params_dword[2]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_1_UDW,
+			       rc_range_params_dword[3]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_2,
+			       rc_range_params_dword[4]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_2_UDW,
+			       rc_range_params_dword[5]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_3,
+			       rc_range_params_dword[6]);
+		intel_de_write(dev_priv, DSCA_RC_RANGE_PARAMETERS_3_UDW,
+			       rc_range_params_dword[7]);
 		if (crtc_state->dsc.dsc_split) {
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_0,
-				   rc_range_params_dword[0]);
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_0_UDW,
-				   rc_range_params_dword[1]);
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_1,
-				   rc_range_params_dword[2]);
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_1_UDW,
-				   rc_range_params_dword[3]);
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_2,
-				   rc_range_params_dword[4]);
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_2_UDW,
-				   rc_range_params_dword[5]);
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_3,
-				   rc_range_params_dword[6]);
-			I915_WRITE(DSCC_RC_RANGE_PARAMETERS_3_UDW,
-				   rc_range_params_dword[7]);
+			intel_de_write(dev_priv, DSCC_RC_RANGE_PARAMETERS_0,
+				       rc_range_params_dword[0]);
+			intel_de_write(dev_priv,
+				       DSCC_RC_RANGE_PARAMETERS_0_UDW,
+				       rc_range_params_dword[1]);
+			intel_de_write(dev_priv, DSCC_RC_RANGE_PARAMETERS_1,
+				       rc_range_params_dword[2]);
+			intel_de_write(dev_priv,
+				       DSCC_RC_RANGE_PARAMETERS_1_UDW,
+				       rc_range_params_dword[3]);
+			intel_de_write(dev_priv, DSCC_RC_RANGE_PARAMETERS_2,
+				       rc_range_params_dword[4]);
+			intel_de_write(dev_priv,
+				       DSCC_RC_RANGE_PARAMETERS_2_UDW,
+				       rc_range_params_dword[5]);
+			intel_de_write(dev_priv, DSCC_RC_RANGE_PARAMETERS_3,
+				       rc_range_params_dword[6]);
+			intel_de_write(dev_priv,
+				       DSCC_RC_RANGE_PARAMETERS_3_UDW,
+				       rc_range_params_dword[7]);
 		}
 	} else {
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_0(pipe),
-			   rc_range_params_dword[0]);
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_0_UDW(pipe),
-			   rc_range_params_dword[1]);
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_1(pipe),
-			   rc_range_params_dword[2]);
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_1_UDW(pipe),
-			   rc_range_params_dword[3]);
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_2(pipe),
-			   rc_range_params_dword[4]);
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_2_UDW(pipe),
-			   rc_range_params_dword[5]);
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_3(pipe),
-			   rc_range_params_dword[6]);
-		I915_WRITE(ICL_DSC0_RC_RANGE_PARAMETERS_3_UDW(pipe),
-			   rc_range_params_dword[7]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_RANGE_PARAMETERS_0(pipe),
+			       rc_range_params_dword[0]);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_RC_RANGE_PARAMETERS_0_UDW(pipe),
+			       rc_range_params_dword[1]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_RANGE_PARAMETERS_1(pipe),
+			       rc_range_params_dword[2]);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_RC_RANGE_PARAMETERS_1_UDW(pipe),
+			       rc_range_params_dword[3]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_RANGE_PARAMETERS_2(pipe),
+			       rc_range_params_dword[4]);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_RC_RANGE_PARAMETERS_2_UDW(pipe),
+			       rc_range_params_dword[5]);
+		intel_de_write(dev_priv, ICL_DSC0_RC_RANGE_PARAMETERS_3(pipe),
+			       rc_range_params_dword[6]);
+		intel_de_write(dev_priv,
+			       ICL_DSC0_RC_RANGE_PARAMETERS_3_UDW(pipe),
+			       rc_range_params_dword[7]);
 		if (crtc_state->dsc.dsc_split) {
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_0(pipe),
-				   rc_range_params_dword[0]);
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_0_UDW(pipe),
-				   rc_range_params_dword[1]);
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_1(pipe),
-				   rc_range_params_dword[2]);
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_1_UDW(pipe),
-				   rc_range_params_dword[3]);
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_2(pipe),
-				   rc_range_params_dword[4]);
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_2_UDW(pipe),
-				   rc_range_params_dword[5]);
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_3(pipe),
-				   rc_range_params_dword[6]);
-			I915_WRITE(ICL_DSC1_RC_RANGE_PARAMETERS_3_UDW(pipe),
-				   rc_range_params_dword[7]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_0(pipe),
+				       rc_range_params_dword[0]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_0_UDW(pipe),
+				       rc_range_params_dword[1]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_1(pipe),
+				       rc_range_params_dword[2]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_1_UDW(pipe),
+				       rc_range_params_dword[3]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_2(pipe),
+				       rc_range_params_dword[4]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_2_UDW(pipe),
+				       rc_range_params_dword[5]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_3(pipe),
+				       rc_range_params_dword[6]);
+			intel_de_write(dev_priv,
+				       ICL_DSC1_RC_RANGE_PARAMETERS_3_UDW(pipe),
+				       rc_range_params_dword[7]);
 		}
 	}
 }
@@ -912,11 +996,11 @@ void intel_dsc_get_config(struct intel_encoder *encoder,
 		return;
 	if (!is_pipe_dsc(crtc_state)) {
-		dss_ctl1 = I915_READ(DSS_CTL1);
-		dss_ctl2 = I915_READ(DSS_CTL2);
+		dss_ctl1 = intel_de_read(dev_priv, DSS_CTL1);
+		dss_ctl2 = intel_de_read(dev_priv, DSS_CTL2);
 	} else {
-		dss_ctl1 = I915_READ(ICL_PIPE_DSS_CTL1(pipe));
-		dss_ctl2 = I915_READ(ICL_PIPE_DSS_CTL2(pipe));
+		dss_ctl1 = intel_de_read(dev_priv, ICL_PIPE_DSS_CTL1(pipe));
+		dss_ctl2 = intel_de_read(dev_priv, ICL_PIPE_DSS_CTL2(pipe));
 	}
 	crtc_state->dsc.compression_enable = dss_ctl2 & LEFT_BRANCH_VDSC_ENABLE;
@@ -930,9 +1014,10 @@ void intel_dsc_get_config(struct intel_encoder *encoder,
 	/* PPS1 */
 	if (!is_pipe_dsc(crtc_state))
-		val = I915_READ(DSCA_PICTURE_PARAMETER_SET_1);
+		val = intel_de_read(dev_priv, DSCA_PICTURE_PARAMETER_SET_1);
 	else
-		val = I915_READ(ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe));
+		val = intel_de_read(dev_priv,
+				    ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe));
 	vdsc_cfg->bits_per_pixel = val;
 	crtc_state->dsc.compressed_bpp = vdsc_cfg->bits_per_pixel >> 4;
 out:
@@ -1013,8 +1098,8 @@ void intel_dsc_enable(struct intel_encoder *encoder,
 		dss_ctl2_val |= RIGHT_BRANCH_VDSC_ENABLE;
 		dss_ctl1_val |= JOINER_ENABLE;
 	}
-	I915_WRITE(dss_ctl1_reg, dss_ctl1_val);
-	I915_WRITE(dss_ctl2_reg, dss_ctl2_val);
+	intel_de_write(dev_priv, dss_ctl1_reg, dss_ctl1_val);
+	intel_de_write(dev_priv, dss_ctl2_reg, dss_ctl2_val);
 }
 void intel_dsc_disable(const struct intel_crtc_state *old_crtc_state)
@@ -1035,17 +1120,17 @@ void intel_dsc_disable(const struct intel_crtc_state *old_crtc_state)
 		dss_ctl1_reg = ICL_PIPE_DSS_CTL1(pipe);
 		dss_ctl2_reg = ICL_PIPE_DSS_CTL2(pipe);
 	}
-	dss_ctl1_val = I915_READ(dss_ctl1_reg);
+	dss_ctl1_val = intel_de_read(dev_priv, dss_ctl1_reg);
 	if (dss_ctl1_val & JOINER_ENABLE)
 		dss_ctl1_val &= ~JOINER_ENABLE;
-	I915_WRITE(dss_ctl1_reg, dss_ctl1_val);
-	dss_ctl2_val = I915_READ(dss_ctl2_reg);
+	intel_de_write(dev_priv, dss_ctl1_reg, dss_ctl1_val);
+	dss_ctl2_val = intel_de_read(dev_priv, dss_ctl2_reg);
 	if (dss_ctl2_val & LEFT_BRANCH_VDSC_ENABLE ||
 	    dss_ctl2_val & RIGHT_BRANCH_VDSC_ENABLE)
 		dss_ctl2_val &= ~(LEFT_BRANCH_VDSC_ENABLE |
 				  RIGHT_BRANCH_VDSC_ENABLE);
-	I915_WRITE(dss_ctl2_reg, dss_ctl2_val);
+	intel_de_write(dev_priv, dss_ctl2_reg, dss_ctl2_val);
 	/* Disable Power wells for VDSC/joining */
 	intel_display_power_put_unchecked(dev_priv,

@@ -9,6 +9,7 @@
 #include <drm/i915_drm.h>
 #include "i915_drv.h"
+#include "intel_de.h"
 #include "intel_vga.h"
 static i915_reg_t intel_vga_cntrl_reg(struct drm_i915_private *i915)
@@ -36,16 +37,17 @@ void intel_vga_disable(struct drm_i915_private *dev_priv)
 	vga_put(pdev, VGA_RSRC_LEGACY_IO);
 	udelay(300);
-	I915_WRITE(vga_reg, VGA_DISP_DISABLE);
-	POSTING_READ(vga_reg);
+	intel_de_write(dev_priv, vga_reg, VGA_DISP_DISABLE);
+	intel_de_posting_read(dev_priv, vga_reg);
 }
 void intel_vga_redisable_power_on(struct drm_i915_private *dev_priv)
 {
 	i915_reg_t vga_reg = intel_vga_cntrl_reg(dev_priv);
-	if (!(I915_READ(vga_reg) & VGA_DISP_DISABLE)) {
-		DRM_DEBUG_KMS("Something enabled VGA plane, disabling it\n");
+	if (!(intel_de_read(dev_priv, vga_reg) & VGA_DISP_DISABLE)) {
+		drm_dbg_kms(&dev_priv->drm,
+			    "Something enabled VGA plane, disabling it\n");
 		intel_vga_disable(dev_priv);
 	}
 }
@@ -98,7 +100,7 @@ intel_vga_set_state(struct drm_i915_private *i915, bool enable_decode)
 	u16 gmch_ctrl;
 	if (pci_read_config_word(i915->bridge_dev, reg, &gmch_ctrl)) {
-		DRM_ERROR("failed to read control word\n");
+		drm_err(&i915->drm, "failed to read control word\n");
 		return -EIO;
 	}
@@ -111,7 +113,7 @@ intel_vga_set_state(struct drm_i915_private *i915, bool enable_decode)
 		gmch_ctrl |= INTEL_GMCH_VGA_DISABLE;
 	if (pci_write_config_word(i915->bridge_dev, reg, gmch_ctrl)) {
-		DRM_ERROR("failed to write control word\n");
+		drm_err(&i915->drm, "failed to write control word\n");
 		return -EIO;
 	}

File diff suppressed because it is too large.


@@ -64,7 +64,7 @@ static int dsi_calc_mnp(struct drm_i915_private *dev_priv,
 	/* target_dsi_clk is expected in kHz */
 	if (target_dsi_clk < 300000 || target_dsi_clk > 1150000) {
-		DRM_ERROR("DSI CLK Out of Range\n");
+		drm_err(&dev_priv->drm, "DSI CLK Out of Range\n");
 		return -ECHRNG;
 	}
@@ -126,7 +126,7 @@ int vlv_dsi_pll_compute(struct intel_encoder *encoder,
 	ret = dsi_calc_mnp(dev_priv, config, dsi_clk);
 	if (ret) {
-		DRM_DEBUG_KMS("dsi_calc_mnp failed\n");
+		drm_dbg_kms(&dev_priv->drm, "dsi_calc_mnp failed\n");
 		return ret;
 	}
@@ -138,8 +138,8 @@ int vlv_dsi_pll_compute(struct intel_encoder *encoder,
 	config->dsi_pll.ctrl |= DSI_PLL_VCO_EN;
-	DRM_DEBUG_KMS("dsi pll div %08x, ctrl %08x\n",
+	drm_dbg_kms(&dev_priv->drm, "dsi pll div %08x, ctrl %08x\n",
 		      config->dsi_pll.div, config->dsi_pll.ctrl);
 	return 0;
 }
@@ -149,7 +149,7 @@ void vlv_dsi_pll_enable(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&dev_priv->drm, "\n");
 	vlv_cck_get(dev_priv);
@@ -169,12 +169,12 @@ void vlv_dsi_pll_enable(struct intel_encoder *encoder,
 			DSI_PLL_LOCK, 20)) {
 		vlv_cck_put(dev_priv);
-		DRM_ERROR("DSI PLL lock failed\n");
+		drm_err(&dev_priv->drm, "DSI PLL lock failed\n");
 		return;
 	}
 	vlv_cck_put(dev_priv);
-	DRM_DEBUG_KMS("DSI PLL locked\n");
+	drm_dbg_kms(&dev_priv->drm, "DSI PLL locked\n");
 }
 void vlv_dsi_pll_disable(struct intel_encoder *encoder)
@@ -182,7 +182,7 @@ void vlv_dsi_pll_disable(struct intel_encoder *encoder)
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	u32 tmp;
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&dev_priv->drm, "\n");
 	vlv_cck_get(dev_priv);
@@ -201,7 +201,7 @@ bool bxt_dsi_pll_is_enabled(struct drm_i915_private *dev_priv)
 	u32 mask;
 	mask = BXT_DSI_PLL_DO_ENABLE | BXT_DSI_PLL_LOCKED;
-	val = I915_READ(BXT_DSI_PLL_ENABLE);
+	val = intel_de_read(dev_priv, BXT_DSI_PLL_ENABLE);
 	enabled = (val & mask) == mask;
 	if (!enabled)
@@ -215,15 +215,17 @@ bool bxt_dsi_pll_is_enabled(struct drm_i915_private *dev_priv)
 	 * times, and since accessing DSI registers with invalid dividers
 	 * causes a system hang.
 	 */
-	val = I915_READ(BXT_DSI_PLL_CTL);
+	val = intel_de_read(dev_priv, BXT_DSI_PLL_CTL);
 	if (IS_GEMINILAKE(dev_priv)) {
 		if (!(val & BXT_DSIA_16X_MASK)) {
-			DRM_DEBUG_DRIVER("Invalid PLL divider (%08x)\n", val);
+			drm_dbg(&dev_priv->drm,
+				"Invalid PLL divider (%08x)\n", val);
 			enabled = false;
 		}
 	} else {
 		if (!(val & BXT_DSIA_16X_MASK) || !(val & BXT_DSIC_16X_MASK)) {
-			DRM_DEBUG_DRIVER("Invalid PLL divider (%08x)\n", val);
+			drm_dbg(&dev_priv->drm,
+				"Invalid PLL divider (%08x)\n", val);
 			enabled = false;
 		}
 	}
@@ -236,11 +238,11 @@ void bxt_dsi_pll_disable(struct intel_encoder *encoder)
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	u32 val;
-	DRM_DEBUG_KMS("\n");
+	drm_dbg_kms(&dev_priv->drm, "\n");
-	val = I915_READ(BXT_DSI_PLL_ENABLE);
+	val = intel_de_read(dev_priv, BXT_DSI_PLL_ENABLE);
 	val &= ~BXT_DSI_PLL_DO_ENABLE;
-	I915_WRITE(BXT_DSI_PLL_ENABLE, val);
+	intel_de_write(dev_priv, BXT_DSI_PLL_ENABLE, val);
 	/*
 	 * PLL lock should deassert within 200us.
@@ -248,7 +250,8 @@ void bxt_dsi_pll_disable(struct intel_encoder *encoder)
 	 */
 	if (intel_de_wait_for_clear(dev_priv, BXT_DSI_PLL_ENABLE,
 				    BXT_DSI_PLL_LOCKED, 1))
-		DRM_ERROR("Timeout waiting for PLL lock deassertion\n");
+		drm_err(&dev_priv->drm,
+			"Timeout waiting for PLL lock deassertion\n");
 }
u32 vlv_dsi_get_pclk(struct intel_encoder *encoder, u32 vlv_dsi_get_pclk(struct intel_encoder *encoder,
@ -263,7 +266,7 @@ u32 vlv_dsi_get_pclk(struct intel_encoder *encoder,
int refclk = IS_CHERRYVIEW(dev_priv) ? 100000 : 25000; int refclk = IS_CHERRYVIEW(dev_priv) ? 100000 : 25000;
int i; int i;
DRM_DEBUG_KMS("\n"); drm_dbg_kms(&dev_priv->drm, "\n");
vlv_cck_get(dev_priv); vlv_cck_get(dev_priv);
pll_ctl = vlv_cck_read(dev_priv, CCK_REG_DSI_PLL_CONTROL); pll_ctl = vlv_cck_read(dev_priv, CCK_REG_DSI_PLL_CONTROL);
@ -292,7 +295,7 @@ u32 vlv_dsi_get_pclk(struct intel_encoder *encoder,
p--; p--;
if (!p) { if (!p) {
DRM_ERROR("wrong P1 divisor\n"); drm_err(&dev_priv->drm, "wrong P1 divisor\n");
return 0; return 0;
} }
@ -302,7 +305,7 @@ u32 vlv_dsi_get_pclk(struct intel_encoder *encoder,
} }
if (i == ARRAY_SIZE(lfsr_converts)) { if (i == ARRAY_SIZE(lfsr_converts)) {
DRM_ERROR("wrong m_seed programmed\n"); drm_err(&dev_priv->drm, "wrong m_seed programmed\n");
return 0; return 0;
} }
@ -325,7 +328,7 @@ u32 bxt_dsi_get_pclk(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
int bpp = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format); int bpp = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format);
config->dsi_pll.ctrl = I915_READ(BXT_DSI_PLL_CTL); config->dsi_pll.ctrl = intel_de_read(dev_priv, BXT_DSI_PLL_CTL);
dsi_ratio = config->dsi_pll.ctrl & BXT_DSI_PLL_RATIO_MASK; dsi_ratio = config->dsi_pll.ctrl & BXT_DSI_PLL_RATIO_MASK;
@ -333,7 +336,7 @@ u32 bxt_dsi_get_pclk(struct intel_encoder *encoder,
pclk = DIV_ROUND_CLOSEST(dsi_clk * intel_dsi->lane_count, bpp); pclk = DIV_ROUND_CLOSEST(dsi_clk * intel_dsi->lane_count, bpp);
DRM_DEBUG_DRIVER("Calculated pclk=%u\n", pclk); drm_dbg(&dev_priv->drm, "Calculated pclk=%u\n", pclk);
return pclk; return pclk;
} }
@ -343,11 +346,10 @@ void vlv_dsi_reset_clocks(struct intel_encoder *encoder, enum port port)
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
temp = I915_READ(MIPI_CTRL(port)); temp = intel_de_read(dev_priv, MIPI_CTRL(port));
temp &= ~ESCAPE_CLOCK_DIVIDER_MASK; temp &= ~ESCAPE_CLOCK_DIVIDER_MASK;
I915_WRITE(MIPI_CTRL(port), temp | intel_de_write(dev_priv, MIPI_CTRL(port),
intel_dsi->escape_clk_div << temp | intel_dsi->escape_clk_div << ESCAPE_CLOCK_DIVIDER_SHIFT);
ESCAPE_CLOCK_DIVIDER_SHIFT);
} }
static void glk_dsi_program_esc_clock(struct drm_device *dev, static void glk_dsi_program_esc_clock(struct drm_device *dev,
@ -393,8 +395,10 @@ static void glk_dsi_program_esc_clock(struct drm_device *dev,
else else
txesc2_div = 10; txesc2_div = 10;
I915_WRITE(MIPIO_TXESC_CLK_DIV1, (1 << (txesc1_div - 1)) & GLK_TX_ESC_CLK_DIV1_MASK); intel_de_write(dev_priv, MIPIO_TXESC_CLK_DIV1,
I915_WRITE(MIPIO_TXESC_CLK_DIV2, (1 << (txesc2_div - 1)) & GLK_TX_ESC_CLK_DIV2_MASK); (1 << (txesc1_div - 1)) & GLK_TX_ESC_CLK_DIV1_MASK);
intel_de_write(dev_priv, MIPIO_TXESC_CLK_DIV2,
(1 << (txesc2_div - 1)) & GLK_TX_ESC_CLK_DIV2_MASK);
} }
/* Program BXT Mipi clocks and dividers */ /* Program BXT Mipi clocks and dividers */
@ -412,7 +416,7 @@ static void bxt_dsi_program_clocks(struct drm_device *dev, enum port port,
u32 mipi_8by3_divider; u32 mipi_8by3_divider;
/* Clear old configurations */ /* Clear old configurations */
tmp = I915_READ(BXT_MIPI_CLOCK_CTL); tmp = intel_de_read(dev_priv, BXT_MIPI_CLOCK_CTL);
tmp &= ~(BXT_MIPI_TX_ESCLK_FIXDIV_MASK(port)); tmp &= ~(BXT_MIPI_TX_ESCLK_FIXDIV_MASK(port));
tmp &= ~(BXT_MIPI_RX_ESCLK_UPPER_FIXDIV_MASK(port)); tmp &= ~(BXT_MIPI_RX_ESCLK_UPPER_FIXDIV_MASK(port));
tmp &= ~(BXT_MIPI_8X_BY3_DIVIDER_MASK(port)); tmp &= ~(BXT_MIPI_8X_BY3_DIVIDER_MASK(port));
@ -448,7 +452,7 @@ static void bxt_dsi_program_clocks(struct drm_device *dev, enum port port,
tmp |= BXT_MIPI_RX_ESCLK_LOWER_DIVIDER(port, rx_div_lower); tmp |= BXT_MIPI_RX_ESCLK_LOWER_DIVIDER(port, rx_div_lower);
tmp |= BXT_MIPI_RX_ESCLK_UPPER_DIVIDER(port, rx_div_upper); tmp |= BXT_MIPI_RX_ESCLK_UPPER_DIVIDER(port, rx_div_upper);
I915_WRITE(BXT_MIPI_CLOCK_CTL, tmp); intel_de_write(dev_priv, BXT_MIPI_CLOCK_CTL, tmp);
} }
int bxt_dsi_pll_compute(struct intel_encoder *encoder, int bxt_dsi_pll_compute(struct intel_encoder *encoder,
@ -478,10 +482,11 @@ int bxt_dsi_pll_compute(struct intel_encoder *encoder,
} }
if (dsi_ratio < dsi_ratio_min || dsi_ratio > dsi_ratio_max) { if (dsi_ratio < dsi_ratio_min || dsi_ratio > dsi_ratio_max) {
DRM_ERROR("Cant get a suitable ratio from DSI PLL ratios\n"); drm_err(&dev_priv->drm,
"Cant get a suitable ratio from DSI PLL ratios\n");
return -ECHRNG; return -ECHRNG;
} else } else
DRM_DEBUG_KMS("DSI PLL calculation is Done!!\n"); drm_dbg_kms(&dev_priv->drm, "DSI PLL calculation is Done!!\n");
/* /*
* Program DSI ratio and Select MIPIC and MIPIA PLL output as 8x * Program DSI ratio and Select MIPIC and MIPIA PLL output as 8x
@ -507,11 +512,11 @@ void bxt_dsi_pll_enable(struct intel_encoder *encoder,
enum port port; enum port port;
u32 val; u32 val;
DRM_DEBUG_KMS("\n"); drm_dbg_kms(&dev_priv->drm, "\n");
/* Configure PLL vales */ /* Configure PLL vales */
I915_WRITE(BXT_DSI_PLL_CTL, config->dsi_pll.ctrl); intel_de_write(dev_priv, BXT_DSI_PLL_CTL, config->dsi_pll.ctrl);
POSTING_READ(BXT_DSI_PLL_CTL); intel_de_posting_read(dev_priv, BXT_DSI_PLL_CTL);
/* Program TX, RX, Dphy clocks */ /* Program TX, RX, Dphy clocks */
if (IS_BROXTON(dev_priv)) { if (IS_BROXTON(dev_priv)) {
@ -522,18 +527,19 @@ void bxt_dsi_pll_enable(struct intel_encoder *encoder,
} }
/* Enable DSI PLL */ /* Enable DSI PLL */
val = I915_READ(BXT_DSI_PLL_ENABLE); val = intel_de_read(dev_priv, BXT_DSI_PLL_ENABLE);
val |= BXT_DSI_PLL_DO_ENABLE; val |= BXT_DSI_PLL_DO_ENABLE;
I915_WRITE(BXT_DSI_PLL_ENABLE, val); intel_de_write(dev_priv, BXT_DSI_PLL_ENABLE, val);
/* Timeout and fail if PLL not locked */ /* Timeout and fail if PLL not locked */
if (intel_de_wait_for_set(dev_priv, BXT_DSI_PLL_ENABLE, if (intel_de_wait_for_set(dev_priv, BXT_DSI_PLL_ENABLE,
BXT_DSI_PLL_LOCKED, 1)) { BXT_DSI_PLL_LOCKED, 1)) {
DRM_ERROR("Timed out waiting for DSI PLL to lock\n"); drm_err(&dev_priv->drm,
"Timed out waiting for DSI PLL to lock\n");
return; return;
} }
DRM_DEBUG_KMS("DSI PLL locked\n"); drm_dbg_kms(&dev_priv->drm, "DSI PLL locked\n");
} }
void bxt_dsi_reset_clocks(struct intel_encoder *encoder, enum port port) void bxt_dsi_reset_clocks(struct intel_encoder *encoder, enum port port)
@ -544,20 +550,20 @@ void bxt_dsi_reset_clocks(struct intel_encoder *encoder, enum port port)
/* Clear old configurations */ /* Clear old configurations */
if (IS_BROXTON(dev_priv)) { if (IS_BROXTON(dev_priv)) {
tmp = I915_READ(BXT_MIPI_CLOCK_CTL); tmp = intel_de_read(dev_priv, BXT_MIPI_CLOCK_CTL);
tmp &= ~(BXT_MIPI_TX_ESCLK_FIXDIV_MASK(port)); tmp &= ~(BXT_MIPI_TX_ESCLK_FIXDIV_MASK(port));
tmp &= ~(BXT_MIPI_RX_ESCLK_UPPER_FIXDIV_MASK(port)); tmp &= ~(BXT_MIPI_RX_ESCLK_UPPER_FIXDIV_MASK(port));
tmp &= ~(BXT_MIPI_8X_BY3_DIVIDER_MASK(port)); tmp &= ~(BXT_MIPI_8X_BY3_DIVIDER_MASK(port));
tmp &= ~(BXT_MIPI_RX_ESCLK_LOWER_FIXDIV_MASK(port)); tmp &= ~(BXT_MIPI_RX_ESCLK_LOWER_FIXDIV_MASK(port));
I915_WRITE(BXT_MIPI_CLOCK_CTL, tmp); intel_de_write(dev_priv, BXT_MIPI_CLOCK_CTL, tmp);
} else { } else {
tmp = I915_READ(MIPIO_TXESC_CLK_DIV1); tmp = intel_de_read(dev_priv, MIPIO_TXESC_CLK_DIV1);
tmp &= ~GLK_TX_ESC_CLK_DIV1_MASK; tmp &= ~GLK_TX_ESC_CLK_DIV1_MASK;
I915_WRITE(MIPIO_TXESC_CLK_DIV1, tmp); intel_de_write(dev_priv, MIPIO_TXESC_CLK_DIV1, tmp);
tmp = I915_READ(MIPIO_TXESC_CLK_DIV2); tmp = intel_de_read(dev_priv, MIPIO_TXESC_CLK_DIV2);
tmp &= ~GLK_TX_ESC_CLK_DIV2_MASK; tmp &= ~GLK_TX_ESC_CLK_DIV2_MASK;
I915_WRITE(MIPIO_TXESC_CLK_DIV2, tmp); intel_de_write(dev_priv, MIPIO_TXESC_CLK_DIV2, tmp);
} }
I915_WRITE(MIPI_EOT_DISABLE(port), CLOCKSTOP); intel_de_write(dev_priv, MIPI_EOT_DISABLE(port), CLOCKSTOP);
} }


@@ -72,9 +72,7 @@
 #include "gt/gen6_ppgtt.h"
 #include "gt/intel_context.h"
 #include "gt/intel_engine_heartbeat.h"
-#include "gt/intel_engine_pm.h"
 #include "gt/intel_engine_user.h"
-#include "gt/intel_lrc_reg.h"
 #include "gt/intel_ring.h"
 #include "i915_gem_context.h"
@@ -272,7 +270,8 @@ static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
 	if (!e)
 		return ERR_PTR(-ENOMEM);
-	init_rcu_head(&e->rcu);
+	e->ctx = ctx;
 	for_each_engine(engine, gt, id) {
 		struct intel_context *ce;
@@ -421,7 +420,7 @@ static struct intel_engine_cs *__active_engine(struct i915_request *rq)
 	}
 	engine = NULL;
-	if (i915_request_is_active(rq) && !rq->fence.error)
+	if (i915_request_is_active(rq) && rq->fence.error != -EIO)
 		engine = rq->engine;
 	spin_unlock_irq(&locked->active.lock);
@@ -452,7 +451,7 @@ static struct intel_engine_cs *active_engine(struct intel_context *ce)
 	return engine;
 }
-static void kill_context(struct i915_gem_context *ctx)
+static void kill_engines(struct i915_gem_engines *engines)
 {
 	struct i915_gem_engines_iter it;
 	struct intel_context *ce;
@@ -464,7 +463,7 @@ static void kill_context(struct i915_gem_context *ctx)
 	 * However, we only care about pending requests, so only include
 	 * engines on which there are incomplete requests.
 	 */
-	for_each_gem_engine(ce, __context_engines_static(ctx), it) {
+	for_each_gem_engine(ce, engines, it) {
 		struct intel_engine_cs *engine;
 		if (intel_context_set_banned(ce))
@@ -486,10 +485,39 @@ static void kill_context(struct i915_gem_context *ctx)
 		 * the context from the GPU, we have to resort to a full
 		 * reset. We hope the collateral damage is worth it.
 		 */
-		__reset_context(ctx, engine);
+		__reset_context(engines->ctx, engine);
 	}
 }
+static void kill_stale_engines(struct i915_gem_context *ctx)
+{
+	struct i915_gem_engines *pos, *next;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctx->stale.lock, flags);
+	list_for_each_entry_safe(pos, next, &ctx->stale.engines, link) {
+		if (!i915_sw_fence_await(&pos->fence))
+			continue;
+
+		spin_unlock_irqrestore(&ctx->stale.lock, flags);
+
+		kill_engines(pos);
+
+		spin_lock_irqsave(&ctx->stale.lock, flags);
+		list_safe_reset_next(pos, next, link);
+		list_del_init(&pos->link); /* decouple from FENCE_COMPLETE */
+
+		i915_sw_fence_complete(&pos->fence);
+	}
+	spin_unlock_irqrestore(&ctx->stale.lock, flags);
+}
+
+static void kill_context(struct i915_gem_context *ctx)
+{
+	kill_stale_engines(ctx);
+	kill_engines(__context_engines_static(ctx));
+}
 static void set_closed_name(struct i915_gem_context *ctx)
 {
 	char *s;
@@ -565,6 +593,22 @@ static int __context_set_persistence(struct i915_gem_context *ctx, bool state)
 		if (!(ctx->i915->caps.scheduler & I915_SCHEDULER_CAP_PREEMPTION))
 			return -ENODEV;
+		/*
+		 * If the cancel fails, we then need to reset, cleanly!
+		 *
+		 * If the per-engine reset fails, all hope is lost! We resort
+		 * to a full GPU reset in that unlikely case, but realistically
+		 * if the engine could not reset, the full reset does not fare
+		 * much better. The damage has been done.
+		 *
+		 * However, if we cannot reset an engine by itself, we cannot
+		 * cleanup a hanging persistent context without causing
+		 * colateral damage, and we should not pretend we can by
+		 * exposing the interface.
+		 */
+		if (!intel_has_reset_engine(&ctx->i915->gt))
+			return -ENODEV;
 		i915_gem_context_clear_persistence(ctx);
 	}
@@ -588,6 +632,9 @@ __create_context(struct drm_i915_private *i915)
 	ctx->sched.priority = I915_USER_PRIORITY(I915_PRIORITY_NORMAL);
 	mutex_init(&ctx->mutex);
+	spin_lock_init(&ctx->stale.lock);
+	INIT_LIST_HEAD(&ctx->stale.engines);
 	mutex_init(&ctx->engines_mutex);
 	e = default_engines(ctx);
 	if (IS_ERR(e)) {
@@ -708,8 +755,8 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
 	ppgtt = i915_ppgtt_create(&i915->gt);
 	if (IS_ERR(ppgtt)) {
-		DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
-				 PTR_ERR(ppgtt));
+		drm_dbg(&i915->drm, "PPGTT setup failed (%ld)\n",
+			PTR_ERR(ppgtt));
 		context_close(ctx);
 		return ERR_CAST(ppgtt);
 	}
@@ -751,9 +798,9 @@ static void init_contexts(struct i915_gem_contexts *gc)
 void i915_gem_init__contexts(struct drm_i915_private *i915)
 {
 	init_contexts(&i915->gem.contexts);
-	DRM_DEBUG_DRIVER("%s context support initialized\n",
-			 DRIVER_CAPS(i915)->has_logical_contexts ?
-			 "logical" : "fake");
+	drm_dbg(&i915->drm, "%s context support initialized\n",
+		DRIVER_CAPS(i915)->has_logical_contexts ?
+		"logical" : "fake");
 }
 void i915_gem_driver_release__contexts(struct drm_i915_private *i915)
@@ -761,12 +808,6 @@ void i915_gem_driver_release__contexts(struct drm_i915_private *i915)
 	flush_work(&i915->gem.contexts.free_work);
 }
-static int vm_idr_cleanup(int id, void *p, void *data)
-{
-	i915_vm_put(p);
-	return 0;
-}
 static int gem_context_register(struct i915_gem_context *ctx,
				struct drm_i915_file_private *fpriv,
				u32 *id)
@@ -804,8 +845,8 @@ int i915_gem_context_open(struct drm_i915_private *i915,
 	xa_init_flags(&file_priv->context_xa, XA_FLAGS_ALLOC);
-	mutex_init(&file_priv->vm_idr_lock);
-	idr_init_base(&file_priv->vm_idr, 1);
+	/* 0 reserved for invalid/unassigned ppgtt */
+	xa_init_flags(&file_priv->vm_xa, XA_FLAGS_ALLOC1);
 	ctx = i915_gem_create_context(i915, 0);
 	if (IS_ERR(ctx)) {
@@ -823,9 +864,8 @@ int i915_gem_context_open(struct drm_i915_private *i915,
 err_ctx:
 	context_close(ctx);
 err:
-	idr_destroy(&file_priv->vm_idr);
+	xa_destroy(&file_priv->vm_xa);
 	xa_destroy(&file_priv->context_xa);
-	mutex_destroy(&file_priv->vm_idr_lock);
 	return err;
 }
@@ -833,6 +873,7 @@ void i915_gem_context_close(struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct drm_i915_private *i915 = file_priv->dev_priv;
+	struct i915_address_space *vm;
 	struct i915_gem_context *ctx;
 	unsigned long idx;
@@ -840,9 +881,9 @@ void i915_gem_context_close(struct drm_file *file)
 		context_close(ctx);
 	xa_destroy(&file_priv->context_xa);
-	idr_for_each(&file_priv->vm_idr, vm_idr_cleanup, NULL);
-	idr_destroy(&file_priv->vm_idr);
-	mutex_destroy(&file_priv->vm_idr_lock);
+	xa_for_each(&file_priv->vm_xa, idx, vm)
+		i915_vm_put(vm);
+	xa_destroy(&file_priv->vm_xa);
 	contexts_flush_free(&i915->gem.contexts);
 }
@@ -854,6 +895,7 @@ int i915_gem_vm_create_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_gem_vm_control *args = data;
 	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct i915_ppgtt *ppgtt;
+	u32 id;
 	int err;
 	if (!HAS_FULL_PPGTT(i915))
@@ -876,23 +918,15 @@ int i915_gem_vm_create_ioctl(struct drm_device *dev, void *data,
			goto err_put;
 	}
-	err = mutex_lock_interruptible(&file_priv->vm_idr_lock);
+	err = xa_alloc(&file_priv->vm_xa, &id, &ppgtt->vm,
+		       xa_limit_32b, GFP_KERNEL);
 	if (err)
		goto err_put;
-	err = idr_alloc(&file_priv->vm_idr, &ppgtt->vm, 0, 0, GFP_KERNEL);
-	if (err < 0)
-		goto err_unlock;
-	GEM_BUG_ON(err == 0); /* reserved for invalid/unassigned ppgtt */
-	mutex_unlock(&file_priv->vm_idr_lock);
-	args->vm_id = err;
+	GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */
+	args->vm_id = id;
 	return 0;
-err_unlock:
-	mutex_unlock(&file_priv->vm_idr_lock);
 err_put:
 	i915_vm_put(&ppgtt->vm);
 	return err;
@@ -904,8 +938,6 @@ int i915_gem_vm_destroy_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct drm_i915_gem_vm_control *args = data;
 	struct i915_address_space *vm;
-	int err;
-	u32 id;
 	if (args->flags)
		return -EINVAL;
@@ -913,17 +945,7 @@ int i915_gem_vm_destroy_ioctl(struct drm_device *dev, void *data,
 	if (args->extensions)
		return -EINVAL;
-	id = args->vm_id;
-	if (!id)
-		return -ENOENT;
-	err = mutex_lock_interruptible(&file_priv->vm_idr_lock);
-	if (err)
-		return err;
-	vm = idr_remove(&file_priv->vm_idr, id);
-	mutex_unlock(&file_priv->vm_idr_lock);
+	vm = xa_erase(&file_priv->vm_xa, args->vm_id);
 	if (!vm)
		return -ENOENT;
@@ -1021,7 +1043,8 @@ static int get_ppgtt(struct drm_i915_file_private *file_priv,
		     struct drm_i915_gem_context_param *args)
 {
 	struct i915_address_space *vm;
-	int ret;
+	int err;
+	u32 id;
 	if (!rcu_access_pointer(ctx->vm))
		return -ENODEV;
@@ -1029,27 +1052,22 @@ static int get_ppgtt(struct drm_i915_file_private *file_priv,
 	rcu_read_lock();
 	vm = context_get_vm_rcu(ctx);
 	rcu_read_unlock();
-	if (!vm)
-		return -ENODEV;
-	ret = mutex_lock_interruptible(&file_priv->vm_idr_lock);
-	if (ret)
+	err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL);
+	if (err)
		goto err_put;
-	ret = idr_alloc(&file_priv->vm_idr, vm, 0, 0, GFP_KERNEL);
-	GEM_BUG_ON(!ret);
-	if (ret < 0)
-		goto err_unlock;
 	i915_vm_open(vm);
+	GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */
+	args->value = id;
 	args->size = 0;
-	args->value = ret;
-	ret = 0;
-err_unlock:
-	mutex_unlock(&file_priv->vm_idr_lock);
 err_put:
 	i915_vm_put(vm);
-	return ret;
+	return err;
 }
 static void set_ppgtt_barrier(void *data)
@@ -1151,7 +1169,7 @@ static int set_ppgtt(struct drm_i915_file_private *file_priv,
		return -ENOENT;
 	rcu_read_lock();
-	vm = idr_find(&file_priv->vm_idr, args->value);
+	vm = xa_load(&file_priv->vm_xa, args->value);
 	if (vm && !kref_get_unless_zero(&vm->ref))
		vm = NULL;
 	rcu_read_unlock();
@@ -1197,89 +1215,6 @@ out:
 	return err;
 }
-static int gen8_emit_rpcs_config(struct i915_request *rq,
-				 struct intel_context *ce,
-				 struct intel_sseu sseu)
-{
-	u64 offset;
-	u32 *cs;
-
-	cs = intel_ring_begin(rq, 4);
-	if (IS_ERR(cs))
-		return PTR_ERR(cs);
-
-	offset = i915_ggtt_offset(ce->state) +
-		 LRC_STATE_PN * PAGE_SIZE +
-		 CTX_R_PWR_CLK_STATE * 4;
-
-	*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
-	*cs++ = lower_32_bits(offset);
-	*cs++ = upper_32_bits(offset);
-	*cs++ = intel_sseu_make_rpcs(rq->i915, &sseu);
-
-	intel_ring_advance(rq, cs);
-
-	return 0;
-}
-
-static int
-gen8_modify_rpcs(struct intel_context *ce, struct intel_sseu sseu)
-{
-	struct i915_request *rq;
-	int ret;
-
-	lockdep_assert_held(&ce->pin_mutex);
-
-	/*
-	 * If the context is not idle, we have to submit an ordered request to
-	 * modify its context image via the kernel context (writing to our own
-	 * image, or into the registers directory, does not stick). Pristine
-	 * and idle contexts will be configured on pinning.
-	 */
-	if (!intel_context_pin_if_active(ce))
-		return 0;
-
-	rq = intel_engine_create_kernel_request(ce->engine);
-	if (IS_ERR(rq)) {
-		ret = PTR_ERR(rq);
-		goto out_unpin;
-	}
-
-	/* Serialise with the remote context */
-	ret = intel_context_prepare_remote_request(ce, rq);
-	if (ret == 0)
-		ret = gen8_emit_rpcs_config(rq, ce, sseu);
-
-	i915_request_add(rq);
-out_unpin:
-	intel_context_unpin(ce);
-	return ret;
-}
-
-static int
-intel_context_reconfigure_sseu(struct intel_context *ce, struct intel_sseu sseu)
-{
-	int ret;
-
-	GEM_BUG_ON(INTEL_GEN(ce->engine->i915) < 8);
-
-	ret = intel_context_lock_pinned(ce);
-	if (ret)
-		return ret;
-
-	/* Nothing to do if unmodified. */
-	if (!memcmp(&ce->sseu, &sseu, sizeof(sseu)))
-		goto unlock;
-
-	ret = gen8_modify_rpcs(ce, sseu);
-	if (!ret)
-		ce->sseu = sseu;
-
-unlock:
-	intel_context_unlock_pinned(ce);
-	return ret;
-}
 static int
 user_to_context_sseu(struct drm_i915_private *i915,
		      const struct drm_i915_gem_context_param_sseu *user,
@@ -1444,6 +1379,7 @@ set_engines__load_balance(struct i915_user_extension __user *base, void *data)
 	struct i915_context_engines_load_balance __user *ext =
		container_of_user(base, typeof(*ext), base);
 	const struct set_engines *set = data;
+	struct drm_i915_private *i915 = set->ctx->i915;
 	struct intel_engine_cs *stack[16];
 	struct intel_engine_cs **siblings;
 	struct intel_context *ce;
@@ -1451,24 +1387,25 @@ set_engines__load_balance(struct i915_user_extension __user *base, void *data)
 	unsigned int n;
 	int err;
-	if (!HAS_EXECLISTS(set->ctx->i915))
+	if (!HAS_EXECLISTS(i915))
		return -ENODEV;
-	if (USES_GUC_SUBMISSION(set->ctx->i915))
+	if (intel_uc_uses_guc_submission(&i915->gt.uc))
		return -ENODEV; /* not implement yet */
 	if (get_user(idx, &ext->engine_index))
		return -EFAULT;
 	if (idx >= set->engines->num_engines) {
-		DRM_DEBUG("Invalid placement value, %d >= %d\n",
-			  idx, set->engines->num_engines);
+		drm_dbg(&i915->drm, "Invalid placement value, %d >= %d\n",
+			idx, set->engines->num_engines);
		return -EINVAL;
 	}
 	idx = array_index_nospec(idx, set->engines->num_engines);
 	if (set->engines->engines[idx]) {
-		DRM_DEBUG("Invalid placement[%d], already occupied\n", idx);
+		drm_dbg(&i915->drm,
+			"Invalid placement[%d], already occupied\n", idx);
		return -EEXIST;
 	}
@@ -1500,12 +1437,13 @@ set_engines__load_balance(struct i915_user_extension __user *base, void *data)
			goto out_siblings;
		}
-		siblings[n] = intel_engine_lookup_user(set->ctx->i915,
+		siblings[n] = intel_engine_lookup_user(i915,
						       ci.engine_class,
						       ci.engine_instance);
		if (!siblings[n]) {
-			DRM_DEBUG("Invalid sibling[%d]: { class:%d, inst:%d }\n",
-				  n, ci.engine_class, ci.engine_instance);
+			drm_dbg(&i915->drm,
+				"Invalid sibling[%d]: { class:%d, inst:%d }\n",
+				n, ci.engine_class, ci.engine_instance);
			err = -EINVAL;
			goto out_siblings;
		}
@@ -1538,6 +1476,7 @@ set_engines__bond(struct i915_user_extension __user *base, void *data)
 	struct i915_context_engines_bond __user *ext =
		container_of_user(base, typeof(*ext), base);
 	const struct set_engines *set = data;
+	struct drm_i915_private *i915 = set->ctx->i915;
 	struct i915_engine_class_instance ci;
 	struct intel_engine_cs *virtual;
 	struct intel_engine_cs *master;
@@ -1548,14 +1487,15 @@ set_engines__bond(struct i915_user_extension __user *base, void *data)
		return -EFAULT;
 	if (idx >= set->engines->num_engines) {
-		DRM_DEBUG("Invalid index for virtual engine: %d >= %d\n",
-			  idx, set->engines->num_engines);
+		drm_dbg(&i915->drm,
+			"Invalid index for virtual engine: %d >= %d\n",
+			idx, set->engines->num_engines);
		return -EINVAL;
 	}
 	idx = array_index_nospec(idx, set->engines->num_engines);
 	if (!set->engines->engines[idx]) {
-		DRM_DEBUG("Invalid engine at %d\n", idx);
+		drm_dbg(&i915->drm, "Invalid engine at %d\n", idx);
		return -EINVAL;
 	}
 	virtual = set->engines->engines[idx]->engine;
@@ -1573,11 +1513,12 @@ set_engines__bond(struct i915_user_extension __user *base, void *data)
 	if (copy_from_user(&ci, &ext->master, sizeof(ci)))
		return -EFAULT;
-	master = intel_engine_lookup_user(set->ctx->i915,
+	master = intel_engine_lookup_user(i915,
					  ci.engine_class, ci.engine_instance);
 	if (!master) {
-		DRM_DEBUG("Unrecognised master engine: { class:%u, instance:%u }\n",
-			  ci.engine_class, ci.engine_instance);
+		drm_dbg(&i915->drm,
+			"Unrecognised master engine: { class:%u, instance:%u }\n",
+			ci.engine_class, ci.engine_instance);
		return -EINVAL;
 	}
@@ -1590,12 +1531,13 @@ set_engines__bond(struct i915_user_extension __user *base, void *data)
 	if (copy_from_user(&ci, &ext->engines[n], sizeof(ci)))
		return -EFAULT;
-	bond = intel_engine_lookup_user(set->ctx->i915,
+	bond = intel_engine_lookup_user(i915,
					ci.engine_class,
					ci.engine_instance);
 	if (!bond) {
-		DRM_DEBUG("Unrecognised engine[%d] for bonding: { class:%d, instance: %d }\n",
-			  n, ci.engine_class, ci.engine_instance);
+		drm_dbg(&i915->drm,
+			"Unrecognised engine[%d] for bonding: { class:%d, instance: %d }\n",
+			n, ci.engine_class, ci.engine_instance);
		return -EINVAL;
 	}
@@ -1620,10 +1562,82 @@ static const i915_user_extension_fn set_engines__extensions[] = {
	[I915_CONTEXT_ENGINES_EXT_BOND] = set_engines__bond,
 };
+static int engines_notify(struct i915_sw_fence *fence,
+			  enum i915_sw_fence_notify state)
+{
+	struct i915_gem_engines *engines =
+		container_of(fence, typeof(*engines), fence);
+
+	switch (state) {
+	case FENCE_COMPLETE:
+		if (!list_empty(&engines->link)) {
+			struct i915_gem_context *ctx = engines->ctx;
+			unsigned long flags;
+
+			spin_lock_irqsave(&ctx->stale.lock, flags);
+			list_del(&engines->link);
+			spin_unlock_irqrestore(&ctx->stale.lock, flags);
+		}
+		break;
+
+	case FENCE_FREE:
+		init_rcu_head(&engines->rcu);
+		call_rcu(&engines->rcu, free_engines_rcu);
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
+
+static void engines_idle_release(struct i915_gem_engines *engines)
+{
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	unsigned long flags;
+
+	GEM_BUG_ON(!engines);
+	i915_sw_fence_init(&engines->fence, engines_notify);
+	INIT_LIST_HEAD(&engines->link);
+
+	spin_lock_irqsave(&engines->ctx->stale.lock, flags);
+	if (!i915_gem_context_is_closed(engines->ctx))
+		list_add(&engines->link, &engines->ctx->stale.engines);
+	spin_unlock_irqrestore(&engines->ctx->stale.lock, flags);
+	if (list_empty(&engines->link)) /* raced, already closed */
+		goto kill;
+
+	for_each_gem_engine(ce, engines, it) {
+		struct dma_fence *fence;
+		int err;
+
+		if (!ce->timeline)
+			continue;
+
+		fence = i915_active_fence_get(&ce->timeline->last_request);
+		if (!fence)
+			continue;
+
+		err = i915_sw_fence_await_dma_fence(&engines->fence,
+						    fence, 0,
+						    GFP_KERNEL);
+		dma_fence_put(fence);
+		if (err < 0)
+			goto kill;
+	}
+	goto out;
+
+kill:
+	kill_engines(engines);
+out:
+	i915_sw_fence_commit(&engines->fence);
+}
 static int
 set_engines(struct i915_gem_context *ctx,
	     const struct drm_i915_gem_context_param *args)
 {
+	struct drm_i915_private *i915 = ctx->i915;
 	struct i915_context_param_engines __user *user =
		u64_to_user_ptr(args->value);
 	struct set_engines set = { .ctx = ctx };
@@ -1645,8 +1659,8 @@ set_engines(struct i915_gem_context *ctx,
 	BUILD_BUG_ON(!IS_ALIGNED(sizeof(*user), sizeof(*user->engines)));
 	if (args->size < sizeof(*user) ||
	    !IS_ALIGNED(args->size, sizeof(*user->engines))) {
-		DRM_DEBUG("Invalid size for engine array: %d\n",
-			  args->size);
+		drm_dbg(&i915->drm, "Invalid size for engine array: %d\n",
+			args->size);
		return -EINVAL;
 	}
@@ -1661,7 +1675,8 @@ set_engines(struct i915_gem_context *ctx,
 	if (!set.engines)
		return -ENOMEM;
-	init_rcu_head(&set.engines->rcu);
+	set.engines->ctx = ctx;
 	for (n = 0; n < num_engines; n++) {
		struct i915_engine_class_instance ci;
		struct intel_engine_cs *engine;
@@ -1682,8 +1697,9 @@ set_engines(struct i915_gem_context *ctx,
						  ci.engine_class,
						  ci.engine_instance);
		if (!engine) {
-			DRM_DEBUG("Invalid engine[%d]: { class:%d, instance:%d }\n",
-				  n, ci.engine_class, ci.engine_instance);
+			drm_dbg(&i915->drm,
+				"Invalid engine[%d]: { class:%d, instance:%d }\n",
+				n, ci.engine_class, ci.engine_instance);
			__free_engines(set.engines, n);
			return -ENOENT;
		}
@@ -1720,7 +1736,8 @@ replace:
 	set.engines = rcu_replace_pointer(ctx->engines, set.engines, 1);
 	mutex_unlock(&ctx->engines_mutex);
-	call_rcu(&set.engines->rcu, free_engines_rcu);
+	/* Keep track of old engine sets for kill_context() */
+	engines_idle_release(set.engines);
 	return 0;
 }
@@ -1735,7 +1752,6 @@ __copy_engines(struct i915_gem_engines *e)
 	if (!copy)
		return ERR_PTR(-ENOMEM);
-	init_rcu_head(&copy->rcu);
 	for (n = 0; n < e->num_engines; n++) {
		if (e->engines[n])
			copy->engines[n] = intel_context_get(e->engines[n]);
@@ -1979,7 +1995,8 @@ static int clone_engines(struct i915_gem_context *dst,
 	if (!clone)
		goto err_unlock;
-	init_rcu_head(&clone->rcu);
+	clone->ctx = dst;
 	for (n = 0; n < e->num_engines; n++) {
		struct intel_engine_cs *engine;
@@ -2197,8 +2214,9 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 	ext_data.fpriv = file->driver_priv;
 	if (client_is_banned(ext_data.fpriv)) {
-		DRM_DEBUG("client %s[%d] banned from creating ctx\n",
-			  current->comm, task_pid_nr(current));
+		drm_dbg(&i915->drm,
+			"client %s[%d] banned from creating ctx\n",
+			current->comm, task_pid_nr(current));
		return -EIO;
 	}
@@ -2220,7 +2238,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
		goto err_ctx;
 	args->ctx_id = id;
-	DRM_DEBUG("HW context %d created\n", args->ctx_id);
+	drm_dbg(&i915->drm, "HW context %d created\n", args->ctx_id);
 	return 0;


@@ -20,6 +20,7 @@
 #include "gt/intel_context_types.h"

 #include "i915_scheduler.h"
+#include "i915_sw_fence.h"

 struct pid;

@@ -30,7 +31,12 @@ struct intel_timeline;
 struct intel_ring;

 struct i915_gem_engines {
-	struct rcu_head rcu;
+	union {
+		struct list_head link;
+		struct rcu_head rcu;
+	};
+	struct i915_sw_fence fence;
+	struct i915_gem_context *ctx;
 	unsigned int num_engines;
 	struct intel_context *engines[];
 };
@@ -173,6 +179,11 @@ struct i915_gem_context {
 	 * context in messages.
 	 */
 	char name[TASK_COMM_LEN + 8];
+
+	struct {
+		spinlock_t lock;
+		struct list_head engines;
+	} stale;
 };

 #endif /* __I915_GEM_CONTEXT_TYPES_H__ */


@@ -48,7 +48,9 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
 		src = sg_next(src);
 	}

-	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
+	if (!dma_map_sg_attrs(attachment->dev,
+			      st->sgl, st->nents, dir,
+			      DMA_ATTR_SKIP_CPU_SYNC)) {
 		ret = -ENOMEM;
 		goto err_free_sg;
 	}
@@ -71,7 +73,9 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf);

-	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+	dma_unmap_sg_attrs(attachment->dev,
+			   sg->sgl, sg->nents, dir,
+			   DMA_ATTR_SKIP_CPU_SYNC);

 	sg_free_table(sg);
 	kfree(sg);


@@ -420,6 +420,7 @@ eb_validate_vma(struct i915_execbuffer *eb,
 	       struct drm_i915_gem_exec_object2 *entry,
 	       struct i915_vma *vma)
 {
+	struct drm_i915_private *i915 = eb->i915;
 	if (unlikely(entry->flags & eb->invalid_flags))
 		return -EINVAL;

@@ -443,8 +444,9 @@ eb_validate_vma(struct i915_execbuffer *eb,
 	}

 	if (unlikely(vma->exec_flags)) {
-		DRM_DEBUG("Object [handle %d, index %d] appears more than once in object list\n",
-			  entry->handle, (int)(entry - eb->exec));
+		drm_dbg(&i915->drm,
+			"Object [handle %d, index %d] appears more than once in object list\n",
+			entry->handle, (int)(entry - eb->exec));
 		return -EINVAL;
 	}

@@ -1330,6 +1332,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
 		  struct i915_vma *vma,
 		  const struct drm_i915_gem_relocation_entry *reloc)
 {
+	struct drm_i915_private *i915 = eb->i915;
 	struct i915_vma *target;
 	int err;

@@ -1340,7 +1343,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
 	/* Validate that the target is in a valid r/w GPU domain */
 	if (unlikely(reloc->write_domain & (reloc->write_domain - 1))) {
-		DRM_DEBUG("reloc with multiple write domains: "
+		drm_dbg(&i915->drm, "reloc with multiple write domains: "
 			  "target %d offset %d "
 			  "read %08x write %08x",
 			  reloc->target_handle,
@@ -1351,7 +1354,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
 	}
 	if (unlikely((reloc->write_domain | reloc->read_domains)
 		     & ~I915_GEM_GPU_DOMAINS)) {
-		DRM_DEBUG("reloc with read/write non-GPU domains: "
+		drm_dbg(&i915->drm, "reloc with read/write non-GPU domains: "
 			  "target %d offset %d "
 			  "read %08x write %08x",
 			  reloc->target_handle,
@@ -1391,7 +1394,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
 	/* Check that the relocation address is valid... */
 	if (unlikely(reloc->offset >
 		     vma->size - (eb->reloc_cache.use_64bit_reloc ? 8 : 4))) {
-		DRM_DEBUG("Relocation beyond object bounds: "
+		drm_dbg(&i915->drm, "Relocation beyond object bounds: "
 			  "target %d offset %d size %d.\n",
 			  reloc->target_handle,
 			  (int)reloc->offset,
@@ -1399,7 +1402,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
 		return -EINVAL;
 	}
 	if (unlikely(reloc->offset & 3)) {
-		DRM_DEBUG("Relocation not 4-byte aligned: "
+		drm_dbg(&i915->drm, "Relocation not 4-byte aligned: "
 			  "target %d offset %d.\n",
 			  reloc->target_handle,
 			  (int)reloc->offset);
@@ -1643,9 +1646,6 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
 	const unsigned int count = eb->buffer_count;
 	unsigned int i;

-	if (unlikely(i915_modparams.prefault_disable))
-		return 0;
-
 	for (i = 0; i < count; i++) {
 		int err;

@@ -1921,7 +1921,7 @@ static int i915_reset_gen7_sol_offsets(struct i915_request *rq)
 	int i;

 	if (!IS_GEN(rq->i915, 7) || rq->engine->id != RCS0) {
-		DRM_DEBUG("sol reset is gen7/rcs only\n");
+		drm_dbg(&rq->i915->drm, "sol reset is gen7/rcs only\n");
 		return -EINVAL;
 	}

@@ -2075,6 +2075,7 @@ err_free:
 static int eb_parse(struct i915_execbuffer *eb)
 {
+	struct drm_i915_private *i915 = eb->i915;
 	struct intel_engine_pool_node *pool;
 	struct i915_vma *shadow, *trampoline;
 	unsigned int len;
@@ -2090,7 +2091,8 @@ static int eb_parse(struct i915_execbuffer *eb)
 		 * post-scan tampering
 		 */
 		if (!eb->context->vm->has_read_only) {
-			DRM_DEBUG("Cannot prevent post-scan tampering without RO capable vm\n");
+			drm_dbg(&i915->drm,
+				"Cannot prevent post-scan tampering without RO capable vm\n");
 			return -EINVAL;
 		}
 	} else {
@@ -2371,8 +2373,9 @@ eb_select_legacy_ring(struct i915_execbuffer *eb,
 	if (user_ring_id != I915_EXEC_BSD &&
 	    (args->flags & I915_EXEC_BSD_MASK)) {
-		DRM_DEBUG("execbuf with non bsd ring but with invalid "
-			  "bsd dispatch flags: %d\n", (int)(args->flags));
+		drm_dbg(&i915->drm,
+			"execbuf with non bsd ring but with invalid "
+			"bsd dispatch flags: %d\n", (int)(args->flags));
 		return -1;
 	}

@@ -2386,8 +2389,9 @@ eb_select_legacy_ring(struct i915_execbuffer *eb,
 			bsd_idx >>= I915_EXEC_BSD_SHIFT;
 			bsd_idx--;
 		} else {
-			DRM_DEBUG("execbuf with unknown bsd ring: %u\n",
-				  bsd_idx);
+			drm_dbg(&i915->drm,
+				"execbuf with unknown bsd ring: %u\n",
+				bsd_idx);
 			return -1;
 		}
@@ -2395,7 +2399,8 @@ eb_select_legacy_ring(struct i915_execbuffer *eb,
 	}

 	if (user_ring_id >= ARRAY_SIZE(user_ring_map)) {
-		DRM_DEBUG("execbuf with unknown ring: %u\n", user_ring_id);
+		drm_dbg(&i915->drm, "execbuf with unknown ring: %u\n",
+			user_ring_id);
 		return -1;
 	}

@@ -2669,13 +2674,14 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	}

 	if (unlikely(*eb.batch->exec_flags & EXEC_OBJECT_WRITE)) {
-		DRM_DEBUG("Attempting to use self-modifying batch buffer\n");
+		drm_dbg(&i915->drm,
+			"Attempting to use self-modifying batch buffer\n");
 		err = -EINVAL;
 		goto err_vma;
 	}

 	if (eb.batch_start_offset > eb.batch->size ||
 	    eb.batch_len > eb.batch->size - eb.batch_start_offset) {
-		DRM_DEBUG("Attempting to use out-of-bounds batch\n");
+		drm_dbg(&i915->drm, "Attempting to use out-of-bounds batch\n");
 		err = -EINVAL;
 		goto err_vma;
 	}

@@ -2707,7 +2713,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 		vma = i915_gem_object_ggtt_pin(eb.batch->obj, NULL, 0, 0, 0);
 		if (IS_ERR(vma)) {
 			err = PTR_ERR(vma);
-			goto err_vma;
+			goto err_parse;
 		}

 		eb.batch = vma;
@@ -2786,6 +2792,7 @@ err_request:
 err_batch_unpin:
 	if (eb.batch_flags & I915_DISPATCH_SECURE)
 		i915_vma_unpin(eb.batch);
+err_parse:
 	if (eb.batch->private)
 		intel_engine_pool_put(eb.batch->private);
 err_vma:
@@ -2838,6 +2845,7 @@ int
 i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
 			  struct drm_file *file)
 {
+	struct drm_i915_private *i915 = to_i915(dev);
 	struct drm_i915_gem_execbuffer *args = data;
 	struct drm_i915_gem_execbuffer2 exec2;
 	struct drm_i915_gem_exec_object *exec_list = NULL;
@@ -2847,7 +2855,7 @@ i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
 	int err;

 	if (!check_buffer_count(count)) {
-		DRM_DEBUG("execbuf2 with %zd buffers\n", count);
+		drm_dbg(&i915->drm, "execbuf2 with %zd buffers\n", count);
 		return -EINVAL;
 	}

@@ -2872,8 +2880,9 @@ i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
 	exec2_list = kvmalloc_array(count + 1, eb_element_size(),
 				    __GFP_NOWARN | GFP_KERNEL);
 	if (exec_list == NULL || exec2_list == NULL) {
-		DRM_DEBUG("Failed to allocate exec list for %d buffers\n",
-			  args->buffer_count);
+		drm_dbg(&i915->drm,
+			"Failed to allocate exec list for %d buffers\n",
+			args->buffer_count);
 		kvfree(exec_list);
 		kvfree(exec2_list);
 		return -ENOMEM;
@@ -2882,8 +2891,8 @@ i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
 			   u64_to_user_ptr(args->buffers_ptr),
 			   sizeof(*exec_list) * count);
 	if (err) {
-		DRM_DEBUG("copy %d exec entries failed %d\n",
-			  args->buffer_count, err);
+		drm_dbg(&i915->drm, "copy %d exec entries failed %d\n",
+			args->buffer_count, err);
 		kvfree(exec_list);
 		kvfree(exec2_list);
 		return -EFAULT;
@@ -2930,6 +2939,7 @@ int
 i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 			   struct drm_file *file)
 {
+	struct drm_i915_private *i915 = to_i915(dev);
 	struct drm_i915_gem_execbuffer2 *args = data;
 	struct drm_i915_gem_exec_object2 *exec2_list;
 	struct drm_syncobj **fences = NULL;
@@ -2937,7 +2947,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 	int err;

 	if (!check_buffer_count(count)) {
-		DRM_DEBUG("execbuf2 with %zd buffers\n", count);
+		drm_dbg(&i915->drm, "execbuf2 with %zd buffers\n", count);
 		return -EINVAL;
 	}

@@ -2949,14 +2959,14 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
 	exec2_list = kvmalloc_array(count + 1, eb_element_size(),
 				    __GFP_NOWARN | GFP_KERNEL);
 	if (exec2_list == NULL) {
-		DRM_DEBUG("Failed to allocate exec list for %zd buffers\n",
-			  count);
+		drm_dbg(&i915->drm, "Failed to allocate exec list for %zd buffers\n",
+			count);
 		return -ENOMEM;
 	}
 	if (copy_from_user(exec2_list,
 			   u64_to_user_ptr(args->buffers_ptr),
 			   sizeof(*exec2_list) * count)) {
-		DRM_DEBUG("copy %zd exec entries failed\n", count);
+		drm_dbg(&i915->drm, "copy %zd exec entries failed\n", count);
 		kvfree(exec2_list);
 		return -EFAULT;
 	}


@@ -613,8 +613,7 @@ __assign_mmap_offset(struct drm_file *file,
 	if (!obj)
 		return -ENOENT;

-	if (mmap_type == I915_MMAP_TYPE_GTT &&
-	    i915_gem_object_never_bind_ggtt(obj)) {
+	if (i915_gem_object_never_mmap(obj)) {
 		err = -ENODEV;
 		goto out;
 	}


@@ -225,6 +225,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 		/* But keep the pointer alive for RCU-protected lookups */
 		call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
+		cond_resched();
 	}
 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 }


@@ -194,9 +194,9 @@ i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj)
 }

 static inline bool
-i915_gem_object_never_bind_ggtt(const struct drm_i915_gem_object *obj)
+i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
 {
-	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_GGTT);
+	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
 }

 static inline bool


@@ -34,7 +34,7 @@ struct drm_i915_gem_object_ops {
 #define I915_GEM_OBJECT_HAS_IOMEM	BIT(1)
 #define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(2)
 #define I915_GEM_OBJECT_IS_PROXY	BIT(3)
-#define I915_GEM_OBJECT_NO_GGTT		BIT(4)
+#define I915_GEM_OBJECT_NO_MMAP		BIT(4)
 #define I915_GEM_OBJECT_ASYNC_CANCEL	BIT(5)

 /* Interface between the GEM object and its backing storage.
@@ -285,9 +285,6 @@ struct drm_i915_gem_object {
 			void *gvt_info;
 		};
 	};
-
-	/** for phys allocated objects */
-	struct drm_dma_handle *phys_handle;
 };

 static inline struct drm_i915_gem_object *


@@ -83,10 +83,12 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,

 int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	int err;

 	if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
-		DRM_DEBUG("Attempting to obtain a purgeable object\n");
+		drm_dbg(&i915->drm,
+			"Attempting to obtain a purgeable object\n");
 		return -EFAULT;
 	}


@@ -22,88 +22,87 @@
 static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 {
 	struct address_space *mapping = obj->base.filp->f_mapping;
-	struct drm_dma_handle *phys;
-	struct sg_table *st;
 	struct scatterlist *sg;
-	char *vaddr;
+	struct sg_table *st;
+	dma_addr_t dma;
+	void *vaddr;
+	void *dst;
 	int i;
-	int err;

 	if (WARN_ON(i915_gem_object_needs_bit17_swizzle(obj)))
 		return -EINVAL;

-	/* Always aligning to the object size, allows a single allocation
+	/*
+	 * Always aligning to the object size, allows a single allocation
 	 * to handle all possible callers, and given typical object sizes,
 	 * the alignment of the buddy allocation will naturally match.
 	 */
-	phys = drm_pci_alloc(obj->base.dev,
-			     roundup_pow_of_two(obj->base.size),
-			     roundup_pow_of_two(obj->base.size));
-	if (!phys)
+	vaddr = dma_alloc_coherent(&obj->base.dev->pdev->dev,
+				   roundup_pow_of_two(obj->base.size),
+				   &dma, GFP_KERNEL);
+	if (!vaddr)
 		return -ENOMEM;

-	vaddr = phys->vaddr;
-	for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
-		struct page *page;
-		char *src;
-
-		page = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(page)) {
-			err = PTR_ERR(page);
-			goto err_phys;
-		}
-
-		src = kmap_atomic(page);
-		memcpy(vaddr, src, PAGE_SIZE);
-		drm_clflush_virt_range(vaddr, PAGE_SIZE);
-		kunmap_atomic(src);
-
-		put_page(page);
-		vaddr += PAGE_SIZE;
-	}
-
-	intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);
-
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (!st) {
-		err = -ENOMEM;
-		goto err_phys;
-	}
+	if (!st)
+		goto err_pci;

-	if (sg_alloc_table(st, 1, GFP_KERNEL)) {
-		kfree(st);
-		err = -ENOMEM;
-		goto err_phys;
-	}
+	if (sg_alloc_table(st, 1, GFP_KERNEL))
+		goto err_st;

 	sg = st->sgl;
 	sg->offset = 0;
 	sg->length = obj->base.size;

-	sg_dma_address(sg) = phys->busaddr;
+	sg_assign_page(sg, (struct page *)vaddr);
+	sg_dma_address(sg) = dma;
 	sg_dma_len(sg) = obj->base.size;

-	obj->phys_handle = phys;
+	dst = vaddr;
+	for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
+		struct page *page;
+		void *src;
+
+		page = shmem_read_mapping_page(mapping, i);
+		if (IS_ERR(page))
+			goto err_st;
+
+		src = kmap_atomic(page);
+		memcpy(dst, src, PAGE_SIZE);
+		drm_clflush_virt_range(dst, PAGE_SIZE);
+		kunmap_atomic(src);
+
+		put_page(page);
+		dst += PAGE_SIZE;
+	}
+
+	intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);

 	__i915_gem_object_set_pages(obj, st, sg->length);

 	return 0;

-err_phys:
-	drm_pci_free(obj->base.dev, phys);
-
-	return err;
+err_st:
+	kfree(st);
+err_pci:
+	dma_free_coherent(&obj->base.dev->pdev->dev,
+			  roundup_pow_of_two(obj->base.size),
+			  vaddr, dma);
+	return -ENOMEM;
 }

 static void
 i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 			       struct sg_table *pages)
 {
+	dma_addr_t dma = sg_dma_address(pages->sgl);
+	void *vaddr = sg_page(pages->sgl);
+
 	__i915_gem_object_release_shmem(obj, pages, false);

 	if (obj->mm.dirty) {
 		struct address_space *mapping = obj->base.filp->f_mapping;
-		char *vaddr = obj->phys_handle->vaddr;
+		void *src = vaddr;
 		int i;

 		for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
@@ -115,15 +114,16 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 				continue;

 			dst = kmap_atomic(page);
-			drm_clflush_virt_range(vaddr, PAGE_SIZE);
-			memcpy(dst, vaddr, PAGE_SIZE);
+			drm_clflush_virt_range(src, PAGE_SIZE);
+			memcpy(dst, src, PAGE_SIZE);
 			kunmap_atomic(dst);

 			set_page_dirty(page);
 			if (obj->mm.madv == I915_MADV_WILLNEED)
 				mark_page_accessed(page);
 			put_page(page);
-			vaddr += PAGE_SIZE;
+
+			src += PAGE_SIZE;
 		}
 		obj->mm.dirty = false;
 	}
@@ -131,7 +131,9 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 	sg_free_table(pages);
 	kfree(pages);

-	drm_pci_free(obj->base.dev, obj->phys_handle);
+	dma_free_coherent(&obj->base.dev->pdev->dev,
+			  roundup_pow_of_two(obj->base.size),
+			  vaddr, dma);
 }

 static void phys_release(struct drm_i915_gem_object *obj)


@@ -85,7 +85,8 @@ void i915_gem_suspend_late(struct drm_i915_private *i915)
 		spin_unlock_irqrestore(&i915->mm.obj_lock, flags);

 		i915_gem_object_lock(obj);
-		WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
+		drm_WARN_ON(&i915->drm,
+			    i915_gem_object_set_to_gtt_domain(obj, false));
 		i915_gem_object_unlock(obj);

 		i915_gem_object_put(obj);


@@ -148,7 +148,8 @@ rebuild_st:
 		last_pfn = page_to_pfn(page);

 		/* Check that the i965g/gm workaround works. */
-		WARN_ON((gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
+		drm_WARN_ON(&i915->drm,
+			    (gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
 	}
 	if (sg) { /* loop terminated early; short sg table */
 		sg_page_sizes |= sg->length;


@@ -256,8 +256,7 @@ unsigned long i915_gem_shrink_all(struct drm_i915_private *i915)
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
 		freed = i915_gem_shrink(i915, -1UL, NULL,
 					I915_SHRINK_BOUND |
-					I915_SHRINK_UNBOUND |
-					I915_SHRINK_ACTIVE);
+					I915_SHRINK_UNBOUND);
 	}

 	return freed;
@@ -336,7 +335,6 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 	freed_pages = 0;
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
 		freed_pages += i915_gem_shrink(i915, -1UL, NULL,
-					       I915_SHRINK_ACTIVE |
 					       I915_SHRINK_BOUND |
 					       I915_SHRINK_UNBOUND |
 					       I915_SHRINK_WRITEBACK);
@@ -403,19 +401,22 @@ void i915_gem_driver_register__shrinker(struct drm_i915_private *i915)
 	i915->mm.shrinker.count_objects = i915_gem_shrinker_count;
 	i915->mm.shrinker.seeks = DEFAULT_SEEKS;
 	i915->mm.shrinker.batch = 4096;
-	WARN_ON(register_shrinker(&i915->mm.shrinker));
+	drm_WARN_ON(&i915->drm, register_shrinker(&i915->mm.shrinker));

 	i915->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
-	WARN_ON(register_oom_notifier(&i915->mm.oom_notifier));
+	drm_WARN_ON(&i915->drm, register_oom_notifier(&i915->mm.oom_notifier));

 	i915->mm.vmap_notifier.notifier_call = i915_gem_shrinker_vmap;
-	WARN_ON(register_vmap_purge_notifier(&i915->mm.vmap_notifier));
+	drm_WARN_ON(&i915->drm,
+		    register_vmap_purge_notifier(&i915->mm.vmap_notifier));
 }

 void i915_gem_driver_unregister__shrinker(struct drm_i915_private *i915)
 {
-	WARN_ON(unregister_vmap_purge_notifier(&i915->mm.vmap_notifier));
-	WARN_ON(unregister_oom_notifier(&i915->mm.oom_notifier));
+	drm_WARN_ON(&i915->drm,
+		    unregister_vmap_purge_notifier(&i915->mm.vmap_notifier));
+	drm_WARN_ON(&i915->drm,
+		    unregister_oom_notifier(&i915->mm.oom_notifier));
 	unregister_shrinker(&i915->mm.shrinker);
 }


@@ -110,8 +110,11 @@ static int i915_adjust_stolen(struct drm_i915_private *i915,
 		if (stolen[0].start != stolen[1].start ||
 		    stolen[0].end != stolen[1].end) {
-			DRM_DEBUG_DRIVER("GTT within stolen memory at %pR\n", &ggtt_res);
-			DRM_DEBUG_DRIVER("Stolen memory adjusted to %pR\n", dsm);
+			drm_dbg(&i915->drm,
+				"GTT within stolen memory at %pR\n",
+				&ggtt_res);
+			drm_dbg(&i915->drm, "Stolen memory adjusted to %pR\n",
+				dsm);
 		}
 	}

@@ -142,8 +145,9 @@ static int i915_adjust_stolen(struct drm_i915_private *i915,
 		 * range. Apparently this works.
 		 */
 		if (!r && !IS_GEN(i915, 3)) {
-			DRM_ERROR("conflict detected with stolen region: %pR\n",
-				  dsm);
+			drm_err(&i915->drm,
+				"conflict detected with stolen region: %pR\n",
+				dsm);

 			return -EBUSY;
 		}

@@ -171,8 +175,8 @@ static void g4x_get_stolen_reserved(struct drm_i915_private *i915,
 					ELK_STOLEN_RESERVED);
 	resource_size_t stolen_top = i915->dsm.end + 1;

-	DRM_DEBUG_DRIVER("%s_STOLEN_RESERVED = %08x\n",
-			 IS_GM45(i915) ? "CTG" : "ELK", reg_val);
+	drm_dbg(&i915->drm, "%s_STOLEN_RESERVED = %08x\n",
+		IS_GM45(i915) ? "CTG" : "ELK", reg_val);

 	if ((reg_val & G4X_STOLEN_RESERVED_ENABLE) == 0)
 		return;

@@ -181,14 +185,16 @@ static void g4x_get_stolen_reserved(struct drm_i915_private *i915,
 	 * Whether ILK really reuses the ELK register for this is unclear.
 	 * Let's see if we catch anyone with this supposedly enabled on ILK.
 	 */
-	WARN(IS_GEN(i915, 5), "ILK stolen reserved found? 0x%08x\n",
-	     reg_val);
+	drm_WARN(&i915->drm, IS_GEN(i915, 5),
+		 "ILK stolen reserved found? 0x%08x\n",
+		 reg_val);

 	if (!(reg_val & G4X_STOLEN_RESERVED_ADDR2_MASK))
 		return;

 	*base = (reg_val & G4X_STOLEN_RESERVED_ADDR2_MASK) << 16;
-	WARN_ON((reg_val & G4X_STOLEN_RESERVED_ADDR1_MASK) < *base);
+	drm_WARN_ON(&i915->drm,
+		    (reg_val & G4X_STOLEN_RESERVED_ADDR1_MASK) < *base);

 	*size = stolen_top - *base;
 }

@@ -200,7 +206,7 @@ static void gen6_get_stolen_reserved(struct drm_i915_private *i915,
 {
 	u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);

-	DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
+	drm_dbg(&i915->drm, "GEN6_STOLEN_RESERVED = %08x\n", reg_val);

 	if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
 		return;

@@ -234,7 +240,7 @@ static void vlv_get_stolen_reserved(struct drm_i915_private *i915,
 	u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);
 	resource_size_t stolen_top = i915->dsm.end + 1;

-	DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
+	drm_dbg(&i915->drm, "GEN6_STOLEN_RESERVED = %08x\n", reg_val);

 	if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
 		return;

@@ -262,7 +268,7 @@ static void gen7_get_stolen_reserved(struct drm_i915_private *i915,
 {
 	u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);

-	DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
+	drm_dbg(&i915->drm, "GEN6_STOLEN_RESERVED = %08x\n", reg_val);

 	if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
 		return;

@@ -289,7 +295,7 @@ static void chv_get_stolen_reserved(struct drm_i915_private *i915,
 {
 	u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);

-	DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
+	drm_dbg(&i915->drm, "GEN6_STOLEN_RESERVED = %08x\n", reg_val);

 	if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
 		return;

@@ -323,7 +329,7 @@ static void bdw_get_stolen_reserved(struct drm_i915_private *i915,
 	u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);
 	resource_size_t stolen_top = i915->dsm.end + 1;

-	DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
+	drm_dbg(&i915->drm, "GEN6_STOLEN_RESERVED = %08x\n", reg_val);

 	if (!(reg_val & GEN6_STOLEN_RESERVED_ENABLE))
 		return;

@@ -342,7 +348,7 @@ static void icl_get_stolen_reserved(struct drm_i915_private *i915,
 {
 	u64 reg_val = intel_uncore_read64(uncore, GEN6_STOLEN_RESERVED);

-	DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = 0x%016llx\n", reg_val);
+	drm_dbg(&i915->drm, "GEN6_STOLEN_RESERVED = 0x%016llx\n", reg_val);

 	*base = reg_val & GEN11_STOLEN_RESERVED_ADDR_MASK;

@@ -453,8 +459,9 @@ static int i915_gem_init_stolen(struct drm_i915_private *i915)
 	 * it likely means we failed to read the registers correctly.
 	 */
 	if (!reserved_base) {
-		DRM_ERROR("inconsistent reservation %pa + %pa; ignoring\n",
-			  &reserved_base, &reserved_size);
+		drm_err(&i915->drm,
+			"inconsistent reservation %pa + %pa; ignoring\n",
+			&reserved_base, &reserved_size);
 		reserved_base = stolen_top;
 		reserved_size = 0;
 	}

@@ -463,8 +470,9 @@ static int i915_gem_init_stolen(struct drm_i915_private *i915)
 		(struct resource)DEFINE_RES_MEM(reserved_base, reserved_size);

 	if (!resource_contains(&i915->dsm, &i915->dsm_reserved)) {
-		DRM_ERROR("Stolen reserved area %pR outside stolen memory %pR\n",
-			  &i915->dsm_reserved, &i915->dsm);
+		drm_err(&i915->drm,
+			"Stolen reserved area %pR outside stolen memory %pR\n",
+			&i915->dsm_reserved, &i915->dsm);
 		return 0;
 	}

@@ -472,9 +480,10 @@ static int i915_gem_init_stolen(struct drm_i915_private *i915)
 	 * memory, so just consider the start. */
 	reserved_total = stolen_top - reserved_base;

-	DRM_DEBUG_DRIVER("Memory reserved for graphics device: %lluK, usable: %lluK\n",
-			 (u64)resource_size(&i915->dsm) >> 10,
-			 ((u64)resource_size(&i915->dsm) - reserved_total) >> 10);
+	drm_dbg(&i915->drm,
+		"Memory reserved for graphics device: %lluK, usable: %lluK\n",
+		(u64)resource_size(&i915->dsm) >> 10,
+		((u64)resource_size(&i915->dsm) - reserved_total) >> 10);

 	i915->stolen_usable_size =
 		resource_size(&i915->dsm) - reserved_total;

@@ -677,26 +686,24 @@ struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915)
 struct drm_i915_gem_object *
 i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *i915,
 					       resource_size_t stolen_offset,
-					       resource_size_t gtt_offset,
 					       resource_size_t size)
 {
 	struct intel_memory_region *mem = i915->mm.regions[INTEL_REGION_STOLEN];
-	struct i915_ggtt *ggtt = &i915->ggtt;
 	struct drm_i915_gem_object *obj;
 	struct drm_mm_node *stolen;
-	struct i915_vma *vma;
 	int ret;

 	if (!drm_mm_initialized(&i915->mm.stolen))
 		return ERR_PTR(-ENODEV);

-	DRM_DEBUG_DRIVER("creating preallocated stolen object: stolen_offset=%pa, gtt_offset=%pa, size=%pa\n",
-			 &stolen_offset, &gtt_offset, &size);
+	drm_dbg(&i915->drm,
+		"creating preallocated stolen object: stolen_offset=%pa, size=%pa\n",
+		&stolen_offset, &size);

 	/* KISS and expect everything to be page-aligned */
-	if (WARN_ON(size == 0) ||
-	    WARN_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE)) ||
-	    WARN_ON(!IS_ALIGNED(stolen_offset, I915_GTT_MIN_ALIGNMENT)))
+	if (GEM_WARN_ON(size == 0) ||
+	    GEM_WARN_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE)) ||
+	    GEM_WARN_ON(!IS_ALIGNED(stolen_offset, I915_GTT_MIN_ALIGNMENT)))
 		return ERR_PTR(-EINVAL);

 	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
@@ -709,68 +716,20 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *i915,
 	ret = drm_mm_reserve_node(&i915->mm.stolen, stolen);
 	mutex_unlock(&i915->mm.stolen_lock);
 	if (ret) {
-		DRM_DEBUG_DRIVER("failed to allocate stolen space\n");
-		kfree(stolen);
-		return ERR_PTR(ret);
+		obj = ERR_PTR(ret);
+		goto err_free;
 	}

 	obj = __i915_gem_object_create_stolen(mem, stolen);
-	if (IS_ERR(obj)) {
-		DRM_DEBUG_DRIVER("failed to allocate stolen object\n");
-		i915_gem_stolen_remove_node(i915, stolen);
-		kfree(stolen);
-		return obj;
-	}
-
-	/* Some objects just need physical mem from stolen space */
+	if (IS_ERR(obj))
+		goto err_stolen;
if (gtt_offset == I915_GTT_OFFSET_NONE)
return obj;
ret = i915_gem_object_pin_pages(obj);
if (ret)
goto err;
vma = i915_vma_instance(obj, &ggtt->vm, NULL);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err_pages;
}
/* To simplify the initialisation sequence between KMS and GTT,
* we allow construction of the stolen object prior to
* setting up the GTT space. The actual reservation will occur
* later.
*/
mutex_lock(&ggtt->vm.mutex);
ret = i915_gem_gtt_reserve(&ggtt->vm, &vma->node,
size, gtt_offset, obj->cache_level,
0);
if (ret) {
DRM_DEBUG_DRIVER("failed to allocate stolen GTT space\n");
mutex_unlock(&ggtt->vm.mutex);
goto err_pages;
}
GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
GEM_BUG_ON(vma->pages);
vma->pages = obj->mm.pages;
atomic_set(&vma->pages_count, I915_VMA_PAGES_ACTIVE);
set_bit(I915_VMA_GLOBAL_BIND_BIT, __i915_vma_flags(vma));
__i915_vma_set_map_and_fenceable(vma);
list_add_tail(&vma->vm_link, &ggtt->vm.bound_list);
mutex_unlock(&ggtt->vm.mutex);
GEM_BUG_ON(i915_gem_object_is_shrinkable(obj));
atomic_inc(&obj->bind_count);
i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
return obj; return obj;
err_pages: err_stolen:
i915_gem_object_unpin_pages(obj); i915_gem_stolen_remove_node(i915, stolen);
err: err_free:
i915_gem_object_put(obj); kfree(stolen);
return ERR_PTR(ret); return obj;
} }
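The rewrite above collapses the error handling into the kernel's goto-unwind idiom: each failure jumps to a label that releases exactly what has been acquired so far. A minimal user-space sketch of the same shape (the `node` struct and `create()` helper are hypothetical stand-ins for the drm_mm node and GEM object, not driver code):

```c
#include <errno.h>
#include <stdlib.h>

/* Hypothetical stand-in for the reserved drm_mm node. */
struct node {
	int reserved;
};

/*
 * Goto-unwind sketch: acquire resources in order, and on failure jump to
 * the label that undoes only what has already succeeded, mirroring the
 * err_stolen/err_free labels in the patch above.
 */
static long create(int fail_reserve, int fail_object, struct node **out)
{
	struct node *stolen = calloc(1, sizeof(*stolen));
	long err;

	if (!stolen)
		return -ENOMEM;

	if (fail_reserve) {		/* drm_mm_reserve_node() failed */
		err = -ENOSPC;
		goto err_free;		/* nothing reserved yet */
	}
	stolen->reserved = 1;

	if (fail_object) {		/* object creation failed */
		err = -ENOMEM;
		goto err_stolen;	/* must also drop the reservation */
	}

	*out = stolen;
	return 0;

err_stolen:
	stolen->reserved = 0;		/* i915_gem_stolen_remove_node() */
err_free:
	free(stolen);
	return err;
}
```

The payoff is a single, ordered teardown path: adding a new resource means adding one label, rather than repeating cleanup in every error branch.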


@@ -28,7 +28,6 @@ i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
 struct drm_i915_gem_object *
 i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv,
 					       resource_size_t stolen_offset,
-					       resource_size_t gtt_offset,
 					       resource_size_t size);
 
 #endif /* __I915_GEM_STOLEN_H__ */


@@ -704,7 +704,7 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
 static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
 	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
 		 I915_GEM_OBJECT_IS_SHRINKABLE |
-		 I915_GEM_OBJECT_NO_GGTT |
+		 I915_GEM_OBJECT_NO_MMAP |
 		 I915_GEM_OBJECT_ASYNC_CANCEL,
 	.get_pages = i915_gem_userptr_get_pages,
 	.put_pages = i915_gem_userptr_put_pages,
@@ -770,6 +770,23 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
 			    I915_USERPTR_UNSYNCHRONIZED))
 		return -EINVAL;
 
+	/*
+	 * XXX: There is a prevalence of the assumption that we fit the
+	 * object's page count inside a 32bit _signed_ variable. Let's document
+	 * this and catch if we ever need to fix it. In the meantime, if you do
+	 * spot such a local variable, please consider fixing!
+	 *
+	 * Aside from our own locals (for which we have no excuse!):
+	 * - sg_table embeds unsigned int for num_pages
+	 * - get_user_pages*() mixed ints with longs
+	 */
+	if (args->user_size >> PAGE_SHIFT > INT_MAX)
+		return -E2BIG;
+
+	if (overflows_type(args->user_size, obj->base.size))
+		return -E2BIG;
+
 	if (!args->user_size)
 		return -EINVAL;


@@ -1208,107 +1208,6 @@ static int igt_write_huge(struct i915_gem_context *ctx,
 	return err;
 }
 
-static int igt_ppgtt_exhaust_huge(void *arg)
-{
-	struct i915_gem_context *ctx = arg;
-	struct drm_i915_private *i915 = ctx->i915;
-	unsigned long supported = INTEL_INFO(i915)->page_sizes;
-	static unsigned int pages[ARRAY_SIZE(page_sizes)];
-	struct drm_i915_gem_object *obj;
-	unsigned int size_mask;
-	unsigned int page_mask;
-	int n, i;
-	int err = -ENODEV;
-
-	if (supported == I915_GTT_PAGE_SIZE_4K)
-		return 0;
-
-	/*
-	 * Sanity check creating objects with a varying mix of page sizes --
-	 * ensuring that our writes lands in the right place.
-	 */
-	n = 0;
-	for_each_set_bit(i, &supported, ilog2(I915_GTT_MAX_PAGE_SIZE) + 1)
-		pages[n++] = BIT(i);
-
-	for (size_mask = 2; size_mask < BIT(n); size_mask++) {
-		unsigned int size = 0;
-
-		for (i = 0; i < n; i++) {
-			if (size_mask & BIT(i))
-				size |= pages[i];
-		}
-
-		/*
-		 * For our page mask we want to enumerate all the page-size
-		 * combinations which will fit into our chosen object size.
-		 */
-		for (page_mask = 2; page_mask <= size_mask; page_mask++) {
-			unsigned int page_sizes = 0;
-
-			for (i = 0; i < n; i++) {
-				if (page_mask & BIT(i))
-					page_sizes |= pages[i];
-			}
-
-			/*
-			 * Ensure that we can actually fill the given object
-			 * with our chosen page mask.
-			 */
-			if (!IS_ALIGNED(size, BIT(__ffs(page_sizes))))
-				continue;
-
-			obj = huge_pages_object(i915, size, page_sizes);
-			if (IS_ERR(obj)) {
-				err = PTR_ERR(obj);
-				goto out_device;
-			}
-
-			err = i915_gem_object_pin_pages(obj);
-			if (err) {
-				i915_gem_object_put(obj);
-
-				if (err == -ENOMEM) {
-					pr_info("unable to get pages, size=%u, pages=%u\n",
-						size, page_sizes);
-					err = 0;
-					break;
-				}
-
-				pr_err("pin_pages failed, size=%u, pages=%u\n",
-				       size_mask, page_mask);
-
-				goto out_device;
-			}
-
-			/* Force the page-size for the gtt insertion */
-			obj->mm.page_sizes.sg = page_sizes;
-
-			err = igt_write_huge(ctx, obj);
-			if (err) {
-				pr_err("exhaust write-huge failed with size=%u\n",
-				       size);
-				goto out_unpin;
-			}
-
-			i915_gem_object_unpin_pages(obj);
-			__i915_gem_object_put_pages(obj);
-			i915_gem_object_put(obj);
-		}
-	}
-
-	goto out_device;
-
-out_unpin:
-	i915_gem_object_unpin_pages(obj);
-	i915_gem_object_put(obj);
-out_device:
-	mkwrite_device_info(i915)->page_sizes = supported;
-
-	return err;
-}
-
 typedef struct drm_i915_gem_object *
 (*igt_create_fn)(struct drm_i915_private *i915, u32 size, u32 flags);
@@ -1900,7 +1799,6 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(igt_shrink_thp),
 		SUBTEST(igt_ppgtt_pin_update),
 		SUBTEST(igt_tmpfs_fallback),
-		SUBTEST(igt_ppgtt_exhaust_huge),
 		SUBTEST(igt_ppgtt_smoke_huge),
 		SUBTEST(igt_ppgtt_sanity_check),
 	};


@@ -1465,9 +1465,12 @@ out_file:
 static int check_scratch(struct i915_address_space *vm, u64 offset)
 {
-	struct drm_mm_node *node =
-		__drm_mm_interval_first(&vm->mm,
-					offset, offset + sizeof(u32) - 1);
+	struct drm_mm_node *node;
+
+	mutex_lock(&vm->mutex);
+	node = __drm_mm_interval_first(&vm->mm,
+				       offset, offset + sizeof(u32) - 1);
+	mutex_unlock(&vm->mutex);
 	if (!node || node->start > offset)
 		return 0;
@@ -1492,6 +1495,10 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	GEM_BUG_ON(offset < I915_GTT_PAGE_SIZE);
 
+	err = check_scratch(ctx_vm(ctx), offset);
+	if (err)
+		return err;
+
 	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
@@ -1528,10 +1535,6 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	if (err)
 		goto out_vm;
 
-	err = check_scratch(vm, offset);
-	if (err)
-		goto err_unpin;
-
 	rq = igt_request_alloc(ctx, engine);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
@@ -1575,72 +1578,103 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	struct drm_i915_private *i915 = ctx->i915;
 	struct drm_i915_gem_object *obj;
 	struct i915_address_space *vm;
-	const u32 RCS_GPR0 = 0x2600; /* not all engines have their own GPR! */
 	const u32 result = 0x100;
 	struct i915_request *rq;
 	struct i915_vma *vma;
+	unsigned int flags;
 	u32 *cmd;
 	int err;
 
 	GEM_BUG_ON(offset < I915_GTT_PAGE_SIZE);
 
+	err = check_scratch(ctx_vm(ctx), offset);
+	if (err)
+		return err;
+
 	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
-	if (IS_ERR(cmd)) {
-		err = PTR_ERR(cmd);
-		goto out;
-	}
-
-	memset(cmd, POISON_INUSE, PAGE_SIZE);
-
 	if (INTEL_GEN(i915) >= 8) {
+		const u32 GPR0 = engine->mmio_base + 0x600;
+
+		vm = i915_gem_context_get_vm_rcu(ctx);
+		vma = i915_vma_instance(obj, vm, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_vm;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_OFFSET_FIXED);
+		if (err)
+			goto out_vm;
+
+		cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		if (IS_ERR(cmd)) {
+			err = PTR_ERR(cmd);
+			goto out;
+		}
+
+		memset(cmd, POISON_INUSE, PAGE_SIZE);
+
 		*cmd++ = MI_LOAD_REGISTER_MEM_GEN8;
-		*cmd++ = RCS_GPR0;
+		*cmd++ = GPR0;
 		*cmd++ = lower_32_bits(offset);
 		*cmd++ = upper_32_bits(offset);
 		*cmd++ = MI_STORE_REGISTER_MEM_GEN8;
-		*cmd++ = RCS_GPR0;
+		*cmd++ = GPR0;
 		*cmd++ = result;
 		*cmd++ = 0;
-	} else {
-		*cmd++ = MI_LOAD_REGISTER_MEM;
-		*cmd++ = RCS_GPR0;
-		*cmd++ = offset;
-		*cmd++ = MI_STORE_REGISTER_MEM;
-		*cmd++ = RCS_GPR0;
-		*cmd++ = result;
-	}
-	*cmd = MI_BATCH_BUFFER_END;
+		*cmd = MI_BATCH_BUFFER_END;
 
-	i915_gem_object_flush_map(obj);
-	i915_gem_object_unpin_map(obj);
+		i915_gem_object_flush_map(obj);
+		i915_gem_object_unpin_map(obj);
+
+		flags = 0;
+	} else {
+		const u32 reg = engine->mmio_base + 0x420;
+
+		/* hsw: register access even to 3DPRIM! is protected */
+		vm = i915_vm_get(&engine->gt->ggtt->vm);
+		vma = i915_vma_instance(obj, vm, NULL);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			goto out_vm;
+		}
+
+		err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+		if (err)
+			goto out_vm;
+
+		cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+		if (IS_ERR(cmd)) {
+			err = PTR_ERR(cmd);
+			goto out;
+		}
+
+		memset(cmd, POISON_INUSE, PAGE_SIZE);
+
+		*cmd++ = MI_LOAD_REGISTER_MEM;
+		*cmd++ = reg;
+		*cmd++ = offset;
+		*cmd++ = MI_STORE_REGISTER_MEM | MI_USE_GGTT;
+		*cmd++ = reg;
+		*cmd++ = vma->node.start + result;
+		*cmd = MI_BATCH_BUFFER_END;
+
+		i915_gem_object_flush_map(obj);
+		i915_gem_object_unpin_map(obj);
+
+		flags = I915_DISPATCH_SECURE;
+	}
 
 	intel_gt_chipset_flush(engine->gt);
 
-	vm = i915_gem_context_get_vm_rcu(ctx);
-	vma = i915_vma_instance(obj, vm, NULL);
-	if (IS_ERR(vma)) {
-		err = PTR_ERR(vma);
-		goto out_vm;
-	}
-
-	err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_OFFSET_FIXED);
-	if (err)
-		goto out_vm;
-
-	err = check_scratch(vm, offset);
-	if (err)
-		goto err_unpin;
-
 	rq = igt_request_alloc(ctx, engine);
 	if (IS_ERR(rq)) {
 		err = PTR_ERR(rq);
 		goto err_unpin;
 	}
 
-	err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, 0);
+	err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, flags);
 	if (err)
 		goto err_request;
@@ -1686,6 +1720,39 @@ out:
 	return err;
 }
 
+static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
+{
+	struct i915_address_space *vm;
+	struct page *page;
+	u32 *vaddr;
+	int err = 0;
+
+	vm = ctx_vm(ctx);
+	if (!vm)
+		return -ENODEV;
+
+	page = vm->scratch[0].base.page;
+	if (!page) {
+		pr_err("No scratch page!\n");
+		return -EINVAL;
+	}
+
+	vaddr = kmap(page);
+	if (!vaddr) {
+		pr_err("No (mappable) scratch page!\n");
+		return -EINVAL;
+	}
+
+	memcpy(out, vaddr, sizeof(*out));
+	if (memchr_inv(vaddr, *out, PAGE_SIZE)) {
+		pr_err("Inconsistent initial state of scratch page!\n");
+		err = -EINVAL;
+	}
+	kunmap(page);
+
+	return err;
+}
+
 static int igt_vm_isolation(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
@@ -1696,6 +1763,7 @@ static int igt_vm_isolation(void *arg)
 	I915_RND_STATE(prng);
 	struct file *file;
 	u64 vm_total;
+	u32 expected;
 	int err;
 
 	if (INTEL_GEN(i915) < 7)
@@ -1730,6 +1798,15 @@ static int igt_vm_isolation(void *arg)
 	if (ctx_vm(ctx_a) == ctx_vm(ctx_b))
 		goto out_file;
 
+	/* Read the initial state of the scratch page */
+	err = check_scratch_page(ctx_a, &expected);
+	if (err)
+		goto out_file;
+
+	err = check_scratch_page(ctx_b, &expected);
+	if (err)
+		goto out_file;
+
 	vm_total = ctx_vm(ctx_a)->total;
 	GEM_BUG_ON(ctx_vm(ctx_b)->total != vm_total);
 	vm_total -= I915_GTT_PAGE_SIZE;
@@ -1743,6 +1820,10 @@ static int igt_vm_isolation(void *arg)
 		if (!intel_engine_can_store_dword(engine))
 			continue;
 
+		/* Not all engines have their own GPR! */
+		if (INTEL_GEN(i915) < 8 && engine->class != RENDER_CLASS)
+			continue;
+
 		while (!__igt_timeout(end_time, NULL)) {
 			u32 value = 0xc5c5c5c5;
 			u64 offset;
@@ -1760,7 +1841,7 @@ static int igt_vm_isolation(void *arg)
 			if (err)
 				goto out_file;
 
-			if (value) {
+			if (value != expected) {
 				pr_err("%s: Read %08x from scratch (offset 0x%08x_%08x), after %lu reads!\n",
 				       engine->name, value,
 				       upper_32_bits(offset),


@@ -210,6 +210,7 @@ static int igt_fill_blt_thread(void *arg)
 	struct intel_context *ce;
 	unsigned int prio;
 	IGT_TIMEOUT(end);
+	u64 total, max;
 	int err;
 
 	ctx = thread->ctx;
@@ -225,27 +226,32 @@ static int igt_fill_blt_thread(void *arg)
 	ce = i915_gem_context_get_engine(ctx, BCS0);
 	GEM_BUG_ON(IS_ERR(ce));
 
+	/*
+	 * If we have a tiny shared address space, like for the GGTT
+	 * then we can't be too greedy.
+	 */
+	max = ce->vm->total;
+	if (i915_is_ggtt(ce->vm) || thread->ctx)
+		max = div_u64(max, thread->n_cpus);
+	max >>= 4;
+
+	total = PAGE_SIZE;
 	do {
-		const u32 max_block_size = S16_MAX * PAGE_SIZE;
+		/* Aim to keep the runtime under reasonable bounds! */
+		const u32 max_phys_size = SZ_64K;
 		u32 val = prandom_u32_state(prng);
-		u64 total = ce->vm->total;
 		u32 phys_sz;
 		u32 sz;
 		u32 *vaddr;
 		u32 i;
 
-		/*
-		 * If we have a tiny shared address space, like for the GGTT
-		 * then we can't be too greedy.
-		 */
-		if (i915_is_ggtt(ce->vm))
-			total = div64_u64(total, thread->n_cpus);
-
-		sz = min_t(u64, total >> 4, prandom_u32_state(prng));
-		phys_sz = sz % (max_block_size + 1);
+		total = min(total, max);
+		sz = i915_prandom_u32_max_state(total, prng) + 1;
+		phys_sz = sz % max_phys_size + 1;
 		sz = round_up(sz, PAGE_SIZE);
 		phys_sz = round_up(phys_sz, PAGE_SIZE);
+		phys_sz = min(phys_sz, sz);
 
 		pr_debug("%s with phys_sz= %x, sz=%x, val=%x\n", __func__,
 			 phys_sz, sz, val);
@@ -276,13 +282,14 @@ static int igt_fill_blt_thread(void *arg)
 		if (err)
 			goto err_unpin;
 
-		i915_gem_object_lock(obj);
-		err = i915_gem_object_set_to_cpu_domain(obj, false);
-		i915_gem_object_unlock(obj);
+		err = i915_gem_object_wait(obj, 0, MAX_SCHEDULE_TIMEOUT);
 		if (err)
 			goto err_unpin;
 
-		for (i = 0; i < huge_gem_object_phys_size(obj) / sizeof(u32); ++i) {
+		for (i = 0; i < huge_gem_object_phys_size(obj) / sizeof(u32); i += 17) {
+			if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
+				drm_clflush_virt_range(&vaddr[i], sizeof(vaddr[i]));
+
 			if (vaddr[i] != val) {
 				pr_err("vaddr[%u]=%x, expected=%x\n", i,
 				       vaddr[i], val);
@@ -293,6 +300,8 @@ static int igt_fill_blt_thread(void *arg)
 
 		i915_gem_object_unpin_map(obj);
 		i915_gem_object_put(obj);
+
+		total <<= 1;
 	} while (!time_after(jiffies, end));
 
 	goto err_flush;
@@ -319,6 +328,7 @@ static int igt_copy_blt_thread(void *arg)
 	struct intel_context *ce;
 	unsigned int prio;
 	IGT_TIMEOUT(end);
+	u64 total, max;
 	int err;
 
 	ctx = thread->ctx;
@@ -334,23 +344,32 @@ static int igt_copy_blt_thread(void *arg)
 	ce = i915_gem_context_get_engine(ctx, BCS0);
 	GEM_BUG_ON(IS_ERR(ce));
 
+	/*
+	 * If we have a tiny shared address space, like for the GGTT
+	 * then we can't be too greedy.
+	 */
+	max = ce->vm->total;
+	if (i915_is_ggtt(ce->vm) || thread->ctx)
+		max = div_u64(max, thread->n_cpus);
+	max >>= 4;
+
+	total = PAGE_SIZE;
 	do {
-		const u32 max_block_size = S16_MAX * PAGE_SIZE;
+		/* Aim to keep the runtime under reasonable bounds! */
+		const u32 max_phys_size = SZ_64K;
 		u32 val = prandom_u32_state(prng);
-		u64 total = ce->vm->total;
 		u32 phys_sz;
 		u32 sz;
 		u32 *vaddr;
 		u32 i;
 
-		if (i915_is_ggtt(ce->vm))
-			total = div64_u64(total, thread->n_cpus);
-
-		sz = min_t(u64, total >> 4, prandom_u32_state(prng));
-		phys_sz = sz % (max_block_size + 1);
+		total = min(total, max);
+		sz = i915_prandom_u32_max_state(total, prng) + 1;
+		phys_sz = sz % max_phys_size + 1;
 		sz = round_up(sz, PAGE_SIZE);
 		phys_sz = round_up(phys_sz, PAGE_SIZE);
+		phys_sz = min(phys_sz, sz);
 
 		pr_debug("%s with phys_sz= %x, sz=%x, val=%x\n", __func__,
 			 phys_sz, sz, val);
@@ -397,13 +416,14 @@ static int igt_copy_blt_thread(void *arg)
 		if (err)
 			goto err_unpin;
 
-		i915_gem_object_lock(dst);
-		err = i915_gem_object_set_to_cpu_domain(dst, false);
-		i915_gem_object_unlock(dst);
+		err = i915_gem_object_wait(dst, 0, MAX_SCHEDULE_TIMEOUT);
 		if (err)
 			goto err_unpin;
 
-		for (i = 0; i < huge_gem_object_phys_size(dst) / sizeof(u32); ++i) {
+		for (i = 0; i < huge_gem_object_phys_size(dst) / sizeof(u32); i += 17) {
+			if (!(dst->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
+				drm_clflush_virt_range(&vaddr[i], sizeof(vaddr[i]));
+
 			if (vaddr[i] != val) {
 				pr_err("vaddr[%u]=%x, expected=%x\n", i,
 				       vaddr[i], val);
@@ -416,6 +436,8 @@ static int igt_copy_blt_thread(void *arg)
 
 		i915_gem_object_put(src);
 		i915_gem_object_put(dst);
+
+		total <<= 1;
 	} while (!time_after(jiffies, end));
 
 	goto err_flush;


@@ -37,7 +37,7 @@ mock_context(struct drm_i915_private *i915,
 	if (name) {
 		struct i915_ppgtt *ppgtt;
 
-		strncpy(ctx->name, name, sizeof(ctx->name));
+		strncpy(ctx->name, name, sizeof(ctx->name) - 1);
 
 		ppgtt = mock_ppgtt(i915, name);
 		if (!ppgtt)
@@ -83,6 +83,8 @@ live_context(struct drm_i915_private *i915, struct file *file)
 	if (IS_ERR(ctx))
 		return ctx;
 
+	i915_gem_context_set_no_error_capture(ctx);
+
 	err = gem_context_register(ctx, to_drm_file(file)->driver_priv, &id);
 	if (err < 0)
 		goto err_ctx;
@@ -105,6 +107,7 @@ kernel_context(struct drm_i915_private *i915)
 	i915_gem_context_clear_bannable(ctx);
 	i915_gem_context_set_persistence(ctx);
+	i915_gem_context_set_no_error_capture(ctx);
 
 	return ctx;
 }


@@ -136,6 +136,9 @@ static void add_retire(struct intel_breadcrumbs *b, struct intel_timeline *tl)
 	struct intel_engine_cs *engine =
 		container_of(b, struct intel_engine_cs, breadcrumbs);
 
+	if (unlikely(intel_engine_is_virtual(engine)))
+		engine = intel_virtual_engine_get_sibling(engine, 0);
+
 	intel_engine_add_retire(engine, tl);
 }


@@ -116,7 +116,8 @@ int __intel_context_do_pin(struct intel_context *ce)
 	if (unlikely(err))
 		goto err_active;
 
-	CE_TRACE(ce, "pin ring:{head:%04x, tail:%04x}\n",
+	CE_TRACE(ce, "pin ring:{start:%08x, head:%04x, tail:%04x}\n",
+		 i915_ggtt_offset(ce->ring->vma),
 		 ce->ring->head, ce->ring->tail);
 
 	smp_mb__before_atomic(); /* flush pin before it is visible */
@@ -219,7 +220,9 @@ static void __intel_context_retire(struct i915_active *active)
 {
 	struct intel_context *ce = container_of(active, typeof(*ce), active);
 
-	CE_TRACE(ce, "retire\n");
+	CE_TRACE(ce, "retire runtime: { total:%lluns, avg:%lluns }\n",
+		 intel_context_get_total_runtime_ns(ce),
+		 intel_context_get_avg_runtime_ns(ce));
 
 	set_bit(CONTEXT_VALID_BIT, &ce->flags);
 	if (ce->state)
@@ -280,6 +283,8 @@ intel_context_init(struct intel_context *ce,
 	ce->sseu = engine->sseu;
 	ce->ring = __intel_context_ring_size(SZ_4K);
 
+	ewma_runtime_init(&ce->runtime.avg);
+
 	ce->vm = i915_vm_get(engine->gt->vm);
 
 	INIT_LIST_HEAD(&ce->signal_link);


@@ -12,6 +12,7 @@
 #include <linux/types.h>
 
 #include "i915_active.h"
+#include "i915_drv.h"
 #include "intel_context_types.h"
 #include "intel_engine_types.h"
 #include "intel_ring_types.h"
@@ -35,6 +36,9 @@ int intel_context_alloc_state(struct intel_context *ce);
 
 void intel_context_free(struct intel_context *ce);
 
+int intel_context_reconfigure_sseu(struct intel_context *ce,
+				   const struct intel_sseu sseu);
+
 /**
  * intel_context_lock_pinned - Stablises the 'pinned' status of the HW context
  * @ce - the context
@@ -224,4 +228,20 @@ intel_context_clear_nopreempt(struct intel_context *ce)
 	clear_bit(CONTEXT_NOPREEMPT, &ce->flags);
 }
 
+static inline u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
+{
+	const u32 period =
+		RUNTIME_INFO(ce->engine->i915)->cs_timestamp_period_ns;
+
+	return READ_ONCE(ce->runtime.total) * period;
+}
+
+static inline u64 intel_context_get_avg_runtime_ns(struct intel_context *ce)
+{
+	const u32 period =
+		RUNTIME_INFO(ce->engine->i915)->cs_timestamp_period_ns;
+
+	return mul_u32_u32(ewma_runtime_read(&ce->runtime.avg), period);
+}
+
 #endif /* __INTEL_CONTEXT_H__ */
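The two helpers added above convert hardware runtime counts into nanoseconds by multiplying by the per-platform CS timestamp period. A stand-alone sketch of that conversion (the 80ns period used in the test is an illustrative value, not a real platform constant):

```c
#include <stdint.h>

/*
 * Sketch of intel_context_get_total_runtime_ns(): the hardware accumulates
 * runtime in CS timestamp ticks, and multiplying by the tick period (in ns)
 * yields wall-clock nanoseconds. mul_u32_u32() in the kernel widens the
 * product to 64 bits, which plain uint64_t arithmetic does here.
 */
static uint64_t ticks_to_ns(uint64_t ticks, uint32_t period_ns)
{
	return ticks * period_ns;
}
```

Keeping the stored value in ticks and converting only at the reporting boundary avoids accumulating rounding error sample by sample.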


@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "i915_drv.h"
+#include "i915_vma.h"
+#include "intel_context.h"
+#include "intel_engine_pm.h"
+#include "intel_gpu_commands.h"
+#include "intel_lrc.h"
+#include "intel_lrc_reg.h"
+#include "intel_ring.h"
+#include "intel_sseu.h"
+
+static int gen8_emit_rpcs_config(struct i915_request *rq,
+				 const struct intel_context *ce,
+				 const struct intel_sseu sseu)
+{
+	u64 offset;
+	u32 *cs;
+
+	cs = intel_ring_begin(rq, 4);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	offset = i915_ggtt_offset(ce->state) +
+		 LRC_STATE_PN * PAGE_SIZE +
+		 CTX_R_PWR_CLK_STATE * 4;
+
+	*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+	*cs++ = lower_32_bits(offset);
+	*cs++ = upper_32_bits(offset);
+	*cs++ = intel_sseu_make_rpcs(rq->i915, &sseu);
+
+	intel_ring_advance(rq, cs);
+
+	return 0;
+}
+
+static int
+gen8_modify_rpcs(struct intel_context *ce, const struct intel_sseu sseu)
+{
+	struct i915_request *rq;
+	int ret;
+
+	lockdep_assert_held(&ce->pin_mutex);
+
+	/*
+	 * If the context is not idle, we have to submit an ordered request to
+	 * modify its context image via the kernel context (writing to our own
+	 * image, or into the registers directory, does not stick). Pristine
+	 * and idle contexts will be configured on pinning.
+	 */
+	if (!intel_context_pin_if_active(ce))
+		return 0;
+
+	rq = intel_engine_create_kernel_request(ce->engine);
+	if (IS_ERR(rq)) {
+		ret = PTR_ERR(rq);
+		goto out_unpin;
+	}
+
+	/* Serialise with the remote context */
+	ret = intel_context_prepare_remote_request(ce, rq);
+	if (ret == 0)
+		ret = gen8_emit_rpcs_config(rq, ce, sseu);
+
+	i915_request_add(rq);
+out_unpin:
+	intel_context_unpin(ce);
+	return ret;
+}
+
+int
+intel_context_reconfigure_sseu(struct intel_context *ce,
+			       const struct intel_sseu sseu)
+{
+	int ret;
+
+	GEM_BUG_ON(INTEL_GEN(ce->engine->i915) < 8);
+
+	ret = intel_context_lock_pinned(ce);
+	if (ret)
+		return ret;
+
+	/* Nothing to do if unmodified. */
+	if (!memcmp(&ce->sseu, &sseu, sizeof(sseu)))
+		goto unlock;
+
+	ret = gen8_modify_rpcs(ce, sseu);
+	if (!ret)
+		ce->sseu = sseu;
+
+unlock:
+	intel_context_unlock_pinned(ce);
+	return ret;
+}


@@ -7,6 +7,7 @@
 #ifndef __INTEL_CONTEXT_TYPES__
 #define __INTEL_CONTEXT_TYPES__
 
+#include <linux/average.h>
 #include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/mutex.h>
@@ -19,6 +20,8 @@
 
 #define CONTEXT_REDZONE POISON_INUSE
 
+DECLARE_EWMA(runtime, 3, 8);
+
 struct i915_gem_context;
 struct i915_vma;
 struct intel_context;
@@ -68,6 +71,15 @@ struct intel_context {
 	u64 lrc_desc;
 	u32 tag; /* cookie passed to HW to track this context on submission */
 
+	/* Time on GPU as tracked by the hw. */
+	struct {
+		struct ewma_runtime avg;
+		u64 total;
+		u32 last;
+		I915_SELFTEST_DECLARE(u32 num_underflow);
+		I915_SELFTEST_DECLARE(u32 max_underflow);
+	} runtime;
+
 	unsigned int active_count; /* protected by timeline->mutex */
 
 	atomic_t pin_count;
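`DECLARE_EWMA(runtime, 3, 8)` above generates an exponentially weighted moving average that stores values with 3 fractional fixed-point bits and weights each new sample at 1/8. A simplified user-space model of that arithmetic (the real `linux/average.h` macro differs in detail, e.g. in its rounding):

```c
#include <stdint.h>

/*
 * Simplified model of DECLARE_EWMA(runtime, 3, 8): values are stored in
 * fixed point with 3 fractional bits, and each sample moves the average by
 * 1/8 of the difference: avg = (7*avg + sample) / 8.
 */
struct ewma_runtime {
	uint64_t internal;	/* fixed point, value << 3 */
};

static void ewma_runtime_init(struct ewma_runtime *e)
{
	e->internal = 0;
}

static void ewma_runtime_add(struct ewma_runtime *e, uint64_t val)
{
	uint64_t scaled = val << 3;

	/* the first sample seeds the average directly */
	e->internal = e->internal ? (e->internal * 7 + scaled) / 8 : scaled;
}

static uint64_t ewma_runtime_read(const struct ewma_runtime *e)
{
	return e->internal >> 3;	/* drop the fractional bits */
}
```

The extra fractional bits matter: without them, repeated small corrections such as `avg += (sample - avg) / 8` would be truncated to zero and the average would stick.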


@@ -192,6 +192,8 @@ void intel_engines_free(struct intel_gt *gt);
 int intel_engine_init_common(struct intel_engine_cs *engine);
 void intel_engine_cleanup_common(struct intel_engine_cs *engine);
 
+int intel_engine_resume(struct intel_engine_cs *engine);
+
 int intel_ring_submission_setup(struct intel_engine_cs *engine);
 
 int intel_engine_stop_cs(struct intel_engine_cs *engine);
@@ -303,26 +305,6 @@ intel_engine_find_active_request(struct intel_engine_cs *engine);
 
 u32 intel_engine_context_size(struct intel_gt *gt, u8 class);
 
-#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
-
-static inline bool inject_preempt_hang(struct intel_engine_execlists *execlists)
-{
-	if (!execlists->preempt_hang.inject_hang)
-		return false;
-
-	complete(&execlists->preempt_hang.completion);
-	return true;
-}
-
-#else
-
-static inline bool inject_preempt_hang(struct intel_engine_execlists *execlists)
-{
-	return false;
-}
-
-#endif
-
 void intel_engine_init_active(struct intel_engine_cs *engine,
 			      unsigned int subclass);
 #define ENGINE_PHYSICAL	0


@ -35,6 +35,7 @@
#include "intel_engine_user.h" #include "intel_engine_user.h"
#include "intel_gt.h" #include "intel_gt.h"
#include "intel_gt_requests.h" #include "intel_gt_requests.h"
#include "intel_gt_pm.h"
#include "intel_lrc.h" #include "intel_lrc.h"
#include "intel_reset.h" #include "intel_reset.h"
#include "intel_ring.h" #include "intel_ring.h"
@ -199,10 +200,10 @@ u32 intel_engine_context_size(struct intel_gt *gt, u8 class)
* out in the wash. * out in the wash.
*/ */
cxt_size = intel_uncore_read(uncore, CXT_SIZE) + 1; cxt_size = intel_uncore_read(uncore, CXT_SIZE) + 1;
DRM_DEBUG_DRIVER("gen%d CXT_SIZE = %d bytes [0x%08x]\n", drm_dbg(&gt->i915->drm,
INTEL_GEN(gt->i915), "gen%d CXT_SIZE = %d bytes [0x%08x]\n",
cxt_size * 64, INTEL_GEN(gt->i915), cxt_size * 64,
cxt_size - 1); cxt_size - 1);
return round_up(cxt_size * 64, PAGE_SIZE); return round_up(cxt_size * 64, PAGE_SIZE);
case 3: case 3:
case 2: case 2:
@ -392,8 +393,24 @@ void intel_engines_release(struct intel_gt *gt)
struct intel_engine_cs *engine; struct intel_engine_cs *engine;
enum intel_engine_id id; enum intel_engine_id id;
/*
* Before we release the resources held by engine, we must be certain
* that the HW is no longer accessing them -- having the GPU scribble
* to or read from a page being used for something else causes no end
* of fun.
*
* The GPU should be reset by this point, but assume the worst just
* in case we aborted before completely initialising the engines.
*/
GEM_BUG_ON(intel_gt_pm_is_awake(gt));
if (!INTEL_INFO(gt->i915)->gpu_reset_clobbers_display)
__intel_gt_reset(gt, ALL_ENGINES);
/* Decouple the backend; but keep the layout for late GPU resets */ /* Decouple the backend; but keep the layout for late GPU resets */
for_each_engine(engine, gt, id) { for_each_engine(engine, gt, id) {
intel_wakeref_wait_for_idle(&engine->wakeref);
GEM_BUG_ON(intel_engine_pm_is_awake(engine));
if (!engine->release) if (!engine->release)
continue; continue;
@@ -432,9 +449,9 @@ int intel_engines_init_mmio(struct intel_gt *gt)
 	unsigned int i;
 	int err;
 
-	WARN_ON(engine_mask == 0);
-	WARN_ON(engine_mask &
-		GENMASK(BITS_PER_TYPE(mask) - 1, I915_NUM_ENGINES));
+	drm_WARN_ON(&i915->drm, engine_mask == 0);
+	drm_WARN_ON(&i915->drm, engine_mask &
+		    GENMASK(BITS_PER_TYPE(mask) - 1, I915_NUM_ENGINES));
 
 	if (i915_inject_probe_failure(i915))
 		return -ENODEV;
@@ -455,7 +472,7 @@ int intel_engines_init_mmio(struct intel_gt *gt)
 	 * are added to the driver by a warning and disabling the forgotten
 	 * engines.
 	 */
-	if (WARN_ON(mask != engine_mask))
+	if (drm_WARN_ON(&i915->drm, mask != engine_mask))
 		device_info->engine_mask = mask;
 
 	RUNTIME_INFO(i915)->num_engines = hweight32(mask);
@@ -510,7 +527,6 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine,
 {
 	unsigned int flags;
 
-	flags = PIN_GLOBAL;
 	if (!HAS_LLC(engine->i915) && i915_ggtt_has_aperture(engine->gt->ggtt))
 		/*
 		 * On g33, we cannot place HWS above 256MiB, so
@@ -523,11 +539,11 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine,
 		 * above the mappable region (even though we never
 		 * actually map it).
 		 */
-		flags |= PIN_MAPPABLE;
+		flags = PIN_MAPPABLE;
 	else
-		flags |= PIN_HIGH;
+		flags = PIN_HIGH;
 
-	return i915_vma_pin(vma, 0, 0, flags);
+	return i915_ggtt_pin(vma, 0, flags);
 }
 
 static int init_status_page(struct intel_engine_cs *engine)
@@ -546,7 +562,8 @@ static int init_status_page(struct intel_engine_cs *engine)
 	 */
 	obj = i915_gem_object_create_internal(engine->i915, PAGE_SIZE);
 	if (IS_ERR(obj)) {
-		DRM_ERROR("Failed to allocate status page\n");
+		drm_err(&engine->i915->drm,
+			"Failed to allocate status page\n");
 		return PTR_ERR(obj);
 	}
@@ -614,15 +631,15 @@ static int engine_setup_common(struct intel_engine_cs *engine)
 struct measure_breadcrumb {
 	struct i915_request rq;
-	struct intel_timeline timeline;
 	struct intel_ring ring;
 	u32 cs[1024];
 };
 
-static int measure_breadcrumb_dw(struct intel_engine_cs *engine)
+static int measure_breadcrumb_dw(struct intel_context *ce)
 {
+	struct intel_engine_cs *engine = ce->engine;
 	struct measure_breadcrumb *frame;
-	int dw = -ENOMEM;
+	int dw;
 
 	GEM_BUG_ON(!engine->gt->scratch);
@@ -630,39 +647,27 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
 	if (!frame)
 		return -ENOMEM;
 
-	if (intel_timeline_init(&frame->timeline,
-				engine->gt,
-				engine->status_page.vma))
-		goto out_frame;
-
-	mutex_lock(&frame->timeline.mutex);
+	frame->rq.i915 = engine->i915;
+	frame->rq.engine = engine;
+	frame->rq.context = ce;
+	rcu_assign_pointer(frame->rq.timeline, ce->timeline);
 
 	frame->ring.vaddr = frame->cs;
 	frame->ring.size = sizeof(frame->cs);
 	frame->ring.effective_size = frame->ring.size;
 	intel_ring_update_space(&frame->ring);
-
-	frame->rq.i915 = engine->i915;
-	frame->rq.engine = engine;
 	frame->rq.ring = &frame->ring;
-	rcu_assign_pointer(frame->rq.timeline, &frame->timeline);
-
-	dw = intel_timeline_pin(&frame->timeline);
-	if (dw < 0)
-		goto out_timeline;
 
+	mutex_lock(&ce->timeline->mutex);
 	spin_lock_irq(&engine->active.lock);
 	dw = engine->emit_fini_breadcrumb(&frame->rq, frame->cs) - frame->cs;
 	spin_unlock_irq(&engine->active.lock);
+	mutex_unlock(&ce->timeline->mutex);
 
 	GEM_BUG_ON(dw & 1); /* RING_TAIL must be qword aligned */
 
-	intel_timeline_unpin(&frame->timeline);
-
-out_timeline:
-	mutex_unlock(&frame->timeline.mutex);
-	intel_timeline_fini(&frame->timeline);
-out_frame:
 	kfree(frame);
 	return dw;
 }
@@ -737,12 +742,6 @@ static int engine_init_common(struct intel_engine_cs *engine)
 	engine->set_default_submission(engine);
 
-	ret = measure_breadcrumb_dw(engine);
-	if (ret < 0)
-		return ret;
-
-	engine->emit_fini_breadcrumb_dw = ret;
-
 	/*
 	 * We may need to do things with the shrinker which
 	 * require us to immediately switch back to the default
@@ -755,9 +754,18 @@ static int engine_init_common(struct intel_engine_cs *engine)
 	if (IS_ERR(ce))
 		return PTR_ERR(ce);
 
+	ret = measure_breadcrumb_dw(ce);
+	if (ret < 0)
+		goto err_context;
+
+	engine->emit_fini_breadcrumb_dw = ret;
 	engine->kernel_context = ce;
 
 	return 0;
+
+err_context:
+	intel_context_put(ce);
+	return ret;
 }
 
 int intel_engines_init(struct intel_gt *gt)
@@ -824,6 +832,20 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
 	intel_wa_list_free(&engine->whitelist);
 }
 
+/**
+ * intel_engine_resume - re-initializes the HW state of the engine
+ * @engine: Engine to resume.
+ *
+ * Returns zero on success or an error code on failure.
+ */
+int intel_engine_resume(struct intel_engine_cs *engine)
+{
+	intel_engine_apply_workarounds(engine);
+	intel_engine_apply_whitelist(engine);
+
+	return engine->resume(engine);
+}
+
 u64 intel_engine_get_active_head(const struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *i915 = engine->i915;
@@ -982,6 +1004,12 @@ void intel_engine_get_instdone(const struct intel_engine_cs *engine,
 		instdone->slice_common =
 			intel_uncore_read(uncore, GEN7_SC_INSTDONE);
+		if (INTEL_GEN(i915) >= 12) {
+			instdone->slice_common_extra[0] =
+				intel_uncore_read(uncore, GEN12_SC_INSTDONE_EXTRA);
+			instdone->slice_common_extra[1] =
+				intel_uncore_read(uncore, GEN12_SC_INSTDONE_EXTRA2);
+		}
 		for_each_instdone_slice_subslice(i915, sseu, slice, subslice) {
 			instdone->sampler[slice][subslice] =
 				read_subslice_reg(engine, slice, subslice,
@@ -1276,8 +1304,14 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 	}
 
 	if (INTEL_GEN(dev_priv) >= 6) {
-		drm_printf(m, "\tRING_IMR: %08x\n",
+		drm_printf(m, "\tRING_IMR: 0x%08x\n",
 			   ENGINE_READ(engine, RING_IMR));
+		drm_printf(m, "\tRING_ESR: 0x%08x\n",
+			   ENGINE_READ(engine, RING_ESR));
+		drm_printf(m, "\tRING_EMR: 0x%08x\n",
+			   ENGINE_READ(engine, RING_EMR));
+		drm_printf(m, "\tRING_EIR: 0x%08x\n",
+			   ENGINE_READ(engine, RING_EIR));
 	}
 
 	addr = intel_engine_get_active_head(engine);
@@ -1342,7 +1376,7 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 	execlists_active_lock_bh(execlists);
 	rcu_read_lock();
 	for (port = execlists->active; (rq = *port); port++) {
-		char hdr[80];
+		char hdr[160];
 		int len;
 
 		len = snprintf(hdr, sizeof(hdr),
@@ -1352,10 +1386,12 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
 			struct intel_timeline *tl = get_timeline(rq);
 
 			len += snprintf(hdr + len, sizeof(hdr) - len,
-					"ring:{start:%08x, hwsp:%08x, seqno:%08x}, ",
+					"ring:{start:%08x, hwsp:%08x, seqno:%08x, runtime:%llums}, ",
 					i915_ggtt_offset(rq->ring->vma),
 					tl ? tl->hwsp_offset : 0,
-					hwsp_seqno(rq));
+					hwsp_seqno(rq),
+					DIV_ROUND_CLOSEST_ULL(intel_context_get_total_runtime_ns(rq->context),
+							      1000 * 1000));
 
 			if (tl)
 				intel_timeline_put(tl);
@@ -1657,6 +1693,23 @@ intel_engine_find_active_request(struct intel_engine_cs *engine)
 	 * we only care about the snapshot of this moment.
 	 */
 	lockdep_assert_held(&engine->active.lock);
+
+	rcu_read_lock();
+	request = execlists_active(&engine->execlists);
+	if (request) {
+		struct intel_timeline *tl = request->context->timeline;
+
+		list_for_each_entry_from_reverse(request, &tl->requests, link) {
+			if (i915_request_completed(request))
+				break;
+
+			active = request;
+		}
+	}
+	rcu_read_unlock();
+	if (active)
+		return active;
+
 	list_for_each_entry(request, &engine->active.requests, sched.link) {
 		if (i915_request_completed(request))
 			continue;


@@ -180,7 +180,7 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 	struct i915_sched_attr attr = { .priority = I915_PRIORITY_BARRIER };
 	struct intel_context *ce = engine->kernel_context;
 	struct i915_request *rq;
-	int err = 0;
+	int err;
 
 	if (!intel_engine_has_preemption(engine))
 		return -ENODEV;
@@ -188,8 +188,10 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 	if (!intel_engine_pm_get_if_awake(engine))
 		return 0;
 
-	if (mutex_lock_interruptible(&ce->timeline->mutex))
+	if (mutex_lock_interruptible(&ce->timeline->mutex)) {
+		err = -EINTR;
 		goto out_rpm;
+	}
 
 	intel_context_enter(ce);
 	rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
@@ -204,6 +206,8 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 	__i915_request_commit(rq);
 	__i915_request_queue(rq, &attr);
+	GEM_BUG_ON(rq->sched.attr.priority < I915_PRIORITY_BARRIER);
+	err = 0;
 
 out_unlock:
 	mutex_unlock(&ce->timeline->mutex);


@@ -112,7 +112,7 @@ __queue_and_release_pm(struct i915_request *rq,
 {
 	struct intel_gt_timelines *timelines = &engine->gt->timelines;
 
-	ENGINE_TRACE(engine, "\n");
+	ENGINE_TRACE(engine, "parking\n");
 
 	/*
 	 * We have to serialise all potential retirement paths with our
@@ -249,7 +249,7 @@ static int __engine_park(struct intel_wakeref *wf)
 	if (!switch_to_kernel_context(engine))
 		return -EBUSY;
 
-	ENGINE_TRACE(engine, "\n");
+	ENGINE_TRACE(engine, "parked\n");
 
 	call_idle_barriers(engine); /* cleanup after wedging */
