
Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull drm updates from Dave Airlie:
 "Been a bit busy, first week of kids school, and waiting on other trees
  to go in before I could send this, so its a bit later than I'd
  normally like.

  Highlights:
   - core:
      timestamp fixes, lots of misc cleanups
   - new drivers:
      bochs virtual vga
   - vmwgfx:
      major overhaul for their nextgen virt gpu.
   - i915:
      runtime D3 on HSW, watermark fixes, power well work, fbc fixes,
      bdw is no longer prelim.
   - nouveau:
      gk110/208 acceleration, more pm groundwork, old overlay support
   - radeon:
      dpm rework and clockgating for CIK, pci config reset, big endian
      fixes
   - tegra:
      panel support and DSI support, build as module, prime.
   - armada, omap, gma500, rcar, exynos, mgag200, cirrus, ast:
      fixes
   - msm:
      hdmi support for mdp5"

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (595 commits)
  drm/nouveau: resume display if any later suspend bits fail
  drm/nouveau: fix lock unbalance in nouveau_crtc_page_flip
  drm/nouveau: implement hooks needed for drm vblank timestamping support
  drm/nouveau/disp: add a method to fetch info needed by drm vblank timestamping
  drm/nv50: fill in crtc mode struct members from crtc_mode_fixup
  drm/radeon/dce8: workaround for atom BlankCrtc table
  drm/radeon/DCE4+: clear bios scratch dpms bit (v2)
  drm/radeon: set si_notify_smc_display_change properly
  drm/radeon: fix DAC interrupt handling on DCE5+
  drm/radeon: clean up active vram sizing
  drm/radeon: skip async dma init on r6xx
  drm/radeon/runpm: don't runtime suspend non-PX cards
  drm/radeon: add ring to fence trace functions
  drm/radeon: add missing trace point
  drm/radeon: fix VMID use tracking
  drm: ast,cirrus,mgag200: use drm_can_sleep
  drm/gma500: Lock struct_mutex around cursor updates
  drm/i915: Fix the offset issue for the stolen GEM objects
  DRM: armada: fix missing DRM_KMS_FB_HELPER select
  drm/i915: Decouple GPU error reporting from ring initialisation
  ...
Linus Torvalds 2014-01-29 20:49:12 -08:00
commit 9b0cd304f2
512 changed files with 35642 additions and 11989 deletions


@@ -118,6 +118,9 @@ of the following host1x client modules:
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- dc
- nvidia,head: The number of the display controller head. This is used to
set up the various types of output to receive video data from the given
head.
Each display controller node has a child node, named "rgb", that represents
the RGB output associated with the controller. It can take the following
@@ -125,6 +128,7 @@ of the following host1x client modules:
- nvidia,ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing
- nvidia,hpd-gpio: specifies a GPIO used for hotplug detection
- nvidia,edid: supplies a binary EDID blob
- nvidia,panel: phandle of a display panel
- hdmi: High Definition Multimedia Interface
@@ -149,6 +153,7 @@ of the following host1x client modules:
- nvidia,ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing
- nvidia,hpd-gpio: specifies a GPIO used for hotplug detection
- nvidia,edid: supplies a binary EDID blob
- nvidia,panel: phandle of a display panel
- tvo: TV encoder output
@@ -169,11 +174,21 @@ of the following host1x client modules:
- clock-names: Must include the following entries:
- dsi
This MUST be the first entry.
- lp
- parent
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- dsi
- nvidia,mipi-calibrate: Should contain a phandle and a specifier specifying
which pads are used by this DSI output and need to be calibrated. See also
../mipi/nvidia,tegra114-mipi.txt.
Optional properties:
- nvidia,ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing
- nvidia,hpd-gpio: specifies a GPIO used for hotplug detection
- nvidia,edid: supplies a binary EDID blob
- nvidia,panel: phandle of a display panel
Example:
@@ -253,7 +268,7 @@ Example:
interrupts = <0 73 0x04>;
clocks = <&tegra_car TEGRA20_CLK_DISP1>,
<&tegra_car TEGRA20_CLK_PLL_P>;
clock-names = "disp1", "parent";
clock-names = "dc", "parent";
resets = <&tegra_car 27>;
reset-names = "dc";
@@ -268,7 +283,7 @@ Example:
interrupts = <0 74 0x04>;
clocks = <&tegra_car TEGRA20_CLK_DISP2>,
<&tegra_car TEGRA20_CLK_PLL_P>;
clock-names = "disp2", "parent";
clock-names = "dc", "parent";
resets = <&tegra_car 26>;
reset-names = "dc";


@@ -0,0 +1,98 @@
MIPI DSI (Display Serial Interface) busses
==========================================
The MIPI Display Serial Interface specifies a serial bus and a protocol for
communication between a host and up to four peripherals. This document will
define the syntax used to represent a DSI bus in a device tree.
This document describes DSI bus-specific properties only or defines existing
standard properties in the context of the DSI bus.
Each DSI host provides a DSI bus. The DSI host controller's node contains a
set of properties that characterize the bus. Child nodes describe individual
peripherals on that bus.
The following assumes that only a single peripheral is connected to a DSI
host. Experience shows that this is true for the large majority of setups.
DSI host
--------
In addition to the standard properties and those defined by the parent bus of
a DSI host, the following properties apply to a node representing a DSI host.
Required properties:
- #address-cells: The number of cells required to represent an address on the
bus. DSI peripherals are addressed using a 2-bit virtual channel number, so
a maximum of 4 devices can be addressed on a single bus. Hence the value of
this property should be 1.
- #size-cells: Should be 0. There are cases where it makes sense to use a
different value here. See below.
DSI peripheral
--------------
Peripherals are represented as child nodes of the DSI host's node. Properties
described here apply to all DSI peripherals, but individual bindings may want
to define additional, device-specific properties.
Required properties:
- reg: The virtual channel number of a DSI peripheral. Must be in the range
from 0 to 3.
Some DSI peripherals respond to more than a single virtual channel. In that
case two alternative representations can be chosen:
- The reg property can take multiple entries, one for each virtual channel
that the peripheral responds to.
- If the virtual channels that a peripheral responds to are consecutive, the
#size-cells property can be set to 1. The first cell of each entry in the
reg property is the number of the first virtual channel and the second cell
is the number of consecutive virtual channels.
Example
-------
dsi-host {
...
#address-cells = <1>;
#size-cells = <0>;
/* peripheral responds to virtual channel 0 */
peripheral@0 {
compatible = "...";
reg = <0>;
};
...
};
dsi-host {
...
#address-cells = <1>;
#size-cells = <0>;
/* peripheral responds to virtual channels 0 and 2 */
peripheral@0 {
compatible = "...";
reg = <0>, <2>;
};
...
};
dsi-host {
...
#address-cells = <1>;
#size-cells = <1>;
/* peripheral responds to virtual channels 1, 2 and 3 */
peripheral@1 {
compatible = "...";
reg = <1 3>;
};
...
};


@@ -0,0 +1,41 @@
NVIDIA Tegra MIPI pad calibration controller
Required properties:
- compatible: "nvidia,tegra<chip>-mipi"
- reg: Physical base address and length of the controller's registers.
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- mipi-cal
- #nvidia,mipi-calibrate-cells: Should be 1. The cell is a bitmask of the pads
that need to be calibrated for a given device.
User nodes need to contain an nvidia,mipi-calibrate property that has a
phandle to refer to the calibration controller node and a bitmask of the pads
that need to be calibrated.
Example:
mipi: mipi@700e3000 {
compatible = "nvidia,tegra114-mipi";
reg = <0x700e3000 0x100>;
clocks = <&tegra_car TEGRA114_CLK_MIPI_CAL>;
clock-names = "mipi-cal";
#nvidia,mipi-calibrate-cells = <1>;
};
...
host1x@50000000 {
...
dsi@54300000 {
...
nvidia,mipi-calibrate = <&mipi 0x060>;
...
};
...
};


@@ -0,0 +1,7 @@
AU Optronics Corporation 10.1" WSVGA TFT LCD panel
Required properties:
- compatible: should be "auo,b101aw03"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,7 @@
Chunghwa Picture Tubes Ltd. 10.1" WXGA TFT LCD panel
Required properties:
- compatible: should be "chunghwa,claa101wa01a"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,7 @@
Chunghwa Picture Tubes Ltd. 10.1" WXGA TFT LCD panel
Required properties:
- compatible: should be "chunghwa,claa101wb03"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,7 @@
Panasonic Corporation 10.1" WUXGA TFT LCD panel
Required properties:
- compatible: should be "panasonic,vvx10f004b00"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,7 @@
Samsung Electronics 10.1" WSVGA TFT LCD panel
Required properties:
- compatible: should be "samsung,ltn101nt05"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,21 @@
Simple display panel
Required properties:
- power-supply: regulator to provide the supply voltage
Optional properties:
- ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing
- enable-gpios: GPIO pin to enable or disable the panel
- backlight: phandle of the backlight device attached to the panel
Example:
panel: panel {
compatible = "cptt,claa101wb01";
ddc-i2c-bus = <&panelddc>;
power-supply = <&vdd_pnl_reg>;
enable-gpios = <&gpio 90 0>;
backlight = <&backlight>;
};


@@ -49,7 +49,7 @@ obj-$(CONFIG_GPIO_TB0219) += tb0219.o
obj-$(CONFIG_TELCLOCK) += tlclk.o
obj-$(CONFIG_MWAVE) += mwave/
obj-$(CONFIG_AGP) += agp/
obj-y += agp/
obj-$(CONFIG_PCMCIA) += pcmcia/
obj-$(CONFIG_HANGCHECK_TIMER) += hangcheck-timer.o


@@ -68,6 +68,7 @@ config AGP_AMD64
config AGP_INTEL
tristate "Intel 440LX/BX/GX, I8xx and E7x05 chipset support"
depends on AGP && X86
select INTEL_GTT
help
This option gives you AGP support for the GLX component of X
on Intel 440LX/BX/GX, 815, 820, 830, 840, 845, 850, 860, 875,
@@ -155,3 +156,7 @@ config AGP_SGI_TIOCA
This option gives you AGP GART support for the SGI TIO chipset
for IA64 processors.
config INTEL_GTT
tristate
depends on X86 && PCI


@@ -13,7 +13,7 @@ obj-$(CONFIG_AGP_HP_ZX1) += hp-agp.o
obj-$(CONFIG_AGP_PARISC) += parisc-agp.o
obj-$(CONFIG_AGP_I460) += i460-agp.o
obj-$(CONFIG_AGP_INTEL) += intel-agp.o
obj-$(CONFIG_AGP_INTEL) += intel-gtt.o
obj-$(CONFIG_INTEL_GTT) += intel-gtt.o
obj-$(CONFIG_AGP_NVIDIA) += nvidia-agp.o
obj-$(CONFIG_AGP_SGI_TIOCA) += sgi-agp.o
obj-$(CONFIG_AGP_SIS) += sis-agp.o


@@ -14,9 +14,6 @@
#include "intel-agp.h"
#include <drm/intel-gtt.h>
int intel_agp_enabled;
EXPORT_SYMBOL(intel_agp_enabled);
static int intel_fetch_size(void)
{
int i;
@@ -806,8 +803,6 @@ static int agp_intel_probe(struct pci_dev *pdev,
found_gmch:
pci_set_drvdata(pdev, bridge);
err = agp_add_bridge(bridge);
if (!err)
intel_agp_enabled = 1;
return err;
}


@@ -94,6 +94,7 @@ static struct _intel_private {
#define IS_IRONLAKE intel_private.driver->is_ironlake
#define HAS_PGTBL_EN intel_private.driver->has_pgtbl_enable
#if IS_ENABLED(CONFIG_AGP_INTEL)
static int intel_gtt_map_memory(struct page **pages,
unsigned int num_entries,
struct sg_table *st)
@@ -168,6 +169,7 @@ static void i8xx_destroy_pages(struct page *page)
__free_pages(page, 2);
atomic_dec(&agp_bridge->current_memory_agp);
}
#endif
#define I810_GTT_ORDER 4
static int i810_setup(void)
@@ -208,6 +210,7 @@ static void i810_cleanup(void)
free_gatt_pages(intel_private.i81x_gtt_table, I810_GTT_ORDER);
}
#if IS_ENABLED(CONFIG_AGP_INTEL)
static int i810_insert_dcache_entries(struct agp_memory *mem, off_t pg_start,
int type)
{
@@ -288,6 +291,7 @@ static void intel_i810_free_by_type(struct agp_memory *curr)
}
kfree(curr);
}
#endif
static int intel_gtt_setup_scratch_page(void)
{
@@ -645,7 +649,9 @@ static int intel_gtt_init(void)
return -ENOMEM;
}
#if IS_ENABLED(CONFIG_AGP_INTEL)
global_cache_flush(); /* FIXME: ? */
#endif
intel_private.stolen_size = intel_gtt_stolen_size();
@@ -666,6 +672,7 @@ static int intel_gtt_init(void)
return 0;
}
#if IS_ENABLED(CONFIG_AGP_INTEL)
static int intel_fake_agp_fetch_size(void)
{
int num_sizes = ARRAY_SIZE(intel_fake_agp_sizes);
@@ -684,6 +691,7 @@ static int intel_fake_agp_fetch_size(void)
return 0;
}
#endif
static void i830_cleanup(void)
{
@@ -795,6 +803,7 @@ static int i830_setup(void)
return 0;
}
#if IS_ENABLED(CONFIG_AGP_INTEL)
static int intel_fake_agp_create_gatt_table(struct agp_bridge_data *bridge)
{
agp_bridge->gatt_table_real = NULL;
@@ -819,6 +828,7 @@ static int intel_fake_agp_configure(void)
return 0;
}
#endif
static bool i830_check_flags(unsigned int flags)
{
@@ -857,6 +867,7 @@ void intel_gtt_insert_sg_entries(struct sg_table *st,
}
EXPORT_SYMBOL(intel_gtt_insert_sg_entries);
#if IS_ENABLED(CONFIG_AGP_INTEL)
static void intel_gtt_insert_pages(unsigned int first_entry,
unsigned int num_entries,
struct page **pages,
@@ -922,6 +933,7 @@ out_err:
mem->is_flushed = true;
return ret;
}
#endif
void intel_gtt_clear_range(unsigned int first_entry, unsigned int num_entries)
{
@@ -935,6 +947,7 @@ void intel_gtt_clear_range(unsigned int first_entry, unsigned int num_entries)
}
EXPORT_SYMBOL(intel_gtt_clear_range);
#if IS_ENABLED(CONFIG_AGP_INTEL)
static int intel_fake_agp_remove_entries(struct agp_memory *mem,
off_t pg_start, int type)
{
@@ -976,6 +989,7 @@ static struct agp_memory *intel_fake_agp_alloc_by_type(size_t pg_count,
/* always return NULL for other allocation types for now */
return NULL;
}
#endif
static int intel_alloc_chipset_flush_resource(void)
{
@@ -1129,6 +1143,7 @@ static int i9xx_setup(void)
return 0;
}
#if IS_ENABLED(CONFIG_AGP_INTEL)
static const struct agp_bridge_driver intel_fake_agp_driver = {
.owner = THIS_MODULE,
.size_type = FIXED_APER_SIZE,
@@ -1150,6 +1165,7 @@ static const struct agp_bridge_driver intel_fake_agp_driver = {
.agp_destroy_page = agp_generic_destroy_page,
.agp_destroy_pages = agp_generic_destroy_pages,
};
#endif
static const struct intel_gtt_driver i81x_gtt_driver = {
.gen = 1,
@@ -1367,11 +1383,13 @@ int intel_gmch_probe(struct pci_dev *bridge_pdev, struct pci_dev *gpu_pdev,
intel_private.refcount++;
#if IS_ENABLED(CONFIG_AGP_INTEL)
if (bridge) {
bridge->driver = &intel_fake_agp_driver;
bridge->dev_private_data = &intel_private;
bridge->dev = bridge_pdev;
}
#endif
intel_private.bridge_dev = pci_dev_get(bridge_pdev);


@@ -20,6 +20,10 @@ menuconfig DRM
details. You should also select and configure AGP
(/dev/agpgart) support if it is available for your platform.
config DRM_MIPI_DSI
bool
depends on DRM
config DRM_USB
tristate
depends on DRM
@@ -188,6 +192,10 @@ source "drivers/gpu/drm/tilcdc/Kconfig"
source "drivers/gpu/drm/qxl/Kconfig"
source "drivers/gpu/drm/bochs/Kconfig"
source "drivers/gpu/drm/msm/Kconfig"
source "drivers/gpu/drm/tegra/Kconfig"
source "drivers/gpu/drm/panel/Kconfig"


@@ -18,6 +18,7 @@ drm-y := drm_auth.o drm_buffer.o drm_bufs.o drm_cache.o \
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
drm-$(CONFIG_PCI) += ati_pcigart.o
drm-$(CONFIG_DRM_PANEL) += drm_panel.o
drm-usb-y := drm_usb.o
@@ -31,6 +32,7 @@ obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o
CFLAGS_drm_trace_points.o := -I$(src)
obj-$(CONFIG_DRM) += drm.o
obj-$(CONFIG_DRM_MIPI_DSI) += drm_mipi_dsi.o
obj-$(CONFIG_DRM_USB) += drm_usb.o
obj-$(CONFIG_DRM_TTM) += ttm/
obj-$(CONFIG_DRM_TDFX) += tdfx/
@@ -56,6 +58,8 @@ obj-$(CONFIG_DRM_SHMOBILE) +=shmobile/
obj-$(CONFIG_DRM_OMAP) += omapdrm/
obj-$(CONFIG_DRM_TILCDC) += tilcdc/
obj-$(CONFIG_DRM_QXL) += qxl/
obj-$(CONFIG_DRM_BOCHS) += bochs/
obj-$(CONFIG_DRM_MSM) += msm/
obj-$(CONFIG_DRM_TEGRA) += tegra/
obj-y += i2c/
obj-y += panel/


@@ -5,6 +5,7 @@ config DRM_ARMADA
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select DRM_KMS_HELPER
select DRM_KMS_FB_HELPER
help
Support the "LCD" controllers found on the Marvell Armada 510
devices. There are two controllers on the device, each controller


@@ -128,6 +128,7 @@ static int armada_drm_load(struct drm_device *dev, unsigned long flags)
return -ENOMEM;
}
platform_set_drvdata(dev->platformdev, dev);
dev->dev_private = priv;
/* Get the implementation specific driver data. */
@@ -381,7 +382,7 @@ static int armada_drm_probe(struct platform_device *pdev)
static int armada_drm_remove(struct platform_device *pdev)
{
drm_platform_exit(&armada_drm_driver, pdev);
drm_put_dev(platform_get_drvdata(pdev));
return 0;
}


@@ -65,7 +65,7 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
* then the BO is being moved and we should
* store up the damage until later.
*/
if (!in_interrupt())
if (!drm_can_sleep())
ret = ast_bo_reserve(bo, true);
if (ret) {
if (ret != -EBUSY)


@@ -189,53 +189,6 @@ static int ast_get_dram_info(struct drm_device *dev)
return 0;
}
uint32_t ast_get_max_dclk(struct drm_device *dev, int bpp)
{
struct ast_private *ast = dev->dev_private;
uint32_t dclk, jreg;
uint32_t dram_bus_width, mclk, dram_bandwidth, actual_dram_bandwidth, dram_efficency = 500;
dram_bus_width = ast->dram_bus_width;
mclk = ast->mclk;
if (ast->chip == AST2100 ||
ast->chip == AST1100 ||
ast->chip == AST2200 ||
ast->chip == AST2150 ||
ast->dram_bus_width == 16)
dram_efficency = 600;
else if (ast->chip == AST2300)
dram_efficency = 400;
dram_bandwidth = mclk * dram_bus_width * 2 / 8;
actual_dram_bandwidth = dram_bandwidth * dram_efficency / 1000;
if (ast->chip == AST1180)
dclk = actual_dram_bandwidth / ((bpp + 1) / 8);
else {
jreg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
if ((jreg & 0x08) && (ast->chip == AST2000))
dclk = actual_dram_bandwidth / ((bpp + 1 + 16) / 8);
else if ((jreg & 0x08) && (bpp == 8))
dclk = actual_dram_bandwidth / ((bpp + 1 + 24) / 8);
else
dclk = actual_dram_bandwidth / ((bpp + 1) / 8);
}
if (ast->chip == AST2100 ||
ast->chip == AST2200 ||
ast->chip == AST2300 ||
ast->chip == AST1180) {
if (dclk > 200)
dclk = 200;
} else {
if (dclk > 165)
dclk = 165;
}
return dclk;
}
static void ast_user_framebuffer_destroy(struct drm_framebuffer *fb)
{
struct ast_framebuffer *ast_fb = to_ast_framebuffer(fb);
@@ -449,7 +402,7 @@ int ast_dumb_create(struct drm_file *file,
return 0;
}
void ast_bo_unref(struct ast_bo **bo)
static void ast_bo_unref(struct ast_bo **bo)
{
struct ttm_buffer_object *tbo;


@@ -404,7 +404,7 @@ static void ast_set_ext_reg(struct drm_crtc *crtc, struct drm_display_mode *mode
}
}
void ast_set_sync_reg(struct drm_device *dev, struct drm_display_mode *mode,
static void ast_set_sync_reg(struct drm_device *dev, struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode)
{
struct ast_private *ast = dev->dev_private;
@@ -415,7 +415,7 @@ void ast_set_sync_reg(struct drm_device *dev, struct drm_display_mode *mode,
ast_io_write8(ast, AST_IO_MISC_PORT_WRITE, jreg);
}
bool ast_set_dac_reg(struct drm_crtc *crtc, struct drm_display_mode *mode,
static bool ast_set_dac_reg(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode)
{
switch (crtc->fb->bits_per_pixel) {
@@ -427,7 +427,7 @@ bool ast_set_dac_reg(struct drm_crtc *crtc, struct drm_display_mode *mode,
return true;
}
void ast_set_start_address_crt1(struct drm_crtc *crtc, unsigned offset)
static void ast_set_start_address_crt1(struct drm_crtc *crtc, unsigned offset)
{
struct ast_private *ast = crtc->dev->dev_private;
u32 addr;
@@ -623,7 +623,7 @@ static const struct drm_crtc_funcs ast_crtc_funcs = {
.destroy = ast_crtc_destroy,
};
int ast_crtc_init(struct drm_device *dev)
static int ast_crtc_init(struct drm_device *dev)
{
struct ast_crtc *crtc;
int i;
@@ -710,7 +710,7 @@ static const struct drm_encoder_helper_funcs ast_enc_helper_funcs = {
.mode_set = ast_encoder_mode_set,
};
int ast_encoder_init(struct drm_device *dev)
static int ast_encoder_init(struct drm_device *dev)
{
struct ast_encoder *ast_encoder;
@@ -777,7 +777,7 @@ static const struct drm_connector_funcs ast_connector_funcs = {
.destroy = ast_connector_destroy,
};
int ast_connector_init(struct drm_device *dev)
static int ast_connector_init(struct drm_device *dev)
{
struct ast_connector *ast_connector;
struct drm_connector *connector;
@@ -810,7 +810,7 @@ int ast_connector_init(struct drm_device *dev)
}
/* allocate cursor cache and pin at start of VRAM */
int ast_cursor_init(struct drm_device *dev)
static int ast_cursor_init(struct drm_device *dev)
{
struct ast_private *ast = dev->dev_private;
int size;
@@ -847,7 +847,7 @@ fail:
return ret;
}
void ast_cursor_fini(struct drm_device *dev)
static void ast_cursor_fini(struct drm_device *dev)
{
struct ast_private *ast = dev->dev_private;
ttm_bo_kunmap(&ast->cache_kmap);
@@ -965,7 +965,7 @@ static void ast_i2c_destroy(struct ast_i2c_chan *i2c)
kfree(i2c);
}
void ast_show_cursor(struct drm_crtc *crtc)
static void ast_show_cursor(struct drm_crtc *crtc)
{
struct ast_private *ast = crtc->dev->dev_private;
u8 jreg;
@@ -976,7 +976,7 @@ void ast_show_cursor(struct drm_crtc *crtc)
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg);
}
void ast_hide_cursor(struct drm_crtc *crtc)
static void ast_hide_cursor(struct drm_crtc *crtc)
{
struct ast_private *ast = crtc->dev->dev_private;
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, 0x00);


@@ -80,7 +80,7 @@ static int ast_ttm_global_init(struct ast_private *ast)
return 0;
}
void
static void
ast_ttm_global_release(struct ast_private *ast)
{
if (ast->ttm.mem_global_ref.release == NULL)
@@ -102,7 +102,7 @@ static void ast_bo_ttm_destroy(struct ttm_buffer_object *tbo)
kfree(bo);
}
bool ast_ttm_bo_is_ast_bo(struct ttm_buffer_object *bo)
static bool ast_ttm_bo_is_ast_bo(struct ttm_buffer_object *bo)
{
if (bo->destroy == &ast_bo_ttm_destroy)
return true;
@@ -208,7 +208,7 @@ static struct ttm_backend_func ast_tt_backend_func = {
};
struct ttm_tt *ast_ttm_tt_create(struct ttm_bo_device *bdev,
static struct ttm_tt *ast_ttm_tt_create(struct ttm_bo_device *bdev,
unsigned long size, uint32_t page_flags,
struct page *dummy_read_page)
{


@@ -0,0 +1,11 @@
config DRM_BOCHS
tristate "DRM Support for bochs dispi vga interface (qemu stdvga)"
depends on DRM && PCI
select DRM_KMS_HELPER
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
select DRM_TTM
help
Choose this option for qemu.
If M is selected the module will be called bochs-drm.


@@ -0,0 +1,4 @@
ccflags-y := -Iinclude/drm
bochs-drm-y := bochs_drv.o bochs_mm.o bochs_kms.o bochs_fbdev.o bochs_hw.o
obj-$(CONFIG_DRM_BOCHS) += bochs-drm.o


@@ -0,0 +1,164 @@
#include <linux/io.h>
#include <linux/fb.h>
#include <drm/drmP.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
#include <ttm/ttm_bo_driver.h>
#include <ttm/ttm_page_alloc.h>
/* ---------------------------------------------------------------------- */
#define VBE_DISPI_IOPORT_INDEX 0x01CE
#define VBE_DISPI_IOPORT_DATA 0x01CF
#define VBE_DISPI_INDEX_ID 0x0
#define VBE_DISPI_INDEX_XRES 0x1
#define VBE_DISPI_INDEX_YRES 0x2
#define VBE_DISPI_INDEX_BPP 0x3
#define VBE_DISPI_INDEX_ENABLE 0x4
#define VBE_DISPI_INDEX_BANK 0x5
#define VBE_DISPI_INDEX_VIRT_WIDTH 0x6
#define VBE_DISPI_INDEX_VIRT_HEIGHT 0x7
#define VBE_DISPI_INDEX_X_OFFSET 0x8
#define VBE_DISPI_INDEX_Y_OFFSET 0x9
#define VBE_DISPI_INDEX_VIDEO_MEMORY_64K 0xa
#define VBE_DISPI_ID0 0xB0C0
#define VBE_DISPI_ID1 0xB0C1
#define VBE_DISPI_ID2 0xB0C2
#define VBE_DISPI_ID3 0xB0C3
#define VBE_DISPI_ID4 0xB0C4
#define VBE_DISPI_ID5 0xB0C5
#define VBE_DISPI_DISABLED 0x00
#define VBE_DISPI_ENABLED 0x01
#define VBE_DISPI_GETCAPS 0x02
#define VBE_DISPI_8BIT_DAC 0x20
#define VBE_DISPI_LFB_ENABLED 0x40
#define VBE_DISPI_NOCLEARMEM 0x80
/* ---------------------------------------------------------------------- */
enum bochs_types {
BOCHS_QEMU_STDVGA,
BOCHS_UNKNOWN,
};
struct bochs_framebuffer {
struct drm_framebuffer base;
struct drm_gem_object *obj;
};
struct bochs_device {
/* hw */
void __iomem *mmio;
int ioports;
void __iomem *fb_map;
unsigned long fb_base;
unsigned long fb_size;
/* mode */
u16 xres;
u16 yres;
u16 yres_virtual;
u32 stride;
u32 bpp;
/* drm */
struct drm_device *dev;
struct drm_crtc crtc;
struct drm_encoder encoder;
struct drm_connector connector;
bool mode_config_initialized;
/* ttm */
struct {
struct drm_global_reference mem_global_ref;
struct ttm_bo_global_ref bo_global_ref;
struct ttm_bo_device bdev;
bool initialized;
} ttm;
/* fbdev */
struct {
struct bochs_framebuffer gfb;
struct drm_fb_helper helper;
int size;
int x1, y1, x2, y2; /* dirty rect */
spinlock_t dirty_lock;
bool initialized;
} fb;
};
#define to_bochs_framebuffer(x) container_of(x, struct bochs_framebuffer, base)
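/* a ttm buffer object with the gem object embedded, so a gem handle
 * and the ttm bo always refer to the same backing storage */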
struct bochs_bo {
struct ttm_buffer_object bo;
struct ttm_placement placement;
struct ttm_bo_kmap_obj kmap;
struct drm_gem_object gem;
u32 placements[3];
int pin_count;
};
static inline struct bochs_bo *bochs_bo(struct ttm_buffer_object *bo)
{
return container_of(bo, struct bochs_bo, bo);
}
static inline struct bochs_bo *gem_to_bochs_bo(struct drm_gem_object *gem)
{
return container_of(gem, struct bochs_bo, gem);
}
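/* mmap offsets at or above this mark belong to ttm; bochs_mmap()
 * passes anything below it to the legacy drm_mmap() path */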
#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
static inline u64 bochs_bo_mmap_offset(struct bochs_bo *bo)
{
return drm_vma_node_offset_addr(&bo->bo.vma_node);
}
/* ---------------------------------------------------------------------- */
/* bochs_hw.c */
int bochs_hw_init(struct drm_device *dev, uint32_t flags);
void bochs_hw_fini(struct drm_device *dev);
void bochs_hw_setmode(struct bochs_device *bochs,
struct drm_display_mode *mode);
void bochs_hw_setbase(struct bochs_device *bochs,
int x, int y, u64 addr);
/* bochs_mm.c */
int bochs_mm_init(struct bochs_device *bochs);
void bochs_mm_fini(struct bochs_device *bochs);
int bochs_mmap(struct file *filp, struct vm_area_struct *vma);
int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel,
struct drm_gem_object **obj);
int bochs_gem_init_object(struct drm_gem_object *obj);
void bochs_gem_free_object(struct drm_gem_object *obj);
int bochs_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args);
int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset);
int bochs_framebuffer_init(struct drm_device *dev,
struct bochs_framebuffer *gfb,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_gem_object *obj);
int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag, u64 *gpu_addr);
int bochs_bo_unpin(struct bochs_bo *bo);
extern const struct drm_mode_config_funcs bochs_mode_funcs;
/* bochs_kms.c */
int bochs_kms_init(struct bochs_device *bochs);
void bochs_kms_fini(struct bochs_device *bochs);
/* bochs_fbdev.c */
int bochs_fbdev_init(struct bochs_device *bochs);
void bochs_fbdev_fini(struct bochs_device *bochs);


@@ -0,0 +1,178 @@
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/slab.h>
#include "bochs.h"
static bool enable_fbdev = true;
module_param_named(fbdev, enable_fbdev, bool, 0444);
MODULE_PARM_DESC(fbdev, "register fbdev device");
/* ---------------------------------------------------------------------- */
/* drm interface */
static int bochs_unload(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
bochs_fbdev_fini(bochs);
bochs_kms_fini(bochs);
bochs_mm_fini(bochs);
bochs_hw_fini(dev);
kfree(bochs);
dev->dev_private = NULL;
return 0;
}
static int bochs_load(struct drm_device *dev, unsigned long flags)
{
struct bochs_device *bochs;
int ret;
bochs = kzalloc(sizeof(*bochs), GFP_KERNEL);
if (bochs == NULL)
return -ENOMEM;
dev->dev_private = bochs;
bochs->dev = dev;
ret = bochs_hw_init(dev, flags);
if (ret)
goto err;
ret = bochs_mm_init(bochs);
if (ret)
goto err;
ret = bochs_kms_init(bochs);
if (ret)
goto err;
if (enable_fbdev)
bochs_fbdev_init(bochs);
return 0;
err:
bochs_unload(dev);
return ret;
}
static const struct file_operations bochs_fops = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
#endif
.poll = drm_poll,
.read = drm_read,
.llseek = no_llseek,
.mmap = bochs_mmap,
};
static struct drm_driver bochs_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET,
.load = bochs_load,
.unload = bochs_unload,
.fops = &bochs_fops,
.name = "bochs-drm",
.desc = "bochs dispi vga interface (qemu stdvga)",
.date = "20130925",
.major = 1,
.minor = 0,
.gem_free_object = bochs_gem_free_object,
.dumb_create = bochs_dumb_create,
.dumb_map_offset = bochs_dumb_mmap_offset,
.dumb_destroy = drm_gem_dumb_destroy,
};
/* ---------------------------------------------------------------------- */
/* pci interface */
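/* a firmware framebuffer (e.g. vesafb or efifb) may already own the
 * memory bar; kick out any such conflicting driver before probing */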
static int bochs_kick_out_firmware_fb(struct pci_dev *pdev)
{
struct apertures_struct *ap;
ap = alloc_apertures(1);
if (!ap)
return -ENOMEM;
ap->ranges[0].base = pci_resource_start(pdev, 0);
ap->ranges[0].size = pci_resource_len(pdev, 0);
remove_conflicting_framebuffers(ap, "bochsdrmfb", false);
kfree(ap);
return 0;
}
static int bochs_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
int ret;
ret = bochs_kick_out_firmware_fb(pdev);
if (ret)
return ret;
return drm_get_pci_dev(pdev, ent, &bochs_driver);
}
static void bochs_pci_remove(struct pci_dev *pdev)
{
struct drm_device *dev = pci_get_drvdata(pdev);
drm_put_dev(dev);
}
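/* 0x1234:0x1111 is the bochs dispi vga id; qemu's stdvga additionally
 * carries the 0x1af4:0x1100 subsystem id, which identifies it as such */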
static DEFINE_PCI_DEVICE_TABLE(bochs_pci_tbl) = {
{
.vendor = 0x1234,
.device = 0x1111,
.subvendor = 0x1af4,
.subdevice = 0x1100,
.driver_data = BOCHS_QEMU_STDVGA,
},
{
.vendor = 0x1234,
.device = 0x1111,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.driver_data = BOCHS_UNKNOWN,
},
{ /* end of list */ }
};
static struct pci_driver bochs_pci_driver = {
.name = "bochs-drm",
.id_table = bochs_pci_tbl,
.probe = bochs_pci_probe,
.remove = bochs_pci_remove,
};
/* ---------------------------------------------------------------------- */
/* module init/exit */
static int __init bochs_init(void)
{
return drm_pci_init(&bochs_driver, &bochs_pci_driver);
}
static void __exit bochs_exit(void)
{
drm_pci_exit(&bochs_driver, &bochs_pci_driver);
}
module_init(bochs_init);
module_exit(bochs_exit);
MODULE_DEVICE_TABLE(pci, bochs_pci_tbl);
MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>");
MODULE_LICENSE("GPL");


@@ -0,0 +1,215 @@
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include "bochs.h"
/* ---------------------------------------------------------------------- */
static struct fb_ops bochsfb_ops = {
.owner = THIS_MODULE,
.fb_check_var = drm_fb_helper_check_var,
.fb_set_par = drm_fb_helper_set_par,
.fb_fillrect = sys_fillrect,
.fb_copyarea = sys_copyarea,
.fb_imageblit = sys_imageblit,
.fb_pan_display = drm_fb_helper_pan_display,
.fb_blank = drm_fb_helper_blank,
.fb_setcmap = drm_fb_helper_setcmap,
};
static int bochsfb_create_object(struct bochs_device *bochs,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_gem_object **gobj_p)
{
struct drm_device *dev = bochs->dev;
struct drm_gem_object *gobj;
u32 size;
int ret = 0;
size = mode_cmd->pitches[0] * mode_cmd->height;
ret = bochs_gem_create(dev, size, true, &gobj);
if (ret)
return ret;
*gobj_p = gobj;
return ret;
}
static int bochsfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
struct bochs_device *bochs =
container_of(helper, struct bochs_device, fb.helper);
struct drm_device *dev = bochs->dev;
struct fb_info *info;
struct drm_framebuffer *fb;
struct drm_mode_fb_cmd2 mode_cmd;
struct device *device = &dev->pdev->dev;
struct drm_gem_object *gobj = NULL;
struct bochs_bo *bo = NULL;
int size, ret;
if (sizes->surface_bpp != 32)
return -EINVAL;
mode_cmd.width = sizes->surface_width;
mode_cmd.height = sizes->surface_height;
mode_cmd.pitches[0] = mode_cmd.width * ((sizes->surface_bpp + 7) / 8);
mode_cmd.pixel_format = drm_mode_legacy_fb_format(sizes->surface_bpp,
sizes->surface_depth);
size = mode_cmd.pitches[0] * mode_cmd.height;
/* alloc, pin & map bo */
ret = bochsfb_create_object(bochs, &mode_cmd, &gobj);
if (ret) {
DRM_ERROR("failed to create fbcon backing object %d\n", ret);
return ret;
}
bo = gem_to_bochs_bo(gobj);
ret = ttm_bo_reserve(&bo->bo, true, false, false, 0);
if (ret)
return ret;
ret = bochs_bo_pin(bo, TTM_PL_FLAG_VRAM, NULL);
if (ret) {
DRM_ERROR("failed to pin fbcon\n");
ttm_bo_unreserve(&bo->bo);
return ret;
}
ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages,
&bo->kmap);
if (ret) {
DRM_ERROR("failed to kmap fbcon\n");
ttm_bo_unreserve(&bo->bo);
return ret;
}
ttm_bo_unreserve(&bo->bo);
/* init fb device */
info = framebuffer_alloc(0, device);
if (info == NULL)
return -ENOMEM;
info->par = &bochs->fb.helper;
ret = bochs_framebuffer_init(bochs->dev, &bochs->fb.gfb, &mode_cmd, gobj);
if (ret)
return ret;
bochs->fb.size = size;
/* setup helper */
fb = &bochs->fb.gfb.base;
bochs->fb.helper.fb = fb;
bochs->fb.helper.fbdev = info;
strcpy(info->fix.id, "bochsdrmfb");
info->flags = FBINFO_DEFAULT;
info->fbops = &bochsfb_ops;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
drm_fb_helper_fill_var(info, &bochs->fb.helper, sizes->fb_width,
sizes->fb_height);
info->screen_base = bo->kmap.virtual;
info->screen_size = size;
#if 0
/* FIXME: get this right for mmap(/dev/fb0) */
info->fix.smem_start = bochs_bo_mmap_offset(bo);
info->fix.smem_len = size;
#endif
ret = fb_alloc_cmap(&info->cmap, 256, 0);
if (ret) {
DRM_ERROR("%s: can't allocate color map\n", info->fix.id);
return -ENOMEM;
}
return 0;
}
static int bochs_fbdev_destroy(struct bochs_device *bochs)
{
struct bochs_framebuffer *gfb = &bochs->fb.gfb;
struct fb_info *info;
DRM_DEBUG_DRIVER("\n");
if (bochs->fb.helper.fbdev) {
info = bochs->fb.helper.fbdev;
unregister_framebuffer(info);
if (info->cmap.len)
fb_dealloc_cmap(&info->cmap);
framebuffer_release(info);
}
if (gfb->obj) {
drm_gem_object_unreference_unlocked(gfb->obj);
gfb->obj = NULL;
}
drm_fb_helper_fini(&bochs->fb.helper);
drm_framebuffer_unregister_private(&gfb->base);
drm_framebuffer_cleanup(&gfb->base);
return 0;
}
void bochs_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
u16 blue, int regno)
{
}
void bochs_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, int regno)
{
*red = regno;
*green = regno;
*blue = regno;
}
static struct drm_fb_helper_funcs bochs_fb_helper_funcs = {
.gamma_set = bochs_fb_gamma_set,
.gamma_get = bochs_fb_gamma_get,
.fb_probe = bochsfb_create,
};
int bochs_fbdev_init(struct bochs_device *bochs)
{
int ret;
bochs->fb.helper.funcs = &bochs_fb_helper_funcs;
spin_lock_init(&bochs->fb.dirty_lock);
ret = drm_fb_helper_init(bochs->dev, &bochs->fb.helper,
1, 1);
if (ret)
return ret;
drm_fb_helper_single_add_all_connectors(&bochs->fb.helper);
drm_helper_disable_unused_functions(bochs->dev);
drm_fb_helper_initial_config(&bochs->fb.helper, 32);
bochs->fb.initialized = true;
return 0;
}
void bochs_fbdev_fini(struct bochs_device *bochs)
{
if (!bochs->fb.initialized)
return;
bochs_fbdev_destroy(bochs);
bochs->fb.initialized = false;
}


@@ -0,0 +1,177 @@
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include "bochs.h"
/* ---------------------------------------------------------------------- */
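/*
 * The registers can be reached two ways: through the legacy VBE
 * index/data ioport pair, or through the mmio bar, where the vga ports
 * are mapped at offset 0x400 and the dispi registers at offset 0x500,
 * two bytes per register.
 */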
static void bochs_vga_writeb(struct bochs_device *bochs, u16 ioport, u8 val)
{
if (WARN_ON(ioport < 0x3c0 || ioport > 0x3df))
return;
if (bochs->mmio) {
int offset = ioport - 0x3c0 + 0x400;
writeb(val, bochs->mmio + offset);
} else {
outb(val, ioport);
}
}
static u16 bochs_dispi_read(struct bochs_device *bochs, u16 reg)
{
u16 ret = 0;
if (bochs->mmio) {
int offset = 0x500 + (reg << 1);
ret = readw(bochs->mmio + offset);
} else {
outw(reg, VBE_DISPI_IOPORT_INDEX);
ret = inw(VBE_DISPI_IOPORT_DATA);
}
return ret;
}
static void bochs_dispi_write(struct bochs_device *bochs, u16 reg, u16 val)
{
if (bochs->mmio) {
int offset = 0x500 + (reg << 1);
writew(val, bochs->mmio + offset);
} else {
outw(reg, VBE_DISPI_IOPORT_INDEX);
outw(val, VBE_DISPI_IOPORT_DATA);
}
}
int bochs_hw_init(struct drm_device *dev, uint32_t flags)
{
struct bochs_device *bochs = dev->dev_private;
struct pci_dev *pdev = dev->pdev;
unsigned long addr, size, mem, ioaddr, iosize;
u16 id;
if (/* (ent->driver_data == BOCHS_QEMU_STDVGA) && */
(pdev->resource[2].flags & IORESOURCE_MEM)) {
/* mmio bar with vga and bochs registers present */
if (pci_request_region(pdev, 2, "bochs-drm") != 0) {
DRM_ERROR("Cannot request mmio region\n");
return -EBUSY;
}
ioaddr = pci_resource_start(pdev, 2);
iosize = pci_resource_len(pdev, 2);
bochs->mmio = ioremap(ioaddr, iosize);
if (bochs->mmio == NULL) {
DRM_ERROR("Cannot map mmio region\n");
return -ENOMEM;
}
} else {
ioaddr = VBE_DISPI_IOPORT_INDEX;
iosize = 2;
if (!request_region(ioaddr, iosize, "bochs-drm")) {
DRM_ERROR("Cannot request ioports\n");
return -EBUSY;
}
bochs->ioports = 1;
}
id = bochs_dispi_read(bochs, VBE_DISPI_INDEX_ID);
mem = bochs_dispi_read(bochs, VBE_DISPI_INDEX_VIDEO_MEMORY_64K)
* 64 * 1024;
if ((id & 0xfff0) != VBE_DISPI_ID0) {
DRM_ERROR("ID mismatch\n");
return -ENODEV;
}
if ((pdev->resource[0].flags & IORESOURCE_MEM) == 0)
return -ENODEV;
addr = pci_resource_start(pdev, 0);
size = pci_resource_len(pdev, 0);
if (addr == 0)
return -ENODEV;
if (size != mem) {
DRM_ERROR("Size mismatch: pci=%ld, bochs=%ld\n",
size, mem);
size = min(size, mem);
}
if (pci_request_region(pdev, 0, "bochs-drm") != 0) {
DRM_ERROR("Cannot request framebuffer\n");
return -EBUSY;
}
bochs->fb_map = ioremap(addr, size);
if (bochs->fb_map == NULL) {
DRM_ERROR("Cannot map framebuffer\n");
return -ENOMEM;
}
bochs->fb_base = addr;
bochs->fb_size = size;
DRM_INFO("Found bochs VGA, ID 0x%x.\n", id);
DRM_INFO("Framebuffer size %ld kB @ 0x%lx, %s @ 0x%lx.\n",
size / 1024, addr,
bochs->ioports ? "ioports" : "mmio",
ioaddr);
return 0;
}
void bochs_hw_fini(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
if (bochs->mmio)
iounmap(bochs->mmio);
if (bochs->ioports)
release_region(VBE_DISPI_IOPORT_INDEX, 2);
if (bochs->fb_map)
iounmap(bochs->fb_map);
pci_release_regions(dev->pdev);
}
void bochs_hw_setmode(struct bochs_device *bochs,
struct drm_display_mode *mode)
{
bochs->xres = mode->hdisplay;
bochs->yres = mode->vdisplay;
bochs->bpp = 32;
bochs->stride = mode->hdisplay * (bochs->bpp / 8);
bochs->yres_virtual = bochs->fb_size / bochs->stride;
DRM_DEBUG_DRIVER("%dx%d @ %d bpp, vy %d\n",
bochs->xres, bochs->yres, bochs->bpp,
bochs->yres_virtual);
bochs_vga_writeb(bochs, 0x3c0, 0x20); /* unblank */
bochs_dispi_write(bochs, VBE_DISPI_INDEX_BPP, bochs->bpp);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_XRES, bochs->xres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_YRES, bochs->yres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_BANK, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_VIRT_WIDTH, bochs->xres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_VIRT_HEIGHT,
bochs->yres_virtual);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_X_OFFSET, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_Y_OFFSET, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_ENABLE,
VBE_DISPI_ENABLED | VBE_DISPI_LFB_ENABLED);
}
void bochs_hw_setbase(struct bochs_device *bochs,
int x, int y, u64 addr)
{
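/* translate the (x, y) origin within the bo at the given gpu address
 * into the virtual panning coordinates the dispi offset registers take */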
unsigned long offset = (unsigned long)addr +
y * bochs->stride +
x * (bochs->bpp / 8);
int vy = offset / bochs->stride;
int vx = (offset % bochs->stride) * 8 / bochs->bpp;
DRM_DEBUG_DRIVER("x %d, y %d, addr %llx -> offset %lx, vx %d, vy %d\n",
x, y, addr, offset, vx, vy);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_X_OFFSET, vx);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_Y_OFFSET, vy);
}


@@ -0,0 +1,294 @@
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include "bochs.h"
static int defx = 1024;
static int defy = 768;
module_param(defx, int, 0444);
module_param(defy, int, 0444);
MODULE_PARM_DESC(defx, "default x resolution");
MODULE_PARM_DESC(defy, "default y resolution");
/* ---------------------------------------------------------------------- */
static void bochs_crtc_load_lut(struct drm_crtc *crtc)
{
}
static void bochs_crtc_dpms(struct drm_crtc *crtc, int mode)
{
switch (mode) {
case DRM_MODE_DPMS_ON:
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
default:
return;
}
}
static bool bochs_crtc_mode_fixup(struct drm_crtc *crtc,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
return true;
}
static int bochs_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *old_fb)
{
struct bochs_device *bochs =
container_of(crtc, struct bochs_device, crtc);
struct bochs_framebuffer *bochs_fb;
struct bochs_bo *bo;
u64 gpu_addr = 0;
int ret;
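/* unpin the bo backing the old framebuffer so it can be evicted */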
if (old_fb) {
bochs_fb = to_bochs_framebuffer(old_fb);
bo = gem_to_bochs_bo(bochs_fb->obj);
ret = ttm_bo_reserve(&bo->bo, true, false, false, 0);
if (ret) {
DRM_ERROR("failed to reserve old_fb bo\n");
} else {
bochs_bo_unpin(bo);
ttm_bo_unreserve(&bo->bo);
}
}
if (WARN_ON(crtc->fb == NULL))
return -EINVAL;
bochs_fb = to_bochs_framebuffer(crtc->fb);
bo = gem_to_bochs_bo(bochs_fb->obj);
ret = ttm_bo_reserve(&bo->bo, true, false, false, 0);
if (ret)
return ret;
ret = bochs_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
if (ret) {
ttm_bo_unreserve(&bo->bo);
return ret;
}
ttm_bo_unreserve(&bo->bo);
bochs_hw_setbase(bochs, x, y, gpu_addr);
return 0;
}
static int bochs_crtc_mode_set(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode,
int x, int y, struct drm_framebuffer *old_fb)
{
struct bochs_device *bochs =
container_of(crtc, struct bochs_device, crtc);
bochs_hw_setmode(bochs, mode);
bochs_crtc_mode_set_base(crtc, x, y, old_fb);
return 0;
}
static void bochs_crtc_prepare(struct drm_crtc *crtc)
{
}
static void bochs_crtc_commit(struct drm_crtc *crtc)
{
}
static void bochs_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, uint32_t start, uint32_t size)
{
}
/* These provide the minimum set of functions required to handle a CRTC */
static const struct drm_crtc_funcs bochs_crtc_funcs = {
.gamma_set = bochs_crtc_gamma_set,
.set_config = drm_crtc_helper_set_config,
.destroy = drm_crtc_cleanup,
};
static const struct drm_crtc_helper_funcs bochs_helper_funcs = {
.dpms = bochs_crtc_dpms,
.mode_fixup = bochs_crtc_mode_fixup,
.mode_set = bochs_crtc_mode_set,
.mode_set_base = bochs_crtc_mode_set_base,
.prepare = bochs_crtc_prepare,
.commit = bochs_crtc_commit,
.load_lut = bochs_crtc_load_lut,
};
static void bochs_crtc_init(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
struct drm_crtc *crtc = &bochs->crtc;
drm_crtc_init(dev, crtc, &bochs_crtc_funcs);
drm_mode_crtc_set_gamma_size(crtc, 256);
drm_crtc_helper_add(crtc, &bochs_helper_funcs);
}
static bool bochs_encoder_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
return true;
}
static void bochs_encoder_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
}
static void bochs_encoder_dpms(struct drm_encoder *encoder, int state)
{
}
static void bochs_encoder_prepare(struct drm_encoder *encoder)
{
}
static void bochs_encoder_commit(struct drm_encoder *encoder)
{
}
static const struct drm_encoder_helper_funcs bochs_encoder_helper_funcs = {
.dpms = bochs_encoder_dpms,
.mode_fixup = bochs_encoder_mode_fixup,
.mode_set = bochs_encoder_mode_set,
.prepare = bochs_encoder_prepare,
.commit = bochs_encoder_commit,
};
static const struct drm_encoder_funcs bochs_encoder_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
static void bochs_encoder_init(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
struct drm_encoder *encoder = &bochs->encoder;
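/* the device has a single crtc, so bit 0 is the only valid mask */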
encoder->possible_crtcs = 0x1;
drm_encoder_init(dev, encoder, &bochs_encoder_encoder_funcs,
DRM_MODE_ENCODER_DAC);
drm_encoder_helper_add(encoder, &bochs_encoder_helper_funcs);
}
int bochs_connector_get_modes(struct drm_connector *connector)
{
int count;
count = drm_add_modes_noedid(connector, 8192, 8192);
drm_set_preferred_mode(connector, defx, defy);
return count;
}
static int bochs_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct bochs_device *bochs =
container_of(connector, struct bochs_device, connector);
unsigned long size = mode->hdisplay * mode->vdisplay * 4;
/*
* Make sure we can fit two framebuffers into video memory.
* This allows up to 1600x1200 with 16 MB (default size).
* If you want more try this:
* 'qemu -vga std -global VGA.vgamem_mb=32 $otherargs'
*/
if (size * 2 > bochs->fb_size)
return MODE_BAD;
return MODE_OK;
}
static struct drm_encoder *
bochs_connector_best_encoder(struct drm_connector *connector)
{
int enc_id = connector->encoder_ids[0];
struct drm_mode_object *obj;
struct drm_encoder *encoder;
/* pick the encoder ids */
if (enc_id) {
obj = drm_mode_object_find(connector->dev, enc_id,
DRM_MODE_OBJECT_ENCODER);
if (!obj)
return NULL;
encoder = obj_to_encoder(obj);
return encoder;
}
return NULL;
}
static enum drm_connector_status bochs_connector_detect(struct drm_connector
*connector, bool force)
{
return connector_status_connected;
}
struct drm_connector_helper_funcs bochs_connector_connector_helper_funcs = {
.get_modes = bochs_connector_get_modes,
.mode_valid = bochs_connector_mode_valid,
.best_encoder = bochs_connector_best_encoder,
};
struct drm_connector_funcs bochs_connector_connector_funcs = {
.dpms = drm_helper_connector_dpms,
.detect = bochs_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = drm_connector_cleanup,
};
static void bochs_connector_init(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
struct drm_connector *connector = &bochs->connector;
drm_connector_init(dev, connector, &bochs_connector_connector_funcs,
DRM_MODE_CONNECTOR_VIRTUAL);
drm_connector_helper_add(connector,
&bochs_connector_connector_helper_funcs);
}
int bochs_kms_init(struct bochs_device *bochs)
{
drm_mode_config_init(bochs->dev);
bochs->mode_config_initialized = true;
bochs->dev->mode_config.max_width = 8192;
bochs->dev->mode_config.max_height = 8192;
bochs->dev->mode_config.fb_base = bochs->fb_base;
bochs->dev->mode_config.preferred_depth = 24;
bochs->dev->mode_config.prefer_shadow = 0;
bochs->dev->mode_config.funcs = (void *)&bochs_mode_funcs;
bochs_crtc_init(bochs->dev);
bochs_encoder_init(bochs->dev);
bochs_connector_init(bochs->dev);
drm_mode_connector_attach_encoder(&bochs->connector,
&bochs->encoder);
return 0;
}
void bochs_kms_fini(struct bochs_device *bochs)
{
if (bochs->mode_config_initialized) {
drm_mode_config_cleanup(bochs->dev);
bochs->mode_config_initialized = false;
}
}


@@ -0,0 +1,546 @@
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include "bochs.h"
static void bochs_ttm_placement(struct bochs_bo *bo, int domain);
/* ---------------------------------------------------------------------- */
static inline struct bochs_device *bochs_bdev(struct ttm_bo_device *bd)
{
return container_of(bd, struct bochs_device, ttm.bdev);
}
static int bochs_ttm_mem_global_init(struct drm_global_reference *ref)
{
return ttm_mem_global_init(ref->object);
}
static void bochs_ttm_mem_global_release(struct drm_global_reference *ref)
{
ttm_mem_global_release(ref->object);
}
static int bochs_ttm_global_init(struct bochs_device *bochs)
{
struct drm_global_reference *global_ref;
int r;
global_ref = &bochs->ttm.mem_global_ref;
global_ref->global_type = DRM_GLOBAL_TTM_MEM;
global_ref->size = sizeof(struct ttm_mem_global);
global_ref->init = &bochs_ttm_mem_global_init;
global_ref->release = &bochs_ttm_mem_global_release;
r = drm_global_item_ref(global_ref);
if (r != 0) {
DRM_ERROR("Failed setting up TTM memory accounting "
"subsystem.\n");
return r;
}
bochs->ttm.bo_global_ref.mem_glob =
bochs->ttm.mem_global_ref.object;
global_ref = &bochs->ttm.bo_global_ref.ref;
global_ref->global_type = DRM_GLOBAL_TTM_BO;
global_ref->size = sizeof(struct ttm_bo_global);
global_ref->init = &ttm_bo_global_init;
global_ref->release = &ttm_bo_global_release;
r = drm_global_item_ref(global_ref);
if (r != 0) {
DRM_ERROR("Failed setting up TTM BO subsystem.\n");
drm_global_item_unref(&bochs->ttm.mem_global_ref);
return r;
}
return 0;
}
static void bochs_ttm_global_release(struct bochs_device *bochs)
{
if (bochs->ttm.mem_global_ref.release == NULL)
return;
drm_global_item_unref(&bochs->ttm.bo_global_ref.ref);
drm_global_item_unref(&bochs->ttm.mem_global_ref);
bochs->ttm.mem_global_ref.release = NULL;
}
static void bochs_bo_ttm_destroy(struct ttm_buffer_object *tbo)
{
struct bochs_bo *bo;
bo = container_of(tbo, struct bochs_bo, bo);
drm_gem_object_release(&bo->gem);
kfree(bo);
}
static bool bochs_ttm_bo_is_bochs_bo(struct ttm_buffer_object *bo)
{
if (bo->destroy == &bochs_bo_ttm_destroy)
return true;
return false;
}
static int bochs_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
struct ttm_mem_type_manager *man)
{
switch (type) {
case TTM_PL_SYSTEM:
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_VRAM:
man->func = &ttm_bo_manager_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_WC;
man->default_caching = TTM_PL_FLAG_WC;
break;
default:
DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
return -EINVAL;
}
return 0;
}
static void
bochs_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
{
struct bochs_bo *bochsbo = bochs_bo(bo);
if (!bochs_ttm_bo_is_bochs_bo(bo))
return;
bochs_ttm_placement(bochsbo, TTM_PL_FLAG_SYSTEM);
*pl = bochsbo->placement;
}
static int bochs_bo_verify_access(struct ttm_buffer_object *bo,
struct file *filp)
{
struct bochs_bo *bochsbo = bochs_bo(bo);
return drm_vma_node_verify_access(&bochsbo->gem.vma_node, filp);
}
static int bochs_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
struct bochs_device *bochs = bochs_bdev(bdev);
mem->bus.addr = NULL;
mem->bus.offset = 0;
mem->bus.size = mem->num_pages << PAGE_SHIFT;
mem->bus.base = 0;
mem->bus.is_iomem = false;
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;
switch (mem->mem_type) {
case TTM_PL_SYSTEM:
/* system memory */
return 0;
case TTM_PL_VRAM:
mem->bus.offset = mem->start << PAGE_SHIFT;
mem->bus.base = bochs->fb_base;
mem->bus.is_iomem = true;
break;
default:
return -EINVAL;
break;
}
return 0;
}
static void bochs_ttm_io_mem_free(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
}
static int bochs_bo_move(struct ttm_buffer_object *bo,
bool evict, bool interruptible,
bool no_wait_gpu,
struct ttm_mem_reg *new_mem)
{
return ttm_bo_move_memcpy(bo, evict, no_wait_gpu, new_mem);
}
static void bochs_ttm_backend_destroy(struct ttm_tt *tt)
{
ttm_tt_fini(tt);
kfree(tt);
}
static struct ttm_backend_func bochs_tt_backend_func = {
.destroy = &bochs_ttm_backend_destroy,
};
static struct ttm_tt *bochs_ttm_tt_create(struct ttm_bo_device *bdev,
unsigned long size,
uint32_t page_flags,
struct page *dummy_read_page)
{
struct ttm_tt *tt;
tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
if (tt == NULL)
return NULL;
tt->func = &bochs_tt_backend_func;
if (ttm_tt_init(tt, bdev, size, page_flags, dummy_read_page)) {
kfree(tt);
return NULL;
}
return tt;
}
struct ttm_bo_driver bochs_bo_driver = {
.ttm_tt_create = bochs_ttm_tt_create,
.ttm_tt_populate = ttm_pool_populate,
.ttm_tt_unpopulate = ttm_pool_unpopulate,
.init_mem_type = bochs_bo_init_mem_type,
.evict_flags = bochs_bo_evict_flags,
.move = bochs_bo_move,
.verify_access = bochs_bo_verify_access,
.io_mem_reserve = &bochs_ttm_io_mem_reserve,
.io_mem_free = &bochs_ttm_io_mem_free,
};
int bochs_mm_init(struct bochs_device *bochs)
{
struct ttm_bo_device *bdev = &bochs->ttm.bdev;
int ret;
ret = bochs_ttm_global_init(bochs);
if (ret)
return ret;
ret = ttm_bo_device_init(&bochs->ttm.bdev,
bochs->ttm.bo_global_ref.ref.object,
&bochs_bo_driver, DRM_FILE_PAGE_OFFSET,
true);
if (ret) {
DRM_ERROR("Error initialising bo driver; %d\n", ret);
return ret;
}
ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
bochs->fb_size >> PAGE_SHIFT);
if (ret) {
DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
return ret;
}
bochs->ttm.initialized = true;
return 0;
}
void bochs_mm_fini(struct bochs_device *bochs)
{
if (!bochs->ttm.initialized)
return;
ttm_bo_device_release(&bochs->ttm.bdev);
bochs_ttm_global_release(bochs);
bochs->ttm.initialized = false;
}
static void bochs_ttm_placement(struct bochs_bo *bo, int domain)
{
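/* build the ttm placement list for the requested domains, falling
 * back to cacheable system memory if no known domain bit is set */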
u32 c = 0;
bo->placement.fpfn = 0;
bo->placement.lpfn = 0;
bo->placement.placement = bo->placements;
bo->placement.busy_placement = bo->placements;
if (domain & TTM_PL_FLAG_VRAM) {
bo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED
| TTM_PL_FLAG_VRAM;
}
if (domain & TTM_PL_FLAG_SYSTEM) {
bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
}
if (!c) {
bo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
}
bo->placement.num_placement = c;
bo->placement.num_busy_placement = c;
}
static inline u64 bochs_bo_gpu_offset(struct bochs_bo *bo)
{
return bo->bo.offset;
}
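/* pins are refcounted; only the first pin validates the bo into the
 * requested placement with TTM_PL_FLAG_NO_EVICT set */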
int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag, u64 *gpu_addr)
{
int i, ret;
if (bo->pin_count) {
bo->pin_count++;
if (gpu_addr)
*gpu_addr = bochs_bo_gpu_offset(bo);
return 0;
}
bochs_ttm_placement(bo, pl_flag);
for (i = 0; i < bo->placement.num_placement; i++)
bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
bo->pin_count = 1;
if (gpu_addr)
*gpu_addr = bochs_bo_gpu_offset(bo);
return 0;
}
int bochs_bo_unpin(struct bochs_bo *bo)
{
int i, ret;
if (!bo->pin_count) {
DRM_ERROR("unpin bad %p\n", bo);
return 0;
}
bo->pin_count--;
if (bo->pin_count)
return 0;
for (i = 0; i < bo->placement.num_placement; i++)
bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
return 0;
}
int bochs_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *file_priv;
struct bochs_device *bochs;
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
return drm_mmap(filp, vma);
file_priv = filp->private_data;
bochs = file_priv->minor->dev->dev_private;
return ttm_bo_mmap(filp, vma, &bochs->ttm.bdev);
}
/* ---------------------------------------------------------------------- */
static int bochs_bo_create(struct drm_device *dev, int size, int align,
uint32_t flags, struct bochs_bo **pbochsbo)
{
struct bochs_device *bochs = dev->dev_private;
struct bochs_bo *bochsbo;
size_t acc_size;
int ret;
bochsbo = kzalloc(sizeof(struct bochs_bo), GFP_KERNEL);
if (!bochsbo)
return -ENOMEM;
ret = drm_gem_object_init(dev, &bochsbo->gem, size);
if (ret) {
kfree(bochsbo);
return ret;
}
bochsbo->bo.bdev = &bochs->ttm.bdev;
bochsbo->bo.bdev->dev_mapping = dev->dev_mapping;
bochs_ttm_placement(bochsbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
acc_size = ttm_bo_dma_acc_size(&bochs->ttm.bdev, size,
sizeof(struct bochs_bo));
ret = ttm_bo_init(&bochs->ttm.bdev, &bochsbo->bo, size,
ttm_bo_type_device, &bochsbo->placement,
align >> PAGE_SHIFT, false, NULL, acc_size,
NULL, bochs_bo_ttm_destroy);
if (ret)
return ret;
*pbochsbo = bochsbo;
return 0;
}
int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel,
struct drm_gem_object **obj)
{
struct bochs_bo *bochsbo;
int ret;
*obj = NULL;
size = ALIGN(size, PAGE_SIZE);
if (size == 0)
return -EINVAL;
ret = bochs_bo_create(dev, size, 0, 0, &bochsbo);
if (ret) {
if (ret != -ERESTARTSYS)
DRM_ERROR("failed to allocate GEM object\n");
return ret;
}
*obj = &bochsbo->gem;
return 0;
}
int bochs_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
struct drm_gem_object *gobj;
u32 handle;
int ret;
args->pitch = args->width * ((args->bpp + 7) / 8);
args->size = args->pitch * args->height;
ret = bochs_gem_create(dev, args->size, false,
&gobj);
if (ret)
return ret;
ret = drm_gem_handle_create(file, gobj, &handle);
drm_gem_object_unreference_unlocked(gobj);
if (ret)
return ret;
args->handle = handle;
return 0;
}
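The pitch computation above rounds bits-per-pixel up to whole bytes before multiplying by the width. A worked example, assuming a 1024x768 XRGB8888 (bpp = 32) dumb-buffer request:
	/* bytes/pixel = (32 + 7) / 8 = 4
	 * args->pitch = 1024 * 4     = 4096 bytes per scanline
	 * args->size  = 4096 * 768   = 3145728 bytes (3 MiB)
	 */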
static void bochs_bo_unref(struct bochs_bo **bo)
{
struct ttm_buffer_object *tbo;
if ((*bo) == NULL)
return;
tbo = &((*bo)->bo);
ttm_bo_unref(&tbo);
if (tbo == NULL)
*bo = NULL;
}
void bochs_gem_free_object(struct drm_gem_object *obj)
{
struct bochs_bo *bochs_bo = gem_to_bochs_bo(obj);
if (!bochs_bo)
return;
bochs_bo_unref(&bochs_bo);
}
int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset)
{
struct drm_gem_object *obj;
int ret;
struct bochs_bo *bo;
mutex_lock(&dev->struct_mutex);
obj = drm_gem_object_lookup(dev, file, handle);
if (obj == NULL) {
ret = -ENOENT;
goto out_unlock;
}
bo = gem_to_bochs_bo(obj);
*offset = bochs_bo_mmap_offset(bo);
drm_gem_object_unreference(obj);
ret = 0;
out_unlock:
mutex_unlock(&dev->struct_mutex);
return ret;
}
/* ---------------------------------------------------------------------- */
static void bochs_user_framebuffer_destroy(struct drm_framebuffer *fb)
{
struct bochs_framebuffer *bochs_fb = to_bochs_framebuffer(fb);
if (bochs_fb->obj)
drm_gem_object_unreference_unlocked(bochs_fb->obj);
drm_framebuffer_cleanup(fb);
kfree(fb);
}
static const struct drm_framebuffer_funcs bochs_fb_funcs = {
.destroy = bochs_user_framebuffer_destroy,
};
int bochs_framebuffer_init(struct drm_device *dev,
struct bochs_framebuffer *gfb,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_gem_object *obj)
{
int ret;
drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd);
gfb->obj = obj;
ret = drm_framebuffer_init(dev, &gfb->base, &bochs_fb_funcs);
if (ret) {
DRM_ERROR("drm_framebuffer_init failed: %d\n", ret);
return ret;
}
return 0;
}
static struct drm_framebuffer *
bochs_user_framebuffer_create(struct drm_device *dev,
struct drm_file *filp,
struct drm_mode_fb_cmd2 *mode_cmd)
{
struct drm_gem_object *obj;
struct bochs_framebuffer *bochs_fb;
int ret;
DRM_DEBUG_DRIVER("%dx%d, format %c%c%c%c\n",
mode_cmd->width, mode_cmd->height,
(mode_cmd->pixel_format) & 0xff,
(mode_cmd->pixel_format >> 8) & 0xff,
(mode_cmd->pixel_format >> 16) & 0xff,
(mode_cmd->pixel_format >> 24) & 0xff);
if (mode_cmd->pixel_format != DRM_FORMAT_XRGB8888)
return ERR_PTR(-ENOENT);
obj = drm_gem_object_lookup(dev, filp, mode_cmd->handles[0]);
if (obj == NULL)
return ERR_PTR(-ENOENT);
bochs_fb = kzalloc(sizeof(*bochs_fb), GFP_KERNEL);
if (!bochs_fb) {
drm_gem_object_unreference_unlocked(obj);
return ERR_PTR(-ENOMEM);
}
ret = bochs_framebuffer_init(dev, bochs_fb, mode_cmd, obj);
if (ret) {
drm_gem_object_unreference_unlocked(obj);
kfree(bochs_fb);
return ERR_PTR(ret);
}
return &bochs_fb->base;
}
const struct drm_mode_config_funcs bochs_mode_funcs = {
.fb_create = bochs_user_framebuffer_create,
};
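For context, a sketch of how these funcs are typically wired up during modeset initialization; this is assumed init-path code, not part of the hunk above:
	drm_mode_config_init(dev);
	dev->mode_config.funcs = &bochs_mode_funcs;
	/* userspace ADDFB ioctls now land in bochs_user_framebuffer_create() */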


@ -222,7 +222,7 @@ void cirrus_fbdev_fini(struct cirrus_device *cdev);
void cirrus_driver_irq_preinstall(struct drm_device *dev);
int cirrus_driver_irq_postinstall(struct drm_device *dev);
void cirrus_driver_irq_uninstall(struct drm_device *dev);
irqreturn_t cirrus_driver_irq_handler(DRM_IRQ_ARGS);
irqreturn_t cirrus_driver_irq_handler(int irq, void *arg);
/* cirrus_kms.c */
int cirrus_driver_load(struct drm_device *dev, unsigned long flags);


@ -39,7 +39,7 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
* then the BO is being moved and we should
* store up the damage until later.
*/
if (!in_interrupt())
if (!drm_can_sleep())
ret = cirrus_bo_reserve(bo, true);
if (ret) {
if (ret != -EBUSY)
@ -233,6 +233,9 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
info->apertures->ranges[0].base = cdev->dev->mode_config.fb_base;
info->apertures->ranges[0].size = cdev->mc.vram_size;
info->fix.smem_start = cdev->dev->mode_config.fb_base;
info->fix.smem_len = cdev->mc.vram_size;
info->screen_base = sysram;
info->screen_size = size;


@ -255,7 +255,7 @@ int cirrus_dumb_create(struct drm_file *file,
return 0;
}
void cirrus_bo_unref(struct cirrus_bo **bo)
static void cirrus_bo_unref(struct cirrus_bo **bo)
{
struct ttm_buffer_object *tbo;


@ -102,7 +102,7 @@ static bool cirrus_crtc_mode_fixup(struct drm_crtc *crtc,
return true;
}
void cirrus_set_start_address(struct drm_crtc *crtc, unsigned offset)
static void cirrus_set_start_address(struct drm_crtc *crtc, unsigned offset)
{
struct cirrus_device *cdev = crtc->dev->dev_private;
u32 addr;
@ -273,8 +273,8 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc,
sr07 |= 0x11;
break;
case 16:
sr07 |= 0xc1;
hdr = 0xc0;
sr07 |= 0x17;
hdr = 0xc1;
break;
case 24:
sr07 |= 0x15;
@ -453,7 +453,7 @@ static void cirrus_encoder_commit(struct drm_encoder *encoder)
{
}
void cirrus_encoder_destroy(struct drm_encoder *encoder)
static void cirrus_encoder_destroy(struct drm_encoder *encoder)
{
struct cirrus_encoder *cirrus_encoder = to_cirrus_encoder(encoder);
drm_encoder_cleanup(encoder);
@ -492,7 +492,7 @@ static struct drm_encoder *cirrus_encoder_init(struct drm_device *dev)
}
int cirrus_vga_get_modes(struct drm_connector *connector)
static int cirrus_vga_get_modes(struct drm_connector *connector)
{
int count;
@ -509,7 +509,7 @@ static int cirrus_vga_mode_valid(struct drm_connector *connector,
return MODE_OK;
}
struct drm_encoder *cirrus_connector_best_encoder(struct drm_connector
static struct drm_encoder *cirrus_connector_best_encoder(struct drm_connector
*connector)
{
int enc_id = connector->encoder_ids[0];


@ -80,7 +80,7 @@ static int cirrus_ttm_global_init(struct cirrus_device *cirrus)
return 0;
}
void
static void
cirrus_ttm_global_release(struct cirrus_device *cirrus)
{
if (cirrus->ttm.mem_global_ref.release == NULL)
@ -102,7 +102,7 @@ static void cirrus_bo_ttm_destroy(struct ttm_buffer_object *tbo)
kfree(bo);
}
bool cirrus_ttm_bo_is_cirrus_bo(struct ttm_buffer_object *bo)
static bool cirrus_ttm_bo_is_cirrus_bo(struct ttm_buffer_object *bo)
{
if (bo->destroy == &cirrus_bo_ttm_destroy)
return true;
@ -208,7 +208,7 @@ static struct ttm_backend_func cirrus_tt_backend_func = {
};
struct ttm_tt *cirrus_ttm_tt_create(struct ttm_bo_device *bdev,
static struct ttm_tt *cirrus_ttm_tt_create(struct ttm_bo_device *bdev,
unsigned long size, uint32_t page_flags,
struct page *dummy_read_page)
{
@ -375,26 +375,6 @@ int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr)
return 0;
}
int cirrus_bo_unpin(struct cirrus_bo *bo)
{
int i, ret;
if (!bo->pin_count) {
DRM_ERROR("unpin bad %p\n", bo);
return 0;
}
bo->pin_count--;
if (bo->pin_count)
return 0;
for (i = 0; i < bo->placement.num_placement ; i++)
bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
return 0;
}
int cirrus_bo_push_sysram(struct cirrus_bo *bo)
{
int i, ret;


@ -53,7 +53,7 @@
*/
int drm_agp_info(struct drm_device *dev, struct drm_agp_info *info)
{
DRM_AGP_KERN *kern;
struct agp_kern_info *kern;
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
@ -198,17 +198,15 @@ int drm_agp_enable_ioctl(struct drm_device *dev, void *data,
int drm_agp_alloc(struct drm_device *dev, struct drm_agp_buffer *request)
{
struct drm_agp_mem *entry;
DRM_AGP_MEM *memory;
struct agp_memory *memory;
unsigned long pages;
u32 type;
if (!dev->agp || !dev->agp->acquired)
return -EINVAL;
if (!(entry = kmalloc(sizeof(*entry), GFP_KERNEL)))
if (!(entry = kzalloc(sizeof(*entry), GFP_KERNEL)))
return -ENOMEM;
memset(entry, 0, sizeof(*entry));
pages = (request->size + PAGE_SIZE - 1) / PAGE_SIZE;
type = (u32) request->type;
if (!(memory = agp_allocate_memory(dev->agp->bridge, pages, type))) {
@ -393,14 +391,16 @@ int drm_agp_free_ioctl(struct drm_device *dev, void *data,
* Gets the drm_agp_t structure which is made available by the agpgart module
* via the inter_module_* functions. Creates and initializes a drm_agp_head
* structure.
*
* Note that final cleanup of the kmalloced structure is directly done in
* drm_pci_agp_destroy.
*/
struct drm_agp_head *drm_agp_init(struct drm_device *dev)
{
struct drm_agp_head *head = NULL;
if (!(head = kmalloc(sizeof(*head), GFP_KERNEL)))
if (!(head = kzalloc(sizeof(*head), GFP_KERNEL)))
return NULL;
memset((void *)head, 0, sizeof(*head));
head->bridge = agp_find_bridge(dev->pdev);
if (!head->bridge) {
if (!(head->bridge = agp_backend_acquire(dev->pdev))) {
@ -439,7 +439,7 @@ void drm_agp_clear(struct drm_device *dev)
{
struct drm_agp_mem *entry, *tempe;
if (!drm_core_has_AGP(dev) || !dev->agp)
if (!dev->agp)
return;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
@ -459,21 +459,6 @@ void drm_agp_clear(struct drm_device *dev)
dev->agp->enabled = 0;
}
/**
* drm_agp_destroy - Destroy AGP head
* @dev: DRM device
*
* Destroy resources that were previously allocated via drm_agp_init(). Caller
* must ensure to clean up all AGP resources before calling this. See
* drm_agp_clear().
*
* Call this to destroy AGP heads allocated via drm_agp_init().
*/
void drm_agp_destroy(struct drm_agp_head *agp)
{
kfree(agp);
}
/**
* Binds a collection of pages into AGP memory at the given offset, returning
* the AGP memory structure containing them.
@ -481,14 +466,14 @@ void drm_agp_destroy(struct drm_agp_head *agp)
* No reference is held on the pages during this time -- it is up to the
* caller to handle that.
*/
DRM_AGP_MEM *
struct agp_memory *
drm_agp_bind_pages(struct drm_device *dev,
struct page **pages,
unsigned long num_pages,
uint32_t gtt_offset,
u32 type)
{
DRM_AGP_MEM *mem;
struct agp_memory *mem;
int ret, i;
DRM_DEBUG("\n");


@ -114,7 +114,7 @@ int drm_buffer_copy_from_user(struct drm_buffer *buf,
for (idx = 0; idx < nr_pages; ++idx) {
if (DRM_COPY_FROM_USER(buf->data[idx],
if (copy_from_user(buf->data[idx],
user_data + idx * PAGE_SIZE,
min(PAGE_SIZE, size - idx * PAGE_SIZE))) {
DRM_ERROR("Failed to copy user data (%p) to drm buffer"


@ -261,7 +261,7 @@ static int drm_addmap_core(struct drm_device * dev, resource_size_t offset,
struct drm_agp_mem *entry;
int valid = 0;
if (!drm_core_has_AGP(dev)) {
if (!dev->agp) {
kfree(map);
return -EINVAL;
}
@ -303,9 +303,6 @@ static int drm_addmap_core(struct drm_device * dev, resource_size_t offset,
break;
}
case _DRM_GEM:
DRM_ERROR("tried to addmap GEM object\n");
break;
case _DRM_SCATTER_GATHER:
if (!dev->sg) {
kfree(map);
@ -483,9 +480,6 @@ int drm_rmmap_locked(struct drm_device *dev, struct drm_local_map *map)
dmah.size = map->size;
__drm_pci_free(dev, &dmah);
break;
case _DRM_GEM:
DRM_ERROR("tried to rmmap GEM object\n");
break;
}
kfree(map);
@ -1396,7 +1390,7 @@ int drm_mapbufs(struct drm_device *dev, void *data,
spin_unlock(&dev->count_lock);
if (request->count >= dma->buf_count) {
if ((drm_core_has_AGP(dev) && (dma->flags & _DRM_DMA_USE_AGP))
if ((dev->agp && (dma->flags & _DRM_DMA_USE_AGP))
|| (drm_core_check_feature(dev, DRIVER_SG)
&& (dma->flags & _DRM_DMA_USE_SG))) {
struct drm_local_map *map = dev->agp_buffer_map;


@ -674,6 +674,29 @@ void drm_crtc_cleanup(struct drm_crtc *crtc)
}
EXPORT_SYMBOL(drm_crtc_cleanup);
/**
* drm_crtc_index - find the index of a registered CRTC
* @crtc: CRTC to find index for
*
* Given a registered CRTC, return the index of that CRTC within a DRM
* device's list of CRTCs.
*/
unsigned int drm_crtc_index(struct drm_crtc *crtc)
{
unsigned int index = 0;
struct drm_crtc *tmp;
list_for_each_entry(tmp, &crtc->dev->mode_config.crtc_list, head) {
if (tmp == crtc)
return index;
index++;
}
BUG();
}
EXPORT_SYMBOL(drm_crtc_index);
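A sketch of the intended use, indexing driver-private per-CRTC state; dev_priv and pipe_stats are hypothetical names:
	unsigned int pipe = drm_crtc_index(crtc);
	dev_priv->pipe_stats[pipe].vblanks++;	/* hypothetical per-CRTC bookkeeping */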
/**
* drm_mode_probed_add - add a mode to a connector's probed mode list
* @connector: connector the new mode is added to
@ -2767,10 +2790,8 @@ int drm_mode_dirtyfb_ioctl(struct drm_device *dev,
}
if (fb->funcs->dirty) {
drm_modeset_lock_all(dev);
ret = fb->funcs->dirty(fb, file_priv, flags, r->color,
clips, num_clips);
drm_modeset_unlock_all(dev);
} else {
ret = -ENOSYS;
}


@ -324,35 +324,6 @@ void drm_helper_disable_unused_functions(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_helper_disable_unused_functions);
/**
* drm_encoder_crtc_ok - can a given crtc drive a given encoder?
* @encoder: encoder to test
* @crtc: crtc to test
*
* Return false if @encoder can't be driven by @crtc, true otherwise.
*/
static bool drm_encoder_crtc_ok(struct drm_encoder *encoder,
struct drm_crtc *crtc)
{
struct drm_device *dev;
struct drm_crtc *tmp;
int crtc_mask = 1;
WARN(!crtc, "checking null crtc?\n");
dev = crtc->dev;
list_for_each_entry(tmp, &dev->mode_config.crtc_list, head) {
if (tmp == crtc)
break;
crtc_mask <<= 1;
}
if (encoder->possible_crtcs & crtc_mask)
return true;
return false;
}
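A worked example of the one-hot mask test in the helper removed above: if crtc is third in mode_config.crtc_list, crtc_mask ends up as 1 << 2 = 0x4; an encoder with possible_crtcs = 0x5 (CRTCs 0 and 2) can drive it, while one with possible_crtcs = 0x3 cannot.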
/*
* Check the CRTC we're going to map each output to vs. its current
* CRTC. If they don't match, we have to disable the output and the CRTC
@ -536,7 +507,7 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
* are later needed by vblank and swap-completion
* timestamping. They are derived from true hwmode.
*/
drm_calc_timestamping_constants(crtc);
drm_calc_timestamping_constants(crtc, &crtc->hwmode);
/* FIXME: add subpixel order */
done:


@ -315,9 +315,6 @@ long drm_ioctl(struct file *filp,
if (drm_device_is_unplugged(dev))
return -ENODEV;
atomic_inc(&dev->ioctl_count);
++file_priv->ioctl_count;
if ((nr >= DRM_CORE_IOCTL_COUNT) &&
((nr < DRM_COMMAND_BASE) || (nr >= DRM_COMMAND_END)))
goto err_i1;
@ -410,7 +407,6 @@ long drm_ioctl(struct file *filp,
if (kdata != stack_kdata)
kfree(kdata);
atomic_dec(&dev->ioctl_count);
if (retcode)
DRM_DEBUG("ret = %d\n", retcode);
return retcode;


@ -605,347 +605,347 @@ static const struct drm_display_mode edid_cea_modes[] = {
{ DRM_MODE("640x480", DRM_MODE_TYPE_DRIVER, 25175, 640, 656,
752, 800, 0, 480, 490, 492, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 2 - 720x480@60Hz */
{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 27000, 720, 736,
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 3 - 720x480@60Hz */
{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 27000, 720, 736,
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 4 - 1280x720@60Hz */
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1390,
1430, 1650, 0, 720, 725, 730, 750, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 5 - 1920x1080i@60Hz */
{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2008,
2052, 2200, 0, 1080, 1084, 1094, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 6 - 1440x480i@60Hz */
{ DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
1602, 1716, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 7 - 1440x480i@60Hz */
{ DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
1602, 1716, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 8 - 1440x240@60Hz */
{ DRM_MODE("1440x240", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
1602, 1716, 0, 240, 244, 247, 262, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 9 - 1440x240@60Hz */
{ DRM_MODE("1440x240", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1478,
1602, 1716, 0, 240, 244, 247, 262, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 10 - 2880x480i@60Hz */
{ DRM_MODE("2880x480i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956,
3204, 3432, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 11 - 2880x480i@60Hz */
{ DRM_MODE("2880x480i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956,
3204, 3432, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 12 - 2880x240@60Hz */
{ DRM_MODE("2880x240", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956,
3204, 3432, 0, 240, 244, 247, 262, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 13 - 2880x240@60Hz */
{ DRM_MODE("2880x240", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2956,
3204, 3432, 0, 240, 244, 247, 262, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 14 - 1440x480@60Hz */
{ DRM_MODE("1440x480", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1472,
1596, 1716, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 15 - 1440x480@60Hz */
{ DRM_MODE("1440x480", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1472,
1596, 1716, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 16 - 1920x1080@60Hz */
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008,
2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 17 - 720x576@50Hz */
{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 27000, 720, 732,
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 18 - 720x576@50Hz */
{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 27000, 720, 732,
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 19 - 1280x720@50Hz */
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1720,
1760, 1980, 0, 720, 725, 730, 750, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 20 - 1920x1080i@50Hz */
{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2448,
2492, 2640, 0, 1080, 1084, 1094, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 21 - 1440x576i@50Hz */
{ DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
1590, 1728, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 22 - 1440x576i@50Hz */
{ DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
1590, 1728, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 23 - 1440x288@50Hz */
{ DRM_MODE("1440x288", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
1590, 1728, 0, 288, 290, 293, 312, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 24 - 1440x288@50Hz */
{ DRM_MODE("1440x288", DRM_MODE_TYPE_DRIVER, 27000, 1440, 1464,
1590, 1728, 0, 288, 290, 293, 312, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 25 - 2880x576i@50Hz */
{ DRM_MODE("2880x576i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
3180, 3456, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 26 - 2880x576i@50Hz */
{ DRM_MODE("2880x576i", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
3180, 3456, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 27 - 2880x288@50Hz */
{ DRM_MODE("2880x288", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
3180, 3456, 0, 288, 290, 293, 312, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 28 - 2880x288@50Hz */
{ DRM_MODE("2880x288", DRM_MODE_TYPE_DRIVER, 54000, 2880, 2928,
3180, 3456, 0, 288, 290, 293, 312, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 29 - 1440x576@50Hz */
{ DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
1592, 1728, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 30 - 1440x576@50Hz */
{ DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
1592, 1728, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 31 - 1920x1080@50Hz */
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2448,
2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 32 - 1920x1080@24Hz */
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2558,
2602, 2750, 0, 1080, 1084, 1089, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 24, },
.vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 33 - 1920x1080@25Hz */
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2448,
2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 25, },
.vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 34 - 1920x1080@30Hz */
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2008,
2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 30, },
.vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 35 - 2880x480@60Hz */
{ DRM_MODE("2880x480", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2944,
3192, 3432, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 36 - 2880x480@60Hz */
{ DRM_MODE("2880x480", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2944,
3192, 3432, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 60, },
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 37 - 2880x576@50Hz */
{ DRM_MODE("2880x576", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2928,
3184, 3456, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 38 - 2880x576@50Hz */
{ DRM_MODE("2880x576", DRM_MODE_TYPE_DRIVER, 108000, 2880, 2928,
3184, 3456, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 39 - 1920x1080i@50Hz */
{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 72000, 1920, 1952,
2120, 2304, 0, 1080, 1126, 1136, 1250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 50, },
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 40 - 1920x1080i@100Hz */
{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2448,
2492, 2640, 0, 1080, 1084, 1094, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 100, },
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 41 - 1280x720@100Hz */
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1720,
1760, 1980, 0, 720, 725, 730, 750, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 100, },
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 42 - 720x576@100Hz */
{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 100, },
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 43 - 720x576@100Hz */
{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 54000, 720, 732,
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 100, },
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 44 - 1440x576i@100Hz */
{ DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
1590, 1728, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 100, },
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 45 - 1440x576i@100Hz */
{ DRM_MODE("1440x576", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1464,
1590, 1728, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_DBLCLK),
.vrefresh = 100, },
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 46 - 1920x1080i@120Hz */
{ DRM_MODE("1920x1080i", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008,
2052, 2200, 0, 1080, 1084, 1094, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_INTERLACE),
.vrefresh = 120, },
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 47 - 1280x720@120Hz */
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1390,
1430, 1650, 0, 720, 725, 730, 750, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 120, },
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 48 - 720x480@120Hz */
{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 54000, 720, 736,
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 120, },
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 49 - 720x480@120Hz */
{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 54000, 720, 736,
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 120, },
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 50 - 1440x480i@120Hz */
{ DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1478,
1602, 1716, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 120, },
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 51 - 1440x480i@120Hz */
{ DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 54000, 1440, 1478,
1602, 1716, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 120, },
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 52 - 720x576@200Hz */
{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 108000, 720, 732,
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 200, },
.vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 53 - 720x576@200Hz */
{ DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 108000, 720, 732,
796, 864, 0, 576, 581, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 200, },
.vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 54 - 1440x576i@200Hz */
{ DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1464,
1590, 1728, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 200, },
.vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 55 - 1440x576i@200Hz */
{ DRM_MODE("1440x576i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1464,
1590, 1728, 0, 576, 580, 586, 625, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 200, },
.vrefresh = 200, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 56 - 720x480@240Hz */
{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 108000, 720, 736,
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 240, },
.vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 57 - 720x480@240Hz */
{ DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 108000, 720, 736,
798, 858, 0, 480, 489, 495, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
.vrefresh = 240, },
.vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 58 - 1440x480i@240 */
{ DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1478,
1602, 1716, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 240, },
.vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_4_3, },
/* 59 - 1440x480i@240 */
{ DRM_MODE("1440x480i", DRM_MODE_TYPE_DRIVER, 108000, 1440, 1478,
1602, 1716, 0, 480, 488, 494, 525, 0,
DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC |
DRM_MODE_FLAG_INTERLACE | DRM_MODE_FLAG_DBLCLK),
.vrefresh = 240, },
.vrefresh = 240, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 60 - 1280x720@24Hz */
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 59400, 1280, 3040,
3080, 3300, 0, 720, 725, 730, 750, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 24, },
.vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 61 - 1280x720@25Hz */
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3700,
3740, 3960, 0, 720, 725, 730, 750, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 25, },
.vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 62 - 1280x720@30Hz */
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3040,
3080, 3300, 0, 720, 725, 730, 750, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 30, },
.vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 63 - 1920x1080@120Hz */
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2008,
2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 120, },
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
/* 64 - 1920x1080@100Hz */
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2448,
2492, 2640, 0, 1080, 1084, 1094, 1125, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 100, },
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
};
/*
@ -2562,25 +2562,40 @@ add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid)
return modes;
}
static struct drm_display_mode *
drm_display_mode_from_vic_index(struct drm_connector *connector,
const u8 *video_db, u8 video_len,
u8 video_index)
{
struct drm_device *dev = connector->dev;
struct drm_display_mode *newmode;
u8 cea_mode;
if (video_db == NULL || video_index >= video_len)
return NULL;
/* CEA modes are numbered 1..127 */
cea_mode = (video_db[video_index] & 127) - 1;
if (cea_mode >= ARRAY_SIZE(edid_cea_modes))
return NULL;
newmode = drm_mode_duplicate(dev, &edid_cea_modes[cea_mode]);
newmode->vrefresh = 0;
return newmode;
}
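Each SVD byte carries a 7-bit VIC with a native-mode flag in bit 7, which the masking above strips. A worked decode, assuming video_db[video_index] == 0x90:
	/* 0x90 & 127 = 16 (native bit 0x80 stripped) -> VIC 16
	 * cea_mode   = 16 - 1 = 15 -> edid_cea_modes[15]: 1920x1080@60Hz */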
static int
do_cea_modes(struct drm_connector *connector, const u8 *db, u8 len)
{
struct drm_device *dev = connector->dev;
const u8 *mode;
u8 cea_mode;
int modes = 0;
int i, modes = 0;
for (mode = db; mode < db + len; mode++) {
cea_mode = (*mode & 127) - 1; /* CEA modes are numbered 1..127 */
if (cea_mode < ARRAY_SIZE(edid_cea_modes)) {
struct drm_display_mode *newmode;
newmode = drm_mode_duplicate(dev,
&edid_cea_modes[cea_mode]);
if (newmode) {
newmode->vrefresh = 0;
drm_mode_probed_add(connector, newmode);
modes++;
}
for (i = 0; i < len; i++) {
struct drm_display_mode *mode;
mode = drm_display_mode_from_vic_index(connector, db, len, i);
if (mode) {
drm_mode_probed_add(connector, mode);
modes++;
}
}
@ -2674,21 +2689,13 @@ static int add_hdmi_mode(struct drm_connector *connector, u8 vic)
static int add_3d_struct_modes(struct drm_connector *connector, u16 structure,
const u8 *video_db, u8 video_len, u8 video_index)
{
struct drm_device *dev = connector->dev;
struct drm_display_mode *newmode;
int modes = 0;
u8 cea_mode;
if (video_db == NULL || video_index >= video_len)
return 0;
/* CEA modes are numbered 1..127 */
cea_mode = (video_db[video_index] & 127) - 1;
if (cea_mode >= ARRAY_SIZE(edid_cea_modes))
return 0;
if (structure & (1 << 0)) {
newmode = drm_mode_duplicate(dev, &edid_cea_modes[cea_mode]);
newmode = drm_display_mode_from_vic_index(connector, video_db,
video_len,
video_index);
if (newmode) {
newmode->flags |= DRM_MODE_FLAG_3D_FRAME_PACKING;
drm_mode_probed_add(connector, newmode);
@ -2696,7 +2703,9 @@ static int add_3d_struct_modes(struct drm_connector *connector, u16 structure,
}
}
if (structure & (1 << 6)) {
newmode = drm_mode_duplicate(dev, &edid_cea_modes[cea_mode]);
newmode = drm_display_mode_from_vic_index(connector, video_db,
video_len,
video_index);
if (newmode) {
newmode->flags |= DRM_MODE_FLAG_3D_TOP_AND_BOTTOM;
drm_mode_probed_add(connector, newmode);
@ -2704,7 +2713,9 @@ static int add_3d_struct_modes(struct drm_connector *connector, u16 structure,
}
}
if (structure & (1 << 8)) {
newmode = drm_mode_duplicate(dev, &edid_cea_modes[cea_mode]);
newmode = drm_display_mode_from_vic_index(connector, video_db,
video_len,
video_index);
if (newmode) {
newmode->flags |= DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF;
drm_mode_probed_add(connector, newmode);
@ -2728,7 +2739,7 @@ static int
do_hdmi_vsdb_modes(struct drm_connector *connector, const u8 *db, u8 len,
const u8 *video_db, u8 video_len)
{
int modes = 0, offset = 0, i, multi_present = 0;
int modes = 0, offset = 0, i, multi_present = 0, multi_len;
u8 vic_len, hdmi_3d_len = 0;
u16 mask;
u16 structure_all;
@ -2774,32 +2785,84 @@ do_hdmi_vsdb_modes(struct drm_connector *connector, const u8 *db, u8 len,
}
offset += 1 + vic_len;
if (!(multi_present == 1 || multi_present == 2))
goto out;
if ((multi_present == 1 && len < (9 + offset)) ||
(multi_present == 2 && len < (11 + offset)))
goto out;
if ((multi_present == 1 && hdmi_3d_len < 2) ||
(multi_present == 2 && hdmi_3d_len < 4))
goto out;
/* 3D_Structure_ALL */
structure_all = (db[8 + offset] << 8) | db[9 + offset];
/* check if 3D_MASK is present */
if (multi_present == 2)
mask = (db[10 + offset] << 8) | db[11 + offset];
if (multi_present == 1)
multi_len = 2;
else if (multi_present == 2)
multi_len = 4;
else
mask = 0xffff;
multi_len = 0;
for (i = 0; i < 16; i++) {
if (mask & (1 << i))
modes += add_3d_struct_modes(connector,
structure_all,
video_db,
video_len, i);
if (len < (8 + offset + hdmi_3d_len - 1))
goto out;
if (hdmi_3d_len < multi_len)
goto out;
if (multi_present == 1 || multi_present == 2) {
/* 3D_Structure_ALL */
structure_all = (db[8 + offset] << 8) | db[9 + offset];
/* check if 3D_MASK is present */
if (multi_present == 2)
mask = (db[10 + offset] << 8) | db[11 + offset];
else
mask = 0xffff;
for (i = 0; i < 16; i++) {
if (mask & (1 << i))
modes += add_3d_struct_modes(connector,
structure_all,
video_db,
video_len, i);
}
}
offset += multi_len;
for (i = 0; i < (hdmi_3d_len - multi_len); i++) {
int vic_index;
struct drm_display_mode *newmode = NULL;
unsigned int newflag = 0;
bool detail_present;
detail_present = ((db[8 + offset + i] & 0x0f) > 7);
if (detail_present && (i + 1 == hdmi_3d_len - multi_len))
break;
/* 2D_VIC_order_X */
vic_index = db[8 + offset + i] >> 4;
/* 3D_Structure_X */
switch (db[8 + offset + i] & 0x0f) {
case 0:
newflag = DRM_MODE_FLAG_3D_FRAME_PACKING;
break;
case 6:
newflag = DRM_MODE_FLAG_3D_TOP_AND_BOTTOM;
break;
case 8:
/* 3D_Detail_X */
if ((db[9 + offset + i] >> 4) == 1)
newflag = DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF;
break;
}
if (newflag != 0) {
newmode = drm_display_mode_from_vic_index(connector,
video_db,
video_len,
vic_index);
if (newmode) {
newmode->flags |= newflag;
drm_mode_probed_add(connector, newmode);
modes++;
}
}
if (detail_present)
i++;
}
out:
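A worked decode of one entry in the trailing 2D_VIC_order_X/3D_Structure_X list above, assuming db[8 + offset + i] == 0x28:
	/* vic_index      = 0x28 >> 4   = 2   (third entry in video_db)
	 * 3D_Structure_X = 0x28 & 0x0f = 8   (side-by-side half)
	 * detail_present = (8 > 7), so 3D_Detail_X is taken from the top
	 * nibble of the following byte; (db[9 + offset + i] >> 4) == 1 is
	 * the sub-sampling variant this code accepts */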


@ -141,7 +141,7 @@ static int edid_size(const u8 *edid, int data_size)
return (edid[0x7e] + 1) * EDID_LENGTH;
}
static u8 *edid_load(struct drm_connector *connector, const char *name,
static void *edid_load(struct drm_connector *connector, const char *name,
const char *connector_name)
{
const struct firmware *fw = NULL;
@ -263,7 +263,7 @@ int drm_load_edid_firmware(struct drm_connector *connector)
if (*last == '\n')
*last = '\0';
edid = (struct edid *) edid_load(connector, edidname, connector_name);
edid = edid_load(connector, edidname, connector_name);
if (IS_ERR_OR_NULL(edid))
return 0;


@ -359,6 +359,11 @@ static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper)
struct drm_crtc *crtc;
int bound = 0, crtcs_bound = 0;
/* Sometimes user space wants everything disabled, so don't steal the
* display if there's a master. */
if (dev->primary->master)
return false;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
if (crtc->fb)
crtcs_bound++;
@ -368,6 +373,7 @@ static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper)
if (bound < crtcs_bound)
return false;
return true;
}


@ -232,7 +232,6 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
goto out_put_pid;
}
priv->ioctl_count = 0;
/* for compatibility root is always authenticated */
priv->always_authenticated = capable(CAP_SYS_ADMIN);
priv->authenticated = priv->always_authenticated;
@ -392,9 +391,6 @@ static void drm_legacy_dev_reinit(struct drm_device *dev)
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
atomic_set(&dev->ioctl_count, 0);
atomic_set(&dev->vma_count, 0);
dev->sigdata.lock = NULL;
dev->context_flag = 0;
@ -578,12 +574,7 @@ int drm_release(struct inode *inode, struct file *filp)
*/
if (!--dev->open_count) {
if (atomic_read(&dev->ioctl_count)) {
DRM_ERROR("Device busy: %d\n",
atomic_read(&dev->ioctl_count));
retcode = -EBUSY;
} else
retcode = drm_lastclose(dev);
retcode = drm_lastclose(dev);
if (drm_device_is_unplugged(dev))
drm_put_dev(dev);
}


@ -91,19 +91,19 @@
int
drm_gem_init(struct drm_device *dev)
{
struct drm_gem_mm *mm;
struct drm_vma_offset_manager *vma_offset_manager;
mutex_init(&dev->object_name_lock);
idr_init(&dev->object_name_idr);
mm = kzalloc(sizeof(struct drm_gem_mm), GFP_KERNEL);
if (!mm) {
vma_offset_manager = kzalloc(sizeof(*vma_offset_manager), GFP_KERNEL);
if (!vma_offset_manager) {
DRM_ERROR("out of memory\n");
return -ENOMEM;
}
dev->mm_private = mm;
drm_vma_offset_manager_init(&mm->vma_manager,
dev->vma_offset_manager = vma_offset_manager;
drm_vma_offset_manager_init(vma_offset_manager,
DRM_FILE_PAGE_OFFSET_START,
DRM_FILE_PAGE_OFFSET_SIZE);
@ -113,11 +113,10 @@ drm_gem_init(struct drm_device *dev)
void
drm_gem_destroy(struct drm_device *dev)
{
struct drm_gem_mm *mm = dev->mm_private;
drm_vma_offset_manager_destroy(&mm->vma_manager);
kfree(mm);
dev->mm_private = NULL;
drm_vma_offset_manager_destroy(dev->vma_offset_manager);
kfree(dev->vma_offset_manager);
dev->vma_offset_manager = NULL;
}
/**
@ -129,11 +128,12 @@ int drm_gem_object_init(struct drm_device *dev,
{
struct file *filp;
drm_gem_private_object_init(dev, obj, size);
filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
if (IS_ERR(filp))
return PTR_ERR(filp);
drm_gem_private_object_init(dev, obj, size);
obj->filp = filp;
return 0;
@ -175,11 +175,6 @@ drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
mutex_unlock(&filp->prime.lock);
}
static void drm_gem_object_ref_bug(struct kref *list_kref)
{
BUG();
}
/**
* Called after the last handle to the object has been closed
*
@ -195,13 +190,6 @@ static void drm_gem_object_handle_free(struct drm_gem_object *obj)
if (obj->name) {
idr_remove(&dev->object_name_idr, obj->name);
obj->name = 0;
/*
* The object name held a reference to this object, drop
* that now.
*
* This cannot be the last reference, since the handle holds one too.
*/
kref_put(&obj->refcount, drm_gem_object_ref_bug);
}
}
@ -374,9 +362,8 @@ void
drm_gem_free_mmap_offset(struct drm_gem_object *obj)
{
struct drm_device *dev = obj->dev;
struct drm_gem_mm *mm = dev->mm_private;
drm_vma_offset_remove(&mm->vma_manager, &obj->vma_node);
drm_vma_offset_remove(dev->vma_offset_manager, &obj->vma_node);
}
EXPORT_SYMBOL(drm_gem_free_mmap_offset);
@ -398,9 +385,8 @@ int
drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size)
{
struct drm_device *dev = obj->dev;
struct drm_gem_mm *mm = dev->mm_private;
return drm_vma_offset_add(&mm->vma_manager, &obj->vma_node,
return drm_vma_offset_add(dev->vma_offset_manager, &obj->vma_node,
size / PAGE_SIZE);
}
EXPORT_SYMBOL(drm_gem_create_mmap_offset_size);
@ -602,9 +588,6 @@ drm_gem_flink_ioctl(struct drm_device *dev, void *data,
goto err;
obj->name = ret;
/* Allocate a reference for the name table. */
drm_gem_object_reference(obj);
}
args->name = (uint64_t) obj->name;
@ -833,7 +816,6 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_gem_mm *mm = dev->mm_private;
struct drm_gem_object *obj;
struct drm_vma_offset_node *node;
int ret = 0;
@ -843,7 +825,8 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
mutex_lock(&dev->struct_mutex);
node = drm_vma_offset_exact_lookup(&mm->vma_manager, vma->vm_pgoff,
node = drm_vma_offset_exact_lookup(dev->vma_offset_manager,
vma->vm_pgoff,
vma_pages(vma));
if (!node) {
mutex_unlock(&dev->struct_mutex);


@ -186,14 +186,14 @@ int drm_clients_info(struct seq_file *m, void *data)
struct drm_file *priv;
mutex_lock(&dev->struct_mutex);
seq_printf(m, "a dev pid uid magic ioctls\n\n");
seq_printf(m, "a dev pid uid magic\n\n");
list_for_each_entry(priv, &dev->filelist, lhead) {
seq_printf(m, "%c %3d %5d %5d %10u %10lu\n",
seq_printf(m, "%c %3d %5d %5d %10u\n",
priv->authenticated ? 'y' : 'n',
priv->minor->index,
pid_vnr(priv->pid),
from_kuid_munged(seq_user_ns(m), priv->uid),
priv->magic, priv->ioctl_count);
priv->magic);
}
mutex_unlock(&dev->struct_mutex);
return 0;
@ -234,14 +234,18 @@ int drm_vma_info(struct seq_file *m, void *data)
struct drm_device *dev = node->minor->dev;
struct drm_vma_entry *pt;
struct vm_area_struct *vma;
unsigned long vma_count = 0;
#if defined(__i386__)
unsigned int pgprot;
#endif
mutex_lock(&dev->struct_mutex);
seq_printf(m, "vma use count: %d, high_memory = %pK, 0x%pK\n",
atomic_read(&dev->vma_count),
high_memory, (void *)(unsigned long)virt_to_phys(high_memory));
list_for_each_entry(pt, &dev->vmalist, head)
vma_count++;
seq_printf(m, "vma use count: %lu, high_memory = %pK, 0x%pK\n",
vma_count, high_memory,
(void *)(unsigned long)virt_to_phys(high_memory));
list_for_each_entry(pt, &dev->vmalist, head) {
vma = pt->vma;


@ -368,7 +368,7 @@ int drm_irq_uninstall(struct drm_device *dev)
if (dev->num_crtcs) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
for (i = 0; i < dev->num_crtcs; i++) {
DRM_WAKEUP(&dev->vblank[i].queue);
wake_up(&dev->vblank[i].queue);
dev->vblank[i].enabled = false;
dev->vblank[i].last =
dev->driver->get_vblank_counter(dev, i);
@ -436,45 +436,41 @@ int drm_control(struct drm_device *dev, void *data,
}
/**
* drm_calc_timestamping_constants - Calculate and
* store various constants which are later needed by
* vblank and swap-completion timestamping, e.g, by
* drm_calc_vbltimestamp_from_scanoutpos().
* They are derived from crtc's true scanout timing,
* so they take things like panel scaling or other
* adjustments into account.
* drm_calc_timestamping_constants - Calculate vblank timestamp constants
*
* @crtc: drm_crtc whose timestamp constants should be updated.
* @mode: display mode containing the scanout timings
*
* Calculate and store various constants which are later
* needed by vblank and swap-completion timestamping, e.g,
* by drm_calc_vbltimestamp_from_scanoutpos(). They are
* derived from crtc's true scanout timing, so they take
* things like panel scaling or other adjustments into account.
*/
void drm_calc_timestamping_constants(struct drm_crtc *crtc)
void drm_calc_timestamping_constants(struct drm_crtc *crtc,
const struct drm_display_mode *mode)
{
s64 linedur_ns = 0, pixeldur_ns = 0, framedur_ns = 0;
u64 dotclock;
/* Dot clock in Hz: */
dotclock = (u64) crtc->hwmode.clock * 1000;
/* Fields of interlaced scanout modes are only half a frame duration.
* Double the dotclock to get half the frame-/line-/pixelduration.
*/
if (crtc->hwmode.flags & DRM_MODE_FLAG_INTERLACE)
dotclock *= 2;
int linedur_ns = 0, pixeldur_ns = 0, framedur_ns = 0;
int dotclock = mode->crtc_clock;
/* Valid dotclock? */
if (dotclock > 0) {
int frame_size;
/* Convert scanline length in pixels and video dot clock to
* line duration, frame duration and pixel duration in
* nanoseconds:
int frame_size = mode->crtc_htotal * mode->crtc_vtotal;
/*
* Convert scanline length in pixels and video
* dot clock to line duration, frame duration
* and pixel duration in nanoseconds:
*/
pixeldur_ns = (s64) div64_u64(1000000000, dotclock);
linedur_ns = (s64) div64_u64(((u64) crtc->hwmode.crtc_htotal *
1000000000), dotclock);
frame_size = crtc->hwmode.crtc_htotal *
crtc->hwmode.crtc_vtotal;
framedur_ns = (s64) div64_u64((u64) frame_size * 1000000000,
dotclock);
pixeldur_ns = 1000000 / dotclock;
linedur_ns = div_u64((u64) mode->crtc_htotal * 1000000, dotclock);
framedur_ns = div_u64((u64) frame_size * 1000000, dotclock);
/*
* Fields of interlaced scanout modes are only half a frame duration.
*/
if (mode->flags & DRM_MODE_FLAG_INTERLACE)
framedur_ns /= 2;
} else
DRM_ERROR("crtc %d: Can't calculate constants, dotclock = 0!\n",
crtc->base.id);
@ -484,11 +480,11 @@ void drm_calc_timestamping_constants(struct drm_crtc *crtc)
crtc->framedur_ns = framedur_ns;
DRM_DEBUG("crtc %d: hwmode: htotal %d, vtotal %d, vdisplay %d\n",
crtc->base.id, crtc->hwmode.crtc_htotal,
crtc->hwmode.crtc_vtotal, crtc->hwmode.crtc_vdisplay);
crtc->base.id, mode->crtc_htotal,
mode->crtc_vtotal, mode->crtc_vdisplay);
DRM_DEBUG("crtc %d: clock %d kHz framedur %d linedur %d, pixeldur %d\n",
crtc->base.id, (int) dotclock/1000, (int) framedur_ns,
(int) linedur_ns, (int) pixeldur_ns);
crtc->base.id, dotclock, framedur_ns,
linedur_ns, pixeldur_ns);
}
EXPORT_SYMBOL(drm_calc_timestamping_constants);
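A worked example of the constants computed above, assuming a 1920x1080@60 mode with crtc_clock = 148500 (kHz):
	/* frame_size  = 2200 * 1125                = 2475000 pixels
	 * framedur_ns = 2475000 * 1000000 / 148500 = 16666666 ns (~16.7 ms)
	 * linedur_ns  = 2200 * 1000000 / 148500    = 14814 ns
	 * pixeldur_ns = 1000000 / 148500           = 6 ns (integer division)
	 */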
@ -521,6 +517,7 @@ EXPORT_SYMBOL(drm_calc_timestamping_constants);
* 0 = Default.
* DRM_CALLED_FROM_VBLIRQ = If function is called from vbl irq handler.
* @refcrtc: drm_crtc* of crtc which defines scanout timing.
* @mode: mode which defines the scanout timings
*
* Returns negative value on error, failure or if not supported in current
* video mode:
@ -540,14 +537,14 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
int *max_error,
struct timeval *vblank_time,
unsigned flags,
struct drm_crtc *refcrtc)
const struct drm_crtc *refcrtc,
const struct drm_display_mode *mode)
{
ktime_t stime, etime, mono_time_offset;
struct timeval tv_etime;
struct drm_display_mode *mode;
int vbl_status, vtotal, vdisplay;
int vbl_status;
int vpos, hpos, i;
s64 framedur_ns, linedur_ns, pixeldur_ns, delta_ns, duration_ns;
int framedur_ns, linedur_ns, pixeldur_ns, delta_ns, duration_ns;
bool invbl;
if (crtc < 0 || crtc >= dev->num_crtcs) {
@ -561,10 +558,6 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
return -EIO;
}
mode = &refcrtc->hwmode;
vtotal = mode->crtc_vtotal;
vdisplay = mode->crtc_vdisplay;
/* Durations of frames, lines, pixels in nanoseconds. */
framedur_ns = refcrtc->framedur_ns;
linedur_ns = refcrtc->linedur_ns;
@ -573,7 +566,7 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
/* If mode timing undefined, just return as no-op:
* Happens during initial modesetting of a crtc.
*/
if (vtotal <= 0 || vdisplay <= 0 || framedur_ns == 0) {
if (framedur_ns == 0) {
DRM_DEBUG("crtc %d: Noop due to uninitialized mode.\n", crtc);
return -EAGAIN;
}
@ -590,7 +583,7 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
* Get vertical and horizontal scanout position vpos, hpos,
* and bounding timestamps stime, etime, pre/post query.
*/
vbl_status = dev->driver->get_scanout_position(dev, crtc, &vpos,
vbl_status = dev->driver->get_scanout_position(dev, crtc, flags, &vpos,
&hpos, &stime, &etime);
/*
@ -611,18 +604,18 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
duration_ns = ktime_to_ns(etime) - ktime_to_ns(stime);
/* Accept result with < max_error nsecs timing uncertainty. */
if (duration_ns <= (s64) *max_error)
if (duration_ns <= *max_error)
break;
}
/* Noisy system timing? */
if (i == DRM_TIMESTAMP_MAXRETRIES) {
DRM_DEBUG("crtc %d: Noisy timestamp %d us > %d us [%d reps].\n",
crtc, (int) duration_ns/1000, *max_error/1000, i);
crtc, duration_ns/1000, *max_error/1000, i);
}
/* Return upper bound of timestamp precision error. */
*max_error = (int) duration_ns;
*max_error = duration_ns;
/* Check if in vblank area:
* vpos is >=0 in video scanout area, but negative
@ -635,25 +628,7 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
* since start of scanout at first display scanline. delta_ns
* can be negative if start of scanout hasn't happened yet.
*/
delta_ns = (s64) vpos * linedur_ns + (s64) hpos * pixeldur_ns;
/* Is vpos outside nominal vblank area, but less than
* 1/100 of a frame height away from start of vblank?
* If so, assume this isn't a massively delayed vblank
* interrupt, but a vblank interrupt that fired a few
* microseconds before true start of vblank. Compensate
* by adding a full frame duration to the final timestamp.
* Happens, e.g., on ATI R500, R600.
*
* We only do this if DRM_CALLED_FROM_VBLIRQ.
*/
if ((flags & DRM_CALLED_FROM_VBLIRQ) && !invbl &&
((vdisplay - vpos) < vtotal / 100)) {
delta_ns = delta_ns - framedur_ns;
/* Signal this correction as "applied". */
vbl_status |= 0x8;
}
delta_ns = vpos * linedur_ns + hpos * pixeldur_ns;
if (!drm_timestamp_monotonic)
etime = ktime_sub(etime, mono_time_offset);
@ -673,7 +648,7 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
crtc, (int)vbl_status, hpos, vpos,
(long)tv_etime.tv_sec, (long)tv_etime.tv_usec,
(long)vblank_time->tv_sec, (long)vblank_time->tv_usec,
(int)duration_ns/1000, i);
duration_ns/1000, i);
vbl_status = DRM_VBLANKTIME_SCANOUTPOS_METHOD;
if (invbl)
@ -960,7 +935,7 @@ void drm_vblank_put(struct drm_device *dev, int crtc)
if (atomic_dec_and_test(&dev->vblank[crtc].refcount) &&
(drm_vblank_offdelay > 0))
mod_timer(&dev->vblank_disable_timer,
jiffies + ((drm_vblank_offdelay * DRM_HZ)/1000));
jiffies + ((drm_vblank_offdelay * HZ)/1000));
}
EXPORT_SYMBOL(drm_vblank_put);
@ -980,7 +955,7 @@ void drm_vblank_off(struct drm_device *dev, int crtc)
spin_lock_irqsave(&dev->vbl_lock, irqflags);
vblank_disable_and_save(dev, crtc);
DRM_WAKEUP(&dev->vblank[crtc].queue);
wake_up(&dev->vblank[crtc].queue);
/* Send any queued vblank events, lest the natives grow disquiet */
seq = drm_vblank_count_and_time(dev, crtc, &now);
@ -1244,7 +1219,7 @@ int drm_wait_vblank(struct drm_device *dev, void *data,
DRM_DEBUG("waiting on vblank count %d, crtc %d\n",
vblwait->request.sequence, crtc);
dev->vblank[crtc].last_wait = vblwait->request.sequence;
DRM_WAIT_ON(ret, dev->vblank[crtc].queue, 3 * DRM_HZ,
DRM_WAIT_ON(ret, dev->vblank[crtc].queue, 3 * HZ,
(((drm_vblank_count(dev, crtc) -
vblwait->request.sequence) <= (1 << 23)) ||
!dev->irq_enabled));
@ -1363,7 +1338,7 @@ bool drm_handle_vblank(struct drm_device *dev, int crtc)
crtc, (int) diff_ns);
}
DRM_WAKEUP(&dev->vblank[crtc].queue);
wake_up(&dev->vblank[crtc].queue);
drm_handle_vblank_events(dev, crtc);
spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);


@ -82,19 +82,19 @@ static void *agp_remap(unsigned long offset, unsigned long size,
}
/** Wrapper around agp_free_memory() */
void drm_free_agp(DRM_AGP_MEM * handle, int pages)
void drm_free_agp(struct agp_memory * handle, int pages)
{
agp_free_memory(handle);
}
/** Wrapper around agp_bind_memory() */
int drm_bind_agp(DRM_AGP_MEM * handle, unsigned int start)
int drm_bind_agp(struct agp_memory * handle, unsigned int start)
{
return agp_bind_memory(handle, start);
}
/** Wrapper around agp_unbind_memory() */
int drm_unbind_agp(DRM_AGP_MEM * handle)
int drm_unbind_agp(struct agp_memory * handle)
{
return agp_unbind_memory(handle);
}
@ -110,8 +110,7 @@ static inline void *agp_remap(unsigned long offset, unsigned long size,
void drm_core_ioremap(struct drm_local_map *map, struct drm_device *dev)
{
if (drm_core_has_AGP(dev) &&
dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
map->handle = agp_remap(map->offset, map->size, dev);
else
map->handle = ioremap(map->offset, map->size);
@ -120,8 +119,7 @@ EXPORT_SYMBOL(drm_core_ioremap);
void drm_core_ioremap_wc(struct drm_local_map *map, struct drm_device *dev)
{
if (drm_core_has_AGP(dev) &&
dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
map->handle = agp_remap(map->offset, map->size, dev);
else
map->handle = ioremap_wc(map->offset, map->size);
@ -133,8 +131,7 @@ void drm_core_ioremapfree(struct drm_local_map *map, struct drm_device *dev)
if (!map->handle || !map->size)
return;
if (drm_core_has_AGP(dev) &&
dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
if (dev->agp && dev->agp->cant_use_aperture && map->type == _DRM_AGP)
vunmap(map->handle);
else
iounmap(map->handle);


@ -0,0 +1,315 @@
/*
* MIPI DSI Bus
*
* Copyright (C) 2012-2013, Samsung Electronics, Co., Ltd.
* Andrzej Hajda <a.hajda@samsung.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sub license, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include <drm/drm_mipi_dsi.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <video/mipi_display.h>
static int mipi_dsi_device_match(struct device *dev, struct device_driver *drv)
{
return of_driver_match_device(dev, drv);
}
static const struct dev_pm_ops mipi_dsi_device_pm_ops = {
.runtime_suspend = pm_generic_runtime_suspend,
.runtime_resume = pm_generic_runtime_resume,
.suspend = pm_generic_suspend,
.resume = pm_generic_resume,
.freeze = pm_generic_freeze,
.thaw = pm_generic_thaw,
.poweroff = pm_generic_poweroff,
.restore = pm_generic_restore,
};
static struct bus_type mipi_dsi_bus_type = {
.name = "mipi-dsi",
.match = mipi_dsi_device_match,
.pm = &mipi_dsi_device_pm_ops,
};
static void mipi_dsi_dev_release(struct device *dev)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
of_node_put(dev->of_node);
kfree(dsi);
}
static const struct device_type mipi_dsi_device_type = {
.release = mipi_dsi_dev_release,
};
static struct mipi_dsi_device *mipi_dsi_device_alloc(struct mipi_dsi_host *host)
{
struct mipi_dsi_device *dsi;
dsi = kzalloc(sizeof(*dsi), GFP_KERNEL);
if (!dsi)
return ERR_PTR(-ENOMEM);
dsi->host = host;
dsi->dev.bus = &mipi_dsi_bus_type;
dsi->dev.parent = host->dev;
dsi->dev.type = &mipi_dsi_device_type;
device_initialize(&dsi->dev);
return dsi;
}
static int mipi_dsi_device_add(struct mipi_dsi_device *dsi)
{
struct mipi_dsi_host *host = dsi->host;
dev_set_name(&dsi->dev, "%s.%d", dev_name(host->dev), dsi->channel);
return device_add(&dsi->dev);
}
static struct mipi_dsi_device *
of_mipi_dsi_device_add(struct mipi_dsi_host *host, struct device_node *node)
{
struct mipi_dsi_device *dsi;
struct device *dev = host->dev;
int ret;
u32 reg;
ret = of_property_read_u32(node, "reg", &reg);
if (ret) {
dev_err(dev, "device node %s has no valid reg property: %d\n",
node->full_name, ret);
return ERR_PTR(-EINVAL);
}
if (reg > 3) {
dev_err(dev, "device node %s has invalid reg property: %u\n",
node->full_name, reg);
return ERR_PTR(-EINVAL);
}
dsi = mipi_dsi_device_alloc(host);
if (IS_ERR(dsi)) {
dev_err(dev, "failed to allocate DSI device %s: %ld\n",
node->full_name, PTR_ERR(dsi));
return dsi;
}
dsi->dev.of_node = of_node_get(node);
dsi->channel = reg;
ret = mipi_dsi_device_add(dsi);
if (ret) {
dev_err(dev, "failed to add DSI device %s: %d\n",
node->full_name, ret);
kfree(dsi);
return ERR_PTR(ret);
}
return dsi;
}
int mipi_dsi_host_register(struct mipi_dsi_host *host)
{
struct device_node *node;
for_each_available_child_of_node(host->dev->of_node, node)
of_mipi_dsi_device_add(host, node);
return 0;
}
EXPORT_SYMBOL(mipi_dsi_host_register);
static int mipi_dsi_remove_device_fn(struct device *dev, void *priv)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
device_unregister(&dsi->dev);
return 0;
}
void mipi_dsi_host_unregister(struct mipi_dsi_host *host)
{
device_for_each_child(host->dev, NULL, mipi_dsi_remove_device_fn);
}
EXPORT_SYMBOL(mipi_dsi_host_unregister);
/**
* mipi_dsi_attach - attach a DSI device to its DSI host
* @dsi: DSI peripheral
*/
int mipi_dsi_attach(struct mipi_dsi_device *dsi)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
if (!ops || !ops->attach)
return -ENOSYS;
return ops->attach(dsi->host, dsi);
}
EXPORT_SYMBOL(mipi_dsi_attach);
/**
* mipi_dsi_detach - detach a DSI device from its DSI host
* @dsi: DSI peripheral
*/
int mipi_dsi_detach(struct mipi_dsi_device *dsi)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
if (!ops || !ops->detach)
return -ENOSYS;
return ops->detach(dsi->host, dsi);
}
EXPORT_SYMBOL(mipi_dsi_detach);
/**
* mipi_dsi_dcs_write - send DCS write command
* @dsi: DSI device
* @channel: virtual channel
* @data: pointer to the command followed by parameters
* @len: length of @data
*/
int mipi_dsi_dcs_write(struct mipi_dsi_device *dsi, unsigned int channel,
const void *data, size_t len)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
struct mipi_dsi_msg msg = {
.channel = channel,
.tx_buf = data,
.tx_len = len
};
if (!ops || !ops->transfer)
return -ENOSYS;
switch (len) {
case 0:
return -EINVAL;
case 1:
msg.type = MIPI_DSI_DCS_SHORT_WRITE;
break;
case 2:
msg.type = MIPI_DSI_DCS_SHORT_WRITE_PARAM;
break;
default:
msg.type = MIPI_DSI_DCS_LONG_WRITE;
break;
}
return ops->transfer(dsi->host, &msg);
}
EXPORT_SYMBOL(mipi_dsi_dcs_write);
/**
* mipi_dsi_dcs_read - send DCS read request command
* @dsi: DSI device
* @channel: virtual channel
* @cmd: DCS read command
* @data: pointer to read buffer
* @len: length of @data
*
* Return: The number of bytes read on success or a negative error code on failure.
*/
ssize_t mipi_dsi_dcs_read(struct mipi_dsi_device *dsi, unsigned int channel,
u8 cmd, void *data, size_t len)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
struct mipi_dsi_msg msg = {
.channel = channel,
.type = MIPI_DSI_DCS_READ,
.tx_buf = &cmd,
.tx_len = 1,
.rx_buf = data,
.rx_len = len
};
if (!ops || !ops->transfer)
return -ENOSYS;
return ops->transfer(dsi->host, &msg);
}
EXPORT_SYMBOL(mipi_dsi_dcs_read);
static int mipi_dsi_drv_probe(struct device *dev)
{
struct mipi_dsi_driver *drv = to_mipi_dsi_driver(dev->driver);
struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
return drv->probe(dsi);
}
static int mipi_dsi_drv_remove(struct device *dev)
{
struct mipi_dsi_driver *drv = to_mipi_dsi_driver(dev->driver);
struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
return drv->remove(dsi);
}
/**
* mipi_dsi_driver_register - register a driver for DSI devices
* @drv: DSI driver structure
*/
int mipi_dsi_driver_register(struct mipi_dsi_driver *drv)
{
drv->driver.bus = &mipi_dsi_bus_type;
if (drv->probe)
drv->driver.probe = mipi_dsi_drv_probe;
if (drv->remove)
drv->driver.remove = mipi_dsi_drv_remove;
return driver_register(&drv->driver);
}
EXPORT_SYMBOL(mipi_dsi_driver_register);
/**
* mipi_dsi_driver_unregister - unregister a driver for DSI devices
* @drv: DSI driver structure
*/
void mipi_dsi_driver_unregister(struct mipi_dsi_driver *drv)
{
driver_unregister(&drv->driver);
}
EXPORT_SYMBOL(mipi_dsi_driver_unregister);
static int __init mipi_dsi_bus_init(void)
{
return bus_register(&mipi_dsi_bus_type);
}
postcore_initcall(mipi_dsi_bus_init);
MODULE_AUTHOR("Andrzej Hajda <a.hajda@samsung.com>");
MODULE_DESCRIPTION("MIPI DSI Bus");
MODULE_LICENSE("GPL and additional rights");

View File

@ -0,0 +1,100 @@
/*
* Copyright (C) 2013, NVIDIA Corporation. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sub license,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <linux/err.h>
#include <linux/module.h>
#include <drm/drm_crtc.h>
#include <drm/drm_panel.h>
static DEFINE_MUTEX(panel_lock);
static LIST_HEAD(panel_list);
void drm_panel_init(struct drm_panel *panel)
{
INIT_LIST_HEAD(&panel->list);
}
EXPORT_SYMBOL(drm_panel_init);
int drm_panel_add(struct drm_panel *panel)
{
mutex_lock(&panel_lock);
list_add_tail(&panel->list, &panel_list);
mutex_unlock(&panel_lock);
return 0;
}
EXPORT_SYMBOL(drm_panel_add);
void drm_panel_remove(struct drm_panel *panel)
{
mutex_lock(&panel_lock);
list_del_init(&panel->list);
mutex_unlock(&panel_lock);
}
EXPORT_SYMBOL(drm_panel_remove);
int drm_panel_attach(struct drm_panel *panel, struct drm_connector *connector)
{
if (panel->connector)
return -EBUSY;
panel->connector = connector;
panel->drm = connector->dev;
return 0;
}
EXPORT_SYMBOL(drm_panel_attach);
int drm_panel_detach(struct drm_panel *panel)
{
panel->connector = NULL;
panel->drm = NULL;
return 0;
}
EXPORT_SYMBOL(drm_panel_detach);
#ifdef CONFIG_OF
struct drm_panel *of_drm_find_panel(struct device_node *np)
{
struct drm_panel *panel;
mutex_lock(&panel_lock);
list_for_each_entry(panel, &panel_list, list) {
if (panel->dev->of_node == np) {
mutex_unlock(&panel_lock);
return panel;
}
}
mutex_unlock(&panel_lock);
return NULL;
}
EXPORT_SYMBOL(of_drm_find_panel);
#endif
MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
MODULE_DESCRIPTION("DRM panel infrastructure");
MODULE_LICENSE("GPL and additional rights");

View File

@ -262,16 +262,11 @@ static int drm_pci_irq_by_busid(struct drm_device *dev, struct drm_irq_busid *p)
return 0;
}
static int drm_pci_agp_init(struct drm_device *dev)
static void drm_pci_agp_init(struct drm_device *dev)
{
if (drm_core_has_AGP(dev)) {
if (drm_core_check_feature(dev, DRIVER_USE_AGP)) {
if (drm_pci_device_is_agp(dev))
dev->agp = drm_agp_init(dev);
if (drm_core_check_feature(dev, DRIVER_REQUIRE_AGP)
&& (dev->agp == NULL)) {
DRM_ERROR("Cannot initialize the agpgart module.\n");
return -EINVAL;
}
if (dev->agp) {
dev->agp->agp_mtrr = arch_phys_wc_add(
dev->agp->agp_info.aper_base,
@ -279,15 +274,14 @@ static int drm_pci_agp_init(struct drm_device *dev)
1024 * 1024);
}
}
return 0;
}
static void drm_pci_agp_destroy(struct drm_device *dev)
void drm_pci_agp_destroy(struct drm_device *dev)
{
if (drm_core_has_AGP(dev) && dev->agp) {
if (dev->agp) {
arch_phys_wc_del(dev->agp->agp_mtrr);
drm_agp_clear(dev);
drm_agp_destroy(dev->agp);
kfree(dev->agp);
dev->agp = NULL;
}
}
@ -299,8 +293,6 @@ static struct drm_bus drm_pci_bus = {
.set_busid = drm_pci_set_busid,
.set_unique = drm_pci_set_unique,
.irq_by_busid = drm_pci_irq_by_busid,
.agp_init = drm_pci_agp_init,
.agp_destroy = drm_pci_agp_destroy,
};
/**
@ -338,17 +330,25 @@ int drm_get_pci_dev(struct pci_dev *pdev, const struct pci_device_id *ent,
if (drm_core_check_feature(dev, DRIVER_MODESET))
pci_set_drvdata(pdev, dev);
drm_pci_agp_init(dev);
ret = drm_dev_register(dev, ent->driver_data);
if (ret)
goto err_pci;
goto err_agp;
DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",
driver->name, driver->major, driver->minor, driver->patchlevel,
driver->date, pci_name(pdev), dev->primary->index);
/* No locking needed since shadow-attach is single-threaded: it may
* only be called from the per-driver module init hook. */
if (!drm_core_check_feature(dev, DRIVER_MODESET))
list_add_tail(&dev->legacy_dev_list, &driver->legacy_dev_list);
return 0;
err_pci:
err_agp:
drm_pci_agp_destroy(dev);
pci_disable_device(pdev);
err_free:
drm_dev_free(dev);
@ -375,7 +375,6 @@ int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
DRM_DEBUG("\n");
INIT_LIST_HEAD(&driver->device_list);
driver->kdriver.pci = pdriver;
driver->bus = &drm_pci_bus;
@ -383,6 +382,7 @@ int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
return pci_register_driver(pdriver);
/* If not using KMS, fall back to stealth mode manual scanning. */
INIT_LIST_HEAD(&driver->legacy_dev_list);
for (i = 0; pdriver->id_table[i].vendor != 0; i++) {
pid = &pdriver->id_table[i];
@ -452,6 +452,7 @@ int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
return -1;
}
void drm_pci_agp_destroy(struct drm_device *dev) {}
#endif
EXPORT_SYMBOL(drm_pci_init);
@ -465,8 +466,11 @@ void drm_pci_exit(struct drm_driver *driver, struct pci_driver *pdriver)
if (driver->driver_features & DRIVER_MODESET) {
pci_unregister_driver(pdriver);
} else {
list_for_each_entry_safe(dev, tmp, &driver->device_list, driver_item)
list_for_each_entry_safe(dev, tmp, &driver->legacy_dev_list,
legacy_dev_list) {
drm_put_dev(dev);
list_del(&dev->legacy_dev_list);
}
}
DRM_INFO("Module unloaded\n");
}

View File

@ -147,18 +147,6 @@ int drm_platform_init(struct drm_driver *driver, struct platform_device *platfor
driver->kdriver.platform_device = platform_device;
driver->bus = &drm_platform_bus;
INIT_LIST_HEAD(&driver->device_list);
return drm_get_platform_dev(platform_device, driver);
}
EXPORT_SYMBOL(drm_platform_init);
void drm_platform_exit(struct drm_driver *driver, struct platform_device *platform_device)
{
struct drm_device *dev, *tmp;
DRM_DEBUG("\n");
list_for_each_entry_safe(dev, tmp, &driver->device_list, driver_item)
drm_put_dev(dev);
DRM_INFO("Module unloaded\n");
}
EXPORT_SYMBOL(drm_platform_exit);

View File

@ -99,13 +99,19 @@ void drm_ut_debug_printk(unsigned int request_level,
const char *function_name,
const char *format, ...)
{
struct va_format vaf;
va_list args;
if (drm_debug & request_level) {
if (function_name)
printk(KERN_DEBUG "[%s:%s], ", prefix, function_name);
va_start(args, format);
vprintk(format, args);
vaf.fmt = format;
vaf.va = &args;
if (function_name)
printk(KERN_DEBUG "[%s:%s], %pV", prefix,
function_name, &vaf);
else
printk(KERN_DEBUG "%pV", &vaf);
va_end(args);
}
}
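The rewrite above defers all formatting to a single printk() via %pV, so the prefix and the caller's message can no longer end up split across two log lines. The pattern in isolation, with a hypothetical wrapper name:

#include <linux/kernel.h>

static __printf(2, 3) void example_log(const char *prefix,
                                       const char *format, ...)
{
        struct va_format vaf;
        va_list args;

        va_start(args, format);
        vaf.fmt = format;
        vaf.va = &args;
        /* %pV re-expands the caller's format and arguments in place. */
        printk(KERN_DEBUG "[%s] %pV", prefix, &vaf);
        va_end(args);
}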
@ -521,16 +527,10 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
mutex_lock(&drm_global_mutex);
if (dev->driver->bus->agp_init) {
ret = dev->driver->bus->agp_init(dev);
if (ret)
goto out_unlock;
}
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = drm_get_minor(dev, &dev->control, DRM_MINOR_CONTROL);
if (ret)
goto err_agp;
goto out_unlock;
}
if (drm_core_check_feature(dev, DRIVER_RENDER) && drm_rnodes) {
@ -557,8 +557,6 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
goto err_unload;
}
list_add_tail(&dev->driver_item, &dev->driver->device_list);
ret = 0;
goto out_unlock;
@ -571,9 +569,6 @@ err_render_node:
drm_unplug_minor(dev->render);
err_control_node:
drm_unplug_minor(dev->control);
err_agp:
if (dev->driver->bus->agp_destroy)
dev->driver->bus->agp_destroy(dev);
out_unlock:
mutex_unlock(&drm_global_mutex);
return ret;
@ -597,8 +592,8 @@ void drm_dev_unregister(struct drm_device *dev)
if (dev->driver->unload)
dev->driver->unload(dev);
if (dev->driver->bus->agp_destroy)
dev->driver->bus->agp_destroy(dev);
if (dev->agp)
drm_pci_agp_destroy(dev);
drm_vblank_cleanup(dev);
@ -608,7 +603,5 @@ void drm_dev_unregister(struct drm_device *dev)
drm_unplug_minor(dev->control);
drm_unplug_minor(dev->render);
drm_unplug_minor(dev->primary);
list_del(&dev->driver_item);
}
EXPORT_SYMBOL(drm_dev_unregister);

View File

@ -1,4 +1,5 @@
#include <drm/drmP.h>
#include <drm/drm_usb.h>
#include <linux/usb.h>
#include <linux/module.h>
@ -63,7 +64,6 @@ int drm_usb_init(struct drm_driver *driver, struct usb_driver *udriver)
int res;
DRM_DEBUG("\n");
INIT_LIST_HEAD(&driver->device_list);
driver->kdriver.usb = udriver;
driver->bus = &drm_usb_bus;

View File

@ -101,7 +101,7 @@ static int drm_do_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
/*
* Find the right map
*/
if (!drm_core_has_AGP(dev))
if (!dev->agp)
goto vm_fault_error;
if (!dev->agp || !dev->agp->cant_use_aperture)
@ -220,7 +220,6 @@ static void drm_vm_shm_close(struct vm_area_struct *vma)
DRM_DEBUG("0x%08lx,0x%08lx\n",
vma->vm_start, vma->vm_end - vma->vm_start);
atomic_dec(&dev->vma_count);
map = vma->vm_private_data;
@ -266,9 +265,6 @@ static void drm_vm_shm_close(struct vm_area_struct *vma)
dmah.size = map->size;
__drm_pci_free(dev, &dmah);
break;
case _DRM_GEM:
DRM_ERROR("tried to rmmap GEM object\n");
break;
}
kfree(map);
}
@ -408,7 +404,6 @@ void drm_vm_open_locked(struct drm_device *dev,
DRM_DEBUG("0x%08lx,0x%08lx\n",
vma->vm_start, vma->vm_end - vma->vm_start);
atomic_inc(&dev->vma_count);
vma_entry = kmalloc(sizeof(*vma_entry), GFP_KERNEL);
if (vma_entry) {
@ -436,7 +431,6 @@ void drm_vm_close_locked(struct drm_device *dev,
DRM_DEBUG("0x%08lx,0x%08lx\n",
vma->vm_start, vma->vm_end - vma->vm_start);
atomic_dec(&dev->vma_count);
list_for_each_entry_safe(pt, temp, &dev->vmalist, head) {
if (pt->vma == vma) {
@ -595,7 +589,7 @@ int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
switch (map->type) {
#if !defined(__arm__)
case _DRM_AGP:
if (drm_core_has_AGP(dev) && dev->agp->cant_use_aperture) {
if (dev->agp && dev->agp->cant_use_aperture) {
/*
* On some platforms we can't talk to bus dma address from the CPU, so for
* memory of type DRM_AGP, we'll deal with sorting out the real physical

View File

@ -14,6 +14,8 @@
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include <linux/anon_inodes.h>
#include <drm/exynos_drm.h>
#include "exynos_drm_drv.h"
@ -119,6 +121,8 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
drm_vblank_offdelay = VBLANK_OFF_DELAY;
platform_set_drvdata(dev->platformdev, dev);
return 0;
err_drm_device:
@ -150,9 +154,14 @@ static int exynos_drm_unload(struct drm_device *dev)
return 0;
}
static const struct file_operations exynos_drm_gem_fops = {
.mmap = exynos_drm_gem_mmap_buffer,
};
static int exynos_drm_open(struct drm_device *dev, struct drm_file *file)
{
struct drm_exynos_file_private *file_priv;
struct file *anon_filp;
int ret;
file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
@ -167,6 +176,16 @@ static int exynos_drm_open(struct drm_device *dev, struct drm_file *file)
file->driver_priv = NULL;
}
anon_filp = anon_inode_getfile("exynos_gem", &exynos_drm_gem_fops,
NULL, 0);
if (IS_ERR(anon_filp)) {
kfree(file_priv);
return PTR_ERR(anon_filp);
}
anon_filp->f_mode = FMODE_READ | FMODE_WRITE;
file_priv->anon_filp = anon_filp;
return ret;
}
@ -179,6 +198,7 @@ static void exynos_drm_preclose(struct drm_device *dev,
static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
{
struct exynos_drm_private *private = dev->dev_private;
struct drm_exynos_file_private *file_priv;
struct drm_pending_vblank_event *v, *vt;
struct drm_pending_event *e, *et;
unsigned long flags;
@ -204,6 +224,9 @@ static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
}
spin_unlock_irqrestore(&dev->event_lock, flags);
file_priv = file->driver_priv;
if (file_priv->anon_filp)
fput(file_priv->anon_filp);
kfree(file->driver_priv);
file->driver_priv = NULL;
@ -305,7 +328,7 @@ static int exynos_drm_platform_probe(struct platform_device *pdev)
static int exynos_drm_platform_remove(struct platform_device *pdev)
{
drm_platform_exit(&exynos_drm_driver, pdev);
drm_put_dev(platform_get_drvdata(pdev));
return 0;
}

View File

@ -226,6 +226,7 @@ struct exynos_drm_ipp_private {
struct drm_exynos_file_private {
struct exynos_drm_g2d_private *g2d_priv;
struct exynos_drm_ipp_private *ipp_priv;
struct file *anon_filp;
};
/*

View File

@ -347,7 +347,7 @@ static void fimd_wait_for_vblank(struct device *dev)
*/
if (!wait_event_timeout(ctx->wait_vsync_queue,
!atomic_read(&ctx->wait_vsync_event),
DRM_HZ/20))
HZ/20))
DRM_DEBUG_KMS("vblank wait timed out.\n");
}
@ -706,7 +706,7 @@ static irqreturn_t fimd_irq_handler(int irq, void *dev_id)
/* set wait vsync event to zero and wake up queue. */
if (atomic_read(&ctx->wait_vsync_event)) {
atomic_set(&ctx->wait_vsync_event, 0);
DRM_WAKEUP(&ctx->wait_vsync_queue);
wake_up(&ctx->wait_vsync_queue);
}
out:
return IRQ_HANDLED;
@ -954,7 +954,7 @@ static int fimd_probe(struct platform_device *pdev)
}
ctx->driver_data = drm_fimd_get_driver_data(pdev);
DRM_INIT_WAITQUEUE(&ctx->wait_vsync_queue);
init_waitqueue_head(&ctx->wait_vsync_queue);
atomic_set(&ctx->wait_vsync_event, 0);
subdrv = &ctx->subdrv;

View File

@ -338,46 +338,22 @@ int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
&args->offset);
}
static struct drm_file *exynos_drm_find_drm_file(struct drm_device *drm_dev,
struct file *filp)
{
struct drm_file *file_priv;
/* find current process's drm_file from filelist. */
list_for_each_entry(file_priv, &drm_dev->filelist, lhead)
if (file_priv->filp == filp)
return file_priv;
WARN_ON(1);
return ERR_PTR(-EFAULT);
}
static int exynos_drm_gem_mmap_buffer(struct file *filp,
int exynos_drm_gem_mmap_buffer(struct file *filp,
struct vm_area_struct *vma)
{
struct drm_gem_object *obj = filp->private_data;
struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
struct drm_device *drm_dev = obj->dev;
struct exynos_drm_gem_buf *buffer;
struct drm_file *file_priv;
unsigned long vm_size;
int ret;
WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_private_data = obj;
vma->vm_ops = drm_dev->driver->gem_vm_ops;
/* restore it to driver's fops. */
filp->f_op = fops_get(drm_dev->driver->fops);
file_priv = exynos_drm_find_drm_file(drm_dev, filp);
if (IS_ERR(file_priv))
return PTR_ERR(file_priv);
/* restore it to drm_file. */
filp->private_data = file_priv;
update_vm_cache_attr(exynos_gem_obj, vma);
vm_size = vma->vm_end - vma->vm_start;
@ -411,15 +387,13 @@ static int exynos_drm_gem_mmap_buffer(struct file *filp,
return 0;
}
static const struct file_operations exynos_drm_gem_fops = {
.mmap = exynos_drm_gem_mmap_buffer,
};
int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_exynos_file_private *exynos_file_priv;
struct drm_exynos_gem_mmap *args = data;
struct drm_gem_object *obj;
struct file *anon_filp;
unsigned long addr;
if (!(dev->driver->driver_features & DRIVER_GEM)) {
@ -427,47 +401,25 @@ int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
return -ENODEV;
}
mutex_lock(&dev->struct_mutex);
obj = drm_gem_object_lookup(dev, file_priv, args->handle);
if (!obj) {
DRM_ERROR("failed to lookup gem object.\n");
mutex_unlock(&dev->struct_mutex);
return -EINVAL;
}
/*
* We have to use gem object and its fops for specific mmaper,
* but vm_mmap() can deliver only filp. So we have to change
* filp->f_op and filp->private_data temporarily, then restore
* again. So it is important to keep lock until restoration the
* settings to prevent others from misuse of filp->f_op or
* filp->private_data.
*/
mutex_lock(&dev->struct_mutex);
exynos_file_priv = file_priv->driver_priv;
anon_filp = exynos_file_priv->anon_filp;
anon_filp->private_data = obj;
/*
* Set specific mmper's fops. And it will be restored by
* exynos_drm_gem_mmap_buffer to dev->driver->fops.
* This is used to call specific mapper temporarily.
*/
file_priv->filp->f_op = &exynos_drm_gem_fops;
/*
* Set gem object to private_data so that specific mmaper
* can get the gem object. And it will be restored by
* exynos_drm_gem_mmap_buffer to drm_file.
*/
file_priv->filp->private_data = obj;
addr = vm_mmap(file_priv->filp, 0, args->size,
PROT_READ | PROT_WRITE, MAP_SHARED, 0);
addr = vm_mmap(anon_filp, 0, args->size, PROT_READ | PROT_WRITE,
MAP_SHARED, 0);
drm_gem_object_unreference(obj);
if (IS_ERR_VALUE(addr)) {
/* check filp->f_op, filp->private_data are restored */
if (file_priv->filp->f_op == &exynos_drm_gem_fops) {
file_priv->filp->f_op = fops_get(dev->driver->fops);
file_priv->filp->private_data = file_priv;
}
mutex_unlock(&dev->struct_mutex);
return (int)addr;
}
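The rework above replaces the old trick of temporarily swapping filp->f_op and filp->private_data with a dedicated anonymous file whose only operation is the GEM buffer mmap. Reduced to its essentials (names hypothetical, error handling trimmed), the pattern looks like this:

#include <linux/anon_inodes.h>
#include <linux/file.h>
#include <linux/mman.h>
#include <drm/drmP.h>

static const struct file_operations example_gem_fops = {
        .mmap = exynos_drm_gem_mmap_buffer, /* the buffer mmap above */
};

/* At open() time: create a private file carrying only the mmap fop. */
static struct file *example_open_anon(void)
{
        struct file *anon_filp;

        anon_filp = anon_inode_getfile("exynos_gem", &example_gem_fops,
                                       NULL, 0);
        if (!IS_ERR(anon_filp))
                anon_filp->f_mode = FMODE_READ | FMODE_WRITE;

        return anon_filp;
}

/* In the mmap ioctl: point the file at the object, then vm_mmap() it. */
static unsigned long example_map(struct file *anon_filp,
                                 struct drm_gem_object *obj, size_t size)
{
        anon_filp->private_data = obj;

        return vm_mmap(anon_filp, 0, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, 0);
}

Because the anon file is private to the driver, there is no longer a window in which another thread could observe an inconsistent f_op/private_data pair on the shared DRM file, which is what the removed comments were guarding against.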

View File

@ -122,6 +122,9 @@ int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int exynos_drm_gem_mmap_buffer(struct file *filp,
struct vm_area_struct *vma);
/* map user space allocated by malloc to pages. */
int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);

View File

@ -868,7 +868,7 @@ static void mixer_wait_for_vblank(void *ctx)
*/
if (!wait_event_timeout(mixer_ctx->wait_vsync_queue,
!atomic_read(&mixer_ctx->wait_vsync_event),
DRM_HZ/20))
HZ/20))
DRM_DEBUG_KMS("vblank wait timed out.\n");
}
@ -1019,7 +1019,7 @@ static irqreturn_t mixer_irq_handler(int irq, void *arg)
/* set wait vsync event to zero and wake up queue. */
if (atomic_read(&ctx->wait_vsync_event)) {
atomic_set(&ctx->wait_vsync_event, 0);
DRM_WAKEUP(&ctx->wait_vsync_queue);
wake_up(&ctx->wait_vsync_queue);
}
}
@ -1209,7 +1209,7 @@ static int mixer_probe(struct platform_device *pdev)
drm_hdmi_ctx->ctx = (void *)ctx;
ctx->vp_enabled = drv->is_vp_enabled;
ctx->mxr_ver = drv->version;
DRM_INIT_WAITQUEUE(&ctx->wait_vsync_queue);
init_waitqueue_head(&ctx->wait_vsync_queue);
atomic_set(&ctx->wait_vsync_event, 0);
platform_set_drvdata(pdev, drm_hdmi_ctx);

View File

@ -326,7 +326,7 @@ int psbfb_sync(struct fb_info *info)
struct psb_framebuffer *psbfb = &fbdev->pfb;
struct drm_device *dev = psbfb->base.dev;
struct drm_psb_private *dev_priv = dev->dev_private;
unsigned long _end = jiffies + DRM_HZ;
unsigned long _end = jiffies + HZ;
int busy = 0;
unsigned long flags;

View File

@ -483,7 +483,7 @@ cdv_intel_dp_aux_native_write(struct gma_encoder *encoder,
if (send_bytes > 16)
return -1;
msg[0] = AUX_NATIVE_WRITE << 4;
msg[0] = DP_AUX_NATIVE_WRITE << 4;
msg[1] = address >> 8;
msg[2] = address & 0xff;
msg[3] = send_bytes - 1;
@ -493,9 +493,10 @@ cdv_intel_dp_aux_native_write(struct gma_encoder *encoder,
ret = cdv_intel_dp_aux_ch(encoder, msg, msg_bytes, &ack, 1);
if (ret < 0)
return ret;
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK)
ack >>= 4;
if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_ACK)
break;
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
else if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_DEFER)
udelay(100);
else
return -EIO;
@ -523,7 +524,7 @@ cdv_intel_dp_aux_native_read(struct gma_encoder *encoder,
uint8_t ack;
int ret;
msg[0] = AUX_NATIVE_READ << 4;
msg[0] = DP_AUX_NATIVE_READ << 4;
msg[1] = address >> 8;
msg[2] = address & 0xff;
msg[3] = recv_bytes - 1;
@ -538,12 +539,12 @@ cdv_intel_dp_aux_native_read(struct gma_encoder *encoder,
return -EPROTO;
if (ret < 0)
return ret;
ack = reply[0];
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK) {
ack = reply[0] >> 4;
if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_ACK) {
memcpy(recv, reply + 1, ret - 1);
return ret - 1;
}
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
else if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_DEFER)
udelay(100);
else
return -EIO;
@ -569,12 +570,12 @@ cdv_intel_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
/* Set up the command byte */
if (mode & MODE_I2C_READ)
msg[0] = AUX_I2C_READ << 4;
msg[0] = DP_AUX_I2C_READ << 4;
else
msg[0] = AUX_I2C_WRITE << 4;
msg[0] = DP_AUX_I2C_WRITE << 4;
if (!(mode & MODE_I2C_STOP))
msg[0] |= AUX_I2C_MOT << 4;
msg[0] |= DP_AUX_I2C_MOT << 4;
msg[1] = address >> 8;
msg[2] = address;
@ -606,16 +607,16 @@ cdv_intel_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
return ret;
}
switch (reply[0] & AUX_NATIVE_REPLY_MASK) {
case AUX_NATIVE_REPLY_ACK:
switch ((reply[0] >> 4) & DP_AUX_NATIVE_REPLY_MASK) {
case DP_AUX_NATIVE_REPLY_ACK:
/* I2C-over-AUX Reply field is only valid
* when paired with AUX ACK.
*/
break;
case AUX_NATIVE_REPLY_NACK:
case DP_AUX_NATIVE_REPLY_NACK:
DRM_DEBUG_KMS("aux_ch native nack\n");
return -EREMOTEIO;
case AUX_NATIVE_REPLY_DEFER:
case DP_AUX_NATIVE_REPLY_DEFER:
udelay(100);
continue;
default:
@ -624,16 +625,16 @@ cdv_intel_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
return -EREMOTEIO;
}
switch (reply[0] & AUX_I2C_REPLY_MASK) {
case AUX_I2C_REPLY_ACK:
switch ((reply[0] >> 4) & DP_AUX_I2C_REPLY_MASK) {
case DP_AUX_I2C_REPLY_ACK:
if (mode == MODE_I2C_READ) {
*read_byte = reply[1];
}
return reply_bytes - 1;
case AUX_I2C_REPLY_NACK:
case DP_AUX_I2C_REPLY_NACK:
DRM_DEBUG_KMS("aux_i2c nack\n");
return -EREMOTEIO;
case AUX_I2C_REPLY_DEFER:
case DP_AUX_I2C_REPLY_DEFER:
DRM_DEBUG_KMS("aux_i2c defer\n");
udelay(100);
break;
@ -677,7 +678,7 @@ cdv_intel_dp_i2c_init(struct gma_connector *connector,
return ret;
}
void cdv_intel_fixed_panel_mode(struct drm_display_mode *fixed_mode,
static void cdv_intel_fixed_panel_mode(struct drm_display_mode *fixed_mode,
struct drm_display_mode *adjusted_mode)
{
adjusted_mode->hdisplay = fixed_mode->hdisplay;
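The conversion above switches gma500 to the shared DP_AUX_* defines from drm_dp_helper.h, where both the native and the I2C-over-AUX reply codes live in the high nibble of the first reply byte, hence the new `>> 4` before masking. A hypothetical decoder showing both fields:

#include <linux/errno.h>
#include <linux/types.h>
#include <drm/drm_dp_helper.h>

static int example_decode_aux_reply(u8 reply0, bool i2c_over_aux)
{
        u8 ack = reply0 >> 4; /* reply fields occupy bits 7:4 on the wire */

        switch (ack & DP_AUX_NATIVE_REPLY_MASK) {
        case DP_AUX_NATIVE_REPLY_ACK:
                break; /* continue to the I2C field */
        case DP_AUX_NATIVE_REPLY_DEFER:
                return -EAGAIN; /* caller retries after a short delay */
        case DP_AUX_NATIVE_REPLY_NACK:
        default:
                return -EREMOTEIO;
        }

        if (!i2c_over_aux)
                return 0;

        switch (ack & DP_AUX_I2C_REPLY_MASK) {
        case DP_AUX_I2C_REPLY_ACK:
                return 0;
        case DP_AUX_I2C_REPLY_DEFER:
                return -EAGAIN;
        case DP_AUX_I2C_REPLY_NACK:
        default:
                return -EREMOTEIO;
        }
}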

View File

@ -349,6 +349,7 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
/* If we didn't get a handle then turn the cursor off */
if (!handle) {
temp = CURSOR_MODE_DISABLE;
mutex_lock(&dev->struct_mutex);
if (gma_power_begin(dev, false)) {
REG_WRITE(control, temp);
@ -365,6 +366,7 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
gma_crtc->cursor_obj = NULL;
}
mutex_unlock(&dev->struct_mutex);
return 0;
}
@ -374,9 +376,12 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
return -EINVAL;
}
mutex_lock(&dev->struct_mutex);
obj = drm_gem_object_lookup(dev, file_priv, handle);
if (!obj)
return -ENOENT;
if (!obj) {
ret = -ENOENT;
goto unlock;
}
if (obj->size < width * height * 4) {
dev_dbg(dev->dev, "Buffer is too small\n");
@ -440,10 +445,13 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
}
gma_crtc->cursor_obj = obj;
unlock:
mutex_unlock(&dev->struct_mutex);
return ret;
unref_cursor:
drm_gem_object_unreference(obj);
mutex_unlock(&dev->struct_mutex);
return ret;
}

View File

@ -212,8 +212,8 @@ enum {
#define PSB_HIGH_REG_OFFS 0x0600
#define PSB_NUM_VBLANKS 2
#define PSB_WATCHDOG_DELAY (DRM_HZ * 2)
#define PSB_LID_DELAY (DRM_HZ / 10)
#define PSB_WATCHDOG_DELAY (HZ * 2)
#define PSB_LID_DELAY (HZ / 10)
#define MDFLD_PNW_B0 0x04
#define MDFLD_PNW_C0 0x08
@ -232,7 +232,7 @@ enum {
#define MDFLD_DSR_RR 45
#define MDFLD_DPU_ENABLE (1 << 31)
#define MDFLD_DSR_FULLSCREEN (1 << 30)
#define MDFLD_DSR_DELAY (DRM_HZ / MDFLD_DSR_RR)
#define MDFLD_DSR_DELAY (HZ / MDFLD_DSR_RR)
#define PSB_PWR_STATE_ON 1
#define PSB_PWR_STATE_OFF 2
@ -769,7 +769,7 @@ extern void psb_mmu_remove_pages(struct psb_mmu_pd *pd,
*psb_irq.c
*/
extern irqreturn_t psb_irq_handler(DRM_IRQ_ARGS);
extern irqreturn_t psb_irq_handler(int irq, void *arg);
extern int psb_irq_enable_dpst(struct drm_device *dev);
extern int psb_irq_disable_dpst(struct drm_device *dev);
extern void psb_irq_preinstall(struct drm_device *dev);

View File

@ -250,11 +250,6 @@ extern void psb_intel_sdvo_set_hotplug(struct drm_connector *connector,
extern int intelfb_probe(struct drm_device *dev);
extern int intelfb_remove(struct drm_device *dev,
struct drm_framebuffer *fb);
extern struct drm_framebuffer *psb_intel_framebuffer_create(struct drm_device
*dev, struct
drm_mode_fb_cmd
*mode_cmd,
void *mm_private);
extern bool psb_intel_lvds_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);

View File

@ -200,7 +200,7 @@ static void psb_vdc_interrupt(struct drm_device *dev, uint32_t vdc_stat)
mid_pipe_event_handler(dev, 1);
}
irqreturn_t psb_irq_handler(DRM_IRQ_ARGS)
irqreturn_t psb_irq_handler(int irq, void *arg)
{
struct drm_device *dev = arg;
struct drm_psb_private *dev_priv = dev->dev_private;
@ -253,7 +253,7 @@ irqreturn_t psb_irq_handler(DRM_IRQ_ARGS)
PSB_WVDC32(vdc_stat, PSB_INT_IDENTITY_R);
(void) PSB_RVDC32(PSB_INT_IDENTITY_R);
DRM_READMEMORYBARRIER();
rmb();
if (!handled)
return IRQ_NONE;
@ -450,21 +450,6 @@ int psb_irq_disable_dpst(struct drm_device *dev)
return 0;
}
#ifdef PSB_FIXME
static int psb_vblank_do_wait(struct drm_device *dev,
unsigned int *sequence, atomic_t *counter)
{
unsigned int cur_vblank;
int ret = 0;
DRM_WAIT_ON(ret, dev->vblank.queue, 3 * DRM_HZ,
(((cur_vblank = atomic_read(counter))
- *sequence) <= (1 << 23)));
*sequence = cur_vblank;
return ret;
}
#endif
/*
* It is used to enable VBLANK interrupt
*/

View File

@ -32,7 +32,7 @@ void sysirq_uninit(struct drm_device *dev);
void psb_irq_preinstall(struct drm_device *dev);
int psb_irq_postinstall(struct drm_device *dev);
void psb_irq_uninstall(struct drm_device *dev);
irqreturn_t psb_irq_handler(DRM_IRQ_ARGS);
irqreturn_t psb_irq_handler(int irq, void *arg);
int psb_irq_enable_dpst(struct drm_device *dev);
int psb_irq_disable_dpst(struct drm_device *dev);

View File

@ -1193,6 +1193,10 @@ static int i810_flip_bufs(struct drm_device *dev, void *data,
int i810_driver_load(struct drm_device *dev, unsigned long flags)
{
/* Our userspace depends upon the agp mapping support. */
if (!dev->agp)
return -EINVAL;
pci_set_master(dev->pdev);
return 0;

View File

@ -57,7 +57,7 @@ static const struct file_operations i810_driver_fops = {
static struct drm_driver driver = {
.driver_features =
DRIVER_USE_AGP | DRIVER_REQUIRE_AGP |
DRIVER_USE_AGP |
DRIVER_HAVE_DMA,
.dev_priv_size = sizeof(drm_i810_buf_priv_t),
.load = i810_driver_load,

View File

@ -1,8 +1,10 @@
config DRM_I915
tristate "Intel 8xx/9xx/G3x/G4x/HD Graphics"
depends on DRM
depends on AGP
depends on AGP_INTEL
depends on X86 && PCI
depends on (AGP || AGP=n)
select INTEL_GTT
select AGP_INTEL if AGP
# we need shmfs for the swappable backing store, and in particular
# the shmem_readpage() which depends upon tmpfs
select SHMEM
@ -35,15 +37,14 @@ config DRM_I915
config DRM_I915_KMS
bool "Enable modesetting on intel by default"
depends on DRM_I915
default y
help
Choose this option if you want kernel modesetting enabled by default,
and you have a new enough userspace to support this. Running old
userspaces with this enabled will cause pain. Note that this causes
the driver to bind to PCI devices, which precludes loading things
like intelfb.
Choose this option if you want kernel modesetting enabled by default.
If in doubt, say "Y".
config DRM_I915_FBDEV
bool "Enable legacy fbdev support for the modesettting intel driver"
bool "Enable legacy fbdev support for the modesetting intel driver"
depends on DRM_I915
select DRM_KMS_FB_HELPER
select FB_CFB_FILLRECT
@ -55,9 +56,12 @@ config DRM_I915_FBDEV
support. Note that this support also provide the linux console
support on top of the intel modesetting driver.
If in doubt, say "Y".
config DRM_I915_PRELIMINARY_HW_SUPPORT
bool "Enable preliminary support for prerelease Intel hardware by default"
depends on DRM_I915
default n
help
Choose this option if you have prerelease Intel hardware and want the
i915 driver to support it by default. You can enable such support at
@ -65,3 +69,15 @@ config DRM_I915_PRELIMINARY_HW_SUPPORT
option changes the default for that module option.
If in doubt, say "N".
config DRM_I915_UMS
bool "Enable userspace modesetting on Intel hardware (DEPRECATED)"
depends on DRM_I915
default n
help
Choose this option if you still need userspace modesetting.
Userspace modesetting is deprecated for quite some time now, so
enable this only if you have ancient versions of the DDX drivers.
If in doubt, say "N".

View File

@ -4,7 +4,6 @@
ccflags-y := -Iinclude/drm
i915-y := i915_drv.o i915_dma.o i915_irq.o \
i915_debugfs.o \
i915_gpu_error.o \
i915_suspend.o \
i915_gem.o \
@ -54,6 +53,8 @@ i915-$(CONFIG_ACPI) += intel_acpi.o intel_opregion.o
i915-$(CONFIG_DRM_I915_FBDEV) += intel_fbdev.o
i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o
obj-$(CONFIG_DRM_I915) += i915.o
CFLAGS_i915_trace_points.o := -I$(src)

View File

@ -87,49 +87,6 @@ struct ns2501_priv {
* when switching the resolution.
*/
static void enable_dvo(struct intel_dvo_device *dvo)
{
struct ns2501_priv *ns = (struct ns2501_priv *)(dvo->dev_priv);
struct i2c_adapter *adapter = dvo->i2c_bus;
struct intel_gmbus *bus = container_of(adapter,
struct intel_gmbus,
adapter);
struct drm_i915_private *dev_priv = bus->dev_priv;
DRM_DEBUG_KMS("%s: Trying to re-enable the DVO\n", __FUNCTION__);
ns->dvoc = I915_READ(DVO_C);
ns->pll_a = I915_READ(_DPLL_A);
ns->srcdim = I915_READ(DVOC_SRCDIM);
ns->fw_blc = I915_READ(FW_BLC);
I915_WRITE(DVOC, 0x10004084);
I915_WRITE(_DPLL_A, 0xd0820000);
I915_WRITE(DVOC_SRCDIM, 0x400300); // 1024x768
I915_WRITE(FW_BLC, 0x1080304);
I915_WRITE(DVOC, 0x90004084);
}
/*
* Restore the I915 registers modified by the above
* trigger function.
*/
static void restore_dvo(struct intel_dvo_device *dvo)
{
struct i2c_adapter *adapter = dvo->i2c_bus;
struct intel_gmbus *bus = container_of(adapter,
struct intel_gmbus,
adapter);
struct drm_i915_private *dev_priv = bus->dev_priv;
struct ns2501_priv *ns = (struct ns2501_priv *)(dvo->dev_priv);
I915_WRITE(DVOC, ns->dvoc);
I915_WRITE(_DPLL_A, ns->pll_a);
I915_WRITE(DVOC_SRCDIM, ns->srcdim);
I915_WRITE(FW_BLC, ns->fw_blc);
}
/*
** Read a register from the ns2501.
** Returns true if successful, false otherwise.
@ -300,7 +257,7 @@ static void ns2501_mode_set(struct intel_dvo_device *dvo,
struct drm_display_mode *adjusted_mode)
{
bool ok;
bool restore = false;
int retries = 10;
struct ns2501_priv *ns = (struct ns2501_priv *)(dvo->dev_priv);
DRM_DEBUG_KMS
@ -476,20 +433,7 @@ static void ns2501_mode_set(struct intel_dvo_device *dvo,
ns->reg_8_shadow |= NS2501_8_BPAS;
}
ok &= ns2501_writeb(dvo, NS2501_REG8, ns->reg_8_shadow);
if (!ok) {
if (restore)
restore_dvo(dvo);
enable_dvo(dvo);
restore = true;
}
} while (!ok);
/*
* Restore the old i915 registers before
* forcing the ns2501 on.
*/
if (restore)
restore_dvo(dvo);
} while (!ok && retries--);
}
/* set the NS2501 power state */
@ -510,7 +454,7 @@ static bool ns2501_get_hw_state(struct intel_dvo_device *dvo)
static void ns2501_dpms(struct intel_dvo_device *dvo, bool enable)
{
bool ok;
bool restore = false;
int retries = 10;
struct ns2501_priv *ns = (struct ns2501_priv *)(dvo->dev_priv);
unsigned char ch;
@ -537,16 +481,7 @@ static void ns2501_dpms(struct intel_dvo_device *dvo, bool enable)
ok &=
ns2501_writeb(dvo, 0x35,
enable ? 0xff : 0x00);
if (!ok) {
if (restore)
restore_dvo(dvo);
enable_dvo(dvo);
restore = true;
}
} while (!ok);
if (restore)
restore_dvo(dvo);
} while (!ok && retries--);
}
}

View File

@ -40,8 +40,6 @@
#include <drm/i915_drm.h>
#include "i915_drv.h"
#if defined(CONFIG_DEBUG_FS)
enum {
ACTIVE_LIST,
INACTIVE_LIST,
@ -406,16 +404,26 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
seq_putc(m, '\n');
list_for_each_entry_reverse(file, &dev->filelist, lhead) {
struct file_stats stats;
struct task_struct *task;
memset(&stats, 0, sizeof(stats));
idr_for_each(&file->object_idr, per_file_stats, &stats);
/*
* Although we have a valid reference on file->pid, that does
* not guarantee that the task_struct who called get_pid() is
* still alive (e.g. get_pid(current) => fork() => exit()).
* Therefore, we need to protect this ->comm access using RCU.
*/
rcu_read_lock();
task = pid_task(file->pid, PIDTYPE_PID);
seq_printf(m, "%s: %u objects, %zu bytes (%zu active, %zu inactive, %zu unbound)\n",
get_pid_task(file->pid, PIDTYPE_PID)->comm,
task ? task->comm : "<unknown>",
stats.count,
stats.total,
stats.active,
stats.inactive,
stats.unbound);
rcu_read_unlock();
}
mutex_unlock(&dev->struct_mutex);
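The comment above is worth restating: get_pid() keeps the struct pid alive, not the task behind it, so dereferencing the task outside RCU could touch freed memory once the process exits. The minimal safe pattern, as a hypothetical helper:

#include <linux/pid.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/seq_file.h>

static void example_print_comm(struct seq_file *m, struct pid *pid)
{
        struct task_struct *task;

        rcu_read_lock();
        task = pid_task(pid, PIDTYPE_PID); /* NULL once the task has exited */
        seq_printf(m, "%s\n", task ? task->comm : "<unknown>");
        rcu_read_unlock();
}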
@ -564,10 +572,12 @@ static int i915_gem_seqno_info(struct seq_file *m, void *data)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
for_each_ring(ring, dev_priv, i)
i915_ring_seqno_info(m, ring);
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
return 0;
@ -585,6 +595,7 @@ static int i915_interrupt_info(struct seq_file *m, void *data)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
if (INTEL_INFO(dev)->gen >= 8) {
int i;
@ -711,6 +722,7 @@ static int i915_interrupt_info(struct seq_file *m, void *data)
}
i915_ring_seqno_info(m, ring);
}
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
return 0;
@ -904,9 +916,11 @@ static int i915_rstdby_delays(struct seq_file *m, void *unused)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
crstanddelay = I915_READ16(CRSTANDVID);
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
seq_printf(m, "w/ctx: %d, w/o ctx: %d\n", (crstanddelay >> 8) & 0x3f, (crstanddelay & 0x3f));
@ -919,7 +933,9 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
int ret;
int ret = 0;
intel_runtime_pm_get(dev_priv);
flush_delayed_work(&dev_priv->rps.delayed_resume_work);
@ -945,9 +961,9 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
/* RPSTAT1 is in the GT power well */
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
goto out;
gen6_gt_force_wake_get(dev_priv);
gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
reqf = I915_READ(GEN6_RPNSWREQ);
reqf &= ~GEN6_TURBO_DISABLE;
@ -970,7 +986,7 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
cagf = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
cagf *= GT_FREQUENCY_MULTIPLIER;
gen6_gt_force_wake_put(dev_priv);
gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);
mutex_unlock(&dev->struct_mutex);
seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status);
@ -1018,23 +1034,24 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
seq_printf(m, "PUNIT_REG_GPU_FREQ_STS: 0x%08x\n", freq_sts);
seq_printf(m, "DDR freq: %d MHz\n", dev_priv->mem_freq);
val = vlv_punit_read(dev_priv, PUNIT_FUSE_BUS1);
val = valleyview_rps_max_freq(dev_priv);
seq_printf(m, "max GPU freq: %d MHz\n",
vlv_gpu_freq(dev_priv->mem_freq, val));
vlv_gpu_freq(dev_priv, val));
val = vlv_punit_read(dev_priv, PUNIT_REG_GPU_LFM);
val = valleyview_rps_min_freq(dev_priv);
seq_printf(m, "min GPU freq: %d MHz\n",
vlv_gpu_freq(dev_priv->mem_freq, val));
vlv_gpu_freq(dev_priv, val));
seq_printf(m, "current GPU freq: %d MHz\n",
vlv_gpu_freq(dev_priv->mem_freq,
(freq_sts >> 8) & 0xff));
vlv_gpu_freq(dev_priv, (freq_sts >> 8) & 0xff));
mutex_unlock(&dev_priv->rps.hw_lock);
} else {
seq_puts(m, "no P-state info available\n");
}
return 0;
out:
intel_runtime_pm_put(dev_priv);
return ret;
}
static int i915_delayfreq_table(struct seq_file *m, void *unused)
@ -1048,6 +1065,7 @@ static int i915_delayfreq_table(struct seq_file *m, void *unused)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
for (i = 0; i < 16; i++) {
delayfreq = I915_READ(PXVFREQ_BASE + i * 4);
@ -1055,6 +1073,8 @@ static int i915_delayfreq_table(struct seq_file *m, void *unused)
(delayfreq & PXVFREQ_PX_MASK) >> PXVFREQ_PX_SHIFT);
}
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
return 0;
@ -1076,12 +1096,14 @@ static int i915_inttoext_table(struct seq_file *m, void *unused)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
for (i = 1; i <= 32; i++) {
inttoext = I915_READ(INTTOEXT_BASE_ILK + i * 4);
seq_printf(m, "INTTOEXT%02d: 0x%08x\n", i, inttoext);
}
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
return 0;
@ -1099,11 +1121,13 @@ static int ironlake_drpc_info(struct seq_file *m)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
rgvmodectl = I915_READ(MEMMODECTL);
rstdbyctl = I915_READ(RSTDBYCTL);
crstandvid = I915_READ16(CRSTANDVID);
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
seq_printf(m, "HD boost: %s\n", (rgvmodectl & MEMMODE_BOOST_EN) ?
@ -1154,6 +1178,50 @@ static int ironlake_drpc_info(struct seq_file *m)
return 0;
}
static int vlv_drpc_info(struct seq_file *m)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u32 rpmodectl1, rcctl1;
unsigned fw_rendercount = 0, fw_mediacount = 0;
rpmodectl1 = I915_READ(GEN6_RP_CONTROL);
rcctl1 = I915_READ(GEN6_RC_CONTROL);
seq_printf(m, "Video Turbo Mode: %s\n",
yesno(rpmodectl1 & GEN6_RP_MEDIA_TURBO));
seq_printf(m, "Turbo enabled: %s\n",
yesno(rpmodectl1 & GEN6_RP_ENABLE));
seq_printf(m, "HW control enabled: %s\n",
yesno(rpmodectl1 & GEN6_RP_ENABLE));
seq_printf(m, "SW control enabled: %s\n",
yesno((rpmodectl1 & GEN6_RP_MEDIA_MODE_MASK) ==
GEN6_RP_MEDIA_SW_MODE));
seq_printf(m, "RC6 Enabled: %s\n",
yesno(rcctl1 & (GEN7_RC_CTL_TO_MODE |
GEN6_RC_CTL_EI_MODE(1))));
seq_printf(m, "Render Power Well: %s\n",
(I915_READ(VLV_GTLC_PW_STATUS) &
VLV_GTLC_PW_RENDER_STATUS_MASK) ? "Up" : "Down");
seq_printf(m, "Media Power Well: %s\n",
(I915_READ(VLV_GTLC_PW_STATUS) &
VLV_GTLC_PW_MEDIA_STATUS_MASK) ? "Up" : "Down");
spin_lock_irq(&dev_priv->uncore.lock);
fw_rendercount = dev_priv->uncore.fw_rendercount;
fw_mediacount = dev_priv->uncore.fw_mediacount;
spin_unlock_irq(&dev_priv->uncore.lock);
seq_printf(m, "Forcewake Render Count = %u\n", fw_rendercount);
seq_printf(m, "Forcewake Media Count = %u\n", fw_mediacount);
return 0;
}
static int gen6_drpc_info(struct seq_file *m)
{
@ -1167,6 +1235,7 @@ static int gen6_drpc_info(struct seq_file *m)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
spin_lock_irq(&dev_priv->uncore.lock);
forcewake_count = dev_priv->uncore.forcewake_count;
@ -1192,6 +1261,8 @@ static int gen6_drpc_info(struct seq_file *m)
sandybridge_pcode_read(dev_priv, GEN6_PCODE_READ_RC6VIDS, &rc6vids);
mutex_unlock(&dev_priv->rps.hw_lock);
intel_runtime_pm_put(dev_priv);
seq_printf(m, "Video Turbo Mode: %s\n",
yesno(rpmodectl1 & GEN6_RP_MEDIA_TURBO));
seq_printf(m, "HW control enabled: %s\n",
@ -1256,7 +1327,9 @@ static int i915_drpc_info(struct seq_file *m, void *unused)
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
if (IS_GEN6(dev) || IS_GEN7(dev))
if (IS_VALLEYVIEW(dev))
return vlv_drpc_info(m);
else if (IS_GEN6(dev) || IS_GEN7(dev))
return gen6_drpc_info(m);
else
return ironlake_drpc_info(m);
@ -1268,7 +1341,7 @@ static int i915_fbc_status(struct seq_file *m, void *unused)
struct drm_device *dev = node->minor->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
if (!I915_HAS_FBC(dev)) {
if (!HAS_FBC(dev)) {
seq_puts(m, "FBC unsupported on this chipset\n");
return 0;
}
@ -1330,7 +1403,7 @@ static int i915_ips_status(struct seq_file *m, void *unused)
return 0;
}
if (I915_READ(IPS_CTL) & IPS_ENABLE)
if (IS_BROADWELL(dev) || I915_READ(IPS_CTL) & IPS_ENABLE)
seq_puts(m, "enabled\n");
else
seq_puts(m, "disabled\n");
@ -1406,6 +1479,7 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
ret = mutex_lock_interruptible(&dev_priv->rps.hw_lock);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
seq_puts(m, "GPU freq (MHz)\tEffective CPU freq (MHz)\tEffective Ring freq (MHz)\n");
@ -1422,6 +1496,7 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
((ia_freq >> 8) & 0xff) * 100);
}
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev_priv->rps.hw_lock);
return 0;
@ -1437,8 +1512,10 @@ static int i915_gfxec(struct seq_file *m, void *unused)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
seq_printf(m, "GFXEC: %ld\n", (unsigned long)I915_READ(0x112f4));
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
@ -1565,13 +1642,21 @@ static int i915_gen6_forcewake_count_info(struct seq_file *m, void *data)
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned forcewake_count;
unsigned forcewake_count = 0, fw_rendercount = 0, fw_mediacount = 0;
spin_lock_irq(&dev_priv->uncore.lock);
forcewake_count = dev_priv->uncore.forcewake_count;
if (IS_VALLEYVIEW(dev)) {
fw_rendercount = dev_priv->uncore.fw_rendercount;
fw_mediacount = dev_priv->uncore.fw_mediacount;
} else
forcewake_count = dev_priv->uncore.forcewake_count;
spin_unlock_irq(&dev_priv->uncore.lock);
seq_printf(m, "forcewake count = %u\n", forcewake_count);
if (IS_VALLEYVIEW(dev)) {
seq_printf(m, "fw_rendercount = %u\n", fw_rendercount);
seq_printf(m, "fw_mediacount = %u\n", fw_mediacount);
} else
seq_printf(m, "forcewake count = %u\n", forcewake_count);
return 0;
}
@ -1610,6 +1695,7 @@ static int i915_swizzle_info(struct seq_file *m, void *data)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
seq_printf(m, "bit6 swizzle for X-tiling = %s\n",
swizzle_string(dev_priv->mm.bit_6_swizzle_x));
@ -1641,6 +1727,7 @@ static int i915_swizzle_info(struct seq_file *m, void *data)
seq_printf(m, "DISP_ARB_CTL = 0x%08x\n",
I915_READ(DISP_ARB_CTL));
}
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
return 0;
@ -1701,16 +1788,19 @@ static int i915_ppgtt_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
if (INTEL_INFO(dev)->gen >= 8)
gen8_ppgtt_info(m, dev);
else if (INTEL_INFO(dev)->gen >= 6)
gen6_ppgtt_info(m, dev);
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
return 0;
@ -1735,28 +1825,28 @@ static int i915_dpio_info(struct seq_file *m, void *data)
seq_printf(m, "DPIO_CTL: 0x%08x\n", I915_READ(DPIO_CTL));
seq_printf(m, "DPIO_DIV_A: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_DIV_A));
seq_printf(m, "DPIO_DIV_B: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_DIV_B));
seq_printf(m, "DPIO PLL DW3 CH0 : 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW3(0)));
seq_printf(m, "DPIO PLL DW3 CH1: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW3(1)));
seq_printf(m, "DPIO_REFSFR_A: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_REFSFR_A));
seq_printf(m, "DPIO_REFSFR_B: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_REFSFR_B));
seq_printf(m, "DPIO PLL DW5 CH0: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW5(0)));
seq_printf(m, "DPIO PLL DW5 CH1: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW5(1)));
seq_printf(m, "DPIO_CORE_CLK_A: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_CORE_CLK_A));
seq_printf(m, "DPIO_CORE_CLK_B: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_CORE_CLK_B));
seq_printf(m, "DPIO PLL DW7 CH0: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW7(0)));
seq_printf(m, "DPIO PLL DW7 CH1: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW7(1)));
seq_printf(m, "DPIO_LPF_COEFF_A: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_LPF_COEFF_A));
seq_printf(m, "DPIO_LPF_COEFF_B: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, _DPIO_LPF_COEFF_B));
seq_printf(m, "DPIO PLL DW10 CH0: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW10(0)));
seq_printf(m, "DPIO PLL DW10 CH1: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, VLV_PLL_DW10(1)));
seq_printf(m, "DPIO_FASTCLK_DISABLE: 0x%08x\n",
vlv_dpio_read(dev_priv, PIPE_A, DPIO_FASTCLK_DISABLE));
vlv_dpio_read(dev_priv, PIPE_A, VLV_CMN_DW0));
mutex_unlock(&dev_priv->dpio_lock);
@ -1784,6 +1874,8 @@ static int i915_edp_psr_status(struct seq_file *m, void *data)
u32 psrperf = 0;
bool enabled = false;
intel_runtime_pm_get(dev_priv);
seq_printf(m, "Sink_Support: %s\n", yesno(dev_priv->psr.sink_support));
seq_printf(m, "Source_OK: %s\n", yesno(dev_priv->psr.source_ok));
@ -1796,6 +1888,7 @@ static int i915_edp_psr_status(struct seq_file *m, void *data)
EDP_PSR_PERF_CNT_MASK;
seq_printf(m, "Performance_Counter: %u\n", psrperf);
intel_runtime_pm_put(dev_priv);
return 0;
}
@ -1845,6 +1938,76 @@ static int i915_pc8_status(struct seq_file *m, void *unused)
return 0;
}
static const char *power_domain_str(enum intel_display_power_domain domain)
{
switch (domain) {
case POWER_DOMAIN_PIPE_A:
return "PIPE_A";
case POWER_DOMAIN_PIPE_B:
return "PIPE_B";
case POWER_DOMAIN_PIPE_C:
return "PIPE_C";
case POWER_DOMAIN_PIPE_A_PANEL_FITTER:
return "PIPE_A_PANEL_FITTER";
case POWER_DOMAIN_PIPE_B_PANEL_FITTER:
return "PIPE_B_PANEL_FITTER";
case POWER_DOMAIN_PIPE_C_PANEL_FITTER:
return "PIPE_C_PANEL_FITTER";
case POWER_DOMAIN_TRANSCODER_A:
return "TRANSCODER_A";
case POWER_DOMAIN_TRANSCODER_B:
return "TRANSCODER_B";
case POWER_DOMAIN_TRANSCODER_C:
return "TRANSCODER_C";
case POWER_DOMAIN_TRANSCODER_EDP:
return "TRANSCODER_EDP";
case POWER_DOMAIN_VGA:
return "VGA";
case POWER_DOMAIN_AUDIO:
return "AUDIO";
case POWER_DOMAIN_INIT:
return "INIT";
default:
WARN_ON(1);
return "?";
}
}
static int i915_power_domain_info(struct seq_file *m, void *unused)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_power_domains *power_domains = &dev_priv->power_domains;
int i;
mutex_lock(&power_domains->lock);
seq_printf(m, "%-25s %s\n", "Power well/domain", "Use count");
for (i = 0; i < power_domains->power_well_count; i++) {
struct i915_power_well *power_well;
enum intel_display_power_domain power_domain;
power_well = &power_domains->power_wells[i];
seq_printf(m, "%-25s %d\n", power_well->name,
power_well->count);
for (power_domain = 0; power_domain < POWER_DOMAIN_NUM;
power_domain++) {
if (!(BIT(power_domain) & power_well->domains))
continue;
seq_printf(m, " %-23s %d\n",
power_domain_str(power_domain),
power_domains->domain_use_count[power_domain]);
}
}
mutex_unlock(&power_domains->lock);
return 0;
}
struct pipe_crc_info {
const char *name;
struct drm_device *dev;
@ -1857,6 +2020,9 @@ static int i915_pipe_crc_open(struct inode *inode, struct file *filep)
struct drm_i915_private *dev_priv = info->dev->dev_private;
struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[info->pipe];
if (info->pipe >= INTEL_INFO(info->dev)->num_pipes)
return -ENODEV;
spin_lock_irq(&pipe_crc->lock);
if (pipe_crc->opened) {
@ -2005,8 +2171,8 @@ static int i915_pipe_crc_create(struct dentry *root, struct drm_minor *minor,
info->dev = dev;
ent = debugfs_create_file(info->name, S_IRUGO, root, info,
&i915_pipe_crc_fops);
if (IS_ERR(ent))
return PTR_ERR(ent);
if (!ent)
return -ENOMEM;
return drm_add_fake_info_node(minor, ent, info);
}
@ -2347,7 +2513,7 @@ static int pipe_crc_set_source(struct drm_device *dev, enum pipe pipe,
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[pipe];
u32 val;
u32 val = 0; /* shut up gcc */
int ret;
if (pipe_crc->source == source)
@ -2742,7 +2908,7 @@ i915_drop_caches_set(void *data, u64 val)
struct i915_vma *vma, *x;
int ret;
DRM_DEBUG_DRIVER("Dropping caches: 0x%08llx\n", val);
DRM_DEBUG("Dropping caches: 0x%08llx\n", val);
/* No need to check and wait for gpu resets, only libdrm auto-restarts
* on ioctls on -EAGAIN. */
@ -2810,8 +2976,7 @@ i915_max_freq_get(void *data, u64 *val)
return ret;
if (IS_VALLEYVIEW(dev))
*val = vlv_gpu_freq(dev_priv->mem_freq,
dev_priv->rps.max_delay);
*val = vlv_gpu_freq(dev_priv, dev_priv->rps.max_delay);
else
*val = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
@ -2841,9 +3006,9 @@ i915_max_freq_set(void *data, u64 val)
* Turbo will still be enabled, but won't go above the set value.
*/
if (IS_VALLEYVIEW(dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
val = vlv_freq_opcode(dev_priv, val);
dev_priv->rps.max_delay = val;
gen6_set_rps(dev, val);
valleyview_set_rps(dev, val);
} else {
do_div(val, GT_FREQUENCY_MULTIPLIER);
dev_priv->rps.max_delay = val;
@ -2876,8 +3041,7 @@ i915_min_freq_get(void *data, u64 *val)
return ret;
if (IS_VALLEYVIEW(dev))
*val = vlv_gpu_freq(dev_priv->mem_freq,
dev_priv->rps.min_delay);
*val = vlv_gpu_freq(dev_priv, dev_priv->rps.min_delay);
else
*val = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
@ -2907,7 +3071,7 @@ i915_min_freq_set(void *data, u64 val)
* Turbo will still be enabled, but won't go below the set value.
*/
if (IS_VALLEYVIEW(dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
val = vlv_freq_opcode(dev_priv, val);
dev_priv->rps.min_delay = val;
valleyview_set_rps(dev, val);
} else {
@ -2938,8 +3102,11 @@ i915_cache_sharing_get(void *data, u64 *val)
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
snpcr = I915_READ(GEN6_MBCUNIT_SNPCR);
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev_priv->dev->struct_mutex);
*val = (snpcr & GEN6_MBC_SNPCR_MASK) >> GEN6_MBC_SNPCR_SHIFT;
@ -2960,6 +3127,7 @@ i915_cache_sharing_set(void *data, u64 val)
if (val > 3)
return -EINVAL;
intel_runtime_pm_get(dev_priv);
DRM_DEBUG_DRIVER("Manually setting uncore sharing to %llu\n", val);
/* Update the cache sharing policy here as well */
@ -2968,6 +3136,7 @@ i915_cache_sharing_set(void *data, u64 val)
snpcr |= (val << GEN6_MBC_SNPCR_SHIFT);
I915_WRITE(GEN6_MBCUNIT_SNPCR, snpcr);
intel_runtime_pm_put(dev_priv);
return 0;
}
@ -2983,7 +3152,8 @@ static int i915_forcewake_open(struct inode *inode, struct file *file)
if (INTEL_INFO(dev)->gen < 6)
return 0;
gen6_gt_force_wake_get(dev_priv);
intel_runtime_pm_get(dev_priv);
gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
return 0;
}
@ -2996,7 +3166,8 @@ static int i915_forcewake_release(struct inode *inode, struct file *file)
if (INTEL_INFO(dev)->gen < 6)
return 0;
gen6_gt_force_wake_put(dev_priv);
gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);
intel_runtime_pm_put(dev_priv);
return 0;
}
@ -3016,8 +3187,8 @@ static int i915_forcewake_create(struct dentry *root, struct drm_minor *minor)
S_IRUSR,
root, dev,
&i915_forcewake_fops);
if (IS_ERR(ent))
return PTR_ERR(ent);
if (!ent)
return -ENOMEM;
return drm_add_fake_info_node(minor, ent, &i915_forcewake_fops);
}
@ -3034,8 +3205,8 @@ static int i915_debugfs_create(struct dentry *root,
S_IRUGO | S_IWUSR,
root, dev,
fops);
if (IS_ERR(ent))
return PTR_ERR(ent);
if (!ent)
return -ENOMEM;
return drm_add_fake_info_node(minor, ent, fops);
}
@ -3079,6 +3250,7 @@ static const struct drm_info_list i915_debugfs_list[] = {
{"i915_edp_psr_status", i915_edp_psr_status, 0},
{"i915_energy_uJ", i915_energy_uJ, 0},
{"i915_pc8_status", i915_pc8_status, 0},
{"i915_power_domain_info", i915_power_domain_info, 0},
};
#define I915_DEBUGFS_ENTRIES ARRAY_SIZE(i915_debugfs_list)
@ -3102,10 +3274,10 @@ static const struct i915_debugfs_files {
void intel_display_crc_init(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int i;
enum pipe pipe;
for (i = 0; i < INTEL_INFO(dev)->num_pipes; i++) {
struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[i];
for_each_pipe(pipe) {
struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[pipe];
pipe_crc->opened = false;
spin_lock_init(&pipe_crc->lock);
@ -3164,5 +3336,3 @@ void i915_debugfs_cleanup(struct drm_minor *minor)
drm_debugfs_remove_files(info_list, 1, minor);
}
}
#endif /* CONFIG_DEBUG_FS */

View File

@ -42,6 +42,8 @@
#include <linux/vga_switcheroo.h>
#include <linux/slab.h>
#include <acpi/video.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#define LP_RING(d) (&((struct drm_i915_private *)(d))->ring[RCS])
@ -791,7 +793,7 @@ static int i915_wait_irq(struct drm_device * dev, int irq_nr)
master_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT;
if (ring->irq_get(ring)) {
DRM_WAIT_ON(ret, ring->irq_queue, 3 * DRM_HZ,
DRM_WAIT_ON(ret, ring->irq_queue, 3 * HZ,
READ_BREADCRUMB(dev_priv) >= irq_nr);
ring->irq_put(ring);
} else if (wait_for(READ_BREADCRUMB(dev_priv) >= irq_nr, 3000))
@ -828,7 +830,7 @@ static int i915_irq_emit(struct drm_device *dev, void *data,
result = i915_emit_irq(dev);
mutex_unlock(&dev->struct_mutex);
if (DRM_COPY_TO_USER(emit->irq_seq, &result, sizeof(int))) {
if (copy_to_user(emit->irq_seq, &result, sizeof(int))) {
DRM_ERROR("copy_to_user\n");
return -EFAULT;
}
@ -1016,8 +1018,8 @@ static int i915_getparam(struct drm_device *dev, void *data,
return -EINVAL;
}
if (DRM_COPY_TO_USER(param->value, &value, sizeof(int))) {
DRM_ERROR("DRM_COPY_TO_USER failed\n");
if (copy_to_user(param->value, &value, sizeof(int))) {
DRM_ERROR("copy_to_user failed\n");
return -EFAULT;
}
@ -1411,7 +1413,7 @@ void i915_master_destroy(struct drm_device *dev, struct drm_master *master)
master->driver_priv = NULL;
}
#ifdef CONFIG_DRM_I915_FBDEV
#if IS_ENABLED(CONFIG_FB)
static void i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
{
struct apertures_struct *ap;
@ -1484,6 +1486,10 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
return -ENODEV;
}
/* UMS needs agp support. */
if (!drm_core_check_feature(dev, DRIVER_MODESET) && !dev->agp)
return -EINVAL;
dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
if (dev_priv == NULL)
return -ENOMEM;
@ -1494,7 +1500,7 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
spin_lock_init(&dev_priv->irq_lock);
spin_lock_init(&dev_priv->gpu_error.lock);
spin_lock_init(&dev_priv->backlight.lock);
spin_lock_init(&dev_priv->backlight_lock);
spin_lock_init(&dev_priv->uncore.lock);
spin_lock_init(&dev_priv->mm.object_stat_lock);
mutex_init(&dev_priv->dpio_lock);
@ -1639,8 +1645,7 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
goto out_gem_unload;
}
if (HAS_POWER_WELL(dev))
intel_power_domains_init(dev);
intel_power_domains_init(dev);
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = i915_load_modeset_init(dev);
@ -1664,11 +1669,12 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
if (IS_GEN5(dev))
intel_gpu_ips_init(dev_priv);
intel_init_runtime_pm(dev_priv);
return 0;
out_power_well:
if (HAS_POWER_WELL(dev))
intel_power_domains_remove(dev);
intel_power_domains_remove(dev);
drm_vblank_cleanup(dev);
out_gem_unload:
if (dev_priv->mm.inactive_shrinker.scan_objects)
@ -1679,6 +1685,7 @@ out_gem_unload:
intel_teardown_gmbus(dev);
intel_teardown_mchbar(dev);
pm_qos_remove_request(&dev_priv->pm_qos);
destroy_workqueue(dev_priv->wq);
out_mtrrfree:
arch_phys_wc_del(dev_priv->gtt.mtrr);
@ -1704,25 +1711,27 @@ int i915_driver_unload(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
ret = i915_gem_suspend(dev);
if (ret) {
DRM_ERROR("failed to idle hardware: %d\n", ret);
return ret;
}
intel_fini_runtime_pm(dev_priv);
intel_gpu_ips_teardown();
if (HAS_POWER_WELL(dev)) {
/* The i915.ko module is still not prepared to be loaded when
* the power well is not enabled, so just enable it in case
* we're going to unload/reload. */
intel_display_set_init_power(dev, true);
intel_power_domains_remove(dev);
}
/* The i915.ko module is still not prepared to be loaded when
* the power well is not enabled, so just enable it in case
* we're going to unload/reload. */
intel_display_set_init_power(dev, true);
intel_power_domains_remove(dev);
i915_teardown_sysfs(dev);
if (dev_priv->mm.inactive_shrinker.scan_objects)
unregister_shrinker(&dev_priv->mm.inactive_shrinker);
ret = i915_gem_suspend(dev);
if (ret)
DRM_ERROR("failed to idle hardware: %d\n", ret);
io_mapping_free(dev_priv->gtt.mappable);
arch_phys_wc_del(dev_priv->gtt.mtrr);
@ -1777,7 +1786,6 @@ int i915_driver_unload(struct drm_device *dev)
list_del(&dev_priv->gtt.base.global_link);
WARN_ON(!list_empty(&dev_priv->vm_list));
drm_mm_takedown(&dev_priv->gtt.base.mm);
drm_vblank_cleanup(dev);
@ -1910,6 +1918,7 @@ const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE, i915_gem_context_create_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_get_reset_stats_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
};
int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls);

View File

@ -59,7 +59,7 @@ MODULE_PARM_DESC(powersave,
"Enable powersavings, fbc, downclocking, etc. (default: true)");
int i915_semaphores __read_mostly = -1;
module_param_named(semaphores, i915_semaphores, int, 0600);
module_param_named(semaphores, i915_semaphores, int, 0400);
MODULE_PARM_DESC(semaphores,
"Use semaphores for inter-ring sync (default: -1 (use per-chip defaults))");
@ -114,7 +114,7 @@ MODULE_PARM_DESC(enable_hangcheck,
"(default: true)");
int i915_enable_ppgtt __read_mostly = -1;
module_param_named(i915_enable_ppgtt, i915_enable_ppgtt, int, 0600);
module_param_named(i915_enable_ppgtt, i915_enable_ppgtt, int, 0400);
MODULE_PARM_DESC(i915_enable_ppgtt,
"Enable PPGTT (default: true)");
@ -155,7 +155,6 @@ MODULE_PARM_DESC(prefault_disable,
"Disable page prefaulting for pread/pwrite/reloc (default:false). For developers only.");
static struct drm_driver driver;
extern int intel_agp_enabled;
static const struct intel_device_info intel_i830_info = {
.gen = 2, .is_mobile = 1, .cursor_needs_physical = 1, .num_pipes = 2,
@ -173,6 +172,7 @@ static const struct intel_device_info intel_i85x_info = {
.gen = 2, .is_i85x = 1, .is_mobile = 1, .num_pipes = 2,
.cursor_needs_physical = 1,
.has_overlay = 1, .overlay_needs_physical = 1,
.has_fbc = 1,
.ring_mask = RENDER_RING,
};
@ -192,6 +192,7 @@ static const struct intel_device_info intel_i915gm_info = {
.cursor_needs_physical = 1,
.has_overlay = 1, .overlay_needs_physical = 1,
.supports_tv = 1,
.has_fbc = 1,
.ring_mask = RENDER_RING,
};
static const struct intel_device_info intel_i945g_info = {
@ -204,6 +205,7 @@ static const struct intel_device_info intel_i945gm_info = {
.has_hotplug = 1, .cursor_needs_physical = 1,
.has_overlay = 1, .overlay_needs_physical = 1,
.supports_tv = 1,
.has_fbc = 1,
.ring_mask = RENDER_RING,
};
@ -265,6 +267,7 @@ static const struct intel_device_info intel_ironlake_m_info = {
static const struct intel_device_info intel_sandybridge_d_info = {
.gen = 6, .num_pipes = 2,
.need_gfx_hws = 1, .has_hotplug = 1,
.has_fbc = 1,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING,
.has_llc = 1,
};
@ -280,6 +283,7 @@ static const struct intel_device_info intel_sandybridge_m_info = {
#define GEN7_FEATURES \
.gen = 7, .num_pipes = 3, \
.need_gfx_hws = 1, .has_hotplug = 1, \
.has_fbc = 1, \
.ring_mask = RENDER_RING | BSD_RING | BLT_RING, \
.has_llc = 1
@ -292,7 +296,6 @@ static const struct intel_device_info intel_ivybridge_m_info = {
GEN7_FEATURES,
.is_ivybridge = 1,
.is_mobile = 1,
.has_fbc = 1,
};
static const struct intel_device_info intel_ivybridge_q_info = {
@ -307,6 +310,7 @@ static const struct intel_device_info intel_valleyview_m_info = {
.num_pipes = 2,
.is_valleyview = 1,
.display_mmio_offset = VLV_DISPLAY_BASE,
.has_fbc = 0, /* legal, last one wins */
.has_llc = 0, /* legal, last one wins */
};
@ -315,6 +319,7 @@ static const struct intel_device_info intel_valleyview_d_info = {
.num_pipes = 2,
.is_valleyview = 1,
.display_mmio_offset = VLV_DISPLAY_BASE,
.has_fbc = 0, /* legal, last one wins */
.has_llc = 0, /* legal, last one wins */
};
@ -332,12 +337,10 @@ static const struct intel_device_info intel_haswell_m_info = {
.is_mobile = 1,
.has_ddi = 1,
.has_fpga_dbg = 1,
.has_fbc = 1,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING,
};
static const struct intel_device_info intel_broadwell_d_info = {
.is_preliminary = 1,
.gen = 8, .num_pipes = 3,
.need_gfx_hws = 1, .has_hotplug = 1,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING,
@ -346,7 +349,6 @@ static const struct intel_device_info intel_broadwell_d_info = {
};
static const struct intel_device_info intel_broadwell_m_info = {
.is_preliminary = 1,
.gen = 8, .is_mobile = 1, .num_pipes = 3,
.need_gfx_hws = 1, .has_hotplug = 1,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING,
@ -476,12 +478,12 @@ check_next:
bool i915_semaphore_is_enabled(struct drm_device *dev)
{
if (INTEL_INFO(dev)->gen < 6)
return 0;
return false;
/* Until we get further testing... */
if (IS_GEN8(dev)) {
WARN_ON(!i915_preliminary_hw_support);
return 0;
return false;
}
if (i915_semaphores >= 0)
@ -493,7 +495,7 @@ bool i915_semaphore_is_enabled(struct drm_device *dev)
return false;
#endif
return 1;
return true;
}
static int i915_drm_freeze(struct drm_device *dev)
@ -501,6 +503,8 @@ static int i915_drm_freeze(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_crtc *crtc;
intel_runtime_pm_get(dev_priv);
/* ignore lid events during suspend */
mutex_lock(&dev_priv->modeset_restore_lock);
dev_priv->modeset_restore = MODESET_SUSPENDED;
@ -688,6 +692,8 @@ static int __i915_drm_thaw(struct drm_device *dev, bool restore_gtt_mappings)
mutex_lock(&dev_priv->modeset_restore_lock);
dev_priv->modeset_restore = MODESET_DONE;
mutex_unlock(&dev_priv->modeset_restore_lock);
intel_runtime_pm_put(dev_priv);
return error;
}
@ -762,14 +768,14 @@ int i915_reset(struct drm_device *dev)
DRM_INFO("Simulated gpu hang, resetting stop_rings\n");
dev_priv->gpu_error.stop_rings = 0;
if (ret == -ENODEV) {
DRM_ERROR("Reset not implemented, but ignoring "
"error for simulated gpu hangs\n");
DRM_INFO("Reset not implemented, but ignoring "
"error for simulated gpu hangs\n");
ret = 0;
}
}
if (ret) {
DRM_ERROR("Failed to reset chip.\n");
DRM_ERROR("Failed to reset chip: %i\n", ret);
mutex_unlock(&dev->struct_mutex);
return ret;
}
@ -790,12 +796,9 @@ int i915_reset(struct drm_device *dev)
*/
if (drm_core_check_feature(dev, DRIVER_MODESET) ||
!dev_priv->ums.mm_suspended) {
bool hw_contexts_disabled = dev_priv->hw_contexts_disabled;
dev_priv->ums.mm_suspended = 0;
ret = i915_gem_init_hw(dev);
if (!hw_contexts_disabled && dev_priv->hw_contexts_disabled)
DRM_ERROR("HW contexts didn't survive reset\n");
mutex_unlock(&dev->struct_mutex);
if (ret) {
DRM_ERROR("Failed hw init on reset %d\n", ret);
@ -831,17 +834,7 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (PCI_FUNC(pdev->devfn))
return -ENODEV;
/* We've managed to ship a kms-enabled ddx that shipped with an XvMC
* implementation for gen3 (and only gen3) that used legacy drm maps
* (gasp!) to share buffers between X and the client. Hence we need to
* keep around the fake agp stuff for gen3, even when kms is enabled. */
if (intel_info->gen != 3) {
driver.driver_features &=
~(DRIVER_USE_AGP | DRIVER_REQUIRE_AGP);
} else if (!intel_agp_enabled) {
DRM_ERROR("drm/i915 can't work without intel_agp module!\n");
return -ENODEV;
}
driver.driver_features &= ~(DRIVER_USE_AGP);
return drm_get_pci_dev(pdev, ent, &driver);
}
@ -915,6 +908,49 @@ static int i915_pm_poweroff(struct device *dev)
return i915_drm_freeze(drm_dev);
}
static int i915_runtime_suspend(struct device *device)
{
struct pci_dev *pdev = to_pci_dev(device);
struct drm_device *dev = pci_get_drvdata(pdev);
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(!HAS_RUNTIME_PM(dev));
DRM_DEBUG_KMS("Suspending device\n");
i915_gem_release_all_mmaps(dev_priv);
del_timer_sync(&dev_priv->gpu_error.hangcheck_timer);
dev_priv->pm.suspended = true;
/*
 * Current versions of firmware which depend on this opregion
* notification have repurposed the D1 definition to mean
* "runtime suspended" vs. what you would normally expect (D3)
* to distinguish it from notifications that might be sent
* via the suspend path.
*/
intel_opregion_notify_adapter(dev, PCI_D1);
return 0;
}
static int i915_runtime_resume(struct device *device)
{
struct pci_dev *pdev = to_pci_dev(device);
struct drm_device *dev = pci_get_drvdata(pdev);
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(!HAS_RUNTIME_PM(dev));
DRM_DEBUG_KMS("Resuming device\n");
intel_opregion_notify_adapter(dev, PCI_D0);
dev_priv->pm.suspended = false;
return 0;
}
static const struct dev_pm_ops i915_pm_ops = {
.suspend = i915_pm_suspend,
.resume = i915_pm_resume,
@ -922,6 +958,8 @@ static const struct dev_pm_ops i915_pm_ops = {
.thaw = i915_pm_thaw,
.poweroff = i915_pm_poweroff,
.restore = i915_pm_resume,
.runtime_suspend = i915_runtime_suspend,
.runtime_resume = i915_runtime_resume,
};
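
The new runtime_suspend/runtime_resume hooks are invoked by the PM core once the device's usage count drops to zero; the D1 opregion notification is the firmware convention explained in the comment above, distinct from the D3 used on the normal suspend path. For context, the generic sequence by which a PCI driver typically arms such hooks looks like the sketch below (this is the stock runtime-PM pattern with an idle delay we picked arbitrarily, not code from this series):

#include <linux/device.h>
#include <linux/pm_runtime.h>

static void example_arm_runtime_pm(struct device *dev)
{
	pm_runtime_set_active(dev);		/* device starts powered */
	pm_runtime_use_autosuspend(dev);
	pm_runtime_set_autosuspend_delay(dev, 10000);	/* 10 s idle, our pick */
	pm_runtime_mark_last_busy(dev);
	pm_runtime_allow(dev);			/* PCI defaults to "forbidden" */
	pm_runtime_put_autosuspend(dev);	/* drop the reference probe holds */
}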
static const struct vm_operations_struct i915_gem_vm_ops = {
@ -949,7 +987,7 @@ static struct drm_driver driver = {
* deal with them for Intel hardware.
*/
.driver_features =
DRIVER_USE_AGP | DRIVER_REQUIRE_AGP |
DRIVER_USE_AGP |
DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_GEM | DRIVER_PRIME |
DRIVER_RENDER,
.load = i915_driver_load,
@ -1024,14 +1062,24 @@ static int __init i915_init(void)
driver.driver_features &= ~DRIVER_MODESET;
#endif
if (!(driver.driver_features & DRIVER_MODESET))
if (!(driver.driver_features & DRIVER_MODESET)) {
driver.get_vblank_timestamp = NULL;
#ifndef CONFIG_DRM_I915_UMS
/* Silently fail loading to not upset userspace. */
return 0;
#endif
}
return drm_pci_init(&driver, &i915_pci_driver);
}
static void __exit i915_exit(void)
{
#ifndef CONFIG_DRM_I915_UMS
if (!(driver.driver_features & DRIVER_MODESET))
return; /* Never loaded a driver. */
#endif
drm_pci_exit(&driver, &i915_pci_driver);
}

View File

@ -89,6 +89,18 @@ enum port {
};
#define port_name(p) ((p) + 'A')
#define I915_NUM_PHYS_VLV 1
enum dpio_channel {
DPIO_CH0,
DPIO_CH1
};
enum dpio_phy {
DPIO_PHY0,
DPIO_PHY1
};
enum intel_display_power_domain {
POWER_DOMAIN_PIPE_A,
POWER_DOMAIN_PIPE_B,
@ -101,6 +113,7 @@ enum intel_display_power_domain {
POWER_DOMAIN_TRANSCODER_C,
POWER_DOMAIN_TRANSCODER_EDP,
POWER_DOMAIN_VGA,
POWER_DOMAIN_AUDIO,
POWER_DOMAIN_INIT,
POWER_DOMAIN_NUM,
@ -310,13 +323,14 @@ struct drm_i915_error_state {
u32 instps[I915_NUM_RINGS];
u32 extra_instdone[I915_NUM_INSTDONE_REG];
u32 seqno[I915_NUM_RINGS];
u64 bbaddr;
u64 bbaddr[I915_NUM_RINGS];
u32 fault_reg[I915_NUM_RINGS];
u32 done_reg;
u32 faddr[I915_NUM_RINGS];
u64 fence[I915_MAX_NUM_FENCES];
struct timeval time;
struct drm_i915_error_ring {
bool valid;
struct drm_i915_error_object {
int page_count;
u32 gtt_offset;
@ -351,6 +365,7 @@ struct drm_i915_error_state {
enum intel_ring_hangcheck_action hangcheck_action[I915_NUM_RINGS];
};
struct intel_connector;
struct intel_crtc_config;
struct intel_crtc;
struct intel_limit;
@ -358,7 +373,7 @@ struct dpll;
struct drm_i915_display_funcs {
bool (*fbc_enabled)(struct drm_device *dev);
void (*enable_fbc)(struct drm_crtc *crtc, unsigned long interval);
void (*enable_fbc)(struct drm_crtc *crtc);
void (*disable_fbc)(struct drm_device *dev);
int (*get_display_clock_speed)(struct drm_device *dev);
int (*get_fifo_size)(struct drm_device *dev, int plane);
@ -413,11 +428,20 @@ struct drm_i915_display_funcs {
/* render clock increase/decrease */
/* display clock increase/decrease */
/* pll clock increase/decrease */
int (*setup_backlight)(struct intel_connector *connector);
uint32_t (*get_backlight)(struct intel_connector *connector);
void (*set_backlight)(struct intel_connector *connector,
uint32_t level);
void (*disable_backlight)(struct intel_connector *connector);
void (*enable_backlight)(struct intel_connector *connector);
};
struct intel_uncore_funcs {
void (*force_wake_get)(struct drm_i915_private *dev_priv);
void (*force_wake_put)(struct drm_i915_private *dev_priv);
void (*force_wake_get)(struct drm_i915_private *dev_priv,
int fw_engine);
void (*force_wake_put)(struct drm_i915_private *dev_priv,
int fw_engine);
uint8_t (*mmio_readb)(struct drm_i915_private *dev_priv, off_t offset, bool trace);
uint16_t (*mmio_readw)(struct drm_i915_private *dev_priv, off_t offset, bool trace);
@ -442,6 +466,9 @@ struct intel_uncore {
unsigned fifo_count;
unsigned forcewake_count;
unsigned fw_rendercount;
unsigned fw_mediacount;
struct delayed_work force_wake_work;
};
@ -669,7 +696,6 @@ struct i915_fbc {
struct delayed_work work;
struct drm_crtc *crtc;
struct drm_framebuffer *fb;
int interval;
} *fbc_work;
enum no_fbc_reason {
@ -708,7 +734,6 @@ enum intel_sbi_destination {
#define QUIRK_PIPEA_FORCE (1<<0)
#define QUIRK_LVDS_SSC_DISABLE (1<<1)
#define QUIRK_INVERT_BRIGHTNESS (1<<2)
#define QUIRK_NO_PCH_PWM_ENABLE (1<<3)
struct intel_fbdev;
struct intel_fbc_work;
@ -761,8 +786,6 @@ struct i915_suspend_saved_registers {
u32 saveBLC_PWM_CTL;
u32 saveBLC_PWM_CTL2;
u32 saveBLC_HIST_CTL_B;
u32 saveBLC_PWM_CTL_B;
u32 saveBLC_PWM_CTL2_B;
u32 saveBLC_CPU_PWM_CTL;
u32 saveBLC_CPU_PWM_CTL2;
u32 saveFPB0;
@ -932,21 +955,29 @@ struct intel_ilk_power_mgmt {
/* Power well structure for haswell */
struct i915_power_well {
const char *name;
bool always_on;
/* power well enable/disable usage count */
int count;
unsigned long domains;
void *data;
void (*set)(struct drm_device *dev, struct i915_power_well *power_well,
bool enable);
bool (*is_enabled)(struct drm_device *dev,
struct i915_power_well *power_well);
};
#define I915_MAX_POWER_WELLS 1
struct i915_power_domains {
/*
* Power wells needed for initialization at driver init and suspend
* time are on. They are kept on until after the first modeset.
*/
bool init_power_on;
int power_well_count;
struct mutex lock;
struct i915_power_well power_wells[I915_MAX_POWER_WELLS];
int domain_use_count[POWER_DOMAIN_NUM];
struct i915_power_well *power_wells;
};
struct i915_dri1_state {
@ -1077,34 +1108,30 @@ struct i915_gpu_error {
unsigned long missed_irq_rings;
/**
* State variable and reset counter controlling the reset flow
* State variable controlling the reset flow and count
*
* Upper bits are for the reset counter. This counter is used by the
 * wait_seqno code to notice, race-free, that a reset event happened and
* that it needs to restart the entire ioctl (since most likely the
* seqno it waited for won't ever signal anytime soon).
 * This is a counter which gets incremented when a reset is triggered,
 * and again when the reset has been handled. Odd values (lowest bit set)
 * therefore mean that a reset is in progress, while an even value means
 * that the (reset_counter >> 1):th reset was successfully completed.
 *
 * If a reset does not complete successfully, the I915_WEDGED bit is
 * set, meaning that the hardware is terminally sour and there is no
 * recovery. All waiters on the reset_queue will be woken when
 * that happens.
*
 * This counter is used by the wait_seqno code to notice that a reset
 * event happened and that it needs to restart the entire ioctl (since
 * most likely the seqno it waited for won't ever signal anytime soon).
*
* This is important for lock-free wait paths, where no contended lock
* naturally enforces the correct ordering between the bail-out of the
* waiter and the gpu reset work code.
*
* Lowest bit controls the reset state machine: Set means a reset is in
* progress. This state will (presuming we don't have any bugs) decay
* into either unset (successful reset) or the special WEDGED value (hw
* terminally sour). All waiters on the reset_queue will be woken when
* that happens.
*/
atomic_t reset_counter;
/**
* Special values/flags for reset_counter
*
* Note that the code relies on
* I915_WEDGED & I915_RESET_IN_PROGRESS_FLAG
* being true.
*/
#define I915_RESET_IN_PROGRESS_FLAG 1
#define I915_WEDGED 0xffffffff
#define I915_WEDGED (1 << 31)
/**
* Waitqueue to signal when the reset has completed. Used by clients
@ -1158,6 +1185,11 @@ struct intel_vbt_data {
int edp_bpp;
struct edp_power_seq edp_pps;
struct {
u16 pwm_freq_hz;
bool active_low_pwm;
} backlight;
/* MIPI DSI */
struct {
u16 panel_id;
@ -1184,7 +1216,7 @@ struct intel_wm_level {
uint32_t fbc_val;
};
struct hsw_wm_values {
struct ilk_wm_values {
uint32_t wm_pipe[3];
uint32_t wm_lp[3];
uint32_t wm_lp_spr[3];
@ -1262,6 +1294,10 @@ struct i915_package_c8 {
} regsave;
};
struct i915_runtime_pm {
bool suspended;
};
enum intel_pipe_crc_source {
INTEL_PIPE_CRC_SOURCE_NONE,
INTEL_PIPE_CRC_SOURCE_PLANE1,
@ -1366,15 +1402,9 @@ typedef struct drm_i915_private {
/* overlay */
struct intel_overlay *overlay;
unsigned int sprite_scaling_enabled;
/* backlight */
struct {
int level;
bool enabled;
spinlock_t lock; /* bl registers and the above bl fields */
struct backlight_device *device;
} backlight;
/* backlight registers and fields in struct intel_panel */
spinlock_t backlight_lock;
/* LVDS info */
bool no_aux_handshake;
@ -1426,6 +1456,7 @@ typedef struct drm_i915_private {
int num_shared_dpll;
struct intel_shared_dpll shared_dplls[I915_NUM_PLLS];
struct intel_ddi_plls ddi_plls;
int dpio_phy_iosf_port[I915_NUM_PHYS_VLV];
/* Reclocking support */
bool render_reclock_avail;
@ -1470,7 +1501,6 @@ typedef struct drm_i915_private {
struct drm_property *broadcast_rgb_property;
struct drm_property *force_audio_property;
bool hw_contexts_disabled;
uint32_t hw_context_size;
struct list_head context_list;
@ -1492,11 +1522,13 @@ typedef struct drm_i915_private {
uint16_t cur_latency[5];
/* current hardware state */
struct hsw_wm_values hw;
struct ilk_wm_values hw;
} wm;
struct i915_package_c8 pc8;
struct i915_runtime_pm pm;
/* Old dri1 support infrastructure, beware the dragons ya fools entering
* here! */
struct i915_dri1_state dri1;
@ -1813,15 +1845,15 @@ struct drm_i915_file_private {
#define HAS_FW_BLC(dev) (INTEL_INFO(dev)->gen > 2)
#define HAS_PIPE_CXSR(dev) (INTEL_INFO(dev)->has_pipe_cxsr)
#define I915_HAS_FBC(dev) (INTEL_INFO(dev)->has_fbc)
#define HAS_FBC(dev) (INTEL_INFO(dev)->has_fbc)
#define HAS_IPS(dev) (IS_ULT(dev) || IS_BROADWELL(dev))
#define HAS_DDI(dev) (INTEL_INFO(dev)->has_ddi)
#define HAS_POWER_WELL(dev) (IS_HASWELL(dev) || IS_BROADWELL(dev))
#define HAS_FPGA_DBG_UNCLAIMED(dev) (INTEL_INFO(dev)->has_fpga_dbg)
#define HAS_PSR(dev) (IS_HASWELL(dev) || IS_BROADWELL(dev))
#define HAS_PC8(dev) (IS_HASWELL(dev)) /* XXX HSW:ULX */
#define HAS_RUNTIME_PM(dev) (IS_HASWELL(dev))
#define INTEL_PCH_DEVICE_ID_MASK 0xff00
#define INTEL_PCH_IBX_DEVICE_ID_TYPE 0x3b00
@ -1911,7 +1943,6 @@ extern void intel_hpd_init(struct drm_device *dev);
extern void intel_uncore_sanitize(struct drm_device *dev);
extern void intel_uncore_early_sanitize(struct drm_device *dev);
extern void intel_uncore_init(struct drm_device *dev);
extern void intel_uncore_clear_errors(struct drm_device *dev);
extern void intel_uncore_check_errors(struct drm_device *dev);
extern void intel_uncore_fini(struct drm_device *dev);
@ -1987,6 +2018,7 @@ void i915_gem_object_unpin(struct drm_i915_gem_object *obj);
int __must_check i915_vma_unbind(struct i915_vma *vma);
int __must_check i915_gem_object_ggtt_unbind(struct drm_i915_gem_object *obj);
int i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
void i915_gem_release_all_mmaps(struct drm_i915_private *dev_priv);
void i915_gem_release_mmap(struct drm_i915_gem_object *obj);
void i915_gem_lastclose(struct drm_device *dev);
@ -2063,12 +2095,17 @@ int __must_check i915_gem_check_wedge(struct i915_gpu_error *error,
static inline bool i915_reset_in_progress(struct i915_gpu_error *error)
{
return unlikely(atomic_read(&error->reset_counter)
& I915_RESET_IN_PROGRESS_FLAG);
& (I915_RESET_IN_PROGRESS_FLAG | I915_WEDGED));
}
static inline bool i915_terminally_wedged(struct i915_gpu_error *error)
{
return atomic_read(&error->reset_counter) == I915_WEDGED;
return atomic_read(&error->reset_counter) & I915_WEDGED;
}
static inline u32 i915_reset_count(struct i915_gpu_error *error)
{
return ((atomic_read(&error->reset_counter) & ~I915_WEDGED) + 1) / 2;
}
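
The three helpers above read one atomic word that encodes the whole reset state: the low bit flags a reset in progress, bit 31 is the terminal I915_WEDGED flag, and the remaining bits count completed resets (trigger bumps the counter to odd, completion back to even). A standalone model of that encoding, with names suffixed to mark them as ours:

#include <stdbool.h>
#include <stdint.h>

#define RESET_IN_PROGRESS_X	1u
#define WEDGED_X		(1u << 31)

static bool reset_in_progress_x(uint32_t c)
{
	return c & (RESET_IN_PROGRESS_X | WEDGED_X);	/* odd, or wedged */
}

static bool terminally_wedged_x(uint32_t c)
{
	return c & WEDGED_X;
}

/* (c + 1) / 2 yields the number of completed resets even while the
 * counter is sitting on an odd "in progress" value. */
static uint32_t reset_count_x(uint32_t c)
{
	return ((c & ~WEDGED_X) + 1) / 2;
}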
void i915_gem_reset(struct drm_device *dev);
@ -2180,7 +2217,7 @@ i915_gem_obj_ggtt_pin(struct drm_i915_gem_object *obj,
}
/* i915_gem_context.c */
void i915_gem_context_init(struct drm_device *dev);
int __must_check i915_gem_context_init(struct drm_device *dev);
void i915_gem_context_fini(struct drm_device *dev);
void i915_gem_context_close(struct drm_device *dev, struct drm_file *file);
int i915_switch_context(struct intel_ring_buffer *ring,
@ -2399,6 +2436,8 @@ extern int intel_enable_rc6(const struct drm_device *dev);
extern bool i915_semaphore_is_enabled(struct drm_device *dev);
int i915_reg_read_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_get_reset_stats_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
/* overlay */
extern struct intel_overlay_error_state *intel_overlay_capture_error_state(struct drm_device *dev);
@ -2414,8 +2453,8 @@ extern void intel_display_print_error_state(struct drm_i915_error_state_buf *e,
* must be set to prevent GT core from power down and stale values being
* returned.
*/
void gen6_gt_force_wake_get(struct drm_i915_private *dev_priv);
void gen6_gt_force_wake_put(struct drm_i915_private *dev_priv);
void gen6_gt_force_wake_get(struct drm_i915_private *dev_priv, int fw_engine);
void gen6_gt_force_wake_put(struct drm_i915_private *dev_priv, int fw_engine);
int sandybridge_pcode_read(struct drm_i915_private *dev_priv, u8 mbox, u32 *val);
int sandybridge_pcode_write(struct drm_i915_private *dev_priv, u8 mbox, u32 val);
@ -2430,6 +2469,8 @@ u32 vlv_cck_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_cck_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
u32 vlv_ccu_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_ccu_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
u32 vlv_bunit_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_bunit_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
u32 vlv_gps_core_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_gps_core_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
u32 vlv_dpio_read(struct drm_i915_private *dev_priv, enum pipe pipe, int reg);
@ -2438,9 +2479,30 @@ u32 intel_sbi_read(struct drm_i915_private *dev_priv, u16 reg,
enum intel_sbi_destination destination);
void intel_sbi_write(struct drm_i915_private *dev_priv, u16 reg, u32 value,
enum intel_sbi_destination destination);
u32 vlv_flisdsi_read(struct drm_i915_private *dev_priv, u32 reg);
void vlv_flisdsi_write(struct drm_i915_private *dev_priv, u32 reg, u32 val);
int vlv_gpu_freq(struct drm_i915_private *dev_priv, int val);
int vlv_freq_opcode(struct drm_i915_private *dev_priv, int val);
void vlv_force_wake_get(struct drm_i915_private *dev_priv, int fw_engine);
void vlv_force_wake_put(struct drm_i915_private *dev_priv, int fw_engine);
#define FORCEWAKE_VLV_RENDER_RANGE_OFFSET(reg) \
(((reg) >= 0x2000 && (reg) < 0x4000) ||\
((reg) >= 0x5000 && (reg) < 0x8000) ||\
((reg) >= 0xB000 && (reg) < 0x12000) ||\
((reg) >= 0x2E000 && (reg) < 0x30000))
#define FORCEWAKE_VLV_MEDIA_RANGE_OFFSET(reg)\
(((reg) >= 0x12000 && (reg) < 0x14000) ||\
((reg) >= 0x22000 && (reg) < 0x24000) ||\
((reg) >= 0x30000 && (reg) < 0x40000))
#define FORCEWAKE_RENDER (1 << 0)
#define FORCEWAKE_MEDIA (1 << 1)
#define FORCEWAKE_ALL (FORCEWAKE_RENDER | FORCEWAKE_MEDIA)
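
These range macros partition the MMIO space so that a Valleyview register access only wakes the engine that actually backs it. A hypothetical helper of our own (not part of the patch) shows how a register offset would map onto the FORCEWAKE_* domains defined above:

static int vlv_fw_domains_for_reg_x(unsigned int reg)
{
	int fw_engine = 0;

	if (FORCEWAKE_VLV_RENDER_RANGE_OFFSET(reg))
		fw_engine |= FORCEWAKE_RENDER;
	if (FORCEWAKE_VLV_MEDIA_RANGE_OFFSET(reg))
		fw_engine |= FORCEWAKE_MEDIA;

	return fw_engine;	/* 0 means no forcewake needed for this offset */
}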
int vlv_gpu_freq(int ddr_freq, int val);
int vlv_freq_opcode(int ddr_freq, int val);
#define I915_READ8(reg) dev_priv->uncore.funcs.mmio_readb(dev_priv, (reg), true)
#define I915_WRITE8(reg, val) dev_priv->uncore.funcs.mmio_writeb(dev_priv, (reg), (val), true)

View File

@ -1015,9 +1015,11 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
struct drm_i915_file_private *file_priv)
{
drm_i915_private_t *dev_priv = ring->dev->dev_private;
const bool irq_test_in_progress =
ACCESS_ONCE(dev_priv->gpu_error.test_irq_rings) & intel_ring_flag(ring);
struct timespec before, now;
DEFINE_WAIT(wait);
long timeout_jiffies;
unsigned long timeout_expire;
int ret;
WARN(dev_priv->pc8.irqs_disabled, "IRQs disabled\n");
@ -1025,7 +1027,7 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
if (i915_seqno_passed(ring->get_seqno(ring, true), seqno))
return 0;
timeout_jiffies = timeout ? timespec_to_jiffies_timeout(timeout) : 1;
timeout_expire = timeout ? jiffies + timespec_to_jiffies_timeout(timeout) : 0;
if (dev_priv->info->gen >= 6 && can_wait_boost(file_priv)) {
gen6_rps_boost(dev_priv);
@ -1035,8 +1037,7 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
msecs_to_jiffies(100));
}
if (!(dev_priv->gpu_error.test_irq_rings & intel_ring_flag(ring)) &&
WARN_ON(!ring->irq_get(ring)))
if (!irq_test_in_progress && WARN_ON(!ring->irq_get(ring)))
return -ENODEV;
/* Record current time in case interrupted by signal, or wedged */
@ -1044,7 +1045,6 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
getrawmonotonic(&before);
for (;;) {
struct timer_list timer;
unsigned long expire;
prepare_to_wait(&ring->irq_queue, &wait,
interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE);
@ -1070,23 +1070,22 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
break;
}
if (timeout_jiffies <= 0) {
if (timeout && time_after_eq(jiffies, timeout_expire)) {
ret = -ETIME;
break;
}
timer.function = NULL;
if (timeout || missed_irq(dev_priv, ring)) {
unsigned long expire;
setup_timer_on_stack(&timer, fake_irq, (unsigned long)current);
expire = jiffies + (missed_irq(dev_priv, ring) ? 1: timeout_jiffies);
expire = missed_irq(dev_priv, ring) ? jiffies + 1 : timeout_expire;
mod_timer(&timer, expire);
}
io_schedule();
if (timeout)
timeout_jiffies = expire - jiffies;
if (timer.function) {
del_singleshot_timer_sync(&timer);
destroy_timer_on_stack(&timer);
@ -1095,7 +1094,8 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
getrawmonotonic(&now);
trace_i915_gem_request_wait_end(ring, seqno);
ring->irq_put(ring);
if (!irq_test_in_progress)
ring->irq_put(ring);
finish_wait(&ring->irq_queue, &wait);
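
The wait loop above used to carry a shrinking "remaining jiffies" value across iterations and rescale it after every wakeup; the rework computes one absolute expiry up front and compares it with time_after_eq() on each pass, which survives restarts and spurious wakeups without extra bookkeeping. A minimal sketch of the pattern under the same assumptions (the helper name is ours; a real waiter would sleep on ring->irq_queue rather than call schedule() directly):

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/types.h>

static int wait_until_x(unsigned long timeout_expire, bool (*done)(void *),
			void *arg)
{
	while (!done(arg)) {
		if (time_after_eq(jiffies, timeout_expire))
			return -ETIME;	/* one absolute deadline, no rescaling */
		schedule();
	}
	return 0;
}

/* Usage: ret = wait_until_x(jiffies + msecs_to_jiffies(100), cond, data); */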
@ -1380,6 +1380,8 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
int ret = 0;
bool write = !!(vmf->flags & FAULT_FLAG_WRITE);
intel_runtime_pm_get(dev_priv);
/* We don't use vmf->pgoff since that has the fake offset */
page_offset = ((unsigned long)vmf->virtual_address - vma->vm_start) >>
PAGE_SHIFT;
@ -1427,8 +1429,10 @@ out:
/* If this -EIO is due to a gpu hang, give the reset code a
* chance to clean up the mess. Otherwise return the proper
* SIGBUS. */
if (i915_terminally_wedged(&dev_priv->gpu_error))
return VM_FAULT_SIGBUS;
if (i915_terminally_wedged(&dev_priv->gpu_error)) {
ret = VM_FAULT_SIGBUS;
break;
}
case -EAGAIN:
/*
* EAGAIN means the gpu is hung and we'll wait for the error
@ -1443,15 +1447,38 @@ out:
* EBUSY is ok: this just means that another thread
* already did the job.
*/
return VM_FAULT_NOPAGE;
ret = VM_FAULT_NOPAGE;
break;
case -ENOMEM:
return VM_FAULT_OOM;
ret = VM_FAULT_OOM;
break;
case -ENOSPC:
return VM_FAULT_SIGBUS;
ret = VM_FAULT_SIGBUS;
break;
default:
WARN_ONCE(ret, "unhandled error in i915_gem_fault: %i\n", ret);
return VM_FAULT_SIGBUS;
ret = VM_FAULT_SIGBUS;
break;
}
intel_runtime_pm_put(dev_priv);
return ret;
}
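
Because the fault handler now takes a runtime-PM reference on entry, every early "return VM_FAULT_*" in the switch above is rewritten as "ret = ...; break;" so that control always reaches the single intel_runtime_pm_put() before leaving. The shape of that single-exit transformation, as a self-contained sketch (all names ours):

#include <assert.h>
#include <errno.h>

enum fault_status_x { FAULT_OK_X, FAULT_SIGBUS_X, FAULT_OOM_X };

static int rpm_refs_x;	/* stands in for the runtime-PM reference count */

static enum fault_status_x handle_fault_x(int err)
{
	enum fault_status_x ret;

	rpm_refs_x++;			/* "get" on entry */
	switch (err) {
	case 0:
		ret = FAULT_OK_X;
		break;
	case -ENOMEM:
		ret = FAULT_OOM_X;
		break;
	default:			/* no path may return early */
		ret = FAULT_SIGBUS_X;
		break;
	}
	rpm_refs_x--;			/* the single "put" before exit */
	assert(rpm_refs_x == 0);
	return ret;
}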
void i915_gem_release_all_mmaps(struct drm_i915_private *dev_priv)
{
struct i915_vma *vma;
/*
* Only the global gtt is relevant for gtt memory mappings, so restrict
* list traversal to objects bound into the global address space. Note
* that the active list should be empty, but better safe than sorry.
*/
WARN_ON(!list_empty(&dev_priv->gtt.base.active_list));
list_for_each_entry(vma, &dev_priv->gtt.base.active_list, mm_list)
i915_gem_release_mmap(vma->obj);
list_for_each_entry(vma, &dev_priv->gtt.base.inactive_list, mm_list)
i915_gem_release_mmap(vma->obj);
}
/**
@ -2303,7 +2330,7 @@ static void i915_set_reset_status(struct intel_ring_buffer *ring,
if (ring->hangcheck.action != HANGCHECK_WAIT &&
i915_request_guilty(request, acthd, &inside)) {
DRM_ERROR("%s hung %s bo (0x%lx ctx %d) at 0x%x\n",
DRM_DEBUG("%s hung %s bo (0x%lx ctx %d) at 0x%x\n",
ring->name,
inside ? "inside" : "flushing",
offset,
@ -2361,16 +2388,6 @@ static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
struct intel_ring_buffer *ring)
{
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;
request = list_first_entry(&ring->request_list,
struct drm_i915_gem_request,
list);
i915_gem_free_request(request);
}
while (!list_empty(&ring->active_list)) {
struct drm_i915_gem_object *obj;
@ -2380,6 +2397,23 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
i915_gem_object_move_to_inactive(obj);
}
/*
* We must free the requests after all the corresponding objects have
 * been moved off the active lists, which is the same order the normal
 * retire_requests function uses. This is important if objects hold
 * implicit references on things such as ppgtt address spaces through
* the request.
*/
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;
request = list_first_entry(&ring->request_list,
struct drm_i915_gem_request,
list);
i915_gem_free_request(request);
}
}
void i915_gem_restore_fences(struct drm_device *dev)
@ -2760,7 +2794,6 @@ int i915_vma_unbind(struct i915_vma *vma)
obj->has_aliasing_ppgtt_mapping = 0;
}
i915_gem_gtt_finish_object(obj);
i915_gem_object_unpin_pages(obj);
list_del(&vma->mm_list);
/* Avoid an unnecessary call to unbind on rebind. */
@ -2768,7 +2801,6 @@ int i915_vma_unbind(struct i915_vma *vma)
obj->map_and_fenceable = true;
drm_mm_remove_node(&vma->node);
i915_gem_vma_destroy(vma);
/* Since the unbound list is global, only move to that list if
@ -2776,6 +2808,12 @@ int i915_vma_unbind(struct i915_vma *vma)
if (list_empty(&obj->vma_list))
list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
/* And finally now the object is completely decoupled from this vma,
* we can drop its hold on the backing storage and allow it to be
* reaped by the shrinker.
*/
i915_gem_object_unpin_pages(obj);
return 0;
}
@ -3068,7 +3106,7 @@ i915_find_fence_reg(struct drm_device *dev)
}
if (avail == NULL)
return NULL;
goto deadlock;
/* None available, try to steal one or wait for a user to finish */
list_for_each_entry(reg, &dev_priv->mm.fence_list, lru_list) {
@ -3078,7 +3116,12 @@ i915_find_fence_reg(struct drm_device *dev)
return reg;
}
return NULL;
deadlock:
/* Wait for completion of pending flips which consume fences */
if (intel_has_pending_fb_unpin(dev))
return ERR_PTR(-EAGAIN);
return ERR_PTR(-EDEADLK);
}
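
Returning ERR_PTR(-EAGAIN) versus ERR_PTR(-EDEADLK) lets the caller (see the IS_ERR()/PTR_ERR() hunk just below) distinguish "retry once the pending flips release their fences" from a genuine deadlock, which a bare NULL could never express. A userspace model of the kernel's pointer/errno encoding, underscored to mark it as ours:

#include <stdbool.h>

#define MAX_ERRNO_X 4095

/* Errno values occupy the top page of the address space, which no
 * valid pointer can point into, so one word can carry either. */
static inline void *err_ptr_x(long error)	{ return (void *)error; }
static inline long ptr_err_x(const void *ptr)	{ return (long)ptr; }
static inline bool is_err_x(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO_X;
}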
/**
@ -3123,8 +3166,8 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj)
}
} else if (enable) {
reg = i915_find_fence_reg(dev);
if (reg == NULL)
return -EDEADLK;
if (IS_ERR(reg))
return PTR_ERR(reg);
if (reg->obj) {
struct drm_i915_gem_object *old = reg->obj;
@ -4179,6 +4222,8 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
drm_i915_private_t *dev_priv = dev->dev_private;
struct i915_vma *vma, *next;
intel_runtime_pm_get(dev_priv);
trace_i915_gem_object_destroy(obj);
if (obj->phys_obj)
@ -4223,6 +4268,8 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
kfree(obj->bit_17);
i915_gem_object_free(obj);
intel_runtime_pm_put(dev_priv);
}
struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
@ -4479,7 +4526,13 @@ i915_gem_init_hw(struct drm_device *dev)
* XXX: There was some w/a described somewhere suggesting loading
* contexts before PPGTT.
*/
i915_gem_context_init(dev);
ret = i915_gem_context_init(dev);
if (ret) {
i915_gem_cleanup_ringbuffer(dev);
DRM_ERROR("Context initialization failed %d\n", ret);
return ret;
}
if (dev_priv->mm.aliasing_ppgtt) {
ret = dev_priv->mm.aliasing_ppgtt->enable(dev);
if (ret) {

View File

@ -247,36 +247,34 @@ err_destroy:
return ret;
}
void i915_gem_context_init(struct drm_device *dev)
int i915_gem_context_init(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
if (!HAS_HW_CONTEXTS(dev)) {
dev_priv->hw_contexts_disabled = true;
DRM_DEBUG_DRIVER("Disabling HW Contexts; old hardware\n");
return;
}
if (!HAS_HW_CONTEXTS(dev))
return 0;
/* If called from reset, or thaw... we've been here already */
if (dev_priv->hw_contexts_disabled ||
dev_priv->ring[RCS].default_context)
return;
if (dev_priv->ring[RCS].default_context)
return 0;
dev_priv->hw_context_size = round_up(get_context_size(dev), 4096);
if (dev_priv->hw_context_size > (1<<20)) {
dev_priv->hw_contexts_disabled = true;
DRM_DEBUG_DRIVER("Disabling HW Contexts; invalid size\n");
return;
return -E2BIG;
}
if (create_default_context(dev_priv)) {
dev_priv->hw_contexts_disabled = true;
DRM_DEBUG_DRIVER("Disabling HW Contexts; create failed\n");
return;
ret = create_default_context(dev_priv);
if (ret) {
DRM_DEBUG_DRIVER("Disabling HW Contexts; create failed %d\n",
ret);
return ret;
}
DRM_DEBUG_DRIVER("HW context support initialized\n");
return 0;
}
void i915_gem_context_fini(struct drm_device *dev)
@ -284,7 +282,7 @@ void i915_gem_context_fini(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_hw_context *dctx = dev_priv->ring[RCS].default_context;
if (dev_priv->hw_contexts_disabled)
if (!HAS_HW_CONTEXTS(dev))
return;
/* The only known way to stop the gpu from accessing the hw context is
@ -327,16 +325,16 @@ i915_gem_context_get_hang_stats(struct drm_device *dev,
struct drm_file *file,
u32 id)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_file_private *file_priv = file->driver_priv;
struct i915_hw_context *ctx;
if (id == DEFAULT_CONTEXT_ID)
return &file_priv->hang_stats;
ctx = NULL;
if (!dev_priv->hw_contexts_disabled)
ctx = i915_gem_context_get(file->driver_priv, id);
if (!HAS_HW_CONTEXTS(dev))
return ERR_PTR(-ENOENT);
ctx = i915_gem_context_get(file->driver_priv, id);
if (ctx == NULL)
return ERR_PTR(-ENOENT);
@ -502,8 +500,6 @@ static int do_switch(struct i915_hw_context *to)
* @ring: ring for which we'll execute the context switch
* @file_priv: file_priv associated with the context, may be NULL
* @id: context id number
* @seqno: sequence number by which the new context will be switched to
* @flags:
*
* The context life cycle is simple. The context refcount is incremented and
* decremented by 1 and create and destroy. If the context is in use by the GPU,
@ -517,7 +513,7 @@ int i915_switch_context(struct intel_ring_buffer *ring,
struct drm_i915_private *dev_priv = ring->dev->dev_private;
struct i915_hw_context *to;
if (dev_priv->hw_contexts_disabled)
if (!HAS_HW_CONTEXTS(ring->dev))
return 0;
WARN_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
@ -542,7 +538,6 @@ int i915_switch_context(struct intel_ring_buffer *ring,
int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_context_create *args = data;
struct drm_i915_file_private *file_priv = file->driver_priv;
struct i915_hw_context *ctx;
@ -551,7 +546,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
if (!(dev->driver->driver_features & DRIVER_GEM))
return -ENODEV;
if (dev_priv->hw_contexts_disabled)
if (!HAS_HW_CONTEXTS(dev))
return -ENODEV;
ret = i915_mutex_lock_interruptible(dev);

View File

@ -27,8 +27,10 @@
*/
#include <drm/drmP.h>
#include "i915_drv.h"
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "intel_drv.h"
#include "i915_trace.h"
static bool
@ -53,6 +55,7 @@ i915_gem_evict_something(struct drm_device *dev, struct i915_address_space *vm,
struct list_head eviction_list, unwind_list;
struct i915_vma *vma;
int ret = 0;
int pass = 0;
trace_i915_gem_evict(dev, min_size, alignment, mappable);
@ -119,14 +122,24 @@ none:
/* Can we unpin some objects such as idle hw contents,
* or pending flips?
*/
ret = nonblocking ? -ENOSPC : i915_gpu_idle(dev);
if (ret)
return ret;
if (nonblocking)
return -ENOSPC;
/* Only idle the GPU and repeat the search once */
i915_gem_retire_requests(dev);
nonblocking = true;
goto search_again;
if (pass++ == 0) {
ret = i915_gpu_idle(dev);
if (ret)
return ret;
i915_gem_retire_requests(dev);
goto search_again;
}
/* If we still have pending pageflip completions, drop
* back to userspace to give our workqueues time to
* acquire our locks and unpin the old scanouts.
*/
return intel_has_pending_fb_unpin(dev) ? -EAGAIN : -ENOSPC;
found:
/* drm_mm doesn't allow any other operations while

View File

@ -46,7 +46,7 @@ struct eb_vmas {
};
static struct eb_vmas *
eb_create(struct drm_i915_gem_execbuffer2 *args, struct i915_address_space *vm)
eb_create(struct drm_i915_gem_execbuffer2 *args)
{
struct eb_vmas *eb = NULL;
@ -252,7 +252,7 @@ relocate_entry_cpu(struct drm_i915_gem_object *obj,
struct drm_device *dev = obj->base.dev;
uint32_t page_offset = offset_in_page(reloc->offset);
char *vaddr;
int ret = -EINVAL;
int ret;
ret = i915_gem_object_set_to_cpu_domain(obj, true);
if (ret)
@ -287,7 +287,7 @@ relocate_entry_gtt(struct drm_i915_gem_object *obj,
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t __iomem *reloc_entry;
void __iomem *reloc_page;
int ret = -EINVAL;
int ret;
ret = i915_gem_object_set_to_gtt_domain(obj, true);
if (ret)
@ -335,7 +335,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
struct drm_i915_gem_object *target_i915_obj;
struct i915_vma *target_vma;
uint32_t target_offset;
int ret = -EINVAL;
int ret;
/* we've already hold a reference to all valid objects */
target_vma = eb_get_vma(eb, reloc->target_handle);
@ -344,7 +344,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
target_i915_obj = target_vma->obj;
target_obj = &target_vma->obj->base;
target_offset = i915_gem_obj_ggtt_offset(target_i915_obj);
target_offset = target_vma->node.start;
/* Sandybridge PPGTT errata: We need a global gtt mapping for MI and
* pipe_control writes because the gpu doesn't properly redirect them
@ -365,7 +365,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
(int) reloc->offset,
reloc->read_domains,
reloc->write_domain);
return ret;
return -EINVAL;
}
if (unlikely((reloc->write_domain | reloc->read_domains)
& ~I915_GEM_GPU_DOMAINS)) {
@ -376,7 +376,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
(int) reloc->offset,
reloc->read_domains,
reloc->write_domain);
return ret;
return -EINVAL;
}
target_obj->pending_read_domains |= reloc->read_domains;
@ -396,14 +396,14 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
obj, reloc->target_handle,
(int) reloc->offset,
(int) obj->base.size);
return ret;
return -EINVAL;
}
if (unlikely(reloc->offset & 3)) {
DRM_DEBUG("Relocation not 4-byte aligned: "
"obj %p target %d offset %d.\n",
obj, reloc->target_handle,
(int) reloc->offset);
return ret;
return -EINVAL;
}
/* We can't wait for rendering with pagefaults disabled */
@ -491,8 +491,7 @@ i915_gem_execbuffer_relocate_vma_slow(struct i915_vma *vma,
}
static int
i915_gem_execbuffer_relocate(struct eb_vmas *eb,
struct i915_address_space *vm)
i915_gem_execbuffer_relocate(struct eb_vmas *eb)
{
struct i915_vma *vma;
int ret = 0;
@ -901,6 +900,24 @@ validate_exec_list(struct drm_i915_gem_exec_object2 *exec,
return 0;
}
static int
i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
const u32 ctx_id)
{
struct i915_ctx_hang_stats *hs;
hs = i915_gem_context_get_hang_stats(dev, file, ctx_id);
if (IS_ERR(hs))
return PTR_ERR(hs);
if (hs->banned) {
DRM_DEBUG("Context %u tried to submit while banned\n", ctx_id);
return -EIO;
}
return 0;
}
static void
i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct intel_ring_buffer *ring)
@ -980,8 +997,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
struct drm_i915_gem_object *batch_obj;
struct drm_clip_rect *cliprects = NULL;
struct intel_ring_buffer *ring;
struct i915_ctx_hang_stats *hs;
u32 ctx_id = i915_execbuffer2_get_context_id(*args);
const u32 ctx_id = i915_execbuffer2_get_context_id(*args);
u32 exec_start, exec_len;
u32 mask, flags;
int ret, mode, i;
@ -1108,6 +1124,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
}
}
intel_runtime_pm_get(dev_priv);
ret = i915_mutex_lock_interruptible(dev);
if (ret)
goto pre_mutex_err;
@ -1118,7 +1136,13 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
goto pre_mutex_err;
}
eb = eb_create(args, vm);
ret = i915_gem_validate_context(dev, file, ctx_id);
if (ret) {
mutex_unlock(&dev->struct_mutex);
goto pre_mutex_err;
}
eb = eb_create(args);
if (eb == NULL) {
mutex_unlock(&dev->struct_mutex);
ret = -ENOMEM;
@ -1141,7 +1165,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
/* The objects are in their final locations, apply the relocations. */
if (need_relocs)
ret = i915_gem_execbuffer_relocate(eb, vm);
ret = i915_gem_execbuffer_relocate(eb);
if (ret) {
if (ret == -EFAULT) {
ret = i915_gem_execbuffer_relocate_slow(dev, args, file, ring,
@ -1170,17 +1194,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
if (ret)
goto err;
hs = i915_gem_context_get_hang_stats(dev, file, ctx_id);
if (IS_ERR(hs)) {
ret = PTR_ERR(hs);
goto err;
}
if (hs->banned) {
ret = -EIO;
goto err;
}
ret = i915_switch_context(ring, file, ctx_id);
if (ret)
goto err;
@ -1242,6 +1255,10 @@ err:
pre_mutex_err:
kfree(cliprects);
/* intel_gpu_busy should also get a ref, so it will free when the device
* is really idle. */
intel_runtime_pm_put(dev_priv);
return ret;
}

View File

@ -240,10 +240,16 @@ static int gen8_ppgtt_enable(struct drm_device *dev)
for_each_ring(ring, dev_priv, j) {
ret = gen8_write_pdp(ring, i, addr);
if (ret)
return ret;
goto err_out;
}
}
return 0;
err_out:
for_each_ring(ring, dev_priv, j)
I915_WRITE(RING_MODE_GEN7(ring),
_MASKED_BIT_DISABLE(GFX_PPGTT_ENABLE));
return ret;
}
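
Instead of bailing out with PPGTT enabled on some rings and not others, the new error path walks every ring back to the disabled state. The generic enable-all-or-unwind shape, as a self-contained sketch (all names ours; the patch above simply disables all rings, which amounts to the same thing here):

#include <stddef.h>

struct unit_x { int enabled; };

static int unit_enable_x(struct unit_x *u)  { u->enabled = 1; return 0; }
static void unit_disable_x(struct unit_x *u) { u->enabled = 0; }

static int enable_all_x(struct unit_x *units, size_t n)
{
	int ret = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		ret = unit_enable_x(&units[i]);
		if (ret)
			goto err_out;
	}
	return 0;

err_out:
	while (i--)			/* unwind whatever was enabled */
		unit_disable_x(&units[i]);
	return ret;
}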
static void gen8_ppgtt_clear_range(struct i915_address_space *vm,
@ -293,23 +299,23 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm,
unsigned act_pte = first_entry % GEN8_PTES_PER_PAGE;
struct sg_page_iter sg_iter;
pt_vaddr = kmap_atomic(&ppgtt->gen8_pt_pages[act_pt]);
pt_vaddr = NULL;
for_each_sg_page(pages->sgl, &sg_iter, pages->nents, 0) {
dma_addr_t page_addr;
if (pt_vaddr == NULL)
pt_vaddr = kmap_atomic(&ppgtt->gen8_pt_pages[act_pt]);
page_addr = sg_dma_address(sg_iter.sg) +
(sg_iter.sg_pgoffset << PAGE_SHIFT);
pt_vaddr[act_pte] = gen8_pte_encode(page_addr, cache_level,
true);
pt_vaddr[act_pte] =
gen8_pte_encode(sg_page_iter_dma_address(&sg_iter),
cache_level, true);
if (++act_pte == GEN8_PTES_PER_PAGE) {
kunmap_atomic(pt_vaddr);
pt_vaddr = NULL;
act_pt++;
pt_vaddr = kmap_atomic(&ppgtt->gen8_pt_pages[act_pt]);
act_pte = 0;
}
}
kunmap_atomic(pt_vaddr);
if (pt_vaddr)
kunmap_atomic(pt_vaddr);
}
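
The rewrite above maps a page-table page only once the first PTE in it actually needs writing, and drops the mapping as soon as the walk crosses a page boundary; previously the first page was mapped even if the loop body never ran. The pattern distilled (the constant and the function are ours; kmap_atomic()/kunmap_atomic() are the real interfaces):

#include <linux/highmem.h>
#include <linux/types.h>

#define PTES_PER_PAGE_X 512	/* assumed value, for the sketch only */

static void fill_ptes_x(struct page **pt_pages, const u64 *entries,
			size_t count, unsigned int pt, unsigned int pte)
{
	u64 *vaddr = NULL;
	size_t i;

	for (i = 0; i < count; i++) {
		if (vaddr == NULL)		/* map lazily, on first use */
			vaddr = kmap_atomic(pt_pages[pt]);
		vaddr[pte] = entries[i];
		if (++pte == PTES_PER_PAGE_X) {	/* crossed a page boundary */
			kunmap_atomic(vaddr);
			vaddr = NULL;
			pt++;
			pte = 0;
		}
	}
	if (vaddr)				/* drop the tail mapping */
		kunmap_atomic(vaddr);
}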
static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
@ -318,6 +324,8 @@ static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
container_of(vm, struct i915_hw_ppgtt, base);
int i, j;
drm_mm_takedown(&vm->mm);
for (i = 0; i < ppgtt->num_pd_pages ; i++) {
if (ppgtt->pd_dma_addr[i]) {
pci_unmap_page(ppgtt->base.dev->pdev,
@ -381,6 +389,8 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt, uint64_t size)
ppgtt->base.clear_range = gen8_ppgtt_clear_range;
ppgtt->base.insert_entries = gen8_ppgtt_insert_entries;
ppgtt->base.cleanup = gen8_ppgtt_cleanup;
ppgtt->base.start = 0;
ppgtt->base.total = ppgtt->num_pt_pages * GEN8_PTES_PER_PAGE * PAGE_SIZE;
BUG_ON(ppgtt->num_pd_pages > GEN8_LEGACY_PDPS);
@ -573,21 +583,23 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
unsigned act_pte = first_entry % I915_PPGTT_PT_ENTRIES;
struct sg_page_iter sg_iter;
pt_vaddr = kmap_atomic(ppgtt->pt_pages[act_pt]);
pt_vaddr = NULL;
for_each_sg_page(pages->sgl, &sg_iter, pages->nents, 0) {
dma_addr_t page_addr;
if (pt_vaddr == NULL)
pt_vaddr = kmap_atomic(ppgtt->pt_pages[act_pt]);
page_addr = sg_page_iter_dma_address(&sg_iter);
pt_vaddr[act_pte] = vm->pte_encode(page_addr, cache_level, true);
pt_vaddr[act_pte] =
vm->pte_encode(sg_page_iter_dma_address(&sg_iter),
cache_level, true);
if (++act_pte == I915_PPGTT_PT_ENTRIES) {
kunmap_atomic(pt_vaddr);
pt_vaddr = NULL;
act_pt++;
pt_vaddr = kmap_atomic(ppgtt->pt_pages[act_pt]);
act_pte = 0;
}
}
kunmap_atomic(pt_vaddr);
if (pt_vaddr)
kunmap_atomic(pt_vaddr);
}
static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
@ -632,6 +644,8 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
ppgtt->base.insert_entries = gen6_ppgtt_insert_entries;
ppgtt->base.cleanup = gen6_ppgtt_cleanup;
ppgtt->base.scratch = dev_priv->gtt.base.scratch;
ppgtt->base.start = 0;
ppgtt->base.total = GEN6_PPGTT_PD_ENTRIES * I915_PPGTT_PT_ENTRIES * PAGE_SIZE;
ppgtt->pt_pages = kcalloc(ppgtt->num_pd_entries, sizeof(struct page *),
GFP_KERNEL);
if (!ppgtt->pt_pages)
@ -1124,7 +1138,6 @@ void i915_gem_setup_global_gtt(struct drm_device *dev,
if (ret)
DRM_DEBUG_KMS("Reservation failed\n");
obj->has_global_gtt_mapping = 1;
list_add(&vma->vma_link, &obj->vma_list);
}
dev_priv->gtt.base.start = start;
@ -1400,6 +1413,8 @@ static void gen6_gmch_remove(struct i915_address_space *vm)
{
struct i915_gtt *gtt = container_of(vm, struct i915_gtt, base);
drm_mm_takedown(&vm->mm);
iounmap(gtt->gsm);
teardown_scratch_page(vm->dev);
}
@ -1425,6 +1440,9 @@ static int i915_gmch_probe(struct drm_device *dev,
dev_priv->gtt.base.clear_range = i915_ggtt_clear_range;
dev_priv->gtt.base.insert_entries = i915_ggtt_insert_entries;
if (unlikely(dev_priv->gtt.do_idle_maps))
DRM_INFO("applying Ironlake quirks for intel_iommu\n");
return 0;
}

View File

@ -250,7 +250,7 @@ i915_pages_create_for_stolen(struct drm_device *dev,
}
sg = st->sgl;
sg->offset = offset;
sg->offset = 0;
sg->length = size;
sg_dma_address(sg) = (dma_addr_t)dev_priv->mm.stolen_base + offset;
@ -420,6 +420,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
list_add_tail(&vma->mm_list, &ggtt->inactive_list);
i915_gem_object_pin_pages(obj);
return obj;

View File

@ -239,6 +239,9 @@ static void i915_ring_error_state(struct drm_i915_error_state_buf *m,
unsigned ring)
{
BUG_ON(ring >= I915_NUM_RINGS); /* shut up confused gcc */
if (!error->ring[ring].valid)
return;
err_printf(m, "%s command stream:\n", ring_str(ring));
err_printf(m, " HEAD: 0x%08x\n", error->head[ring]);
err_printf(m, " TAIL: 0x%08x\n", error->tail[ring]);
@ -247,12 +250,11 @@ static void i915_ring_error_state(struct drm_i915_error_state_buf *m,
err_printf(m, " IPEIR: 0x%08x\n", error->ipeir[ring]);
err_printf(m, " IPEHR: 0x%08x\n", error->ipehr[ring]);
err_printf(m, " INSTDONE: 0x%08x\n", error->instdone[ring]);
if (ring == RCS && INTEL_INFO(dev)->gen >= 4)
err_printf(m, " BBADDR: 0x%08llx\n", error->bbaddr);
if (INTEL_INFO(dev)->gen >= 4)
if (INTEL_INFO(dev)->gen >= 4) {
err_printf(m, " BBADDR: 0x%08llx\n", error->bbaddr[ring]);
err_printf(m, " BB_STATE: 0x%08x\n", error->bbstate[ring]);
if (INTEL_INFO(dev)->gen >= 4)
err_printf(m, " INSTPS: 0x%08x\n", error->instps[ring]);
}
err_printf(m, " INSTPM: 0x%08x\n", error->instpm[ring]);
err_printf(m, " FADDR: 0x%08x\n", error->faddr[ring]);
if (INTEL_INFO(dev)->gen >= 6) {
@ -294,7 +296,6 @@ int i915_error_state_to_str(struct drm_i915_error_state_buf *m,
struct drm_device *dev = error_priv->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_error_state *error = error_priv->error;
struct intel_ring_buffer *ring;
int i, j, page, offset, elt;
if (!error) {
@ -329,7 +330,7 @@ int i915_error_state_to_str(struct drm_i915_error_state_buf *m,
if (INTEL_INFO(dev)->gen == 7)
err_printf(m, "ERR_INT: 0x%08x\n", error->err_int);
for_each_ring(ring, dev_priv, i)
for (i = 0; i < ARRAY_SIZE(error->ring); i++)
i915_ring_error_state(m, dev, error, i);
if (error->active_bo)
@ -386,8 +387,7 @@ int i915_error_state_to_str(struct drm_i915_error_state_buf *m,
}
}
obj = error->ring[i].ctx;
if (obj) {
if ((obj = error->ring[i].ctx)) {
err_printf(m, "%s --- HW Context = 0x%08x\n",
dev_priv->ring[i].name,
obj->gtt_offset);
@ -668,7 +668,8 @@ i915_error_first_batchbuffer(struct drm_i915_private *dev_priv,
return NULL;
obj = ring->scratch.obj;
if (acthd >= i915_gem_obj_ggtt_offset(obj) &&
if (obj != NULL &&
acthd >= i915_gem_obj_ggtt_offset(obj) &&
acthd < i915_gem_obj_ggtt_offset(obj) + obj->base.size)
return i915_error_object_create(dev_priv, obj);
}
@ -725,8 +726,9 @@ static void i915_record_ring_state(struct drm_device *dev,
error->ipehr[ring->id] = I915_READ(RING_IPEHR(ring->mmio_base));
error->instdone[ring->id] = I915_READ(RING_INSTDONE(ring->mmio_base));
error->instps[ring->id] = I915_READ(RING_INSTPS(ring->mmio_base));
if (ring->id == RCS)
error->bbaddr = I915_READ64(BB_ADDR);
error->bbaddr[ring->id] = I915_READ(RING_BBADDR(ring->mmio_base));
if (INTEL_INFO(dev)->gen >= 8)
error->bbaddr[ring->id] |= (u64) I915_READ(RING_BBADDR_UDW(ring->mmio_base)) << 32;
error->bbstate[ring->id] = I915_READ(RING_BBSTATE(ring->mmio_base));
} else {
error->faddr[ring->id] = I915_READ(DMA_FADD_I8XX);
@ -775,11 +777,17 @@ static void i915_gem_record_rings(struct drm_device *dev,
struct drm_i915_error_state *error)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ring_buffer *ring;
struct drm_i915_gem_request *request;
int i, count;
for_each_ring(ring, dev_priv, i) {
for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_ring_buffer *ring = &dev_priv->ring[i];
if (ring->dev == NULL)
continue;
error->ring[i].valid = true;
i915_record_ring_state(dev, error, ring);
error->ring[i].batchbuffer =

View File

@ -62,7 +62,7 @@ static const u32 hpd_mask_i915[] = {
[HPD_PORT_D] = PORTD_HOTPLUG_INT_EN
};
static const u32 hpd_status_gen4[] = {
static const u32 hpd_status_g4x[] = {
[HPD_CRT] = CRT_HOTPLUG_INT_STATUS,
[HPD_SDVO_B] = SDVOB_HOTPLUG_INT_STATUS_G4X,
[HPD_SDVO_C] = SDVOC_HOTPLUG_INT_STATUS_G4X,
@ -600,7 +600,7 @@ static u32 i915_get_vblank_counter(struct drm_device *dev, int pipe)
* Cook up a vblank counter by also checking the pixel
* counter against vblank start.
*/
return ((high1 << 8) | low) + (pixel >= vbl_start);
return (((high1 << 8) | low) + (pixel >= vbl_start)) & 0xffffff;
}
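
The added "& 0xffffff" matters because the cooked value can overflow the frame counter's 24-bit width: with the raw counter at 0xffffff and the pixel position already past vblank start, the +1 correction would otherwise produce 0x1000000 instead of wrapping to 0. The arithmetic in isolation (our function):

#include <stdint.h>

static uint32_t cook_vblank_count_x(uint32_t high, uint32_t low, int started)
{
	/* keep the +1 correction inside the 24-bit counter space */
	return (((high << 8) | low) + (started ? 1 : 0)) & 0xffffff;
}

/* cook_vblank_count_x(0xffff, 0xff, 1) == 0, not 0x1000000. */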
static u32 gm45_get_vblank_counter(struct drm_device *dev, int pipe)
@ -621,36 +621,15 @@ static u32 gm45_get_vblank_counter(struct drm_device *dev, int pipe)
#define __raw_i915_read32(dev_priv__, reg__) readl((dev_priv__)->regs + (reg__))
#define __raw_i915_read16(dev_priv__, reg__) readw((dev_priv__)->regs + (reg__))
static bool intel_pipe_in_vblank_locked(struct drm_device *dev, enum pipe pipe)
static bool ilk_pipe_in_vblank_locked(struct drm_device *dev, enum pipe pipe)
{
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t status;
int reg;
if (IS_VALLEYVIEW(dev)) {
status = pipe == PIPE_A ?
I915_DISPLAY_PIPE_A_VBLANK_INTERRUPT :
I915_DISPLAY_PIPE_B_VBLANK_INTERRUPT;
reg = VLV_ISR;
} else if (IS_GEN2(dev)) {
status = pipe == PIPE_A ?
I915_DISPLAY_PIPE_A_VBLANK_INTERRUPT :
I915_DISPLAY_PIPE_B_VBLANK_INTERRUPT;
reg = ISR;
} else if (INTEL_INFO(dev)->gen < 5) {
status = pipe == PIPE_A ?
I915_DISPLAY_PIPE_A_VBLANK_INTERRUPT :
I915_DISPLAY_PIPE_B_VBLANK_INTERRUPT;
reg = ISR;
} else if (INTEL_INFO(dev)->gen < 7) {
if (INTEL_INFO(dev)->gen < 7) {
status = pipe == PIPE_A ?
DE_PIPEA_VBLANK :
DE_PIPEB_VBLANK;
reg = DEISR;
} else {
switch (pipe) {
default:
@ -664,18 +643,14 @@ static bool intel_pipe_in_vblank_locked(struct drm_device *dev, enum pipe pipe)
status = DE_PIPEC_VBLANK_IVB;
break;
}
reg = DEISR;
}
if (IS_GEN2(dev))
return __raw_i915_read16(dev_priv, reg) & status;
else
return __raw_i915_read32(dev_priv, reg) & status;
return __raw_i915_read32(dev_priv, DEISR) & status;
}
static int i915_get_crtc_scanoutpos(struct drm_device *dev, int pipe,
int *vpos, int *hpos, ktime_t *stime, ktime_t *etime)
unsigned int flags, int *vpos, int *hpos,
ktime_t *stime, ktime_t *etime)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
@ -698,6 +673,12 @@ static int i915_get_crtc_scanoutpos(struct drm_device *dev, int pipe,
vbl_start = mode->crtc_vblank_start;
vbl_end = mode->crtc_vblank_end;
if (mode->flags & DRM_MODE_FLAG_INTERLACE) {
vbl_start = DIV_ROUND_UP(vbl_start, 2);
vbl_end /= 2;
vtotal /= 2;
}
ret |= DRM_SCANOUTPOS_VALID | DRM_SCANOUTPOS_ACCURATE;
/*
@ -722,17 +703,42 @@ static int i915_get_crtc_scanoutpos(struct drm_device *dev, int pipe,
else
position = __raw_i915_read32(dev_priv, PIPEDSL(pipe)) & DSL_LINEMASK_GEN3;
/*
* The scanline counter increments at the leading edge
* of hsync, ie. it completely misses the active portion
* of the line. Fix up the counter at both edges of vblank
 * to get a more accurate picture of whether we're in vblank
* or not.
*/
in_vbl = intel_pipe_in_vblank_locked(dev, pipe);
if ((in_vbl && position == vbl_start - 1) ||
(!in_vbl && position == vbl_end - 1))
position = (position + 1) % vtotal;
if (HAS_PCH_SPLIT(dev)) {
/*
* The scanline counter increments at the leading edge
* of hsync, ie. it completely misses the active portion
* of the line. Fix up the counter at both edges of vblank
* to get a more accurate picture whether we're in vblank
* or not.
*/
in_vbl = ilk_pipe_in_vblank_locked(dev, pipe);
if ((in_vbl && position == vbl_start - 1) ||
(!in_vbl && position == vbl_end - 1))
position = (position + 1) % vtotal;
} else {
/*
* ISR vblank status bits don't work the way we'd want
* them to work on non-PCH platforms (for
* ilk_pipe_in_vblank_locked()), and there doesn't
* appear to be any other way to determine if we're currently
* in vblank.
*
* Instead let's assume that we're already in vblank if
* we got called from the vblank interrupt and the
* scanline counter value indicates that we're on the
* line just prior to vblank start. This should result
* in the correct answer, unless the vblank interrupt
* delivery really got delayed for almost exactly one
* full frame/field.
*/
if (flags & DRM_CALLED_FROM_VBLIRQ &&
position == vbl_start - 1) {
position = (position + 1) % vtotal;
/* Signal this correction as "applied". */
ret |= 0x8;
}
}
} else {
/* Have access to pixelcount since start of frame.
* We can split this into vertical and horizontal
@ -809,7 +815,8 @@ static int i915_get_vblank_timestamp(struct drm_device *dev, int pipe,
/* Helper routine in DRM core does all the work: */
return drm_calc_vbltimestamp_from_scanoutpos(dev, pipe, max_error,
vblank_time, flags,
crtc);
crtc,
&to_intel_crtc(crtc)->config.adjusted_mode);
}
static bool intel_hpd_irq_event(struct drm_device *dev,
@ -1015,10 +1022,8 @@ static void gen6_pm_rps_work(struct work_struct *work)
/* sysfs frequency interfaces may have snuck in while servicing the
* interrupt
*/
if (new_delay < (int)dev_priv->rps.min_delay)
new_delay = dev_priv->rps.min_delay;
if (new_delay > (int)dev_priv->rps.max_delay)
new_delay = dev_priv->rps.max_delay;
new_delay = clamp_t(int, new_delay,
dev_priv->rps.min_delay, dev_priv->rps.max_delay);
dev_priv->rps.last_adj = new_delay - dev_priv->rps.cur_delay;
if (IS_VALLEYVIEW(dev_priv->dev))
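The clamp_t() rewrite above folds the two range checks into one expression; the explicit int type keeps the comparison signed, matching the (int) casts the old code used on the unsigned rps limits. Roughly what the clamped form computes (a sketch, not the kernel macro's actual definition):

/* Clamp val into [lo, hi] with everything compared as int. */
static int clamp_int_sketch(int val, int lo, int hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}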
@ -1235,9 +1240,10 @@ static inline void intel_hpd_irq_handler(struct drm_device *dev,
spin_lock(&dev_priv->irq_lock);
for (i = 1; i < HPD_NUM_PINS; i++) {
WARN(((hpd[i] & hotplug_trigger) &&
dev_priv->hpd_stats[i].hpd_mark != HPD_ENABLED),
"Received HPD interrupt although disabled\n");
WARN_ONCE(hpd[i] & hotplug_trigger &&
dev_priv->hpd_stats[i].hpd_mark == HPD_DISABLED,
"Received HPD interrupt (0x%08x) on pin %d (0x%08x) although disabled\n",
hotplug_trigger, i, hpd[i]);
if (!(hpd[i] & hotplug_trigger) ||
dev_priv->hpd_stats[i].hpd_mark != HPD_ENABLED)
@ -1474,6 +1480,9 @@ static irqreturn_t valleyview_irq_handler(int irq, void *arg)
intel_hpd_irq_handler(dev, hotplug_trigger, hpd_status_i915);
if (hotplug_status & DP_AUX_CHANNEL_MASK_INT_STATUS_G4X)
dp_aux_irq_handler(dev);
I915_WRITE(PORT_HOTPLUG_STAT, hotplug_status);
I915_READ(PORT_HOTPLUG_STAT);
}
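The added write/read pair is the usual idiom for sticky hotplug status bits: writing the set bits back acknowledges (clears) them, and the read that follows acts as a posting read so the clear reaches the hardware before the handler exits. A generic sketch of the pattern (kernel MMIO accessors from <linux/io.h>; the helper name and offset argument are illustrative):

/* Ack write-one-to-clear status bits and flush the posted write. */
static void ack_status_sketch(void __iomem *regs, unsigned long offset,
			      u32 bits)
{
	writel(bits, regs + offset);
	(void)readl(regs + offset);
}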
@ -1993,7 +2002,7 @@ static void i915_error_work_func(struct work_struct *work)
kobject_uevent_env(&dev->primary->kdev->kobj,
KOBJ_CHANGE, reset_done_event);
} else {
atomic_set(&error->reset_counter, I915_WEDGED);
atomic_set_mask(I915_WEDGED, &error->reset_counter);
}
/*
@ -3140,10 +3149,10 @@ static int i8xx_irq_postinstall(struct drm_device *dev)
* Returns true when a page flip has completed.
*/
static bool i8xx_handle_vblank(struct drm_device *dev,
int pipe, u16 iir)
int plane, int pipe, u32 iir)
{
drm_i915_private_t *dev_priv = dev->dev_private;
u16 flip_pending = DISPLAY_PLANE_FLIP_PENDING(pipe);
u16 flip_pending = DISPLAY_PLANE_FLIP_PENDING(plane);
if (!drm_handle_vblank(dev, pipe))
return false;
@ -3151,7 +3160,7 @@ static bool i8xx_handle_vblank(struct drm_device *dev,
if ((iir & flip_pending) == 0)
return false;
intel_prepare_page_flip(dev, pipe);
intel_prepare_page_flip(dev, plane);
/* We detect FlipDone by looking for the change in PendingFlip from '1'
* to '0' on the following vblank, i.e. IIR has the Pendingflip
@ -3220,9 +3229,13 @@ static irqreturn_t i8xx_irq_handler(int irq, void *arg)
notify_ring(dev, &dev_priv->ring[RCS]);
for_each_pipe(pipe) {
int plane = pipe;
if (HAS_FBC(dev))
plane = !plane;
if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS &&
i8xx_handle_vblank(dev, pipe, iir))
flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(pipe);
i8xx_handle_vblank(dev, plane, pipe, iir))
flip_mask &= ~DISPLAY_PLANE_FLIP_PENDING(plane);
if (pipe_stats[pipe] & PIPE_CRC_DONE_INTERRUPT_STATUS)
i9xx_pipe_crc_irq_handler(dev, pipe);
@ -3418,7 +3431,7 @@ static irqreturn_t i915_irq_handler(int irq, void *arg)
for_each_pipe(pipe) {
int plane = pipe;
if (IS_MOBILE(dev))
if (HAS_FBC(dev))
plane = !plane;
if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS &&
@ -3655,7 +3668,11 @@ static irqreturn_t i965_irq_handler(int irq, void *arg)
hotplug_status);
intel_hpd_irq_handler(dev, hotplug_trigger,
IS_G4X(dev) ? hpd_status_gen4 : hpd_status_i915);
IS_G4X(dev) ? hpd_status_g4x : hpd_status_i915);
if (IS_G4X(dev) &&
(hotplug_status & DP_AUX_CHANNEL_MASK_INT_STATUS_G4X))
dp_aux_irq_handler(dev);
I915_WRITE(PORT_HOTPLUG_STAT, hotplug_status);
I915_READ(PORT_HOTPLUG_STAT);
@ -3893,8 +3910,8 @@ void hsw_pc8_disable_interrupts(struct drm_device *dev)
dev_priv->pc8.regsave.gtier = I915_READ(GTIER);
dev_priv->pc8.regsave.gen6_pmimr = I915_READ(GEN6_PMIMR);
ironlake_disable_display_irq(dev_priv, ~DE_PCH_EVENT_IVB);
ibx_disable_display_interrupt(dev_priv, ~SDE_HOTPLUG_MASK_CPT);
ironlake_disable_display_irq(dev_priv, 0xffffffff);
ibx_disable_display_interrupt(dev_priv, 0xffffffff);
ilk_disable_gt_irq(dev_priv, 0xffffffff);
snb_disable_pm_irq(dev_priv, 0xffffffff);
@ -3908,34 +3925,26 @@ void hsw_pc8_restore_interrupts(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long irqflags;
uint32_t val, expected;
uint32_t val;
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
val = I915_READ(DEIMR);
expected = ~DE_PCH_EVENT_IVB;
WARN(val != expected, "DEIMR is 0x%08x, not 0x%08x\n", val, expected);
WARN(val != 0xffffffff, "DEIMR is 0x%08x\n", val);
val = I915_READ(SDEIMR) & ~SDE_HOTPLUG_MASK_CPT;
expected = ~SDE_HOTPLUG_MASK_CPT;
WARN(val != expected, "SDEIMR non-HPD bits are 0x%08x, not 0x%08x\n",
val, expected);
val = I915_READ(SDEIMR);
WARN(val != 0xffffffff, "SDEIMR is 0x%08x\n", val);
val = I915_READ(GTIMR);
expected = 0xffffffff;
WARN(val != expected, "GTIMR is 0x%08x, not 0x%08x\n", val, expected);
WARN(val != 0xffffffff, "GTIMR is 0x%08x\n", val);
val = I915_READ(GEN6_PMIMR);
expected = 0xffffffff;
WARN(val != expected, "GEN6_PMIMR is 0x%08x, not 0x%08x\n", val,
expected);
WARN(val != 0xffffffff, "GEN6_PMIMR is 0x%08x\n", val);
dev_priv->pc8.irqs_disabled = false;
ironlake_enable_display_irq(dev_priv, ~dev_priv->pc8.regsave.deimr);
ibx_enable_display_interrupt(dev_priv,
~dev_priv->pc8.regsave.sdeimr &
~SDE_HOTPLUG_MASK_CPT);
ibx_enable_display_interrupt(dev_priv, ~dev_priv->pc8.regsave.sdeimr);
ilk_enable_gt_irq(dev_priv, ~dev_priv->pc8.regsave.gtimr);
snb_enable_pm_irq(dev_priv, ~dev_priv->pc8.regsave.gen6_pmimr);
I915_WRITE(GTIER, dev_priv->pc8.regsave.gtier);

View File

@ -193,10 +193,13 @@
#define MI_SCENE_COUNT (1 << 3) /* just increment scene count */
#define MI_END_SCENE (1 << 4) /* flush binner and incr scene count */
#define MI_INVALIDATE_ISP (1 << 5) /* invalidate indirect state pointers */
#define MI_REPORT_HEAD MI_INSTR(0x07, 0)
#define MI_ARB_ON_OFF MI_INSTR(0x08, 0)
#define MI_ARB_ENABLE (1<<0)
#define MI_ARB_DISABLE (0<<0)
#define MI_BATCH_BUFFER_END MI_INSTR(0x0a, 0)
#define MI_SUSPEND_FLUSH MI_INSTR(0x0b, 0)
#define MI_SUSPEND_FLUSH_EN (1<<0)
#define MI_REPORT_HEAD MI_INSTR(0x07, 0)
#define MI_OVERLAY_FLIP MI_INSTR(0x11, 0)
#define MI_OVERLAY_CONTINUE (0x0<<21)
#define MI_OVERLAY_ON (0x1<<21)
@ -212,10 +215,24 @@
#define MI_DISPLAY_FLIP_IVB_SPRITE_B (3 << 19)
#define MI_DISPLAY_FLIP_IVB_PLANE_C (4 << 19)
#define MI_DISPLAY_FLIP_IVB_SPRITE_C (5 << 19)
#define MI_ARB_ON_OFF MI_INSTR(0x08, 0)
#define MI_ARB_ENABLE (1<<0)
#define MI_ARB_DISABLE (0<<0)
#define MI_SEMAPHORE_MBOX MI_INSTR(0x16, 1) /* gen6+ */
#define MI_SEMAPHORE_GLOBAL_GTT (1<<22)
#define MI_SEMAPHORE_UPDATE (1<<21)
#define MI_SEMAPHORE_COMPARE (1<<20)
#define MI_SEMAPHORE_REGISTER (1<<18)
#define MI_SEMAPHORE_SYNC_VR (0<<16) /* RCS wait for VCS (RVSYNC) */
#define MI_SEMAPHORE_SYNC_VER (1<<16) /* RCS wait for VECS (RVESYNC) */
#define MI_SEMAPHORE_SYNC_BR (2<<16) /* RCS wait for BCS (RBSYNC) */
#define MI_SEMAPHORE_SYNC_BV (0<<16) /* VCS wait for BCS (VBSYNC) */
#define MI_SEMAPHORE_SYNC_VEV (1<<16) /* VCS wait for VECS (VVESYNC) */
#define MI_SEMAPHORE_SYNC_RV (2<<16) /* VCS wait for RCS (VRSYNC) */
#define MI_SEMAPHORE_SYNC_RB (0<<16) /* BCS wait for RCS (BRSYNC) */
#define MI_SEMAPHORE_SYNC_VEB (1<<16) /* BCS wait for VECS (BVESYNC) */
#define MI_SEMAPHORE_SYNC_VB (2<<16) /* BCS wait for VCS (BVSYNC) */
#define MI_SEMAPHORE_SYNC_BVE (0<<16) /* VECS wait for BCS (VEBSYNC) */
#define MI_SEMAPHORE_SYNC_VVE (1<<16) /* VECS wait for VCS (VEVSYNC) */
#define MI_SEMAPHORE_SYNC_RVE (2<<16) /* VECS wait for RCS (VERSYNC) */
#define MI_SEMAPHORE_SYNC_INVALID (3<<16)
#define MI_SET_CONTEXT MI_INSTR(0x18, 0)
#define MI_MM_SPACE_GTT (1<<8)
#define MI_MM_SPACE_PHYSICAL (0<<8)
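All of the MI_* commands above are built with the MI_INSTR() helper, which packs the opcode into the high bits of the first command dword; the 23-bit shift below follows the MI_INSTR definition in i915_reg.h, and the worked value is plain arithmetic:

/* MI command dword layout: bits 31:29 command type (0 for MI),
 * bits 28:23 opcode, low bits per-command flags such as a trailing
 * dword count. */
#define MI_INSTR(opcode, flags)	(((opcode) << 23) | (flags))

/* e.g. MI_SEMAPHORE_MBOX = MI_INSTR(0x16, 1)
 *    = (0x16 << 23) | 1 = 0x0b000001 (opcode 0x16, one extra dword) */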
@ -235,7 +252,7 @@
*/
#define MI_LOAD_REGISTER_IMM(x) MI_INSTR(0x22, 2*x-1)
#define MI_STORE_REGISTER_MEM(x) MI_INSTR(0x24, 2*x-1)
#define MI_SRM_LRM_GLOBAL_GTT (1<<22)
#define MI_SRM_LRM_GLOBAL_GTT (1<<22)
#define MI_FLUSH_DW MI_INSTR(0x26, 1) /* for GEN6 */
#define MI_FLUSH_DW_STORE_INDEX (1<<21)
#define MI_INVALIDATE_TLB (1<<18)
@ -246,30 +263,13 @@
#define MI_BATCH_BUFFER MI_INSTR(0x30, 1)
#define MI_BATCH_NON_SECURE (1)
/* for snb/ivb/vlv this also means "batch in ppgtt" when ppgtt is enabled. */
#define MI_BATCH_NON_SECURE_I965 (1<<8)
#define MI_BATCH_NON_SECURE_I965 (1<<8)
#define MI_BATCH_PPGTT_HSW (1<<8)
#define MI_BATCH_NON_SECURE_HSW (1<<13)
#define MI_BATCH_NON_SECURE_HSW (1<<13)
#define MI_BATCH_BUFFER_START MI_INSTR(0x31, 0)
#define MI_BATCH_GTT (2<<6) /* aliased with (1<<7) on gen4 */
#define MI_BATCH_BUFFER_START_GEN8 MI_INSTR(0x31, 1)
#define MI_SEMAPHORE_MBOX MI_INSTR(0x16, 1) /* gen6+ */
#define MI_SEMAPHORE_GLOBAL_GTT (1<<22)
#define MI_SEMAPHORE_UPDATE (1<<21)
#define MI_SEMAPHORE_COMPARE (1<<20)
#define MI_SEMAPHORE_REGISTER (1<<18)
#define MI_SEMAPHORE_SYNC_VR (0<<16) /* RCS wait for VCS (RVSYNC) */
#define MI_SEMAPHORE_SYNC_VER (1<<16) /* RCS wait for VECS (RVESYNC) */
#define MI_SEMAPHORE_SYNC_BR (2<<16) /* RCS wait for BCS (RBSYNC) */
#define MI_SEMAPHORE_SYNC_BV (0<<16) /* VCS wait for BCS (VBSYNC) */
#define MI_SEMAPHORE_SYNC_VEV (1<<16) /* VCS wait for VECS (VVESYNC) */
#define MI_SEMAPHORE_SYNC_RV (2<<16) /* VCS wait for RCS (VRSYNC) */
#define MI_SEMAPHORE_SYNC_RB (0<<16) /* BCS wait for RCS (BRSYNC) */
#define MI_SEMAPHORE_SYNC_VEB (1<<16) /* BCS wait for VECS (BVESYNC) */
#define MI_SEMAPHORE_SYNC_VB (2<<16) /* BCS wait for VCS (BVSYNC) */
#define MI_SEMAPHORE_SYNC_BVE (0<<16) /* VECS wait for BCS (VEBSYNC) */
#define MI_SEMAPHORE_SYNC_VVE (1<<16) /* VECS wait for VCS (VEVSYNC) */
#define MI_SEMAPHORE_SYNC_RVE (2<<16) /* VECS wait for RCS (VERSYNC) */
#define MI_SEMAPHORE_SYNC_INVALID (3<<16)
#define MI_PREDICATE_RESULT_2 (0x2214)
#define LOWER_SLICE_ENABLED (1<<0)
@ -354,6 +354,7 @@
#define IOSF_BYTE_ENABLES_SHIFT 4
#define IOSF_BAR_SHIFT 1
#define IOSF_SB_BUSY (1<<0)
#define IOSF_PORT_BUNIT 0x3
#define IOSF_PORT_PUNIT 0x4
#define IOSF_PORT_NC 0x11
#define IOSF_PORT_DPIO 0x12
@ -361,12 +362,21 @@
#define IOSF_PORT_CCK 0x14
#define IOSF_PORT_CCU 0xA9
#define IOSF_PORT_GPS_CORE 0x48
#define IOSF_PORT_FLISDSI 0x1B
#define VLV_IOSF_DATA (VLV_DISPLAY_BASE + 0x2104)
#define VLV_IOSF_ADDR (VLV_DISPLAY_BASE + 0x2108)
/* See configdb bunit SB addr map */
#define BUNIT_REG_BISOC 0x11
#define PUNIT_OPCODE_REG_READ 6
#define PUNIT_OPCODE_REG_WRITE 7
#define PUNIT_REG_DSPFREQ 0x36
#define DSPFREQSTAT_SHIFT 30
#define DSPFREQSTAT_MASK (0x3 << DSPFREQSTAT_SHIFT)
#define DSPFREQGUAR_SHIFT 14
#define DSPFREQGUAR_MASK (0x3 << DSPFREQGUAR_SHIFT)
#define PUNIT_REG_PWRGT_CTRL 0x60
#define PUNIT_REG_PWRGT_STATUS 0x61
#define PUNIT_CLK_GATE 1
@ -429,6 +439,7 @@
#define DSI_PLL_N1_DIV_MASK (3 << 16)
#define DSI_PLL_M1_DIV_SHIFT 0
#define DSI_PLL_M1_DIV_MASK (0x1ff << 0)
#define CCK_DISPLAY_CLOCK_CONTROL 0x6b
/*
* DPIO - a special bus for various display related registers to hide behind
@ -447,15 +458,13 @@
#define DPIO_SFR_BYPASS (1<<1)
#define DPIO_CMNRST (1<<0)
#define _DPIO_TX3_SWING_CTL4_A 0x690
#define _DPIO_TX3_SWING_CTL4_B 0x2a90
#define DPIO_TX3_SWING_CTL4(pipe) _PIPE(pipe, _DPIO_TX3_SWING_CTL4_A, \
_DPIO_TX3_SWING_CTL4_B)
#define DPIO_PHY(pipe) ((pipe) >> 1)
#define DPIO_PHY_IOSF_PORT(phy) (dev_priv->dpio_phy_iosf_port[phy])
/*
* Per pipe/PLL DPIO regs
*/
#define _DPIO_DIV_A 0x800c
#define _VLV_PLL_DW3_CH0 0x800c
#define DPIO_POST_DIV_SHIFT (28) /* 3 bits */
#define DPIO_POST_DIV_DAC 0
#define DPIO_POST_DIV_HDMIDP 1 /* DAC 225-400M rate */
@ -468,10 +477,10 @@
#define DPIO_ENABLE_CALIBRATION (1<<11)
#define DPIO_M1DIV_SHIFT (8) /* 3 bits */
#define DPIO_M2DIV_MASK 0xff
#define _DPIO_DIV_B 0x802c
#define DPIO_DIV(pipe) _PIPE(pipe, _DPIO_DIV_A, _DPIO_DIV_B)
#define _VLV_PLL_DW3_CH1 0x802c
#define VLV_PLL_DW3(ch) _PIPE(ch, _VLV_PLL_DW3_CH0, _VLV_PLL_DW3_CH1)
#define _DPIO_REFSFR_A 0x8014
#define _VLV_PLL_DW5_CH0 0x8014
#define DPIO_REFSEL_OVERRIDE 27
#define DPIO_PLL_MODESEL_SHIFT 24 /* 3 bits */
#define DPIO_BIAS_CURRENT_CTL_SHIFT 21 /* 3 bits, always 0x7 */
@ -479,118 +488,112 @@
#define DPIO_PLL_REFCLK_SEL_MASK 3
#define DPIO_DRIVER_CTL_SHIFT 12 /* always set to 0x8 */
#define DPIO_CLK_BIAS_CTL_SHIFT 8 /* always set to 0x5 */
#define _DPIO_REFSFR_B 0x8034
#define DPIO_REFSFR(pipe) _PIPE(pipe, _DPIO_REFSFR_A, _DPIO_REFSFR_B)
#define _VLV_PLL_DW5_CH1 0x8034
#define VLV_PLL_DW5(ch) _PIPE(ch, _VLV_PLL_DW5_CH0, _VLV_PLL_DW5_CH1)
#define _DPIO_CORE_CLK_A 0x801c
#define _DPIO_CORE_CLK_B 0x803c
#define DPIO_CORE_CLK(pipe) _PIPE(pipe, _DPIO_CORE_CLK_A, _DPIO_CORE_CLK_B)
#define _VLV_PLL_DW7_CH0 0x801c
#define _VLV_PLL_DW7_CH1 0x803c
#define VLV_PLL_DW7(ch) _PIPE(ch, _VLV_PLL_DW7_CH0, _VLV_PLL_DW7_CH1)
#define _DPIO_IREF_CTL_A 0x8040
#define _DPIO_IREF_CTL_B 0x8060
#define DPIO_IREF_CTL(pipe) _PIPE(pipe, _DPIO_IREF_CTL_A, _DPIO_IREF_CTL_B)
#define _VLV_PLL_DW8_CH0 0x8040
#define _VLV_PLL_DW8_CH1 0x8060
#define VLV_PLL_DW8(ch) _PIPE(ch, _VLV_PLL_DW8_CH0, _VLV_PLL_DW8_CH1)
#define DPIO_IREF_BCAST 0xc044
#define _DPIO_IREF_A 0x8044
#define _DPIO_IREF_B 0x8064
#define DPIO_IREF(pipe) _PIPE(pipe, _DPIO_IREF_A, _DPIO_IREF_B)
#define VLV_PLL_DW9_BCAST 0xc044
#define _VLV_PLL_DW9_CH0 0x8044
#define _VLV_PLL_DW9_CH1 0x8064
#define VLV_PLL_DW9(ch) _PIPE(ch, _VLV_PLL_DW9_CH0, _VLV_PLL_DW9_CH1)
#define _DPIO_PLL_CML_A 0x804c
#define _DPIO_PLL_CML_B 0x806c
#define DPIO_PLL_CML(pipe) _PIPE(pipe, _DPIO_PLL_CML_A, _DPIO_PLL_CML_B)
#define _VLV_PLL_DW10_CH0 0x8048
#define _VLV_PLL_DW10_CH1 0x8068
#define VLV_PLL_DW10(ch) _PIPE(ch, _VLV_PLL_DW10_CH0, _VLV_PLL_DW10_CH1)
#define _DPIO_LPF_COEFF_A 0x8048
#define _DPIO_LPF_COEFF_B 0x8068
#define DPIO_LPF_COEFF(pipe) _PIPE(pipe, _DPIO_LPF_COEFF_A, _DPIO_LPF_COEFF_B)
#define _VLV_PLL_DW11_CH0 0x804c
#define _VLV_PLL_DW11_CH1 0x806c
#define VLV_PLL_DW11(ch) _PIPE(ch, _VLV_PLL_DW11_CH0, _VLV_PLL_DW11_CH1)
#define DPIO_CALIBRATION 0x80ac
/* Spec for ref block start counts at DW10 */
#define VLV_REF_DW13 0x80ac
#define DPIO_FASTCLK_DISABLE 0x8100
#define VLV_CMN_DW0 0x8100
/*
* Per DDI channel DPIO regs
*/
#define _DPIO_PCS_TX_0 0x8200
#define _DPIO_PCS_TX_1 0x8400
#define _VLV_PCS_DW0_CH0 0x8200
#define _VLV_PCS_DW0_CH1 0x8400
#define DPIO_PCS_TX_LANE2_RESET (1<<16)
#define DPIO_PCS_TX_LANE1_RESET (1<<7)
#define DPIO_PCS_TX(port) _PORT(port, _DPIO_PCS_TX_0, _DPIO_PCS_TX_1)
#define VLV_PCS_DW0(ch) _PORT(ch, _VLV_PCS_DW0_CH0, _VLV_PCS_DW0_CH1)
#define _DPIO_PCS_CLK_0 0x8204
#define _DPIO_PCS_CLK_1 0x8404
#define _VLV_PCS_DW1_CH0 0x8204
#define _VLV_PCS_DW1_CH1 0x8404
#define DPIO_PCS_CLK_CRI_RXEB_EIOS_EN (1<<22)
#define DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN (1<<21)
#define DPIO_PCS_CLK_DATAWIDTH_SHIFT (6)
#define DPIO_PCS_CLK_SOFT_RESET (1<<5)
#define DPIO_PCS_CLK(port) _PORT(port, _DPIO_PCS_CLK_0, _DPIO_PCS_CLK_1)
#define VLV_PCS_DW1(ch) _PORT(ch, _VLV_PCS_DW1_CH0, _VLV_PCS_DW1_CH1)
#define _DPIO_PCS_CTL_OVR1_A 0x8224
#define _DPIO_PCS_CTL_OVR1_B 0x8424
#define DPIO_PCS_CTL_OVER1(port) _PORT(port, _DPIO_PCS_CTL_OVR1_A, \
_DPIO_PCS_CTL_OVR1_B)
#define _VLV_PCS_DW8_CH0 0x8220
#define _VLV_PCS_DW8_CH1 0x8420
#define VLV_PCS_DW8(ch) _PORT(ch, _VLV_PCS_DW8_CH0, _VLV_PCS_DW8_CH1)
#define _DPIO_PCS_STAGGER0_A 0x822c
#define _DPIO_PCS_STAGGER0_B 0x842c
#define DPIO_PCS_STAGGER0(port) _PORT(port, _DPIO_PCS_STAGGER0_A, \
_DPIO_PCS_STAGGER0_B)
#define _VLV_PCS01_DW8_CH0 0x0220
#define _VLV_PCS23_DW8_CH0 0x0420
#define _VLV_PCS01_DW8_CH1 0x2620
#define _VLV_PCS23_DW8_CH1 0x2820
#define VLV_PCS01_DW8(port) _PORT(port, _VLV_PCS01_DW8_CH0, _VLV_PCS01_DW8_CH1)
#define VLV_PCS23_DW8(port) _PORT(port, _VLV_PCS23_DW8_CH0, _VLV_PCS23_DW8_CH1)
#define _DPIO_PCS_STAGGER1_A 0x8230
#define _DPIO_PCS_STAGGER1_B 0x8430
#define DPIO_PCS_STAGGER1(port) _PORT(port, _DPIO_PCS_STAGGER1_A, \
_DPIO_PCS_STAGGER1_B)
#define _VLV_PCS_DW9_CH0 0x8224
#define _VLV_PCS_DW9_CH1 0x8424
#define VLV_PCS_DW9(ch) _PORT(ch, _VLV_PCS_DW9_CH0, _VLV_PCS_DW9_CH1)
#define _DPIO_PCS_CLOCKBUF0_A 0x8238
#define _DPIO_PCS_CLOCKBUF0_B 0x8438
#define DPIO_PCS_CLOCKBUF0(port) _PORT(port, _DPIO_PCS_CLOCKBUF0_A, \
_DPIO_PCS_CLOCKBUF0_B)
#define _VLV_PCS_DW11_CH0 0x822c
#define _VLV_PCS_DW11_CH1 0x842c
#define VLV_PCS_DW11(ch) _PORT(ch, _VLV_PCS_DW11_CH0, _VLV_PCS_DW11_CH1)
#define _DPIO_PCS_CLOCKBUF8_A 0x825c
#define _DPIO_PCS_CLOCKBUF8_B 0x845c
#define DPIO_PCS_CLOCKBUF8(port) _PORT(port, _DPIO_PCS_CLOCKBUF8_A, \
_DPIO_PCS_CLOCKBUF8_B)
#define _VLV_PCS_DW12_CH0 0x8230
#define _VLV_PCS_DW12_CH1 0x8430
#define VLV_PCS_DW12(ch) _PORT(ch, _VLV_PCS_DW12_CH0, _VLV_PCS_DW12_CH1)
#define _DPIO_TX_SWING_CTL2_A 0x8288
#define _DPIO_TX_SWING_CTL2_B 0x8488
#define DPIO_TX_SWING_CTL2(port) _PORT(port, _DPIO_TX_SWING_CTL2_A, \
_DPIO_TX_SWING_CTL2_B)
#define _VLV_PCS_DW14_CH0 0x8238
#define _VLV_PCS_DW14_CH1 0x8438
#define VLV_PCS_DW14(ch) _PORT(ch, _VLV_PCS_DW14_CH0, _VLV_PCS_DW14_CH1)
#define _DPIO_TX_SWING_CTL3_A 0x828c
#define _DPIO_TX_SWING_CTL3_B 0x848c
#define DPIO_TX_SWING_CTL3(port) _PORT(port, _DPIO_TX_SWING_CTL3_A, \
_DPIO_TX_SWING_CTL3_B)
#define _VLV_PCS_DW23_CH0 0x825c
#define _VLV_PCS_DW23_CH1 0x845c
#define VLV_PCS_DW23(ch) _PORT(ch, _VLV_PCS_DW23_CH0, _VLV_PCS_DW23_CH1)
#define _DPIO_TX_SWING_CTL4_A 0x8290
#define _DPIO_TX_SWING_CTL4_B 0x8490
#define DPIO_TX_SWING_CTL4(port) _PORT(port, _DPIO_TX_SWING_CTL4_A, \
_DPIO_TX_SWING_CTL4_B)
#define _VLV_TX_DW2_CH0 0x8288
#define _VLV_TX_DW2_CH1 0x8488
#define VLV_TX_DW2(ch) _PORT(ch, _VLV_TX_DW2_CH0, _VLV_TX_DW2_CH1)
#define _DPIO_TX_OCALINIT_0 0x8294
#define _DPIO_TX_OCALINIT_1 0x8494
#define _VLV_TX_DW3_CH0 0x828c
#define _VLV_TX_DW3_CH1 0x848c
#define VLV_TX_DW3(ch) _PORT(ch, _VLV_TX_DW3_CH0, _VLV_TX_DW3_CH1)
#define _VLV_TX_DW4_CH0 0x8290
#define _VLV_TX_DW4_CH1 0x8490
#define VLV_TX_DW4(ch) _PORT(ch, _VLV_TX_DW4_CH0, _VLV_TX_DW4_CH1)
#define _VLV_TX3_DW4_CH0 0x690
#define _VLV_TX3_DW4_CH1 0x2a90
#define VLV_TX3_DW4(ch) _PORT(ch, _VLV_TX3_DW4_CH0, _VLV_TX3_DW4_CH1)
#define _VLV_TX_DW5_CH0 0x8294
#define _VLV_TX_DW5_CH1 0x8494
#define DPIO_TX_OCALINIT_EN (1<<31)
#define DPIO_TX_OCALINIT(port) _PORT(port, _DPIO_TX_OCALINIT_0, \
_DPIO_TX_OCALINIT_1)
#define VLV_TX_DW5(ch) _PORT(ch, _VLV_TX_DW5_CH0, _VLV_TX_DW5_CH1)
#define _DPIO_TX_CTL_0 0x82ac
#define _DPIO_TX_CTL_1 0x84ac
#define DPIO_TX_CTL(port) _PORT(port, _DPIO_TX_CTL_0, _DPIO_TX_CTL_1)
#define _VLV_TX_DW11_CH0 0x82ac
#define _VLV_TX_DW11_CH1 0x84ac
#define VLV_TX_DW11(ch) _PORT(ch, _VLV_TX_DW11_CH0, _VLV_TX_DW11_CH1)
#define _DPIO_TX_LANE_0 0x82b8
#define _DPIO_TX_LANE_1 0x84b8
#define DPIO_TX_LANE(port) _PORT(port, _DPIO_TX_LANE_0, _DPIO_TX_LANE_1)
#define _DPIO_DATA_CHANNEL1 0x8220
#define _DPIO_DATA_CHANNEL2 0x8420
#define DPIO_DATA_CHANNEL(port) _PORT(port, _DPIO_DATA_CHANNEL1, _DPIO_DATA_CHANNEL2)
#define _DPIO_PORT0_PCS0 0x0220
#define _DPIO_PORT0_PCS1 0x0420
#define _DPIO_PORT1_PCS2 0x2620
#define _DPIO_PORT1_PCS3 0x2820
#define DPIO_DATA_LANE_A(port) _PORT(port, _DPIO_PORT0_PCS0, _DPIO_PORT1_PCS2)
#define DPIO_DATA_LANE_B(port) _PORT(port, _DPIO_PORT0_PCS1, _DPIO_PORT1_PCS3)
#define DPIO_DATA_CHANNEL1 0x8220
#define DPIO_DATA_CHANNEL2 0x8420
#define _VLV_TX_DW14_CH0 0x82b8
#define _VLV_TX_DW14_CH1 0x84b8
#define VLV_TX_DW14(ch) _PORT(ch, _VLV_TX_DW14_CH0, _VLV_TX_DW14_CH1)
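Every renamed _CH0/_CH1 pair above is resolved through the driver's _PIPE()/_PORT() helpers, which interpolate between two evenly spaced addresses instead of storing a table. The shape of that helper (the body matches the long-standing i915_reg.h pattern; the name here is illustrative):

/* index 0 yields a, index 1 yields b: add the stride (b - a) once
 * per index step. Works for any evenly spaced register pair. */
#define _PICK_PAIR(index, a, b)	((a) + (index) * ((b) - (a)))

/* e.g. VLV_PLL_DW3(1) -> 0x800c + 1 * (0x802c - 0x800c) = 0x802c */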
/*
* Fence registers
@ -732,6 +735,8 @@
#define HWSTAM 0x02098
#define DMA_FADD_I8XX 0x020d0
#define RING_BBSTATE(base) ((base)+0x110)
#define RING_BBADDR(base) ((base)+0x140)
#define RING_BBADDR_UDW(base) ((base)+0x168) /* gen8+ */
#define ERROR_GEN6 0x040a0
#define GEN7_ERR_INT 0x44040
@ -922,7 +927,6 @@
#define CM0_COLOR_EVICT_DISABLE (1<<3)
#define CM0_DEPTH_WRITE_DISABLE (1<<1)
#define CM0_RC_OP_FLUSH_DISABLE (1<<0)
#define BB_ADDR 0x02140 /* 8 bytes */
#define GFX_FLSH_CNTL 0x02170 /* 915+ only */
#define GFX_FLSH_CNTL_GEN6 0x101008
#define GFX_FLSH_CNTL_EN (1<<0)
@ -999,6 +1003,7 @@
#define GEN7_FF_THREAD_MODE 0x20a0
#define GEN7_FF_SCHED_MASK 0x0077070
#define GEN8_FF_DS_REF_CNT_FFME (1 << 19)
#define GEN7_FF_TS_SCHED_HS1 (0x5<<16)
#define GEN7_FF_TS_SCHED_HS0 (0x3<<16)
#define GEN7_FF_TS_SCHED_LOAD_BALANCE (0x1<<16)
@ -1026,14 +1031,14 @@
#define FBC_CTL_UNCOMPRESSIBLE (1<<14)
#define FBC_CTL_C3_IDLE (1<<13)
#define FBC_CTL_STRIDE_SHIFT (5)
#define FBC_CTL_FENCENO (1<<0)
#define FBC_CTL_FENCENO_SHIFT (0)
#define FBC_COMMAND 0x0320c
#define FBC_CMD_COMPRESS (1<<0)
#define FBC_STATUS 0x03210
#define FBC_STAT_COMPRESSING (1<<31)
#define FBC_STAT_COMPRESSED (1<<30)
#define FBC_STAT_MODIFIED (1<<29)
#define FBC_STAT_CURRENT_LINE (1<<0)
#define FBC_STAT_CURRENT_LINE_SHIFT (0)
#define FBC_CONTROL2 0x03214
#define FBC_CTL_FENCE_DBL (0<<4)
#define FBC_CTL_IDLE_IMM (0<<2)
@ -2117,9 +2122,13 @@
* Please check the detailed lore in the commit message for experimental
* evidence.
*/
#define PORTD_HOTPLUG_LIVE_STATUS (1 << 29)
#define PORTC_HOTPLUG_LIVE_STATUS (1 << 28)
#define PORTB_HOTPLUG_LIVE_STATUS (1 << 27)
#define PORTD_HOTPLUG_LIVE_STATUS_G4X (1 << 29)
#define PORTC_HOTPLUG_LIVE_STATUS_G4X (1 << 28)
#define PORTB_HOTPLUG_LIVE_STATUS_G4X (1 << 27)
/* VLV DP/HDMI bits again match Bspec */
#define PORTD_HOTPLUG_LIVE_STATUS_VLV (1 << 27)
#define PORTC_HOTPLUG_LIVE_STATUS_VLV (1 << 28)
#define PORTB_HOTPLUG_LIVE_STATUS_VLV (1 << 29)
#define PORTD_HOTPLUG_INT_STATUS (3 << 21)
#define PORTC_HOTPLUG_INT_STATUS (3 << 19)
#define PORTB_HOTPLUG_INT_STATUS (3 << 17)
@ -2130,6 +2139,11 @@
#define CRT_HOTPLUG_MONITOR_COLOR (3 << 8)
#define CRT_HOTPLUG_MONITOR_MONO (2 << 8)
#define CRT_HOTPLUG_MONITOR_NONE (0 << 8)
#define DP_AUX_CHANNEL_D_INT_STATUS_G4X (1 << 6)
#define DP_AUX_CHANNEL_C_INT_STATUS_G4X (1 << 5)
#define DP_AUX_CHANNEL_B_INT_STATUS_G4X (1 << 4)
#define DP_AUX_CHANNEL_MASK_INT_STATUS_G4X (7 << 4)
/* SDVO is different across gen3/4 */
#define SDVOC_HOTPLUG_INT_STATUS_G4X (1 << 3)
#define SDVOB_HOTPLUG_INT_STATUS_G4X (1 << 2)
@ -3421,42 +3435,6 @@
/* the unit of memory self-refresh latency time is 0.5us */
#define ILK_SRLT_MASK 0x3f
/* define the fifo size on Ironlake */
#define ILK_DISPLAY_FIFO 128
#define ILK_DISPLAY_MAXWM 64
#define ILK_DISPLAY_DFTWM 8
#define ILK_CURSOR_FIFO 32
#define ILK_CURSOR_MAXWM 16
#define ILK_CURSOR_DFTWM 8
#define ILK_DISPLAY_SR_FIFO 512
#define ILK_DISPLAY_MAX_SRWM 0x1ff
#define ILK_DISPLAY_DFT_SRWM 0x3f
#define ILK_CURSOR_SR_FIFO 64
#define ILK_CURSOR_MAX_SRWM 0x3f
#define ILK_CURSOR_DFT_SRWM 8
#define ILK_FIFO_LINE_SIZE 64
/* define the WM info on Sandybridge */
#define SNB_DISPLAY_FIFO 128
#define SNB_DISPLAY_MAXWM 0x7f /* bit 16:22 */
#define SNB_DISPLAY_DFTWM 8
#define SNB_CURSOR_FIFO 32
#define SNB_CURSOR_MAXWM 0x1f /* bit 4:0 */
#define SNB_CURSOR_DFTWM 8
#define SNB_DISPLAY_SR_FIFO 512
#define SNB_DISPLAY_MAX_SRWM 0x1ff /* bit 16:8 */
#define SNB_DISPLAY_DFT_SRWM 0x3f
#define SNB_CURSOR_SR_FIFO 64
#define SNB_CURSOR_MAX_SRWM 0x3f /* bit 5:0 */
#define SNB_CURSOR_DFT_SRWM 8
#define SNB_FBC_MAX_SRWM 0xf /* bit 23:20 */
#define SNB_FIFO_LINE_SIZE 64
/* the address where we get all kinds of latency value */
#define SSKPD 0x5d10
@ -3600,8 +3578,6 @@
#define DISP_BASEADDR_MASK (0xfffff000)
#define I915_LO_DISPBASE(val) (val & ~DISP_BASEADDR_MASK)
#define I915_HI_DISPBASE(val) (val & DISP_BASEADDR_MASK)
#define I915_MODIFY_DISPBASE(reg, gfx_addr) \
(I915_WRITE((reg), (gfx_addr) | I915_LO_DISPBASE(I915_READ(reg))))
/* VBIOS flags */
#define SWF00 (dev_priv->info->display_mmio_offset + 0x71410)
@ -3787,7 +3763,7 @@
#define _SPACNTR (VLV_DISPLAY_BASE + 0x72180)
#define SP_ENABLE (1<<31)
#define SP_GEAMMA_ENABLE (1<<30)
#define SP_GAMMA_ENABLE (1<<30)
#define SP_PIXFORMAT_MASK (0xf<<26)
#define SP_FORMAT_YUV422 (0<<26)
#define SP_FORMAT_BGR565 (5<<26)
@ -4139,6 +4115,8 @@
#define DISP_ARB_CTL 0x45000
#define DISP_TILE_SURFACE_SWIZZLING (1<<13)
#define DISP_FBC_WM_DIS (1<<15)
#define DISP_ARB_CTL2 0x45004
#define DISP_DATA_PARTITION_5_6 (1<<6)
#define GEN7_MSG_CTL 0x45010
#define WAIT_FOR_PCH_RESET_ACK (1<<1)
#define WAIT_FOR_PCH_FLR_ACK (1<<0)
@ -4159,6 +4137,10 @@
#define GEN7_L3SQCREG4 0xb034
#define L3SQ_URB_READ_CAM_MATCH_DISABLE (1<<27)
/* GEN8 chicken */
#define HDC_CHICKEN0 0x7300
#define HDC_FORCE_NON_COHERENT (1<<4)
/* WaCatErrorRejectionIssue */
#define GEN7_SQ_CHICKEN_MBCUNIT_CONFIG 0x9030
#define GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB (1<<11)
@ -4843,6 +4825,8 @@
#define FORCEWAKE_ACK 0x130090
#define VLV_GTLC_WAKE_CTRL 0x130090
#define VLV_GTLC_PW_STATUS 0x130094
#define VLV_GTLC_PW_RENDER_STATUS_MASK 0x80
#define VLV_GTLC_PW_MEDIA_STATUS_MASK 0x20
#define FORCEWAKE_MT 0xa188 /* multi-threaded */
#define FORCEWAKE_KERNEL 0x1
#define FORCEWAKE_USER 0x2
@ -4851,12 +4835,16 @@
#define FORCEWAKE_MT_ENABLE (1<<5)
#define GTFIFODBG 0x120000
#define GT_FIFO_CPU_ERROR_MASK 7
#define GT_FIFO_SBDROPERR (1<<6)
#define GT_FIFO_BLOBDROPERR (1<<5)
#define GT_FIFO_SB_READ_ABORTERR (1<<4)
#define GT_FIFO_DROPERR (1<<3)
#define GT_FIFO_OVFERR (1<<2)
#define GT_FIFO_IAWRERR (1<<1)
#define GT_FIFO_IARDERR (1<<0)
#define GT_FIFO_FREE_ENTRIES 0x120008
#define GTFIFOCTL 0x120008
#define GT_FIFO_FREE_ENTRIES_MASK 0x7f
#define GT_FIFO_NUM_RESERVED_ENTRIES 20
#define HSW_IDICR 0x9008
@ -4890,6 +4878,7 @@
#define GEN6_RC_CTL_RC6_ENABLE (1<<18)
#define GEN6_RC_CTL_RC1e_ENABLE (1<<20)
#define GEN6_RC_CTL_RC7_ENABLE (1<<22)
#define VLV_RC_CTL_CTX_RST_PARALLEL (1<<24)
#define GEN7_RC_CTL_TO_MODE (1<<28)
#define GEN6_RC_CTL_EI_MODE(x) ((x)<<27)
#define GEN6_RC_CTL_HW_ENABLE (1<<31)
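The GTFIFOCTL free-entries field and the reserved-entries constant above pair up in the driver's FIFO wait logic: before issuing MMIO writes while the GT may be power-gated, it polls the free-entry count until at least the reserved number of slots is available. A simplified sketch of that loop (poll bound, delay, and return convention are illustrative, not the driver's exact values):

/* Wait until the GT FIFO (low 7 bits of GTFIFOCTL) has at least the
 * reserved number of free slots; returns 0 on success. */
static int gt_fifo_wait_sketch(struct drm_i915_private *dev_priv)
{
	int loops = 500;	/* illustrative bound */
	u32 free = __raw_i915_read32(dev_priv, GTFIFOCTL) &
		   GT_FIFO_FREE_ENTRIES_MASK;

	while (free < GT_FIFO_NUM_RESERVED_ENTRIES && loops--) {
		udelay(10);	/* illustrative delay */
		free = __raw_i915_read32(dev_priv, GTFIFOCTL) &
		       GT_FIFO_FREE_ENTRIES_MASK;
	}
	return free < GT_FIFO_NUM_RESERVED_ENTRIES ? -ETIMEDOUT : 0;
}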

View File

@ -192,7 +192,6 @@ static void i915_restore_vga(struct drm_device *dev)
static void i915_save_display(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
/* Display arbitration control */
if (INTEL_INFO(dev)->gen <= 4)
@ -203,46 +202,27 @@ static void i915_save_display(struct drm_device *dev)
if (!drm_core_check_feature(dev, DRIVER_MODESET))
i915_save_display_reg(dev);
spin_lock_irqsave(&dev_priv->backlight.lock, flags);
/* LVDS state */
if (HAS_PCH_SPLIT(dev)) {
dev_priv->regfile.savePP_CONTROL = I915_READ(PCH_PP_CONTROL);
dev_priv->regfile.saveBLC_PWM_CTL = I915_READ(BLC_PWM_PCH_CTL1);
dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_PCH_CTL2);
dev_priv->regfile.saveBLC_CPU_PWM_CTL = I915_READ(BLC_PWM_CPU_CTL);
dev_priv->regfile.saveBLC_CPU_PWM_CTL2 = I915_READ(BLC_PWM_CPU_CTL2);
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev))
dev_priv->regfile.saveLVDS = I915_READ(PCH_LVDS);
} else if (IS_VALLEYVIEW(dev)) {
dev_priv->regfile.savePP_CONTROL = I915_READ(PP_CONTROL);
dev_priv->regfile.savePFIT_PGM_RATIOS = I915_READ(PFIT_PGM_RATIOS);
dev_priv->regfile.saveBLC_PWM_CTL =
I915_READ(VLV_BLC_PWM_CTL(PIPE_A));
dev_priv->regfile.saveBLC_HIST_CTL =
I915_READ(VLV_BLC_HIST_CTL(PIPE_A));
dev_priv->regfile.saveBLC_PWM_CTL2 =
I915_READ(VLV_BLC_PWM_CTL2(PIPE_A));
dev_priv->regfile.saveBLC_PWM_CTL_B =
I915_READ(VLV_BLC_PWM_CTL(PIPE_B));
dev_priv->regfile.saveBLC_HIST_CTL_B =
I915_READ(VLV_BLC_HIST_CTL(PIPE_B));
dev_priv->regfile.saveBLC_PWM_CTL2_B =
I915_READ(VLV_BLC_PWM_CTL2(PIPE_B));
} else {
dev_priv->regfile.savePP_CONTROL = I915_READ(PP_CONTROL);
dev_priv->regfile.savePFIT_PGM_RATIOS = I915_READ(PFIT_PGM_RATIOS);
dev_priv->regfile.saveBLC_PWM_CTL = I915_READ(BLC_PWM_CTL);
dev_priv->regfile.saveBLC_HIST_CTL = I915_READ(BLC_HIST_CTL);
if (INTEL_INFO(dev)->gen >= 4)
dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_CTL2);
if (IS_MOBILE(dev) && !IS_I830(dev))
dev_priv->regfile.saveLVDS = I915_READ(LVDS);
}
spin_unlock_irqrestore(&dev_priv->backlight.lock, flags);
if (!IS_I830(dev) && !IS_845G(dev) && !HAS_PCH_SPLIT(dev))
dev_priv->regfile.savePFIT_CONTROL = I915_READ(PFIT_CONTROL);
@ -257,7 +237,7 @@ static void i915_save_display(struct drm_device *dev)
}
/* Only save FBC state on platforms that support FBC */
if (I915_HAS_FBC(dev)) {
if (HAS_FBC(dev)) {
if (HAS_PCH_SPLIT(dev)) {
dev_priv->regfile.saveDPFC_CB_BASE = I915_READ(ILK_DPFC_CB_BASE);
} else if (IS_GM45(dev)) {
@ -278,7 +258,6 @@ static void i915_restore_display(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 mask = 0xffffffff;
unsigned long flags;
/* Display arbitration */
if (INTEL_INFO(dev)->gen <= 4)
@ -287,12 +266,6 @@ static void i915_restore_display(struct drm_device *dev)
if (!drm_core_check_feature(dev, DRIVER_MODESET))
i915_restore_display_reg(dev);
spin_lock_irqsave(&dev_priv->backlight.lock, flags);
/* LVDS state */
if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev))
I915_WRITE(BLC_PWM_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
if (drm_core_check_feature(dev, DRIVER_MODESET))
mask = ~LVDS_PORT_EN;
@ -305,13 +278,6 @@ static void i915_restore_display(struct drm_device *dev)
I915_WRITE(PFIT_CONTROL, dev_priv->regfile.savePFIT_CONTROL);
if (HAS_PCH_SPLIT(dev)) {
I915_WRITE(BLC_PWM_PCH_CTL1, dev_priv->regfile.saveBLC_PWM_CTL);
I915_WRITE(BLC_PWM_PCH_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
/* NOTE: BLC_PWM_CPU_CTL must be written after BLC_PWM_CPU_CTL2;
* otherwise we get blank eDP screen after S3 on some machines
*/
I915_WRITE(BLC_PWM_CPU_CTL2, dev_priv->regfile.saveBLC_CPU_PWM_CTL2);
I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->regfile.saveBLC_CPU_PWM_CTL);
I915_WRITE(PCH_PP_ON_DELAYS, dev_priv->regfile.savePP_ON_DELAYS);
I915_WRITE(PCH_PP_OFF_DELAYS, dev_priv->regfile.savePP_OFF_DELAYS);
I915_WRITE(PCH_PP_DIVISOR, dev_priv->regfile.savePP_DIVISOR);
@ -319,21 +285,12 @@ static void i915_restore_display(struct drm_device *dev)
I915_WRITE(RSTDBYCTL,
dev_priv->regfile.saveMCHBAR_RENDER_STANDBY);
} else if (IS_VALLEYVIEW(dev)) {
I915_WRITE(VLV_BLC_PWM_CTL(PIPE_A),
dev_priv->regfile.saveBLC_PWM_CTL);
I915_WRITE(VLV_BLC_HIST_CTL(PIPE_A),
dev_priv->regfile.saveBLC_HIST_CTL);
I915_WRITE(VLV_BLC_PWM_CTL2(PIPE_A),
dev_priv->regfile.saveBLC_PWM_CTL2);
I915_WRITE(VLV_BLC_PWM_CTL(PIPE_B),
dev_priv->regfile.saveBLC_PWM_CTL);
I915_WRITE(VLV_BLC_HIST_CTL(PIPE_B),
dev_priv->regfile.saveBLC_HIST_CTL);
I915_WRITE(VLV_BLC_PWM_CTL2(PIPE_B),
dev_priv->regfile.saveBLC_PWM_CTL2);
} else {
I915_WRITE(PFIT_PGM_RATIOS, dev_priv->regfile.savePFIT_PGM_RATIOS);
I915_WRITE(BLC_PWM_CTL, dev_priv->regfile.saveBLC_PWM_CTL);
I915_WRITE(BLC_HIST_CTL, dev_priv->regfile.saveBLC_HIST_CTL);
I915_WRITE(PP_ON_DELAYS, dev_priv->regfile.savePP_ON_DELAYS);
I915_WRITE(PP_OFF_DELAYS, dev_priv->regfile.savePP_OFF_DELAYS);
@ -341,11 +298,9 @@ static void i915_restore_display(struct drm_device *dev)
I915_WRITE(PP_CONTROL, dev_priv->regfile.savePP_CONTROL);
}
spin_unlock_irqrestore(&dev_priv->backlight.lock, flags);
/* Only restore FBC info on platforms that support FBC */
intel_disable_fbc(dev);
if (I915_HAS_FBC(dev)) {
if (HAS_FBC(dev)) {
if (HAS_PCH_SPLIT(dev)) {
I915_WRITE(ILK_DPFC_CB_BASE, dev_priv->regfile.saveDPFC_CB_BASE);
} else if (IS_GM45(dev)) {

View File

@ -40,10 +40,13 @@ static u32 calc_residency(struct drm_device *dev, const u32 reg)
struct drm_i915_private *dev_priv = dev->dev_private;
u64 raw_time; /* 32b value may overflow during fixed point math */
u64 units = 128ULL, div = 100000ULL, bias = 100ULL;
u32 ret;
if (!intel_enable_rc6(dev))
return 0;
intel_runtime_pm_get(dev_priv);
/* On VLV, residency time is in CZ units rather than 1.28us */
if (IS_VALLEYVIEW(dev)) {
u32 clkctl2;
@ -52,7 +55,8 @@ static u32 calc_residency(struct drm_device *dev, const u32 reg)
CLK_CTL2_CZCOUNT_30NS_SHIFT;
if (!clkctl2) {
WARN(!clkctl2, "bogus CZ count value");
return 0;
ret = 0;
goto out;
}
units = DIV_ROUND_UP_ULL(30ULL * bias, (u64)clkctl2);
if (I915_READ(VLV_COUNTER_CONTROL) & VLV_COUNT_RANGE_HIGH)
@ -62,7 +66,11 @@ static u32 calc_residency(struct drm_device *dev, const u32 reg)
}
raw_time = I915_READ(reg) * units;
return DIV_ROUND_UP_ULL(raw_time, div);
ret = DIV_ROUND_UP_ULL(raw_time, div);
out:
intel_runtime_pm_put(dev_priv);
return ret;
}
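With the default constants above (units = 128, div = 100000) the residency math converts a counter that ticks every 1.28 µs into milliseconds: count × 128 / 100000. A worked standalone version (non-VLV path only; the VLV branch rescales units from the CZ clock first):

/* RC6 residency in ms from raw counts, 1.28 us per tick:
 * count * 1.28 us = count * 128 / 100000 ms (rounded up). */
static u64 residency_ms_sketch(u64 count)
{
	return DIV_ROUND_UP_ULL(count * 128ULL, 100000ULL);
}

/* e.g. 781250 ticks * 128 = 100000000; / 100000 = 1000 ms exactly. */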
static ssize_t
@ -183,13 +191,13 @@ i915_l3_write(struct file *filp, struct kobject *kobj,
int slice = (int)(uintptr_t)attr->private;
int ret;
if (!HAS_HW_CONTEXTS(drm_dev))
return -ENXIO;
ret = l3_access_valid(drm_dev, offset);
if (ret)
return ret;
if (dev_priv->hw_contexts_disabled)
return -ENXIO;
ret = i915_mutex_lock_interruptible(drm_dev);
if (ret)
return ret;
@ -259,7 +267,7 @@ static ssize_t gt_cur_freq_mhz_show(struct device *kdev,
if (IS_VALLEYVIEW(dev_priv->dev)) {
u32 freq;
freq = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
ret = vlv_gpu_freq(dev_priv->mem_freq, (freq >> 8) & 0xff);
ret = vlv_gpu_freq(dev_priv, (freq >> 8) & 0xff);
} else {
ret = dev_priv->rps.cur_delay * GT_FREQUENCY_MULTIPLIER;
}
@ -276,8 +284,7 @@ static ssize_t vlv_rpe_freq_mhz_show(struct device *kdev,
struct drm_i915_private *dev_priv = dev->dev_private;
return snprintf(buf, PAGE_SIZE, "%d\n",
vlv_gpu_freq(dev_priv->mem_freq,
dev_priv->rps.rpe_delay));
vlv_gpu_freq(dev_priv, dev_priv->rps.rpe_delay));
}
static ssize_t gt_max_freq_mhz_show(struct device *kdev, struct device_attribute *attr, char *buf)
@ -291,7 +298,7 @@ static ssize_t gt_max_freq_mhz_show(struct device *kdev, struct device_attribute
mutex_lock(&dev_priv->rps.hw_lock);
if (IS_VALLEYVIEW(dev_priv->dev))
ret = vlv_gpu_freq(dev_priv->mem_freq, dev_priv->rps.max_delay);
ret = vlv_gpu_freq(dev_priv, dev_priv->rps.max_delay);
else
ret = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
@ -318,7 +325,7 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
mutex_lock(&dev_priv->rps.hw_lock);
if (IS_VALLEYVIEW(dev_priv->dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
val = vlv_freq_opcode(dev_priv, val);
hw_max = valleyview_rps_max_freq(dev_priv);
hw_min = valleyview_rps_min_freq(dev_priv);
@ -342,15 +349,15 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
DRM_DEBUG("User requested overclocking to %d\n",
val * GT_FREQUENCY_MULTIPLIER);
if (dev_priv->rps.cur_delay > val) {
if (IS_VALLEYVIEW(dev_priv->dev))
valleyview_set_rps(dev_priv->dev, val);
else
gen6_set_rps(dev_priv->dev, val);
}
dev_priv->rps.max_delay = val;
if (dev_priv->rps.cur_delay > val) {
if (IS_VALLEYVIEW(dev))
valleyview_set_rps(dev, val);
else
gen6_set_rps(dev, val);
}
mutex_unlock(&dev_priv->rps.hw_lock);
return count;
@ -367,7 +374,7 @@ static ssize_t gt_min_freq_mhz_show(struct device *kdev, struct device_attribute
mutex_lock(&dev_priv->rps.hw_lock);
if (IS_VALLEYVIEW(dev_priv->dev))
ret = vlv_gpu_freq(dev_priv->mem_freq, dev_priv->rps.min_delay);
ret = vlv_gpu_freq(dev_priv, dev_priv->rps.min_delay);
else
ret = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
@ -394,7 +401,7 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
mutex_lock(&dev_priv->rps.hw_lock);
if (IS_VALLEYVIEW(dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
val = vlv_freq_opcode(dev_priv, val);
hw_max = valleyview_rps_max_freq(dev_priv);
hw_min = valleyview_rps_min_freq(dev_priv);
@ -411,15 +418,15 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
return -EINVAL;
}
dev_priv->rps.min_delay = val;
if (dev_priv->rps.cur_delay < val) {
if (IS_VALLEYVIEW(dev))
valleyview_set_rps(dev, val);
else
gen6_set_rps(dev_priv->dev, val);
gen6_set_rps(dev, val);
}
dev_priv->rps.min_delay = val;
mutex_unlock(&dev_priv->rps.hw_lock);
return count;
@ -449,7 +456,9 @@ static ssize_t gt_rp_mhz_show(struct device *kdev, struct device_attribute *attr
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
intel_runtime_pm_get(dev_priv);
rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
intel_runtime_pm_put(dev_priv);
mutex_unlock(&dev->struct_mutex);
if (attr == &dev_attr_gt_RP0_freq_mhz) {

View File

@ -270,6 +270,18 @@ void i915_save_display_reg(struct drm_device *dev)
}
/* FIXME: save TV & SDVO state */
/* Backlight */
if (HAS_PCH_SPLIT(dev)) {
dev_priv->regfile.saveBLC_PWM_CTL = I915_READ(BLC_PWM_PCH_CTL1);
dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_PCH_CTL2);
dev_priv->regfile.saveBLC_CPU_PWM_CTL = I915_READ(BLC_PWM_CPU_CTL);
dev_priv->regfile.saveBLC_CPU_PWM_CTL2 = I915_READ(BLC_PWM_CPU_CTL2);
} else {
dev_priv->regfile.saveBLC_PWM_CTL = I915_READ(BLC_PWM_CTL);
if (INTEL_INFO(dev)->gen >= 4)
dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_CTL2);
}
return;
}
@ -280,6 +292,21 @@ void i915_restore_display_reg(struct drm_device *dev)
int dpll_b_reg, fpb0_reg, fpb1_reg;
int i;
/* Backlight */
if (HAS_PCH_SPLIT(dev)) {
I915_WRITE(BLC_PWM_PCH_CTL1, dev_priv->regfile.saveBLC_PWM_CTL);
I915_WRITE(BLC_PWM_PCH_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
/* NOTE: BLC_PWM_CPU_CTL must be written after BLC_PWM_CPU_CTL2;
* otherwise we get blank eDP screen after S3 on some machines
*/
I915_WRITE(BLC_PWM_CPU_CTL2, dev_priv->regfile.saveBLC_CPU_PWM_CTL2);
I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->regfile.saveBLC_CPU_PWM_CTL);
} else {
if (INTEL_INFO(dev)->gen >= 4)
I915_WRITE(BLC_PWM_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
I915_WRITE(BLC_PWM_CTL, dev_priv->regfile.saveBLC_PWM_CTL);
}
/* Display port ratios (must be done before clock is set) */
if (SUPPORTS_INTEGRATED_DP(dev)) {
I915_WRITE(_PIPEA_DATA_M_G4X, dev_priv->regfile.savePIPEA_GMCH_DATA_M);

View File

@ -281,6 +281,34 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
}
}
static void
parse_lfp_backlight(struct drm_i915_private *dev_priv, struct bdb_header *bdb)
{
const struct bdb_lfp_backlight_data *backlight_data;
const struct bdb_lfp_backlight_data_entry *entry;
backlight_data = find_section(bdb, BDB_LVDS_BACKLIGHT);
if (!backlight_data)
return;
if (backlight_data->entry_size != sizeof(backlight_data->data[0])) {
DRM_DEBUG_KMS("Unsupported backlight data entry size %u\n",
backlight_data->entry_size);
return;
}
entry = &backlight_data->data[panel_type];
dev_priv->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
dev_priv->vbt.backlight.active_low_pwm = entry->active_low_pwm;
DRM_DEBUG_KMS("VBT backlight PWM modulation frequency %u Hz, "
"active %s, min brightness %u, level %u\n",
dev_priv->vbt.backlight.pwm_freq_hz,
dev_priv->vbt.backlight.active_low_pwm ? "low" : "high",
entry->min_brightness,
backlight_data->level[panel_type]);
}
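The entry_size check in the new parse_lfp_backlight() is the standard defensive pattern for firmware-defined tables: the VBT records its own per-entry size, and the parser refuses to index the array unless that matches the struct it was compiled against, so an unknown layout fails loudly instead of reading stray bytes. The bare pattern (illustrative types and names, kernel-style u8/u16):

struct fw_entry { u8 flags; u16 freq_hz; u8 min; } __packed;
struct fw_table { u8 entry_size; struct fw_entry data[16]; } __packed;

/* Refuse to parse a table whose declared record size doesn't match
 * the layout this code was built against. */
static const struct fw_entry *fw_entry_sketch(const struct fw_table *t,
					      int idx)
{
	if (t->entry_size != sizeof(t->data[0]))
		return NULL;
	return &t->data[idx];
}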
/* Try to find sdvo panel data */
static void
parse_sdvo_panel_data(struct drm_i915_private *dev_priv,
@ -327,12 +355,12 @@ static int intel_bios_ssc_frequency(struct drm_device *dev,
{
switch (INTEL_INFO(dev)->gen) {
case 2:
return alternate ? 66 : 48;
return alternate ? 66667 : 48000;
case 3:
case 4:
return alternate ? 100 : 96;
return alternate ? 100000 : 96000;
default:
return alternate ? 100 : 120;
return alternate ? 100000 : 120000;
}
}
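The units change above is why the literals grow three digits: lvds_ssc_freq now carries kHz, and gen2's alternate reference really is 66.667 MHz, which the old integer-MHz value truncated to 66, roughly a 1% error feeding into SSC clock math. The same table as a standalone helper, for reference:

/* SSC reference frequency in kHz by display generation. */
static int ssc_reference_khz(int gen, int alternate)
{
	switch (gen) {
	case 2:
		return alternate ? 66667 : 48000;
	case 3:
	case 4:
		return alternate ? 100000 : 96000;
	default:
		return alternate ? 100000 : 120000;
	}
}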
@ -796,7 +824,7 @@ init_vbt_defaults(struct drm_i915_private *dev_priv)
*/
dev_priv->vbt.lvds_ssc_freq = intel_bios_ssc_frequency(dev,
!HAS_PCH_SPLIT(dev));
DRM_DEBUG_KMS("Set default to SSC at %dMHz\n", dev_priv->vbt.lvds_ssc_freq);
DRM_DEBUG_KMS("Set default to SSC at %d kHz\n", dev_priv->vbt.lvds_ssc_freq);
for (port = PORT_A; port < I915_MAX_PORTS; port++) {
struct ddi_vbt_port_info *info =
@ -894,6 +922,7 @@ intel_parse_bios(struct drm_device *dev)
parse_general_features(dev_priv, bdb);
parse_general_definitions(dev_priv, bdb);
parse_lfp_panel_data(dev_priv, bdb);
parse_lfp_backlight(dev_priv, bdb);
parse_sdvo_panel_data(dev_priv, bdb);
parse_sdvo_device_mapping(dev_priv, bdb);
parse_device_mapping(dev_priv, bdb);

View File

@ -39,7 +39,7 @@ struct vbt_header {
u8 reserved0;
u32 bdb_offset; /**< from beginning of VBT */
u32 aim_offset[4]; /**< from beginning of VBT */
} __attribute__((packed));
} __packed;
struct bdb_header {
u8 signature[16]; /**< Always 'BIOS_DATA_BLOCK' */
@ -65,7 +65,7 @@ struct vbios_data {
u8 rsvd4; /* popup memory size */
u8 resize_pci_bios;
u8 rsvd5; /* is crt already on ddc2 */
} __attribute__((packed));
} __packed;
/*
* There are several types of BIOS data blocks (BDBs), each block has
@ -142,7 +142,7 @@ struct bdb_general_features {
u8 dp_ssc_enb:1; /* PCH attached eDP supports SSC */
u8 dp_ssc_freq:1; /* SSC freq for PCH attached eDP */
u8 rsvd11:3; /* finish byte */
} __attribute__((packed));
} __packed;
/* pre-915 */
#define GPIO_PIN_DVI_LVDS 0x03 /* "DVI/LVDS DDC GPIO pins" */
@ -225,7 +225,7 @@ struct old_child_dev_config {
u8 dvo2_wiring;
u16 extended_type;
u8 dvo_function;
} __attribute__((packed));
} __packed;
/* This one contains field offsets that are known to be common for all BDB
* versions. Notice that the meaning of the contents may still change,
@ -238,7 +238,7 @@ struct common_child_dev_config {
u8 not_common2[2];
u8 ddc_pin;
u16 edid_ptr;
} __attribute__((packed));
} __packed;
/* This field changes depending on the BDB version, so the most reliable way to
* read it is by checking the BDB version and reading the raw pointer. */
@ -279,7 +279,7 @@ struct bdb_general_definitions {
* sizeof(child_device_config);
*/
union child_device_config devices[0];
} __attribute__((packed));
} __packed;
struct bdb_lvds_options {
u8 panel_type;
@ -293,7 +293,7 @@ struct bdb_lvds_options {
u8 lvds_edid:1;
u8 rsvd2:1;
u8 rsvd4;
} __attribute__((packed));
} __packed;
/* LFP pointer table contains entries to the struct below */
struct bdb_lvds_lfp_data_ptr {
@ -303,12 +303,12 @@ struct bdb_lvds_lfp_data_ptr {
u8 dvo_table_size;
u16 panel_pnp_id_offset;
u8 pnp_table_size;
} __attribute__((packed));
} __packed;
struct bdb_lvds_lfp_data_ptrs {
u8 lvds_entries; /* followed by one or more lvds_data_ptr structs */
struct bdb_lvds_lfp_data_ptr ptr[16];
} __attribute__((packed));
} __packed;
/* LFP data has 3 blocks per entry */
struct lvds_fp_timing {
@ -325,7 +325,7 @@ struct lvds_fp_timing {
u32 pfit_reg;
u32 pfit_reg_val;
u16 terminator;
} __attribute__((packed));
} __packed;
struct lvds_dvo_timing {
u16 clock; /**< In 10khz */
@ -353,7 +353,7 @@ struct lvds_dvo_timing {
u8 vsync_positive:1;
u8 hsync_positive:1;
u8 rsvd2:1;
} __attribute__((packed));
} __packed;
struct lvds_pnp_id {
u16 mfg_name;
@ -361,17 +361,33 @@ struct lvds_pnp_id {
u32 serial;
u8 mfg_week;
u8 mfg_year;
} __attribute__((packed));
} __packed;
struct bdb_lvds_lfp_data_entry {
struct lvds_fp_timing fp_timing;
struct lvds_dvo_timing dvo_timing;
struct lvds_pnp_id pnp_id;
} __attribute__((packed));
} __packed;
struct bdb_lvds_lfp_data {
struct bdb_lvds_lfp_data_entry data[16];
} __attribute__((packed));
} __packed;
struct bdb_lfp_backlight_data_entry {
u8 type:2;
u8 active_low_pwm:1;
u8 obsolete1:5;
u16 pwm_freq_hz;
u8 min_brightness;
u8 obsolete2;
u8 obsolete3;
} __packed;
struct bdb_lfp_backlight_data {
u8 entry_size;
struct bdb_lfp_backlight_data_entry data[16];
u8 level[16];
} __packed;
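__packed is the kernel's shorthand for __attribute__((packed)), so this block is purely cosmetic, but the property it pins down matters for the new backlight structs: VBT data mirrors a byte-exact firmware layout, and no compiler padding may creep in. A sketch of guarding that at compile time (the 6-byte figure is just the sum of the fields above, not a number quoted from the VBT spec):

struct backlight_entry_sketch {
	u8 type:2;
	u8 active_low_pwm:1;
	u8 obsolete1:5;
	u16 pwm_freq_hz;	/* would land at offset 2 if padded */
	u8 min_brightness;
	u8 obsolete2;
	u8 obsolete3;
} __packed;

static inline void backlight_entry_layout_check(void)
{
	/* 1 (bitfields) + 2 + 1 + 1 + 1 bytes, no padding. */
	BUILD_BUG_ON(sizeof(struct backlight_entry_sketch) != 6);
}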
struct aimdb_header {
char signature[16];
@ -379,12 +395,12 @@ struct aimdb_header {
u16 aimdb_version;
u16 aimdb_header_size;
u16 aimdb_size;
} __attribute__((packed));
} __packed;
struct aimdb_block {
u8 aimdb_id;
u16 aimdb_size;
} __attribute__((packed));
} __packed;
struct vch_panel_data {
u16 fp_timing_offset;
@ -395,12 +411,12 @@ struct vch_panel_data {
u8 text_fitting_size;
u16 graphics_fitting_offset;
u8 graphics_fitting_size;
} __attribute__((packed));
} __packed;
struct vch_bdb_22 {
struct aimdb_block aimdb_block;
struct vch_panel_data panels[16];
} __attribute__((packed));
} __packed;
struct bdb_sdvo_lvds_options {
u8 panel_backlight;
@ -416,7 +432,7 @@ struct bdb_sdvo_lvds_options {
u8 panel_misc_bits_2;
u8 panel_misc_bits_3;
u8 panel_misc_bits_4;
} __attribute__((packed));
} __packed;
#define BDB_DRIVER_FEATURE_NO_LVDS 0
@ -462,7 +478,7 @@ struct bdb_driver_features {
u8 hdmi_termination;
u8 custom_vbt_version;
} __attribute__((packed));
} __packed;
#define EDP_18BPP 0
#define EDP_24BPP 1
@ -487,14 +503,14 @@ struct edp_power_seq {
u16 t9;
u16 t10;
u16 t11_t12;
} __attribute__ ((packed));
} __packed;
struct edp_link_params {
u8 rate:4;
u8 lanes:4;
u8 preemphasis:4;
u8 vswing:4;
} __attribute__ ((packed));
} __packed;
struct bdb_edp {
struct edp_power_seq power_seqs[16];
@ -505,7 +521,7 @@ struct bdb_edp {
/* ith bit indicates enabled/disabled for (i+1)th panel */
u16 edp_s3d_feature;
u16 edp_t3_optimization;
} __attribute__ ((packed));
} __packed;
void intel_setup_bios(struct drm_device *dev);
int intel_parse_bios(struct drm_device *dev);
@ -733,6 +749,6 @@ struct bdb_mipi {
u32 hl_switch_cnt;
u32 lp_byte_clk;
u32 clk_lane_switch_cnt;
} __attribute__((packed));
} __packed;
#endif /* _I830_BIOS_H_ */

View File

@ -222,8 +222,9 @@ static void intel_crt_dpms(struct drm_connector *connector, int mode)
intel_modeset_check_state(connector->dev);
}
static int intel_crt_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
static enum drm_mode_status
intel_crt_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct drm_device *dev = connector->dev;

View File

@ -73,7 +73,7 @@ static const u32 hsw_ddi_translations_hdmi[] = {
};
static const u32 bdw_ddi_translations_edp[] = {
0x00FFFFFF, 0x00000012, /* DP parameters */
0x00FFFFFF, 0x00000012, /* eDP parameters */
0x00EBAFFF, 0x00020011,
0x00C71FFF, 0x0006000F,
0x00FFFFFF, 0x00020011,
@ -696,25 +696,25 @@ intel_ddi_calculate_wrpll(int clock /* in Hz */,
*n2_out = best.n2;
*p_out = best.p;
*r2_out = best.r2;
DRM_DEBUG_KMS("WRPLL: %dHz refresh rate with p=%d, n2=%d r2=%d\n",
clock, *p_out, *n2_out, *r2_out);
}
bool intel_ddi_pll_mode_set(struct drm_crtc *crtc)
/*
* Tries to find a PLL for the CRTC. If it finds one, it increases the refcount and
* stores it in intel_crtc->ddi_pll_sel, so other mode sets won't be able to
* steal the selected PLL. You need to call intel_ddi_pll_enable to actually
* enable the PLL.
*/
bool intel_ddi_pll_select(struct intel_crtc *intel_crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct drm_crtc *crtc = &intel_crtc->base;
struct intel_encoder *intel_encoder = intel_ddi_get_crtc_encoder(crtc);
struct drm_encoder *encoder = &intel_encoder->base;
struct drm_i915_private *dev_priv = crtc->dev->dev_private;
struct intel_ddi_plls *plls = &dev_priv->ddi_plls;
int type = intel_encoder->type;
enum pipe pipe = intel_crtc->pipe;
uint32_t reg, val;
int clock = intel_crtc->config.port_clock;
/* TODO: reuse PLLs when possible (compare values) */
intel_ddi_put_crtc_pll(crtc);
if (type == INTEL_OUTPUT_DISPLAYPORT || type == INTEL_OUTPUT_EDP) {
@ -736,66 +736,145 @@ bool intel_ddi_pll_mode_set(struct drm_crtc *crtc)
return false;
}
/* We don't need to turn any PLL on because we'll use LCPLL. */
return true;
} else if (type == INTEL_OUTPUT_HDMI) {
uint32_t reg, val;
unsigned p, n2, r2;
if (plls->wrpll1_refcount == 0) {
DRM_DEBUG_KMS("Using WRPLL 1 on pipe %c\n",
pipe_name(pipe));
plls->wrpll1_refcount++;
reg = WRPLL_CTL1;
intel_crtc->ddi_pll_sel = PORT_CLK_SEL_WRPLL1;
} else if (plls->wrpll2_refcount == 0) {
DRM_DEBUG_KMS("Using WRPLL 2 on pipe %c\n",
pipe_name(pipe));
plls->wrpll2_refcount++;
reg = WRPLL_CTL2;
intel_crtc->ddi_pll_sel = PORT_CLK_SEL_WRPLL2;
} else {
DRM_ERROR("No WRPLLs available!\n");
return false;
}
WARN(I915_READ(reg) & WRPLL_PLL_ENABLE,
"WRPLL already enabled\n");
intel_ddi_calculate_wrpll(clock * 1000, &r2, &n2, &p);
val = WRPLL_PLL_ENABLE | WRPLL_PLL_SELECT_LCPLL_2700 |
WRPLL_DIVIDER_REFERENCE(r2) | WRPLL_DIVIDER_FEEDBACK(n2) |
WRPLL_DIVIDER_POST(p);
if (val == I915_READ(WRPLL_CTL1)) {
DRM_DEBUG_KMS("Reusing WRPLL 1 on pipe %c\n",
pipe_name(pipe));
reg = WRPLL_CTL1;
} else if (val == I915_READ(WRPLL_CTL2)) {
DRM_DEBUG_KMS("Reusing WRPLL 2 on pipe %c\n",
pipe_name(pipe));
reg = WRPLL_CTL2;
} else if (plls->wrpll1_refcount == 0) {
DRM_DEBUG_KMS("Using WRPLL 1 on pipe %c\n",
pipe_name(pipe));
reg = WRPLL_CTL1;
} else if (plls->wrpll2_refcount == 0) {
DRM_DEBUG_KMS("Using WRPLL 2 on pipe %c\n",
pipe_name(pipe));
reg = WRPLL_CTL2;
} else {
DRM_ERROR("No WRPLLs available!\n");
return false;
}
DRM_DEBUG_KMS("WRPLL: %dKHz refresh rate with p=%d, n2=%d r2=%d\n",
clock, p, n2, r2);
if (reg == WRPLL_CTL1) {
plls->wrpll1_refcount++;
intel_crtc->ddi_pll_sel = PORT_CLK_SEL_WRPLL1;
} else {
plls->wrpll2_refcount++;
intel_crtc->ddi_pll_sel = PORT_CLK_SEL_WRPLL2;
}
} else if (type == INTEL_OUTPUT_ANALOG) {
if (plls->spll_refcount == 0) {
DRM_DEBUG_KMS("Using SPLL on pipe %c\n",
pipe_name(pipe));
plls->spll_refcount++;
reg = SPLL_CTL;
intel_crtc->ddi_pll_sel = PORT_CLK_SEL_SPLL;
} else {
DRM_ERROR("SPLL already in use\n");
return false;
}
WARN(I915_READ(reg) & SPLL_PLL_ENABLE,
"SPLL already enabled\n");
val = SPLL_PLL_ENABLE | SPLL_PLL_FREQ_1350MHz | SPLL_PLL_SSC;
} else {
WARN(1, "Invalid DDI encoder type %d\n", type);
return false;
}
I915_WRITE(reg, val);
udelay(20);
return true;
}
/*
* To be called after intel_ddi_pll_select(). That one selects the PLL to be
* used, this one actually enables the PLL.
*/
void intel_ddi_pll_enable(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ddi_plls *plls = &dev_priv->ddi_plls;
int clock = crtc->config.port_clock;
uint32_t reg, cur_val, new_val;
int refcount;
const char *pll_name;
uint32_t enable_bit = (1 << 31);
unsigned int p, n2, r2;
BUILD_BUG_ON(enable_bit != SPLL_PLL_ENABLE);
BUILD_BUG_ON(enable_bit != WRPLL_PLL_ENABLE);
switch (crtc->ddi_pll_sel) {
case PORT_CLK_SEL_LCPLL_2700:
case PORT_CLK_SEL_LCPLL_1350:
case PORT_CLK_SEL_LCPLL_810:
/*
* LCPLL should always be enabled at this point of the mode set
* sequence, so nothing to do.
*/
return;
case PORT_CLK_SEL_SPLL:
pll_name = "SPLL";
reg = SPLL_CTL;
refcount = plls->spll_refcount;
new_val = SPLL_PLL_ENABLE | SPLL_PLL_FREQ_1350MHz |
SPLL_PLL_SSC;
break;
case PORT_CLK_SEL_WRPLL1:
case PORT_CLK_SEL_WRPLL2:
if (crtc->ddi_pll_sel == PORT_CLK_SEL_WRPLL1) {
pll_name = "WRPLL1";
reg = WRPLL_CTL1;
refcount = plls->wrpll1_refcount;
} else {
pll_name = "WRPLL2";
reg = WRPLL_CTL2;
refcount = plls->wrpll2_refcount;
}
intel_ddi_calculate_wrpll(clock * 1000, &r2, &n2, &p);
new_val = WRPLL_PLL_ENABLE | WRPLL_PLL_SELECT_LCPLL_2700 |
WRPLL_DIVIDER_REFERENCE(r2) |
WRPLL_DIVIDER_FEEDBACK(n2) | WRPLL_DIVIDER_POST(p);
break;
case PORT_CLK_SEL_NONE:
WARN(1, "Bad selected pll: PORT_CLK_SEL_NONE\n");
return;
default:
WARN(1, "Bad selected pll: 0x%08x\n", crtc->ddi_pll_sel);
return;
}
cur_val = I915_READ(reg);
WARN(refcount < 1, "Bad %s refcount: %d\n", pll_name, refcount);
if (refcount == 1) {
WARN(cur_val & enable_bit, "%s already enabled\n", pll_name);
I915_WRITE(reg, new_val);
POSTING_READ(reg);
udelay(20);
} else {
WARN((cur_val & enable_bit) == 0, "%s disabled\n", pll_name);
}
}
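The select/enable split formalizes refcounted PLL sharing: intel_ddi_pll_select() first looks for a WRPLL already programmed with the exact control value it needs, so two outputs at the same clock share one PLL, then falls back to a free one and bumps the refcount; intel_ddi_pll_enable() writes the hardware exactly once, when the first user arrives. A condensed sketch of the reuse-then-allocate decision (two slots as on HSW; kernel-style u32, names illustrative):

/* Returns the slot to use, or -1 if both PLLs are busy with other
 * values. cur[] holds each slot's programmed control value. */
static int pll_select_sketch(u32 cur[2], int refcount[2], u32 val)
{
	int i;

	for (i = 0; i < 2; i++) {
		if (refcount[i] && cur[i] == val) {
			refcount[i]++;	/* share the matching PLL */
			return i;
		}
	}
	for (i = 0; i < 2; i++) {
		if (refcount[i] == 0) {
			refcount[i]++;	/* claim a free PLL */
			return i;
		}
	}
	return -1;
}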
void intel_ddi_set_pipe_settings(struct drm_crtc *crtc)
{
struct drm_i915_private *dev_priv = crtc->dev->dev_private;
@ -1121,9 +1200,7 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder)
if (type == INTEL_OUTPUT_EDP) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
ironlake_edp_panel_vdd_on(intel_dp);
ironlake_edp_panel_on(intel_dp);
ironlake_edp_panel_vdd_off(intel_dp, true);
}
WARN_ON(intel_crtc->ddi_pll_sel == PORT_CLK_SEL_NONE);
@ -1166,7 +1243,6 @@ static void intel_ddi_post_disable(struct intel_encoder *intel_encoder)
if (type == INTEL_OUTPUT_DISPLAYPORT || type == INTEL_OUTPUT_EDP) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
ironlake_edp_panel_vdd_on(intel_dp);
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
ironlake_edp_panel_off(intel_dp);
}

File diff suppressed because it is too large

View File

@ -142,7 +142,7 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
return (max_link_clock * max_lanes * 8) / 10;
}
static int
static enum drm_mode_status
intel_dp_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
@ -404,7 +404,7 @@ intel_dp_aux_ch(struct intel_dp *intel_dp,
int i, ret, recv_bytes;
uint32_t status;
int try, precharge, clock = 0;
bool has_aux_irq = INTEL_INFO(dev)->gen >= 5 && !IS_VALLEYVIEW(dev);
bool has_aux_irq = true;
uint32_t timeout;
/* dp aux is extremely sensitive to irq latency, hence request the
@ -542,7 +542,7 @@ intel_dp_aux_native_write(struct intel_dp *intel_dp,
return -E2BIG;
intel_dp_check_edp(intel_dp);
msg[0] = AUX_NATIVE_WRITE << 4;
msg[0] = DP_AUX_NATIVE_WRITE << 4;
msg[1] = address >> 8;
msg[2] = address & 0xff;
msg[3] = send_bytes - 1;
@ -552,9 +552,10 @@ intel_dp_aux_native_write(struct intel_dp *intel_dp,
ret = intel_dp_aux_ch(intel_dp, msg, msg_bytes, &ack, 1);
if (ret < 0)
return ret;
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK)
ack >>= 4;
if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_ACK)
break;
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
else if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_DEFER)
udelay(100);
else
return -EIO;
@ -586,7 +587,7 @@ intel_dp_aux_native_read(struct intel_dp *intel_dp,
return -E2BIG;
intel_dp_check_edp(intel_dp);
msg[0] = AUX_NATIVE_READ << 4;
msg[0] = DP_AUX_NATIVE_READ << 4;
msg[1] = address >> 8;
msg[2] = address & 0xff;
msg[3] = recv_bytes - 1;
@ -601,12 +602,12 @@ intel_dp_aux_native_read(struct intel_dp *intel_dp,
return -EPROTO;
if (ret < 0)
return ret;
ack = reply[0];
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK) {
ack = reply[0] >> 4;
if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_ACK) {
memcpy(recv, reply + 1, ret - 1);
return ret - 1;
}
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
else if ((ack & DP_AUX_NATIVE_REPLY_MASK) == DP_AUX_NATIVE_REPLY_DEFER)
udelay(100);
else
return -EIO;
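The new '>> 4' lines track the shared DRM DP helper definitions: the reply code lives in the top nibble of the first returned byte, and the DP_AUX_*_REPLY_* masks are now relative to that nibble instead of the raw byte. A small sketch of extracting both fields (bit positions as implied by the masking in this diff: native reply in bits 5:4, I2C reply in bits 7:6):

/* First byte of an AUX reply carries the status in its high nibble:
 * after shifting down, bits 1:0 = native AUX reply and bits 3:2 =
 * I2C-over-AUX reply (ACK/NACK/DEFER each). */
static void decode_aux_reply_sketch(u8 header, u8 *native, u8 *i2c)
{
	u8 nibble = header >> 4;

	*native = nibble & 0x3;
	*i2c = (nibble >> 2) & 0x3;
}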
@ -633,12 +634,12 @@ intel_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
intel_dp_check_edp(intel_dp);
/* Set up the command byte */
if (mode & MODE_I2C_READ)
msg[0] = AUX_I2C_READ << 4;
msg[0] = DP_AUX_I2C_READ << 4;
else
msg[0] = AUX_I2C_WRITE << 4;
msg[0] = DP_AUX_I2C_WRITE << 4;
if (!(mode & MODE_I2C_STOP))
msg[0] |= AUX_I2C_MOT << 4;
msg[0] |= DP_AUX_I2C_MOT << 4;
msg[1] = address >> 8;
msg[2] = address;
@ -675,17 +676,17 @@ intel_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
goto out;
}
switch (reply[0] & AUX_NATIVE_REPLY_MASK) {
case AUX_NATIVE_REPLY_ACK:
switch ((reply[0] >> 4) & DP_AUX_NATIVE_REPLY_MASK) {
case DP_AUX_NATIVE_REPLY_ACK:
/* I2C-over-AUX Reply field is only valid
* when paired with AUX ACK.
*/
break;
case AUX_NATIVE_REPLY_NACK:
case DP_AUX_NATIVE_REPLY_NACK:
DRM_DEBUG_KMS("aux_ch native nack\n");
ret = -EREMOTEIO;
goto out;
case AUX_NATIVE_REPLY_DEFER:
case DP_AUX_NATIVE_REPLY_DEFER:
/*
* For now, just give more slack to branch devices. We
* could check the DPCD for I2C bit rate capabilities,
@ -706,18 +707,18 @@ intel_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
goto out;
}
switch (reply[0] & AUX_I2C_REPLY_MASK) {
case AUX_I2C_REPLY_ACK:
switch ((reply[0] >> 4) & DP_AUX_I2C_REPLY_MASK) {
case DP_AUX_I2C_REPLY_ACK:
if (mode == MODE_I2C_READ) {
*read_byte = reply[1];
}
ret = reply_bytes - 1;
goto out;
case AUX_I2C_REPLY_NACK:
case DP_AUX_I2C_REPLY_NACK:
DRM_DEBUG_KMS("aux_i2c nack\n");
ret = -EREMOTEIO;
goto out;
case AUX_I2C_REPLY_DEFER:
case DP_AUX_I2C_REPLY_DEFER:
DRM_DEBUG_KMS("aux_i2c defer\n");
udelay(100);
break;
@ -1037,6 +1038,8 @@ static void ironlake_wait_panel_status(struct intel_dp *intel_dp,
I915_READ(pp_stat_reg),
I915_READ(pp_ctrl_reg));
}
DRM_DEBUG_KMS("Wait complete\n");
}
static void ironlake_wait_panel_on(struct intel_dp *intel_dp)
@ -1092,6 +1095,8 @@ void ironlake_edp_panel_vdd_on(struct intel_dp *intel_dp)
if (ironlake_edp_have_panel_vdd(intel_dp))
return;
intel_runtime_pm_get(dev_priv);
DRM_DEBUG_KMS("Turning eDP VDD on\n");
if (!ironlake_edp_have_panel_power(intel_dp))
@ -1140,7 +1145,11 @@ static void ironlake_panel_vdd_off_sync(struct intel_dp *intel_dp)
/* Make sure sequencer is idle before allowing subsequent activity */
DRM_DEBUG_KMS("PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n",
I915_READ(pp_stat_reg), I915_READ(pp_ctrl_reg));
msleep(intel_dp->panel_power_down_delay);
if ((pp & POWER_TARGET_ON) == 0)
msleep(intel_dp->panel_power_cycle_delay);
intel_runtime_pm_put(dev_priv);
}
}
@@ -1233,20 +1242,16 @@ void ironlake_edp_panel_off(struct intel_dp *intel_dp)
DRM_DEBUG_KMS("Turn eDP power off\n");
WARN(!intel_dp->want_panel_vdd, "Need VDD to turn off panel\n");
pp = ironlake_get_pp_control(intel_dp);
/* We need to switch off panel power _and_ force vdd, for otherwise some
* panels get very unhappy and cease to work. */
pp &= ~(POWER_TARGET_ON | EDP_FORCE_VDD | PANEL_POWER_RESET | EDP_BLC_ENABLE);
pp &= ~(POWER_TARGET_ON | PANEL_POWER_RESET | EDP_BLC_ENABLE);
pp_ctrl_reg = _pp_ctrl_reg(intel_dp);
I915_WRITE(pp_ctrl_reg, pp);
POSTING_READ(pp_ctrl_reg);
intel_dp->want_panel_vdd = false;
ironlake_wait_panel_off(intel_dp);
}
@@ -1772,7 +1777,6 @@ static void intel_disable_dp(struct intel_encoder *encoder)
/* Make sure the panel is off before trying to change the mode. But also
* ensure that we have vdd while we switch off the panel. */
ironlake_edp_panel_vdd_on(intel_dp);
ironlake_edp_backlight_off(intel_dp);
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
ironlake_edp_panel_off(intel_dp);
@@ -1845,23 +1849,23 @@ static void vlv_pre_enable_dp(struct intel_encoder *encoder)
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
int port = vlv_dport_to_channel(dport);
enum dpio_channel port = vlv_dport_to_channel(dport);
int pipe = intel_crtc->pipe;
struct edp_power_seq power_seq;
u32 val;
mutex_lock(&dev_priv->dpio_lock);
val = vlv_dpio_read(dev_priv, pipe, DPIO_DATA_LANE_A(port));
val = vlv_dpio_read(dev_priv, pipe, VLV_PCS01_DW8(port));
val = 0;
if (pipe)
val |= (1<<21);
else
val &= ~(1<<21);
val |= 0x001000c4;
vlv_dpio_write(dev_priv, pipe, DPIO_DATA_CHANNEL(port), val);
vlv_dpio_write(dev_priv, pipe, DPIO_PCS_CLOCKBUF0(port), 0x00760018);
vlv_dpio_write(dev_priv, pipe, DPIO_PCS_CLOCKBUF8(port), 0x00400888);
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW8(port), val);
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW14(port), 0x00760018);
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW23(port), 0x00400888);
mutex_unlock(&dev_priv->dpio_lock);
@@ -1872,7 +1876,7 @@ static void vlv_pre_enable_dp(struct intel_encoder *encoder)
intel_enable_dp(encoder);
vlv_wait_port_ready(dev_priv, port);
vlv_wait_port_ready(dev_priv, dport);
}
static void vlv_dp_pre_pll_enable(struct intel_encoder *encoder)
@@ -1882,24 +1886,24 @@ static void vlv_dp_pre_pll_enable(struct intel_encoder *encoder)
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc =
to_intel_crtc(encoder->base.crtc);
int port = vlv_dport_to_channel(dport);
enum dpio_channel port = vlv_dport_to_channel(dport);
int pipe = intel_crtc->pipe;
/* Program Tx lane resets to default */
mutex_lock(&dev_priv->dpio_lock);
vlv_dpio_write(dev_priv, pipe, DPIO_PCS_TX(port),
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW0(port),
DPIO_PCS_TX_LANE2_RESET |
DPIO_PCS_TX_LANE1_RESET);
vlv_dpio_write(dev_priv, pipe, DPIO_PCS_CLK(port),
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW1(port),
DPIO_PCS_CLK_CRI_RXEB_EIOS_EN |
DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN |
(1<<DPIO_PCS_CLK_DATAWIDTH_SHIFT) |
DPIO_PCS_CLK_SOFT_RESET);
/* Fix up inter-pair skew failure */
vlv_dpio_write(dev_priv, pipe, DPIO_PCS_STAGGER1(port), 0x00750f00);
vlv_dpio_write(dev_priv, pipe, DPIO_TX_CTL(port), 0x00001500);
vlv_dpio_write(dev_priv, pipe, DPIO_TX_LANE(port), 0x40400000);
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW12(port), 0x00750f00);
vlv_dpio_write(dev_priv, pipe, VLV_TX_DW11(port), 0x00001500);
vlv_dpio_write(dev_priv, pipe, VLV_TX_DW14(port), 0x40400000);
mutex_unlock(&dev_priv->dpio_lock);
}
@@ -1941,18 +1945,6 @@ intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_
DP_LINK_STATUS_SIZE);
}
#if 0
static char *voltage_names[] = {
"0.4V", "0.6V", "0.8V", "1.2V"
};
static char *pre_emph_names[] = {
"0dB", "3.5dB", "6dB", "9.5dB"
};
static char *link_train_names[] = {
"pattern 1", "pattern 2", "idle", "off"
};
#endif
/*
* These are source-specific values; current Intel hardware supports
* a maximum voltage of 800mV and a maximum pre-emphasis of 6dB
@@ -2050,7 +2042,7 @@ static uint32_t intel_vlv_signal_levels(struct intel_dp *intel_dp)
unsigned long demph_reg_value, preemph_reg_value,
uniqtranscale_reg_value;
uint8_t train_set = intel_dp->train_set[0];
int port = vlv_dport_to_channel(dport);
enum dpio_channel port = vlv_dport_to_channel(dport);
int pipe = intel_crtc->pipe;
switch (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) {
@@ -2127,14 +2119,14 @@ static uint32_t intel_vlv_signal_levels(struct intel_dp *intel_dp)
}
mutex_lock(&dev_priv->dpio_lock);
vlv_dpio_write(dev_priv, pipe, DPIO_TX_OCALINIT(port), 0x00000000);
vlv_dpio_write(dev_priv, pipe, DPIO_TX_SWING_CTL4(port), demph_reg_value);
vlv_dpio_write(dev_priv, pipe, DPIO_TX_SWING_CTL2(port),
vlv_dpio_write(dev_priv, pipe, VLV_TX_DW5(port), 0x00000000);
vlv_dpio_write(dev_priv, pipe, VLV_TX_DW4(port), demph_reg_value);
vlv_dpio_write(dev_priv, pipe, VLV_TX_DW2(port),
uniqtranscale_reg_value);
vlv_dpio_write(dev_priv, pipe, DPIO_TX_SWING_CTL3(port), 0x0C782040);
vlv_dpio_write(dev_priv, pipe, DPIO_PCS_STAGGER0(port), 0x00030000);
vlv_dpio_write(dev_priv, pipe, DPIO_PCS_CTL_OVER1(port), preemph_reg_value);
vlv_dpio_write(dev_priv, pipe, DPIO_TX_OCALINIT(port), 0x80000000);
vlv_dpio_write(dev_priv, pipe, VLV_TX_DW3(port), 0x0C782040);
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW11(port), 0x00030000);
vlv_dpio_write(dev_priv, pipe, VLV_PCS_DW9(port), preemph_reg_value);
vlv_dpio_write(dev_priv, pipe, VLV_TX_DW5(port), 0x80000000);
mutex_unlock(&dev_priv->dpio_lock);
return 0;
@@ -2646,7 +2638,6 @@ intel_dp_complete_link_train(struct intel_dp *intel_dp)
if (cr_tries > 5) {
DRM_ERROR("failed to train DP, aborting\n");
intel_dp_link_down(intel_dp);
break;
}
@@ -2899,13 +2890,11 @@ intel_dp_check_link_status(struct intel_dp *intel_dp)
/* Try to read receiver status if the link appears to be up */
if (!intel_dp_get_link_status(intel_dp, link_status)) {
intel_dp_link_down(intel_dp);
return;
}
/* Now read the DPCD to see if it's actually running */
if (!intel_dp_get_dpcd(intel_dp)) {
intel_dp_link_down(intel_dp);
return;
}
@@ -3020,18 +3009,34 @@ g4x_dp_detect(struct intel_dp *intel_dp)
return status;
}
switch (intel_dig_port->port) {
case PORT_B:
bit = PORTB_HOTPLUG_LIVE_STATUS;
break;
case PORT_C:
bit = PORTC_HOTPLUG_LIVE_STATUS;
break;
case PORT_D:
bit = PORTD_HOTPLUG_LIVE_STATUS;
break;
default:
return connector_status_unknown;
if (IS_VALLEYVIEW(dev)) {
switch (intel_dig_port->port) {
case PORT_B:
bit = PORTB_HOTPLUG_LIVE_STATUS_VLV;
break;
case PORT_C:
bit = PORTC_HOTPLUG_LIVE_STATUS_VLV;
break;
case PORT_D:
bit = PORTD_HOTPLUG_LIVE_STATUS_VLV;
break;
default:
return connector_status_unknown;
}
} else {
switch (intel_dig_port->port) {
case PORT_B:
bit = PORTB_HOTPLUG_LIVE_STATUS_G4X;
break;
case PORT_C:
bit = PORTC_HOTPLUG_LIVE_STATUS_G4X;
break;
case PORT_D:
bit = PORTD_HOTPLUG_LIVE_STATUS_G4X;
break;
default:
return connector_status_unknown;
}
}
if ((I915_READ(PORT_HOTPLUG_STAT) & bit) == 0)
@@ -3082,9 +3087,12 @@ intel_dp_detect(struct drm_connector *connector, bool force)
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
struct intel_encoder *intel_encoder = &intel_dig_port->base;
struct drm_device *dev = connector->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
enum drm_connector_status status;
struct edid *edid = NULL;
intel_runtime_pm_get(dev_priv);
DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
connector->base.id, drm_get_connector_name(connector));
@@ -3096,7 +3104,7 @@ intel_dp_detect(struct drm_connector *connector, bool force)
status = g4x_dp_detect(intel_dp);
if (status != connector_status_connected)
return status;
goto out;
intel_dp_probe_oui(intel_dp);
@@ -3112,7 +3120,11 @@ intel_dp_detect(struct drm_connector *connector, bool force)
if (intel_encoder->type != INTEL_OUTPUT_EDP)
intel_encoder->type = INTEL_OUTPUT_DISPLAYPORT;
return connector_status_connected;
status = connector_status_connected;
out:
intel_runtime_pm_put(dev_priv);
return status;
}
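
The detect path is now bracketed by a runtime PM reference, and every early return is funnelled through a single exit label so the put cannot be skipped. Reduced to its shape as a sketch (probe_hw() is a hypothetical stand-in for the real HPD/DPCD checks):

static bool probe_hw(struct drm_connector *connector)
{
	return false; /* hypothetical: stands in for the hpd + DPCD reads */
}

static enum drm_connector_status
example_detect(struct drm_connector *connector) /* illustrative sketch */
{
	struct drm_i915_private *dev_priv = connector->dev->dev_private;
	enum drm_connector_status status = connector_status_disconnected;

	intel_runtime_pm_get(dev_priv); /* wake the hardware for AUX/HPD reads */

	if (!probe_hw(connector))
		goto out; /* early exits still release the reference */

	status = connector_status_connected;
out:
	intel_runtime_pm_put(dev_priv);
	return status;
}
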
static int intel_dp_get_modes(struct drm_connector *connector)

View File

@@ -65,8 +65,8 @@
#define wait_for_atomic_us(COND, US) _wait_for((COND), \
DIV_ROUND_UP((US), 1000), 0)
#define KHz(x) (1000*x)
#define MHz(x) KHz(1000*x)
#define KHz(x) (1000 * (x))
#define MHz(x) KHz(1000 * (x))
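
The added parentheses matter as soon as the macro argument is an expression: the multiplication otherwise binds to the first token only. A quick illustration (values are arbitrary):

/* Old: KHz(a + b) expanded to (1000*a + b) -- wrong whenever b != 0. */
#define KHZ_OLD(x) (1000*x)
#define KHZ_NEW(x) (1000 * (x))

/* KHZ_OLD(1 + 1) == 1001, whereas KHZ_NEW(1 + 1) == 2000. */
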
/*
* Display related stuff
@@ -155,7 +155,19 @@ struct intel_encoder {
struct intel_panel {
struct drm_display_mode *fixed_mode;
struct drm_display_mode *downclock_mode;
int fitting_mode;
/* backlight */
struct {
bool present;
u32 level;
u32 max;
bool enabled;
bool combination_mode; /* gen 2/4 only */
bool active_low_pwm;
struct backlight_device *device;
} backlight;
};
struct intel_connector {
@@ -443,7 +455,7 @@ struct intel_hdmi {
bool rgb_quant_range_selectable;
void (*write_infoframe)(struct drm_encoder *encoder,
enum hdmi_infoframe_type type,
const uint8_t *frame, ssize_t len);
const void *frame, ssize_t len);
void (*set_infoframes)(struct drm_encoder *encoder,
struct drm_display_mode *adjusted_mode);
};
@@ -490,9 +502,9 @@ vlv_dport_to_channel(struct intel_digital_port *dport)
{
switch (dport->port) {
case PORT_B:
return 0;
return DPIO_CH0;
case PORT_C:
return 1;
return DPIO_CH1;
default:
BUG();
}
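
Returning a dedicated enum instead of a bare int lets the compiler catch channel/pipe/port mix-ups at the DPIO call sites. A sketch of the enum these ports now map onto (the patch series defines it elsewhere):

enum dpio_channel {
	DPIO_CH0, /* port B */
	DPIO_CH1, /* port C */
};
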
@@ -601,7 +613,8 @@ void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv,
void intel_ddi_enable_pipe_clock(struct intel_crtc *intel_crtc);
void intel_ddi_disable_pipe_clock(struct intel_crtc *intel_crtc);
void intel_ddi_setup_hw_pll_state(struct drm_device *dev);
bool intel_ddi_pll_mode_set(struct drm_crtc *crtc);
bool intel_ddi_pll_select(struct intel_crtc *crtc);
void intel_ddi_pll_enable(struct intel_crtc *crtc);
void intel_ddi_put_crtc_pll(struct drm_crtc *crtc);
void intel_ddi_set_pipe_settings(struct drm_crtc *crtc);
void intel_ddi_prepare_link_retrain(struct drm_encoder *encoder);
@@ -612,6 +625,8 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
/* intel_display.c */
const char *intel_output_name(int output);
bool intel_has_pending_fb_unpin(struct drm_device *dev);
int intel_pch_rawclk(struct drm_device *dev);
void intel_mark_busy(struct drm_device *dev);
void intel_mark_fb_busy(struct drm_i915_gem_object *obj,
@@ -638,7 +653,8 @@ enum transcoder intel_pipe_to_cpu_transcoder(struct drm_i915_private *dev_priv,
void intel_wait_for_vblank(struct drm_device *dev, int pipe);
void intel_wait_for_pipe_off(struct drm_device *dev, int pipe);
int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp);
void vlv_wait_port_ready(struct drm_i915_private *dev_priv, int port);
void vlv_wait_port_ready(struct drm_i915_private *dev_priv,
struct intel_digital_port *dport);
bool intel_get_load_detect_pipe(struct drm_connector *connector,
struct drm_display_mode *mode,
struct intel_load_detect_pipe *old);
@@ -690,11 +706,10 @@ void
ironlake_check_encoder_dotclock(const struct intel_crtc_config *pipe_config,
int dotclock);
bool intel_crtc_active(struct drm_crtc *crtc);
void i915_disable_vga_mem(struct drm_device *dev);
void hsw_enable_ips(struct intel_crtc *crtc);
void hsw_disable_ips(struct intel_crtc *crtc);
void intel_display_set_init_power(struct drm_device *dev, bool enable);
int valleyview_get_vco(struct drm_i915_private *dev_priv);
/* intel_dp.c */
void intel_dp_init(struct drm_device *dev, int output_reg, enum port port);
@@ -808,9 +823,13 @@ void intel_panel_set_backlight(struct intel_connector *connector, u32 level,
int intel_panel_setup_backlight(struct drm_connector *connector);
void intel_panel_enable_backlight(struct intel_connector *connector);
void intel_panel_disable_backlight(struct intel_connector *connector);
void intel_panel_destroy_backlight(struct drm_device *dev);
void intel_panel_destroy_backlight(struct drm_connector *connector);
void intel_panel_init_backlight_funcs(struct drm_device *dev);
enum drm_connector_status intel_panel_detect(struct drm_device *dev);
extern struct drm_display_mode *intel_find_panel_downclock(
struct drm_device *dev,
struct drm_display_mode *fixed_mode,
struct drm_connector *connector);
/* intel_pm.c */
void intel_init_clock_gating(struct drm_device *dev);
@@ -830,6 +849,8 @@ int intel_power_domains_init(struct drm_device *dev);
void intel_power_domains_remove(struct drm_device *dev);
bool intel_display_power_enabled(struct drm_device *dev,
enum intel_display_power_domain domain);
bool intel_display_power_enabled_sw(struct drm_device *dev,
enum intel_display_power_domain domain);
void intel_display_power_get(struct drm_device *dev,
enum intel_display_power_domain domain);
void intel_display_power_put(struct drm_device *dev,
@@ -844,6 +865,10 @@ void gen6_rps_idle(struct drm_i915_private *dev_priv);
void gen6_rps_boost(struct drm_i915_private *dev_priv);
void intel_aux_display_runtime_get(struct drm_i915_private *dev_priv);
void intel_aux_display_runtime_put(struct drm_i915_private *dev_priv);
void intel_runtime_pm_get(struct drm_i915_private *dev_priv);
void intel_runtime_pm_put(struct drm_i915_private *dev_priv);
void intel_init_runtime_pm(struct drm_i915_private *dev_priv);
void intel_fini_runtime_pm(struct drm_i915_private *dev_priv);
void ilk_wm_get_hw_state(struct drm_device *dev);

View File

@@ -37,49 +37,18 @@
static const struct intel_dsi_device intel_dsi_devices[] = {
};
static void vlv_cck_modify(struct drm_i915_private *dev_priv, u32 reg, u32 val,
u32 mask)
{
u32 tmp = vlv_cck_read(dev_priv, reg);
tmp &= ~mask;
tmp |= val;
vlv_cck_write(dev_priv, reg, tmp);
}
static void band_gap_wa(struct drm_i915_private *dev_priv)
static void band_gap_reset(struct drm_i915_private *dev_priv)
{
mutex_lock(&dev_priv->dpio_lock);
/* Enable bandgap fix in GOP driver */
vlv_cck_modify(dev_priv, 0x6D, 0x00010000, 0x00030000);
msleep(20);
vlv_cck_modify(dev_priv, 0x6E, 0x00010000, 0x00030000);
msleep(20);
vlv_cck_modify(dev_priv, 0x6F, 0x00010000, 0x00030000);
msleep(20);
vlv_cck_modify(dev_priv, 0x00, 0x00008000, 0x00008000);
msleep(20);
vlv_cck_modify(dev_priv, 0x00, 0x00000000, 0x00008000);
msleep(20);
/* Turn Display Trunk on */
vlv_cck_modify(dev_priv, 0x6B, 0x00020000, 0x00030000);
msleep(20);
vlv_cck_modify(dev_priv, 0x6C, 0x00020000, 0x00030000);
msleep(20);
vlv_cck_modify(dev_priv, 0x6D, 0x00020000, 0x00030000);
msleep(20);
vlv_cck_modify(dev_priv, 0x6E, 0x00020000, 0x00030000);
msleep(20);
vlv_cck_modify(dev_priv, 0x6F, 0x00020000, 0x00030000);
vlv_flisdsi_write(dev_priv, 0x08, 0x0001);
vlv_flisdsi_write(dev_priv, 0x0F, 0x0005);
vlv_flisdsi_write(dev_priv, 0x0F, 0x0025);
udelay(150);
vlv_flisdsi_write(dev_priv, 0x0F, 0x0000);
vlv_flisdsi_write(dev_priv, 0x08, 0x0000);
mutex_unlock(&dev_priv->dpio_lock);
/* Need huge delay, otherwise clock is not stable */
msleep(100);
}
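
The removed vlv_cck_modify() was a generic sideband read-modify-write; the new reset sequence only needs raw writes, but the same pattern would port to the FLISDSI sideband like this (hypothetical helper, assuming a vlv_flisdsi_read() counterpart to the write used above):

static void vlv_flisdsi_modify(struct drm_i915_private *dev_priv,
			       u32 reg, u32 val, u32 mask)
{
	u32 tmp = vlv_flisdsi_read(dev_priv, reg); /* assumed to exist */

	tmp &= ~mask;
	tmp |= val;
	vlv_flisdsi_write(dev_priv, reg, tmp);
}
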
static struct intel_dsi *intel_attached_dsi(struct drm_connector *connector)
@@ -132,14 +101,47 @@ static void intel_dsi_pre_pll_enable(struct intel_encoder *encoder)
vlv_enable_dsi_pll(encoder);
}
static void intel_dsi_device_ready(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
int pipe = intel_crtc->pipe;
u32 val;
DRM_DEBUG_KMS("\n");
val = I915_READ(MIPI_PORT_CTRL(pipe));
I915_WRITE(MIPI_PORT_CTRL(pipe), val | LP_OUTPUT_HOLD);
usleep_range(1000, 1500);
I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT);
usleep_range(2000, 2500);
I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY);
usleep_range(2000, 2500);
I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00);
usleep_range(2000, 2500);
I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY);
usleep_range(2000, 2500);
}
static void intel_dsi_pre_enable(struct intel_encoder *encoder)
{
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
DRM_DEBUG_KMS("\n");
if (intel_dsi->dev.dev_ops->panel_reset)
intel_dsi->dev.dev_ops->panel_reset(&intel_dsi->dev);
/* put device in ready state */
intel_dsi_device_ready(encoder);
if (intel_dsi->dev.dev_ops->send_otp_cmds)
intel_dsi->dev.dev_ops->send_otp_cmds(&intel_dsi->dev);
}
static void intel_dsi_enable(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
int pipe = intel_crtc->pipe;
@@ -147,41 +149,28 @@ static void intel_dsi_enable(struct intel_encoder *encoder)
DRM_DEBUG_KMS("\n");
temp = I915_READ(MIPI_DEVICE_READY(pipe));
if ((temp & DEVICE_READY) == 0) {
temp &= ~ULPS_STATE_MASK;
I915_WRITE(MIPI_DEVICE_READY(pipe), temp | DEVICE_READY);
} else if (temp & ULPS_STATE_MASK) {
temp &= ~ULPS_STATE_MASK;
I915_WRITE(MIPI_DEVICE_READY(pipe), temp | ULPS_STATE_EXIT);
/*
* We need to ensure that there is a minimum of 1 ms time
* available before clearing the ULPS exit state.
*/
msleep(2);
I915_WRITE(MIPI_DEVICE_READY(pipe), temp);
}
if (is_cmd_mode(intel_dsi))
I915_WRITE(MIPI_MAX_RETURN_PKT_SIZE(pipe), 8 * 4);
if (is_vid_mode(intel_dsi)) {
else {
msleep(20); /* XXX */
dpi_send_cmd(intel_dsi, TURN_ON);
msleep(100);
/* assert ip_tg_enable signal */
temp = I915_READ(MIPI_PORT_CTRL(pipe));
temp = I915_READ(MIPI_PORT_CTRL(pipe)) & ~LANE_CONFIGURATION_MASK;
temp = temp | intel_dsi->port_bits;
I915_WRITE(MIPI_PORT_CTRL(pipe), temp | DPI_ENABLE);
POSTING_READ(MIPI_PORT_CTRL(pipe));
}
intel_dsi->dev.dev_ops->enable(&intel_dsi->dev);
if (intel_dsi->dev.dev_ops->enable)
intel_dsi->dev.dev_ops->enable(&intel_dsi->dev);
}
static void intel_dsi_disable(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
int pipe = intel_crtc->pipe;
@@ -189,8 +178,6 @@ static void intel_dsi_disable(struct intel_encoder *encoder)
DRM_DEBUG_KMS("\n");
intel_dsi->dev.dev_ops->disable(&intel_dsi->dev);
if (is_vid_mode(intel_dsi)) {
dpi_send_cmd(intel_dsi, SHUTDOWN);
msleep(10);
@@ -203,20 +190,54 @@ static void intel_dsi_disable(struct intel_encoder *encoder)
msleep(2);
}
temp = I915_READ(MIPI_DEVICE_READY(pipe));
if (temp & DEVICE_READY) {
temp &= ~DEVICE_READY;
temp &= ~ULPS_STATE_MASK;
I915_WRITE(MIPI_DEVICE_READY(pipe), temp);
}
/* if disable packets are sent before the shutdown packet, then on
* some subsequent enable sequence a 'turn on packet' error is observed */
if (intel_dsi->dev.dev_ops->disable)
intel_dsi->dev.dev_ops->disable(&intel_dsi->dev);
}
static void intel_dsi_post_disable(struct intel_encoder *encoder)
static void intel_dsi_clear_device_ready(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
int pipe = intel_crtc->pipe;
u32 val;
DRM_DEBUG_KMS("\n");
I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER);
usleep_range(2000, 2500);
I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT);
usleep_range(2000, 2500);
I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER);
usleep_range(2000, 2500);
val = I915_READ(MIPI_PORT_CTRL(pipe));
I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD);
usleep_range(1000, 1500);
if (wait_for(((I915_READ(MIPI_PORT_CTRL(pipe)) & AFE_LATCHOUT)
== 0x00000), 30))
DRM_ERROR("DSI LP not going Low\n");
I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00);
usleep_range(2000, 2500);
vlv_disable_dsi_pll(encoder);
}
static void intel_dsi_post_disable(struct intel_encoder *encoder)
{
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
DRM_DEBUG_KMS("\n");
intel_dsi_clear_device_ready(encoder);
if (intel_dsi->dev.dev_ops->disable_panel_power)
intel_dsi->dev.dev_ops->disable_panel_power(&intel_dsi->dev);
}
static bool intel_dsi_get_hw_state(struct intel_encoder *encoder,
enum pipe *pipe)
@@ -251,8 +272,9 @@ static void intel_dsi_get_config(struct intel_encoder *encoder,
/* XXX: read flags, set to adjusted_mode */
}
static int intel_dsi_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
static enum drm_mode_status
intel_dsi_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct intel_connector *intel_connector = to_intel_connector(connector);
struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
@@ -352,11 +374,8 @@ static void intel_dsi_mode_set(struct intel_encoder *intel_encoder)
DRM_DEBUG_KMS("pipe %c\n", pipe_name(pipe));
/* Update the DSI PLL */
vlv_enable_dsi_pll(intel_encoder);
/* XXX: Location of the call */
band_gap_wa(dev_priv);
band_gap_reset(dev_priv);
/* escape clock divider, 20MHz, shared for A and C. device ready must be
* off when doing this! txclkesc? */
@@ -373,11 +392,7 @@ static void intel_dsi_mode_set(struct intel_encoder *intel_encoder)
I915_WRITE(MIPI_INTR_STAT(pipe), 0xffffffff);
I915_WRITE(MIPI_INTR_EN(pipe), 0xffffffff);
I915_WRITE(MIPI_DPHY_PARAM(pipe),
0x3c << EXIT_ZERO_COUNT_SHIFT |
0x1f << TRAIL_COUNT_SHIFT |
0xc5 << CLK_ZERO_COUNT_SHIFT |
0x1f << PREPARE_COUNT_SHIFT);
I915_WRITE(MIPI_DPHY_PARAM(pipe), intel_dsi->dphy_reg);
I915_WRITE(MIPI_DPI_RESOLUTION(pipe),
adjusted_mode->vdisplay << VERTICAL_ADDRESS_SHIFT |
@@ -425,9 +440,9 @@ static void intel_dsi_mode_set(struct intel_encoder *intel_encoder)
adjusted_mode->htotal,
bpp, intel_dsi->lane_count) + 1);
}
I915_WRITE(MIPI_LP_RX_TIMEOUT(pipe), 8309); /* max */
I915_WRITE(MIPI_TURN_AROUND_TIMEOUT(pipe), 0x14); /* max */
I915_WRITE(MIPI_DEVICE_RESET_TIMER(pipe), 0xffff); /* max */
I915_WRITE(MIPI_LP_RX_TIMEOUT(pipe), intel_dsi->lp_rx_timeout);
I915_WRITE(MIPI_TURN_AROUND_TIMEOUT(pipe), intel_dsi->turn_arnd_val);
I915_WRITE(MIPI_DEVICE_RESET_TIMER(pipe), intel_dsi->rst_timer_val);
/* dphy stuff */
@@ -442,29 +457,31 @@ static void intel_dsi_mode_set(struct intel_encoder *intel_encoder)
*
* XXX: write MIPI_STOP_STATE_STALL?
*/
I915_WRITE(MIPI_HIGH_LOW_SWITCH_COUNT(pipe), 0x46);
I915_WRITE(MIPI_HIGH_LOW_SWITCH_COUNT(pipe),
intel_dsi->hs_to_lp_count);
/* XXX: low power clock equivalence in terms of byte clock. the number
* of byte clocks occupied in one low power clock. based on txbyteclkhs
* and txclkesc. txclkesc time / txbyteclk time * (105 +
* MIPI_STOP_STATE_STALL) / 105.???
*/
I915_WRITE(MIPI_LP_BYTECLK(pipe), 4);
I915_WRITE(MIPI_LP_BYTECLK(pipe), intel_dsi->lp_byte_clk);
/* the bw essential for transmitting 16 long packets containing 252
* bytes meant for dcs write memory command is programmed in this
* register in terms of byte clocks. based on dsi transfer rate and the
* number of lanes configured the time taken to transmit 16 long packets
* in a dsi stream varies. */
I915_WRITE(MIPI_DBI_BW_CTRL(pipe), 0x820);
I915_WRITE(MIPI_DBI_BW_CTRL(pipe), intel_dsi->bw_timer);
I915_WRITE(MIPI_CLK_LANE_SWITCH_TIME_CNT(pipe),
0xa << LP_HS_SSW_CNT_SHIFT |
0x14 << HS_LP_PWR_SW_CNT_SHIFT);
intel_dsi->clk_lp_to_hs_count << LP_HS_SSW_CNT_SHIFT |
intel_dsi->clk_hs_to_lp_count << HS_LP_PWR_SW_CNT_SHIFT);
if (is_vid_mode(intel_dsi))
I915_WRITE(MIPI_VIDEO_MODE_FORMAT(pipe),
intel_dsi->video_mode_format);
intel_dsi->video_frmt_cfg_bits |
intel_dsi->video_mode_format);
}
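
All the previously hardcoded D-PHY and timeout values above now come from fields of struct intel_dsi, so a panel sub-driver can supply timings that match its own link rate. A hypothetical init hook might fill them like this (the values are just the old hardcoded defaults, copied for illustration):

static bool example_panel_init(struct intel_dsi_device *dsi) /* hypothetical */
{
	struct intel_dsi *intel_dsi = container_of(dsi, struct intel_dsi, dev);

	/* the defaults the driver used to hardcode */
	intel_dsi->dphy_reg = 0x3c << EXIT_ZERO_COUNT_SHIFT |
			      0x1f << TRAIL_COUNT_SHIFT |
			      0xc5 << CLK_ZERO_COUNT_SHIFT |
			      0x1f << PREPARE_COUNT_SHIFT;
	intel_dsi->lp_rx_timeout = 8309;	/* max */
	intel_dsi->turn_arnd_val = 0x14;	/* max */
	intel_dsi->rst_timer_val = 0xffff;	/* max */
	intel_dsi->hs_to_lp_count = 0x46;
	intel_dsi->lp_byte_clk = 4;
	intel_dsi->bw_timer = 0x820;
	intel_dsi->clk_lp_to_hs_count = 0xa;
	intel_dsi->clk_hs_to_lp_count = 0x14;

	return true;
}
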
static enum drm_connector_status

View File

@@ -39,6 +39,13 @@ struct intel_dsi_device {
struct intel_dsi_dev_ops {
bool (*init)(struct intel_dsi_device *dsi);
void (*panel_reset)(struct intel_dsi_device *dsi);
void (*disable_panel_power)(struct intel_dsi_device *dsi);
/* one time programmable commands if needed */
void (*send_otp_cmds)(struct intel_dsi_device *dsi);
/* This callback must be able to assume DSI commands can be sent */
void (*enable)(struct intel_dsi_device *dsi);
@@ -89,6 +96,20 @@ struct intel_dsi {
/* eot for MIPI_EOT_DISABLE register */
u32 eot_disable;
u32 port_bits;
u32 bw_timer;
u32 dphy_reg;
u32 video_frmt_cfg_bits;
u16 lp_byte_clk;
/* timeouts in byte clocks */
u16 lp_rx_timeout;
u16 turn_arnd_val;
u16 rst_timer_val;
u16 hs_to_lp_count;
u16 clk_lp_to_hs_count;
u16 clk_hs_to_lp_count;
};
static inline struct intel_dsi *enc_to_intel_dsi(struct drm_encoder *encoder)

View File

@@ -50,6 +50,8 @@ static const u32 lfsr_converts[] = {
71, 35 /* 91 - 92 */
};
#ifdef DSI_CLK_FROM_RR
static u32 dsi_rr_formula(const struct drm_display_mode *mode,
int pixel_format, int video_mode_format,
int lane_count, bool eotp)
@@ -121,7 +123,7 @@ static u32 dsi_rr_formula(const struct drm_display_mode *mode,
/* the dsi clock is divided by 2 in the hardware to get dsi ddr clock */
dsi_bit_clock_hz = bytes_per_x_frames_x_lanes * 8;
dsi_clk = dsi_bit_clock_hz / (1000 * 1000);
dsi_clk = dsi_bit_clock_hz / 1000;
if (eotp && video_mode_format == VIDEO_MODE_BURST)
dsi_clk *= 2;
@@ -129,64 +131,37 @@ static u32 dsi_rr_formula(const struct drm_display_mode *mode,
return dsi_clk;
}
#ifdef MNP_FROM_TABLE
#else
struct dsi_clock_table {
u32 freq;
u8 m;
u8 p;
};
static const struct dsi_clock_table dsi_clk_tbl[] = {
{300, 72, 6}, {313, 75, 6}, {323, 78, 6}, {333, 80, 6},
{343, 82, 6}, {353, 85, 6}, {363, 87, 6}, {373, 90, 6},
{383, 92, 6}, {390, 78, 5}, {393, 79, 5}, {400, 80, 5},
{401, 80, 5}, {402, 80, 5}, {403, 81, 5}, {404, 81, 5},
{405, 81, 5}, {406, 81, 5}, {407, 81, 5}, {408, 82, 5},
{409, 82, 5}, {410, 82, 5}, {411, 82, 5}, {412, 82, 5},
{413, 83, 5}, {414, 83, 5}, {415, 83, 5}, {416, 83, 5},
{417, 83, 5}, {418, 84, 5}, {419, 84, 5}, {420, 84, 5},
{430, 86, 5}, {440, 88, 5}, {450, 90, 5}, {460, 92, 5},
{470, 75, 4}, {480, 77, 4}, {490, 78, 4}, {500, 80, 4},
{510, 82, 4}, {520, 83, 4}, {530, 85, 4}, {540, 86, 4},
{550, 88, 4}, {560, 90, 4}, {570, 91, 4}, {580, 70, 3},
{590, 71, 3}, {600, 72, 3}, {610, 73, 3}, {620, 74, 3},
{630, 76, 3}, {640, 77, 3}, {650, 78, 3}, {660, 79, 3},
{670, 80, 3}, {680, 82, 3}, {690, 83, 3}, {700, 84, 3},
{710, 85, 3}, {720, 86, 3}, {730, 88, 3}, {740, 89, 3},
{750, 90, 3}, {760, 91, 3}, {770, 92, 3}, {780, 62, 2},
{790, 63, 2}, {800, 64, 2}, {880, 70, 2}, {900, 72, 2},
{1000, 80, 2}, /* dsi clock frequency in Mhz*/
};
static int dsi_calc_mnp(u32 dsi_clk, struct dsi_mnp *dsi_mnp)
/* Get DSI clock from pixel clock */
static u32 dsi_clk_from_pclk(const struct drm_display_mode *mode,
int pixel_format, int lane_count)
{
unsigned int i;
u8 m;
u8 n;
u8 p;
u32 m_seed;
u32 dsi_clk_khz;
u32 bpp;
if (dsi_clk < 300 || dsi_clk > 1000)
return -ECHRNG;
for (i = 0; i <= ARRAY_SIZE(dsi_clk_tbl); i++) {
if (dsi_clk_tbl[i].freq > dsi_clk)
break;
switch (pixel_format) {
default:
case VID_MODE_FORMAT_RGB888:
case VID_MODE_FORMAT_RGB666_LOOSE:
bpp = 24;
break;
case VID_MODE_FORMAT_RGB666:
bpp = 18;
break;
case VID_MODE_FORMAT_RGB565:
bpp = 16;
break;
}
m = dsi_clk_tbl[i].m;
p = dsi_clk_tbl[i].p;
m_seed = lfsr_converts[m - 62];
n = 1;
dsi_mnp->dsi_pll_ctrl = 1 << (DSI_PLL_P1_POST_DIV_SHIFT + p - 2);
dsi_mnp->dsi_pll_div = (n - 1) << DSI_PLL_N1_DIV_SHIFT |
m_seed << DSI_PLL_M1_DIV_SHIFT;
/* DSI data rate = pixel clock * bits per pixel / lane count;
* mode->clock is in kHz, so the result stays in kHz as well */
dsi_clk_khz = DIV_ROUND_CLOSEST(mode->clock * bpp, lane_count);
return 0;
return dsi_clk_khz;
}
#else
#endif
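
The replacement formula is plain link bandwidth divided across the lanes, kept in kHz throughout. A worked example under assumed panel numbers:

/* Hypothetical panel: mode->clock = 66000 kHz, RGB888 (bpp = 24),
 * lane_count = 4:
 *     dsi_clk_khz = DIV_ROUND_CLOSEST(66000 * 24, 4) = 396000 kHz,
 * i.e. 396 Mbps per lane -- safely inside the 300000..1150000 kHz
 * range that dsi_calc_mnp() accepts below. */
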
static int dsi_calc_mnp(u32 dsi_clk, struct dsi_mnp *dsi_mnp)
{
@@ -194,36 +169,47 @@ static int dsi_calc_mnp(u32 dsi_clk, struct dsi_mnp *dsi_mnp)
u32 ref_clk;
u32 error;
u32 tmp_error;
u32 target_dsi_clk;
u32 calc_dsi_clk;
int target_dsi_clk;
int calc_dsi_clk;
u32 calc_m;
u32 calc_p;
u32 m_seed;
if (dsi_clk < 300 || dsi_clk > 1150) {
/* dsi_clk is expected in kHz */
if (dsi_clk < 300000 || dsi_clk > 1150000) {
DRM_ERROR("DSI CLK Out of Range\n");
return -ECHRNG;
}
ref_clk = 25000;
target_dsi_clk = dsi_clk * 1000;
target_dsi_clk = dsi_clk;
error = 0xFFFFFFFF;
tmp_error = 0xFFFFFFFF;
calc_m = 0;
calc_p = 0;
for (m = 62; m <= 92; m++) {
for (p = 2; p <= 6; p++) {
/* Find the optimal m and p divisors
with minimal error +/- the required clock */
calc_dsi_clk = (m * ref_clk) / p;
if (calc_dsi_clk >= target_dsi_clk) {
tmp_error = calc_dsi_clk - target_dsi_clk;
if (tmp_error < error) {
error = tmp_error;
calc_m = m;
calc_p = p;
}
if (calc_dsi_clk == target_dsi_clk) {
calc_m = m;
calc_p = p;
error = 0;
break;
} else
tmp_error = abs(target_dsi_clk - calc_dsi_clk);
if (tmp_error < error) {
error = tmp_error;
calc_m = m;
calc_p = p;
}
}
if (error == 0)
break;
}
m_seed = lfsr_converts[calc_m - 62];
@@ -235,8 +221,6 @@ static int dsi_calc_mnp(u32 dsi_clk, struct dsi_mnp *dsi_mnp)
return 0;
}
#endif
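
The M/N/P search now runs in kHz and keeps the closest candidate on either side of the target instead of only those at or above it. Continuing the assumed numbers from the sketch above:

/* Worked example (hypothetical target): dsi_clk = 396000 kHz,
 * ref_clk = 25000 kHz. Scanning m = 62..92, p = 2..6, the best fit is
 *     m = 79, p = 5: 79 * 25000 / 5 = 395000 kHz, |error| = 1000 kHz,
 * and no other (m, p) pair comes closer, so dsi_mnp is built from
 * n = 1, p = 5 and the LFSR-encoded m taken from lfsr_converts[79 - 62]. */
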
/*
* XXX: The muxing and gating is hard coded for now. Need to add support for
* sharing PLLs with two DSI outputs.
@@ -251,9 +235,8 @@ static void vlv_configure_dsi_pll(struct intel_encoder *encoder)
struct dsi_mnp dsi_mnp;
u32 dsi_clk;
dsi_clk = dsi_rr_formula(mode, intel_dsi->pixel_format,
intel_dsi->video_mode_format,
intel_dsi->lane_count, !intel_dsi->eot_disable);
dsi_clk = dsi_clk_from_pclk(mode, intel_dsi->pixel_format,
intel_dsi->lane_count);
ret = dsi_calc_mnp(dsi_clk, &dsi_mnp);
if (ret) {

Some files were not shown because too many files have changed in this diff.